Empty ZFS SAN file system with high read I/O using MPXIO


 
# 1  
Old 01-18-2012

I am researching the cause of an issue: the SAN file system /export/pools/zd-xxxxxxxxxxx is showing a high amount of read traffic even though it is empty. It is ZFS with MPXIO. Any ideas? It seems really strange given that the file system is empty and I don't see any errors.

Code:
 
     cpu
 us sy wt id
  1  2  0 97
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 lofi1
    0.8    0.0   16.7    0.0  0.0  0.0    0.0    7.2   0   1 c1t2d0
   88.9    0.0  915.0    0.0  0.0  0.4    0.0    4.0   0  15 c1t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t204300A0B85637A0d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t201200A0B85637A0d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004F24A6DD8B2d0
    0.2    6.4   12.8  127.6  0.0  1.3    0.0  198.0   0  39 c6t600A0B8000338556000004F04A6DD891d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EE4A6DD873d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EC4A6DD855d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EA4A6DD7C3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E84A6DD75Fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E64A6DD6E8d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E44A6DD65Ad0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E24A6DD4FEd0
    0.2    0.0    0.9    0.0  0.0  0.0    0.0    0.2   0   0 c6t600A0B80003385560000047D4A66F733d0
  158.7   13.4 11836.0  527.9  0.0  6.5    0.0   38.0   0  99 c6t600A0B8000338556000004804A66F782d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004824A66F86Bd0
    0.0    0.2    0.0    7.2  0.0  0.0    0.0   45.1   0   1 c6t600A0B80003385560000060F4C94C963d0
  135.5   15.4 11292.9  692.6  0.0  6.9    0.0   45.4   0  97 c6t600A0B80003385560000060D4C94C947d0
    0.0    0.2    0.0    7.2  0.0  0.1    0.0  306.0   0   6 c6t600A0B80003385560000060B4C94C931d0
    0.6    5.2   38.4  124.0  0.0  1.1    0.0  188.6   0  33 c6t600A0B8000338556000006094C94C91Bd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000006074C94C901d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000006054C94C8E3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B80005637A00000061C4C98AF1Bd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000563766000006614C98AF62d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd
 
root@serverXXXX# zpool status zd-xxxxxxxxxxx
  pool: zd-xxxxxxxxxxx
 state: ONLINE
 scrub: none requested
config:
        NAME                                     STATE     READ WRITE CKSUM
        zd-xxxxxxxxxxx                          ONLINE       0     0     0
          c6t600A0B8000338556000004804A66F782d0  ONLINE       0     0     0
          c6t600A0B80003385560000060D4C94C947d0  ONLINE       0     0     0
errors: No known data errors
 
root@serverXXXX# zfs list zd-xxxxxxxxxxx
NAME              USED  AVAIL  REFER  MOUNTPOINT
zd-xxxxxxxxxxx  25.8G  72.2G    18K  /export/pools/zd-xxxxxxxxxxx
 
root@serverXXXX# df -h .
Filesystem             size   used  avail capacity  Mounted on
zd-xxxxxxxxxxx         98G    18K    72G     1%    /export/pools/zd-xxxxxxxxxxx
 
root@serverXXXX# luxadm display /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
  Vendor:               SUN
  Product ID:           CSM200_R
  Revision:             0760
  Serial Num:           SG82421613
  Unformatted capacity: 51200.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x3
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
  /devices/scsi_vhci/ssd@g600a0b8000338556000004804a66f782:c,raw
   Controller           /devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0
    Device Address              201200a0b833854a,1
    Host controller port WWN    100xxxxxxxxxxx
    Class                       secondary
    State                       STANDBY
   Controller           /devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0
    Device Address              204300a0b833854a,1
    Host controller port WWN    100xxxxxxxxxxx
    Class                       primary
    State                       ONLINE
root@serverXXXX# luxadm display /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
  Vendor:               SUN
  Product ID:           CSM200_R
  Revision:             0760
  Serial Num:           SG82421610
  Unformatted capacity: 51200.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x3
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
  /devices/scsi_vhci/ssd@g600a0b80003385560000060d4c94c947:c,raw
   Controller           /devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0
    Device Address              201200a0b833854a,10
    Host controller port WWN    100yyyyyyyyyyyyy
    Class                       secondary
    State                       ONLINE
   Controller           /devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0
    Device Address              204300a0b833854a,10
    Host controller port WWN    100xxxxxxxxxxx
    Class                       primary
    State                       STANDBY
 
root@serverXXXX# luxadm -e port
/devices/pci@400/pci@0/pci@d/SUNW,emlxs@0/fp@0,0:devctl            NOT CONNECTED
/devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0:devctl          CONNECTED
/devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0:devctl            CONNECTED
/devices/pci@500/pci@0/pci@c/SUNW,emlxs@0,1/fp@0,0:devctl          NOT CONNECTED
 
root@serverXXXX# cd /export/pools/zd-xxxxxxxxxxx
root@serverXXXX# ls -la
total 6
drwxr-xr-x   2 root     root           2 Jan 18 02:26 .
drwxr-xr-x   9 root     root           9 Nov 18  2010 ..
root@serverXXXX#

# 2  
Old 01-22-2012
Try dtrace; there is a "canned" script that will help, assuming Solaris 10.
Code:
dtrace -s /usr/demo/dtrace/iotime.d | awk '/ R/ && /devname/'

where devname is the device name you want to watch.
You can see whether a process is blasting away at a file. You may want to hack a local copy of the iotime.d script to print the process PID(s); a rough sketch of that hack follows.
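In case it helps, here is a rough sketch of what that hacked copy might look like, modeled on the stock /usr/demo/dtrace/iotime.d (the io_pid/io_exec array names are mine, not from the bundled script). One caveat: ZFS usually issues the physical I/O from kernel threads, so the PID/execname captured at io:::start may show up as sched rather than the real application.
Code:
#!/usr/sbin/dtrace -s
/*
 * Sketch of iotime.d extended to record which process issued each I/O.
 * Capture pid/execname at io:::start, because by io:::done the
 * completing thread is usually an interrupt or kernel thread.
 */

#pragma D option quiet

io:::start
{
        start_time[arg0] = timestamp;          /* keyed by buf pointer */
        io_pid[arg0]     = pid;
        io_exec[arg0]    = execname;
}

io:::done
/start_time[arg0] != 0/
{
        this->elapsed = timestamp - start_time[arg0];
        printf("%-28s %-2s %8d bytes %8d us  pid %d (%s)\n",
            args[1]->dev_statname,
            args[0]->b_flags & B_READ ? "R" : "W",
            args[0]->b_bcount,
            this->elapsed / 1000,
            io_pid[arg0],
            io_exec[arg0]);
        /* free the dynamic variables */
        start_time[arg0] = 0;
        io_pid[arg0]     = 0;
        io_exec[arg0]    = 0;
}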
# 3  
Old 01-22-2012
On second thought, try the canned whoio.d script; its output is closer to what you need.

This assumes the problem is a single process or a bunch of LWPs belonging to one process.
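For reference, the canned script does roughly the following (a sketch from memory, assuming the usual Solaris 10 /usr/demo/dtrace layout): it sums I/O bytes per device, executable and PID. Run it while the reads are hammering the device and press Ctrl-C to dump the totals.
Code:
#!/usr/sbin/dtrace -s
/*
 * Rough approximation of the bundled whoio.d: aggregate I/O bytes
 * by device name, executable and PID.  Ctrl-C prints the summary.
 */

#pragma D option quiet

io:::start
{
        @bytes[args[1]->dev_statname, execname, pid] = sum(args[0]->b_bcount);
}

END
{
        printf("%-15s %-20s %8s %15s\n", "DEVICE", "APP", "PID", "BYTES");
        printa("%-15s %-20s %8d %15@d\n", @bytes);
}

If most of the bytes end up attributed to sched, the reads are being issued from kernel context (for example by the ZFS I/O pipeline) rather than directly by a user process, and you will need to dig one layer up, at the file-system or pool level.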