How to map device to mount point?
# 8  
Old 10-15-2018
Quote:
Originally Posted by Sean
Quote:
In any case, the iostat numbers you posted do not look to show any issue.
Correct, it is from the test server, not the production server, which has the performance issue. The test and production servers are set up the same.

I wanted to show people how to use iostat to identify the I/O associated with the mount points, but I do not have root access on production.
I am no Solaris expert by any stretch, but some principles of performance tuning are the same in every OS: does the production server have "real" disks, or is it a virtual guest operating on virtual disks too? If the latter, you are probably looking in the wrong place anyway. Underneath the virtual disks there have to be some real devices - the LUNs on a storage box, the members of a RAID set in the host server, whatever. It is on those systems that you have to measure I/O, not on your virtualised guest.

Consider this (hypothetical) scenario: a server with five guests, g1-g5, and a disk in this server where the virtual disks for these guests reside. If g5 has heavy I/O, this reduces the bandwidth left over for g1-g4. Measurements taken on g1 because that guest has "intermittent performance issues" will therefore tell you nothing about the real cause; in fact, they will only tell you when g5 has load peaks. You may not even know what you are measuring, because you may not know what g5 is doing and when.

It is a worthwhile effort to first map out the setup in detail so that you can visualise the "flow" between the various interdependent parts of the machinery. Only then test/measure one component after another to find out where the bottleneck is located.
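As a first cross-check, if you can get onto the host (the control domain, in the LDOM case), something like this sketch - run on both sides over the same interval - would show whether the guest-visible waits line up with load on the real devices:

Code:
# on the host/control domain: watch the real disks behind the virtual ones
iostat -xnz 5
# run the same command inside the guest at the same time and compare
# asvc_t / %b on the underlying LUNs against what the guest reports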

I hope this helps.

bakunin
# 9  
Old 10-15-2018
Hi Sean,

I have had a little think about this problem during my break, and I now realise that the information you are looking for will not be easy to come by; the nature of ZFS makes it increasingly difficult as you add more disks to the zpool.

ZFS dynamically creates a block-to-vdev relationship based on the block size (recordsize) and the number of disks in the pool. So if we create a pool with four disks and a block size of 128k (the default), the blocks are allocated essentially on a round-robin basis across the four disks.
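To picture that, here is a minimal sketch (the pool and device names are hypothetical):

Code:
# create a four-disk pool; ZFS spreads blocks across the vdevs round-robin
zpool create testpool c1t0d0 c1t1d0 c1t2d0 c1t3d0
# confirm the recordsize (128K by default)
zfs get recordsize testpool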

So identifying a file-system-to-vdev relationship will not be easy, but you could tackle it like this:

Code:
root@fvssphsun01:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             325G   224G  73.5K  /rpool
rpool/ROOT                       9.29G   224G    31K  legacy
rpool/ROOT/s11331                9.20G   224G  4.13G  /
rpool/ROOT/s11331/var            3.07G  60.9G  1.69G  /var
rpool/ROOT/solaris               92.9M   224G  2.93G  /
rpool/ROOT/solaris/var           7.08M  64.0G  1.32G  /var
rpool/S11.3_GA                     31K   224G    31K  /Shared/S11.3_GA
rpool/S11.3_REPO                  134G   224G   134G  /export/s11repo
rpool/S11.3_SRU_17.5               31K   224G    31K  /Shared/S11.3_SRU_17.5
rpool/VARSHARE                   9.47G   224G  36.8M  /var/share
rpool/VARSHARE/pkg               9.43G   224G    32K  /var/share/pkg
rpool/VARSHARE/pkg/repositories  9.43G   224G  9.43G  /var/share/pkg/repositories
rpool/VARSHARE/zones               31K   224G    31K  /system/zones
rpool/backup                      144K   224G   144K  /backup
rpool/backups                      31K   224G    31K  /backups
rpool/dump                        132G   228G   128G  -
rpool/export                     8.08G   224G    33K  /export
rpool/export/home                8.08G   224G  4.04G  /export/home
rpool/export/home/e400007        44.1M   224G  44.1M  /export/home/e400007
rpool/export/home/e415243        4.00G   224G  4.00G  /export/home/e415243
rpool/patrol                     2.84G   224G  2.84G  /usr/local/patrol
rpool/patroltmp                  1.27M   224G  1.27M  /usr/local/patrol/tmp
rpool/swap                       16.5G   225G  16.0G  -
rpool/swap2                      12.4G   225G  12.0G  -
root@fvssphsun01:~# dd if=/dev/zero of=/export/home/e415243/test bs=128k count=1
1+0 records in
1+0 records out
root@fvssphsun01:~# zdb -dddddddd rpool/export/home/e415243/test
zdb: can't find 'rpool/export/home/e415243/test': No such file or directory
root@fvssphsun01:~# zdb -dddddddd rpool/export/home/e415243 > /export/home/e415243/tmp/delete_me

Now you have to go and look through the output to find what you want - but be warned:

Code:
root@fvssphsun01:~# cd /export/home/e415243/tmp
root@fvssphsun01:/export/home/e415243/tmp# ls -l
total 6153
-rw-r--r--   1 root     root     3072629 Oct 15 13:03 delete_me
root@fvssphsun01:/export/home/e415243/tmp#

Getting the required output:

Code:
  Object  lvl   iblk   dblk  dsize  lsize   %full  type
        73    1    16K   128K   128K   128K  100.00  ZFS plain file (K=inherit) (Z=inherit)
                                        168   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 0
        path    /test
        uid     0
        gid     0
        atime   Mon Oct 15 13:00:33 2018
        mtime   Mon Oct 15 13:02:29 2018
        ctime   Mon Oct 15 13:02:29 2018
        crtime  Mon Oct 15 13:00:33 2018
        gen     3333994
        mode    0100644
        size    131072
        parent  4
        links   1
        pflags  0x40800000204
Indirect blocks:
                 0 L0 0:0x6633a25a00:0x20000 0x20000L/0x20000P F=1 B=3334017/3334017 ---

                segment [000000000000000000, 0x0000000000020000) size  128K

Where I have a single line at the bottom (beginning with 0), your pool should show five lines - one for each vdev - and from those you should be able to see which vdev the data was written to. If you write a file bigger than 640K, it will write at least one block to each vdev; ZFS manages that bit. As for the ZFS file systems, they are striped across however many disks are in the pool.
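Rather than reading the whole dump by eye: each "L0" line carries a DVA of the form vdev:offset:asize, so the field before the first colon is the vdev index. A rough tally of blocks per vdev might look like this (a sketch, assuming the delete_me dump from above):

Code:
# field 3 of each L0 line is the DVA (vdev:offset:asize)
awk '/ L0 / { split($3, dva, ":"); count[dva[1]]++ }
     END { for (v in count) print "vdev " v ": " count[v] " blocks" }' delete_me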

Can you tell us what the hardware is? This looks suspiciously like the view from inside an LDOM.

Please post the output from echo | format (or a part of it, if it's too big) and, if possible, from /usr/sbin/virtinfo -a - this will give a good starting point.

Regards

Gull04

Last edited by gull04; 10-15-2018 at 09:44 AM. Reason: More Information Added
# 10  
Old 10-15-2018
Solaris 11 on x86 or SPARC?
I'll presume it's SPARC as far as the Oracle VM info goes ...

Try the following iostat command:
Code:
iostat -xcnzCTd 3 10

As the manual states:
Code:
....
     -x          Report extended disk statistics. By default, disks are
                 identified by instance names such as ssd23 or md301.
                 Combining the -x option with the -n option causes disk
                 names to display in the cXtYdZsN format, more easily
                 associated with physical hardware characteristics. Using
                 the cXtYdZsN format is particularly helpful in FibreChannel
                 environments where the FC World Wide Name appears in the
                 t field.
...

Outside of yourldom, on the control/service domain which is hosting that disk service, you will need to match the disks added to the virtual disk service (vds) with the ID chosen when the disk was added to yourldom:
Code:
ldm add-vdisk id=N vdisk-name volume-name@some-vds yourldom

Where N above is the number you see for that disk inside the LDOM in the iostat/format/zpool output, and it matches the enumeration of the disk(s) you see when running ldm list -l yourldom.
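Put together, the mapping might look like this sketch (the service, volume, and device names here are hypothetical):

Code:
# on the control/service domain: export a raw LUN through the vds...
ldm add-vdsdev /dev/dsk/c0t5000C500AAAA0001d0s2 vol2@primary-vds0
# ...and hand it to the guest with an explicit ID
ldm add-vdisk id=2 vdisk2 vol2@primary-vds0 yourldom
# inside yourldom the disk then appears as c0d2 (disk@2) in format/iostat,
# so the guest device maps straight back to the LUN given to add-vdsdev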

This assumes you are not using ZVOLs or metadevices as disk backends on the control/service domain. If you are, more work will be needed to match the physical disks to the virtual ones.

But a ZVOL as the disk backend to the LDOM, and then vxfs alongside a zfs filesystem inside the LDOM, sounds like a nightmare....
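If ZVOLs are in use, a first step to match them up might be this sketch:

Code:
# on the control/service domain: list ZVOLs that could be serving as backends
zfs list -t volume
# then check which of those volume paths (/dev/zvol/dsk/...) the vds exports
ldm list-services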

For further analysis, I would need the output of the following commands; it can be quite long, so you can attach it as files or something.

Code:
# On control/service domain 
ldm list-services
ldm list -l <yourldom>
ldm list
echo "::memstat" | mdb -k
tail -10 /etc/system
# On LDOM for start
echo | format

Hope that helps
Regards
Peasant.