Correct, this is the test server, not the production server that has the performance issue. The test and production servers are set up the same.
I wanted to show people how to use iostat to tie I/O activity to the mount points.
But I do not have root access on production.
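A minimal sketch of the kind of check I mean, with a made-up `iostat -xn` sample standing in for live output (device names and figures below are invented, not from the real box):

```shell
#!/bin/sh
# Sketch only: flag devices over 80% busy in `iostat -xn` output,
# then map them back to mount points. The sample below is invented
# and stands in for a real `iostat -xn 5 2` run.
iostat_sample='    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2    1.1    3.4    12.0  0.0  0.0    0.1    0.4   0   1 c0t0d0
  150.7  220.3 9650.2 28100.8  0.0  4.2    0.1   11.3   0  92 c0t1d0'
printf '%s\n' "$iostat_sample" | awk 'NR > 1 && $10 > 80 { print $11 }'
# To map the busy device back to mount points, compare against:
#   zfs list -o name,mountpoint    (ZFS datasets in the pool on that disk)
#   grep c0t1d0 /etc/mnttab        (legacy UFS/VxFS mounts)
```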
I am no Solaris expert by any stretch, but some principles of performance tuning are the same in every OS: does the production server have "real" disks, or is it a virtual guest running on virtual disks too? If the latter, you are probably looking in the wrong place anyway. Beneath the virtual disks there must be some real devices - LUNs on a storage box, members of a RAID set in the host server, whatever. It is at that level that you have to measure I/O, not in your virtualised guest.
Consider this (hypothetical) scenario: a server with five guests, g1-g5, and a disk in that server where the virtual disks for those guests reside. If g5 has heavy I/O, it reduces the bandwidth left over for g1-g4. Measuring on g1 because that guest has "intermittent performance issues" will therefore tell you nothing about the real issue; in fact it will only tell you when g5 has load peaks. You may not even know what you are measuring, because you may not know what g5 is doing and when.
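To put made-up numbers on that scenario (all figures invented, purely to illustrate the squeeze):

```shell
#!/bin/sh
# Invented figures: one physical disk sustaining 200 MB/s, shared by
# five guests; g5 bursts to 180 MB/s, leaving g1-g4 to split the rest.
total_mb=200
g5_peak_mb=180
other_guests=4
left=$(( total_mb - g5_peak_mb ))
echo "per-guest share while g5 peaks: $(( left / other_guests )) MB/s"
```

Those 5 MB/s per guest look like an "intermittent performance issue" on g1, yet nothing on g1 caused it.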
It is worth the effort to first get a detailed picture of the setup, so that you can visualise the "flow" between the various interdependent parts of the machinery. Only then test/measure one component after another to find out where the bottleneck is.
I have had a little think about this problem during my break, and I now realise that the information you are looking for will not be easy to come by; the nature of ZFS makes it increasingly difficult as you add more disks to the zpool.
ZFS dynamically creates a block-to-vdev relationship based on the block size (recordsize) and the number of disks in the pool. So if we create a pool with four disks and the default 128K block size, blocks are allocated essentially round-robin across the four disks.
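As a toy model only (pure round-robin; the real ZFS allocator also weighs free space per vdev and is nowhere near this simple), the mapping works out roughly like this:

```shell
#!/bin/sh
# Toy model only: pure round-robin of 128K records over 4 vdevs.
# Real ZFS allocation also considers per-vdev free space.
recordsize=$(( 128 * 1024 ))
vdevs=4
offset=$(( 5 * 1024 * 1024 ))        # byte offset 5 MiB into a file
block=$(( offset / recordsize ))
echo "record $block lands on vdev $(( block % vdevs ))"
```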
So identifying a file-system-to-vdev relationship will not be easy. You could tackle it like this:
Now you have to go and have a look at the output and find what you want - but be warned:
Getting the required output:
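The command blocks from the original post did not survive; my best guess at the idea is `zpool iostat -v`, which prints one line per vdev. The sample below is invented for a two-vdev pool called tank, just to show the shape of the output:

```shell
#!/bin/sh
# Invented sample standing in for `zpool iostat -v tank 5 2`; on a
# real system, run that command and watch the per-vdev write columns.
zpool_sample='            capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1.2T   2.4T      5    210    40K    26M
  c0t1d0    600G   1.2T      2    105    20K    13M
  c0t2d0    600G   1.2T      3    105    20K    13M'
# Count the indented per-vdev lines under the pool total:
printf '%s\n' "$zpool_sample" | awk '/^  c/ { n++ } END { print n " vdev lines" }'
```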
Where I have a single line across the bottom (beginning with 0), your pool should show five lines, one for each vdev, and you should be able to see which vdev the output was written to. If you write a file bigger than 640K (five vdevs x 128K records) it will write at least one block to each; ZFS manages that bit. As for the ZFS file systems, they are striped across however many disks are in the pool.
Can you tell us what the hardware is? This looks suspiciously like the view from inside an LDom.
Please post the output of echo | format (or part of it, if it's too big) and, if possible, of /usr/sbin/virtinfo -a; that will give us a good starting point.
Regards
Gull04
Last edited by gull04; 10-15-2018 at 09:44 AM..
Reason: More Information Added
Solaris 11 on x86 or SPARC?
I'll presume it's SPARC, as far as Oracle VM info goes ...
Try the following iostat command :
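The command itself was lost from the post; a reasonable guess for Solaris 11 is below (flags hedged - verify against iostat(1M) on your release):

```shell
# Hedged guess at the intended invocation -- check iostat(1M):
#   -x    extended per-device statistics
#   -n    descriptive device names (cNtNdN)
#   -z    omit devices with no activity
#   -T d  print a date stamp with each interval
iostat -xnz -T d 5
```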
As the manual states:
Outside of yourldom, on the control/service domain hosting that disk service, you will need to match the disks added to the virtual disk service (vds) with the ID chosen when each disk was added to yourldom.
Where N above is the number you see for that disk inside the ldom in iostat/format/zpool output, matching the numbering of the disk(s) shown by ldm list -l yourldom.
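A sketch of the matching step. The DISK section below is invented (the column layout follows ldm(1M), but names and values are examples; check your own `ldm list -l` output):

```shell
#!/bin/sh
# Invented DISK section standing in for `ldm list -l yourldom` output.
# The ID column is what maps to the disk numbering seen inside the ldom.
disk_section='DISK
    NAME    VOLUME              ID   DEVICE   SERVER
    vdisk0  vol0@primary-vds0   0    disk@0   primary
    vdisk1  vol1@primary-vds0   1    disk@1   primary'
# Pull the ID for vdisk1; on the service domain, `ldm list-services`
# then shows which backend device vol1@primary-vds0 maps to.
printf '%s\n' "$disk_section" | awk '$1 == "vdisk1" { print $3 }'
```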
This assumes you are not using ZVOLs or metadevices as disk backends on the control/service domain.
If you are, more work will be needed to match the physical disks to the virtual ones.
But a ZVOL as the disk backend for an ldom, with vxfs and ZFS file systems both inside the ldom, sounds like a nightmare ...
For further analysis I would require the output of the following commands; it can be quite long, so feel free to attach it as files.