Operating Systems > Solaris > How to map device to mount point?
Post 303024682 by bakunin, Monday 15th of October 2018, 12:17:24 AM
Quote:
Originally Posted by Sean
Code:
In any case, the iostat numbers you posted do not look to show any issue.

Correct, it is from the test server, not the production server, which has the performance issue. The test and production servers are set up the same.

I wanted to show people how to use iostat to identify the I/O associated with the mount points. But I do not have root access on production.
I am no Solaris expert by any stretch, but some principles of performance tuning are the same in every OS: does the production server have "real" disks, or is it a virtual guest operating on virtual disks too? If the latter is the case, you are probably looking in the wrong place anyway. Underneath the virtual disks there have to be some real devices - LUNs on a storage box, members of a RAID set in the host server, whatever. It is on these systems that you have to measure I/O, not on your virtualised guest.

Consider this (hypothetical) scenario: a server with five guests, g1-g5, and a disk in this server on which the virtual disks for these guests reside. If g5 produces heavy I/O, this reduces the bandwidth remaining for g1-g4. Measurements taken on g1 because that guest has "intermittent performance issues" will therefore tell you nothing about the real issue; in fact, they will only tell you when g5 has load peaks. You may not even know what you are measuring, because you may not know what g5 is doing and when.

It is a worthwhile effort to first get a detailed picture of the setup so that you can visualise the "flow" between the various interdependent parts of the machinery. Only then test/measure one component after the other to find out where the bottleneck is located.
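As an illustration of correlating iostat device names with mount points, here is a minimal Python sketch. It parses an mnttab-style table (on a real Solaris system you would read /etc/mnttab); the sample table and device names below are hypothetical, used only to show the idea.

```python
# Sketch: correlate device names (as printed by "iostat -xn") with mount
# points by parsing an mnttab-style table. The sample data is hypothetical;
# on a real Solaris box you would read /etc/mnttab instead.

SAMPLE_MNTTAB = """\
/dev/dsk/c0t0d0s0	/	ufs
/dev/dsk/c0t0d0s5	/var	ufs
/dev/dsk/c1t2d0s6	/export/home	ufs
"""

def device_to_mountpoint(mnttab_text):
    """Map the bare device name (e.g. c0t0d0s0), which is what iostat
    prints, to its mount point."""
    mapping = {}
    for line in mnttab_text.splitlines():
        fields = line.split()
        if len(fields) < 2 or not fields[0].startswith("/dev/dsk/"):
            continue  # skip swap, NFS, and other non-disk entries
        dev = fields[0].rsplit("/", 1)[-1]  # strip the /dev/dsk/ prefix
        mapping[dev] = fields[1]
    return mapping

if __name__ == "__main__":
    for dev, mnt in device_to_mountpoint(SAMPLE_MNTTAB).items():
        print(f"{dev} -> {mnt}")
```

With a mapping like this in hand, each busy device in the iostat output can be traced straight back to the filesystem mounted on it.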

I hope this helps.

bakunin
 

guestfs-performance(1)					      Virtualization Support					    guestfs-performance(1)

NAME
       guestfs-performance - engineering libguestfs for greatest performance

DESCRIPTION
       This page documents how to get the greatest performance out of libguestfs, especially when you expect to use libguestfs to manipulate thousands of virtual machines or disk images.

       Three main areas are covered. Libguestfs runs an appliance (a small Linux distribution) inside qemu/KVM. The first two areas are: minimizing the time taken to start this appliance, and the number of times the appliance has to be started. The third area is shortening the time taken for inspection of VMs.

BASELINE MEASUREMENTS
       Before making changes to how you use libguestfs, take baseline measurements.

   BASELINE: STARTING THE APPLIANCE
       On an unloaded machine, time how long it takes to start up the appliance:

           time guestfish -a /dev/null run

       Run this command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case.

       Explanation
           This command starts up the libguestfs appliance on a null disk, and then immediately shuts it down. The first time you run the command, it will create an appliance and cache it (usually under "/var/tmp/.guestfs-*"). Subsequent runs should reuse the cached appliance.

       Expected results
           You should expect to be getting times under 6 seconds. If the times you see on an unloaded machine are above this, then see the section "TROUBLESHOOTING POOR PERFORMANCE" below.

   BASELINE: PERFORMING INSPECTION OF A GUEST
       For this test you will need an unloaded machine and at least one real guest or disk image. If you are planning to use libguestfs against only X guests (eg. X = Windows), then using an X guest here would be most appropriate. If you are planning to run libguestfs against a mix of guests, then use a mix of guests for testing here.

       Time how long it takes to perform inspection and mount the disks of the guest. Use the first command if you will be using disk images, and the second command if you will be using libvirt:

           time guestfish --ro -a disk.img -i exit

           time guestfish --ro -d GuestName -i exit

       Run the command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case.

       Explanation
           This command starts up the libguestfs appliance on the named disk image or libvirt guest, performs libguestfs inspection on it (see "INSPECTION" in guestfs(3)), mounts the guest's disks, then discards all these results and shuts down. The first time you run the command, it will create an appliance and cache it (usually under "/var/tmp/.guestfs-*"). Subsequent runs should reuse the cached appliance.
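The "run several times, discard the first few" procedure above can be sketched as a small timing harness. This is an illustration using only the Python standard library; on a libguestfs host you would pass it the guestfish command line shown above.

```python
# Sketch of the baseline procedure above: time a command several times,
# discard the first (cold-cache) runs, and keep the "hot cache" timings.
import subprocess
import time

def baseline(cmd, runs=5, discard=2):
    """Time `cmd` `runs` times; return the wall-clock seconds of the runs
    left after discarding the first `discard` (cold-cache) ones."""
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        timings.append(time.monotonic() - start)
    return timings[discard:]

if __name__ == "__main__":
    # On a libguestfs host you would measure:
    #   baseline(["guestfish", "-a", "/dev/null", "run"])
    hot = baseline(["true"])  # trivial stand-in command for illustration
    print("hot-cache timings:", hot)
```

Comparing the hot-cache timings against the 6-second expectation above tells you whether further troubleshooting is needed.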
       Expected results
           You should expect times which are <= 5 seconds greater than measured in the first baseline test above. (For example, if the first baseline test ran in 5 seconds, then this test should run in <= 10 seconds.)

UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED
       The first time you use libguestfs, it will build and cache an appliance. This is usually in "/var/tmp/.guestfs-*", unless you have set $TMPDIR, in which case it will be under that temporary directory.

       For more information about how the appliance is constructed, see "SUPERMIN APPLIANCES" in febootstrap(8).

       Every time libguestfs runs it will check that no host files used by the appliance have changed. If any have, then the appliance is rebuilt. This usually happens when a package is installed or updated on the host (eg. using programs like "yum" or "apt-get"). The reason for reconstructing the appliance is security: the new program that has been installed might contain a security fix, and so we want to include the fixed program in the appliance automatically.

       These are the performance implications:

       o   The process of building (or rebuilding) the cached appliance is slow, and you can avoid this happening by using a fixed appliance (see below).

       o   If not using a fixed appliance, be aware that updating software on the host will cause a one-time rebuild of the appliance.

       o   "/var/tmp" (or $TMPDIR) should be on a fast disk, and have plenty of space for the appliance.

USING A FIXED APPLIANCE
       To fully control when the appliance is built, you can build a fixed appliance. This appliance can and should be stored on a fast, local disk.

       To build the appliance, run the command:

           libguestfs-make-fixed-appliance <directory>

       replacing "<directory>" with the name of a directory where the appliance will be stored (normally you would name a subdirectory, for example: "/usr/local/lib/guestfs/appliance" or "/dev/shm/appliance").

       Then set $LIBGUESTFS_PATH (and ensure this environment variable is set in your libguestfs program), or modify your program so it calls "guestfs_set_path". For example:

           export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance

       Now you can run libguestfs programs, virt tools, guestfish etc. as normal. The programs will use your fixed appliance, and will not ever build, rebuild, or cache their own appliance.

       (For detailed information on this subject, see: libguestfs-make-fixed-appliance(1).)

   PERFORMANCE OF THE FIXED APPLIANCE
       In our testing we did not find that using a fixed appliance gave any measurable performance benefit, even when the appliance was located in memory (ie. on "/dev/shm"). However there are three points to consider:

       1.  Using a fixed appliance stops libguestfs from ever rebuilding the appliance, meaning that libguestfs will have more predictable start-up times.

       2.  By default libguestfs (or rather, febootstrap-supermin-helper(8)) searches over the root filesystem to find out if any host files have changed and if it needs to rebuild the appliance. If these files are not cached and the root filesystem is on an HDD, then this generates lots of seeks. Using a fixed appliance avoids all this.

       3.  The appliance is loaded on demand. A simple test such as:

               time guestfish -a /dev/null run

           does not load very much of the appliance. A real libguestfs program using complicated API calls would demand-load a lot more of the appliance. Being able to store the appliance in a specified location makes the performance more predictable.
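The change detection described above (host files checked on every run, appliance rebuilt if any changed) can be illustrated with a hypothetical mtime snapshot. This is a sketch of the idea only, not the actual febootstrap-supermin-helper(8) logic.

```python
# Sketch of mtime-based change detection: record modification times of the
# files an appliance was built from, and rebuild only when one of them has
# changed. Illustrative only; not how febootstrap-supermin-helper works
# internally.
import os

def snapshot(paths):
    """Record each file's modification time at build time."""
    return {p: os.stat(p).st_mtime for p in paths}

def needs_rebuild(paths, snap):
    """True if any file changed, appeared, or disappeared since snapshot."""
    for p in paths:
        try:
            if os.stat(p).st_mtime != snap.get(p):
                return True
        except FileNotFoundError:
            return True  # a file the appliance depends on was removed
    return set(snap) != set(paths)
```

The performance point above follows directly: scanning many files for mtime changes causes lots of metadata reads (seeks on an HDD), which a fixed appliance avoids entirely.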
REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED
       By far the most effective, though not always the simplest, way to get good performance is to ensure that the appliance is launched the minimum number of times. This will probably involve changing your libguestfs application.

       Try to call "guestfs_launch" at most once per virtual machine.

       Instead of using a separate instance of guestfish(1) to make a series of changes to the same guest, use a single instance of guestfish and/or use the guestfish --listen option.

       Consider writing your program as a daemon which holds a guest open while making a series of changes. Or marshal all the operations you want to perform before opening the guest.

       You can also try adding disks from multiple guests to a single appliance. Before trying this, note the following points:

       1.  Adding multiple guests to one appliance is a security problem because it may allow one guest to interfere with the disks of another guest. Only do it if you trust all the guests, or if you can group guests by trust.

       2.  In current qemu, there is a limit of around 26 disks that can be added to the appliance. In future versions of qemu (and hence libguestfs) we hope to lift this limit.

       3.  Using libguestfs this way is complicated. Disks can have unexpected interactions: for example, if two guests use the same UUID for a filesystem (because they were cloned), or have volume groups with the same name (but see "guestfs_lvm_set_filter").

       virt-df(1) adds multiple disks by default, so the source code for this program would be a good place to start.

SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs
       The main advice is obvious: do not perform inspection (which is expensive) unless you need the results.

       If you previously performed inspection on the guest, then it may be safe to cache and reuse the results from last time.

       Some disks don't need to be inspected at all: for example, if you are creating a disk image, or if the disk image is not a VM, or if the disk image has a known layout.
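The caching advice above can be sketched as follows: reuse a previous inspection result for a disk image as long as the image itself is unchanged. The `inspect` callable and the cache key are hypothetical stand-ins for illustration, not a libguestfs API.

```python
# Sketch of the caching advice above: skip re-inspection of a disk image
# whose size and mtime have not changed. The `inspect` callable stands in
# for a real (expensive) libguestfs inspection run.
import os

_cache = {}

def cached_inspect(image_path, inspect):
    """Call `inspect(image_path)` only when the image has changed since the
    last call; otherwise return the cached result."""
    st = os.stat(image_path)
    key = (image_path, st.st_mtime, st.st_size)
    if key not in _cache:
        _cache[key] = inspect(image_path)
    return _cache[key]
```

Whether this is safe depends on your workload: if anything else can write to the image between calls, the size/mtime key may be too weak and a content hash would be needed instead.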
       Even when basic inspection ("guestfs_inspect_os") is required, auxiliary inspection operations may be avoided:

       o   Mounting disks is only necessary to get further filesystem information.

       o   Listing applications ("guestfs_inspect_list_applications") is an expensive operation on Linux, but almost free on Windows.

       o   Generating a guest icon ("guestfs_inspect_get_icon") is cheap on Linux but expensive on Windows.

TROUBLESHOOTING POOR PERFORMANCE
   ENSURE HARDWARE VIRTUALIZATION IS AVAILABLE
       Use "/proc/cpuinfo" and this page:

           http://virt-tools.org/learning/check-hardware-virt/

       to ensure that hardware virtualization is available. Note that you may need to enable it in your BIOS.

       Hardware virt is not usually available inside VMs, and libguestfs will run slowly inside another virtual machine whatever you do. Nested virtualization does not work well in our experience, and is certainly no substitute for running libguestfs on baremetal.

   ENSURE KVM IS AVAILABLE
       Ensure that KVM is enabled and available to the user that will run libguestfs. It should be safe to set 0666 permissions on "/dev/kvm", and most distributions now do this.

   PROCESSORS TO AVOID
       Avoid processors that don't have hardware virtualization, and some processors which are simply very slow (AMD Geode being a great example).

DETAILED TIMINGS USING SYSTEMTAP
       You can use SystemTap (stap(1)) to get detailed timings from libguestfs programs.

       Save the following script as "time.stap":

           global last;

           function display_time () {
                 now = gettimeofday_us ();
                 delta = 0;
                 if (last > 0)
                       delta = now - last;
                 last = now;
                 printf ("%d (+%d):", now, delta);
           }

           probe begin {
                 last = 0;
                 printf ("ready\n");
           }

           /* Display all calls to static markers. */
           probe process("/usr/lib*/libguestfs.so.0")
                     .provider("guestfs").mark("*") ? {
                 display_time();
                 printf ("\t%s %s\n", $$name, $$parms);
           }

           /* Display all calls to guestfs_* functions. */
           probe process("/usr/lib*/libguestfs.so.0")
                     .function("guestfs_[a-z]*") ? {
                 display_time();
                 printf ("\t%s %s\n", probefunc(), $$parms);
           }

       Run it as root in one window:

           # stap time.stap
           ready

       It prints "ready" when SystemTap has loaded the program. Run your libguestfs program, guestfish or a virt tool in another window. For example:

           $ guestfish -a /dev/null run

       In the stap window you will see a large amount of output, with the time taken for each step shown (microseconds in parentheses). For example:

           xxxx (+0):     guestfs_create
           xxxx (+29):    guestfs_set_pgroup g=0x17a9de0 pgroup=0x1
           xxxx (+9):     guestfs_add_drive_opts_argv g=0x17a9de0 [...]
           xxxx (+8):     guestfs_safe_strdup g=0x17a9de0 str=0x7f8a153bed5d
           xxxx (+19):    guestfs_safe_malloc g=0x17a9de0 nbytes=0x38
           xxxx (+5):     guestfs_safe_strdup g=0x17a9de0 str=0x17a9f60
           xxxx (+10):    guestfs_launch g=0x17a9de0
           xxxx (+4):     launch_start
           [etc]

       You will need to consult, and even modify, the source to libguestfs to fully understand the output.

SEE ALSO
       febootstrap(8), febootstrap-supermin-helper(8), guestfish(1), guestfs(3), guestfs-examples(3), libguestfs-make-fixed-appliance(1), stap(1), <http://libguestfs.org/>.

AUTHORS
       Richard W.M. Jones ("rjones at redhat dot com")

COPYRIGHT
       Copyright (C) 2012 Red Hat Inc. <http://libguestfs.org/>

       This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

       This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

       You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

libguestfs-1.18.1                2013-12-07                guestfs-performance(1)