AIX: VIOS backupios -mksysb - does it need to be served by a NIM server?
Post 302965104 by bakunin, Saturday 23rd of January 2016, 08:54:40 PM
Quote:
Originally Posted by maraixadm
Question is how - do I want to burn time burning it to media ? Or get a NIM server together...)
If you know your way around Linux you can easily jailbreak the HMC and become root. From there it is no problem to loop-mount an ISO image, no? Basically, the HMC is a customized Linux with the "appliance" part on top.

If you want to set up a NIM server: I have been planning for a long time to update and expand the article I once wrote, so this would be a good inducement to do so. If you have questions planning this, just ask (but please, in a different thread; we like to keep our threads focused on one theme at a time).

I hope this helps.

bakunin
guestfs-performance(1)					      Virtualization Support					    guestfs-performance(1)

NAME
       guestfs-performance - engineering libguestfs for greatest performance

DESCRIPTION
       This page documents how to get the greatest performance out of libguestfs, especially when you expect to use libguestfs to
       manipulate thousands of virtual machines or disk images.

       Three main areas are covered.  Libguestfs runs an appliance (a small Linux distribution) inside qemu/KVM.  The first two areas are:
       minimizing the time taken to start this appliance, and the number of times the appliance has to be started.  The third area is
       shortening the time taken for inspection of VMs.

BASELINE MEASUREMENTS
       Before making changes to how you use libguestfs, take baseline measurements.

   BASELINE: STARTING THE APPLIANCE
       On an unloaded machine, time how long it takes to start up the appliance:

           time guestfish -a /dev/null run

       Run this command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case.

       Explanation
           This command starts up the libguestfs appliance on a null disk, and then immediately shuts it down.  The first time you run the
           command, it will create an appliance and cache it (usually under "/var/tmp/.guestfs-*").  Subsequent runs should reuse the
           cached appliance.

       Expected results
           You should expect times under 6 seconds.  If the times you see on an unloaded machine are above this, see the section
           "TROUBLESHOOTING POOR PERFORMANCE" below.

   BASELINE: PERFORMING INSPECTION OF A GUEST
       For this test you will need an unloaded machine and at least one real guest or disk image.  If you are planning to use libguestfs
       against only X guests (eg. X = Windows), then using an X guest here would be most appropriate.  If you are planning to run
       libguestfs against a mix of guests, then use a mix of guests for testing here.

       Time how long it takes to perform inspection and mount the disks of the guest.  Use the first command if you will be using disk
       images, and the second command if you will be using libvirt:

           time guestfish --ro -a disk.img -i exit

           time guestfish --ro -d GuestName -i exit

       Run the command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case.

       Explanation
           This command starts up the libguestfs appliance on the named disk image or libvirt guest, performs libguestfs inspection on it
           (see "INSPECTION" in guestfs(3)), mounts the guest's disks, then discards all these results and shuts down.  The first time you
           run the command, it will create an appliance and cache it (usually under "/var/tmp/.guestfs-*").  Subsequent runs should reuse
           the cached appliance.
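The "run several times, discard the first few cold-cache samples" procedure can be sketched as a small Python harness. The `guestfish` command line in the comment is the one from this page; the harness itself works with any command:

```python
import statistics
import subprocess
import time

def median_hot_time(cmd, runs=5, discard=2):
    """Run `cmd` several times, drop the first `discard` (cold-cache)
    samples, and return the median wall-clock time of the rest."""
    samples = []
    for _ in range(runs):
        t0 = time.monotonic()
        subprocess.run(cmd, check=True)
        samples.append(time.monotonic() - t0)
    return statistics.median(samples[discard:])

# For the appliance start-up baseline you would measure, for example:
#   median_hot_time(["guestfish", "-a", "/dev/null", "run"])
```

Using the median rather than the mean makes the figure less sensitive to a single slow outlier run.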
       Expected results
           You should expect times which are <= 5 seconds greater than measured in the first baseline test above.  (For example, if the
           first baseline test ran in 5 seconds, then this test should run in <= 10 seconds.)

UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED
       The first time you use libguestfs, it will build and cache an appliance.  This is usually in "/var/tmp/.guestfs-*", unless you have
       set $TMPDIR, in which case it will be under that temporary directory.

       For more information about how the appliance is constructed, see "SUPERMIN APPLIANCES" in febootstrap(8).

       Every time libguestfs runs it will check that no host files used by the appliance have changed.  If any have, then the appliance is
       rebuilt.  This usually happens when a package is installed or updated on the host (eg. using programs like "yum" or "apt-get").
       The reason for reconstructing the appliance is security: the new program that has been installed might contain a security fix, and
       so we want to include the fixed program in the appliance automatically.

       These are the performance implications:

       o   The process of building (or rebuilding) the cached appliance is slow, and you can avoid this happening by using a fixed
           appliance (see below).

       o   If not using a fixed appliance, be aware that updating software on the host will cause a one-time rebuild of the appliance.

       o   "/var/tmp" (or $TMPDIR) should be on a fast disk, and have plenty of space for the appliance.

USING A FIXED APPLIANCE
       To fully control when the appliance is built, you can build a fixed appliance.  This appliance can and should be stored on a fast,
       local disk.

       To build the appliance, run the command:

           libguestfs-make-fixed-appliance <directory>

       replacing "<directory>" with the name of a directory where the appliance will be stored (normally you would name a subdirectory,
       for example: "/usr/local/lib/guestfs/appliance" or "/dev/shm/appliance").

       Then set $LIBGUESTFS_PATH (and ensure this environment variable is set in your libguestfs program), or modify your program so it
       calls "guestfs_set_path".  For example:

           export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance

       Now you can run libguestfs programs, virt tools, guestfish etc. as normal.  The programs will use your fixed appliance, and will
       not ever build, rebuild, or cache their own appliance.

       (For detailed information on this subject, see: libguestfs-make-fixed-appliance(1).)

   PERFORMANCE OF THE FIXED APPLIANCE
       In our testing we did not find that using a fixed appliance gave any measurable performance benefit, even when the appliance was
       located in memory (ie. on "/dev/shm").  However, there are three points to consider:

       1.  Using a fixed appliance stops libguestfs from ever rebuilding the appliance, meaning that libguestfs will have more predictable
           start-up times.

       2.  By default libguestfs (or rather, febootstrap-supermin-helper(8)) searches over the root filesystem to find out if any host
           files have changed and if it needs to rebuild the appliance.  If these files are not cached and the root filesystem is on an
           HDD, then this generates lots of seeks.  Using a fixed appliance avoids all this.

       3.  The appliance is loaded on demand.  A simple test such as:

               time guestfish -a /dev/null run

           does not load very much of the appliance.  A real libguestfs program using complicated API calls would demand-load a lot more
           of the appliance.  Being able to store the appliance in a specified location makes the performance more predictable.
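From program code, the same $LIBGUESTFS_PATH setting can be applied before the first handle is launched. A minimal sketch; the default directory below is just the example path used above, and the `guestfs_set_path` alternative mentioned in the comment is the C-API equivalent:

```python
import os

def use_fixed_appliance(directory="/usr/local/lib/guestfs/appliance"):
    """Point libguestfs at a pre-built fixed appliance.

    `directory` is whatever you passed to libguestfs-make-fixed-appliance.
    Returns True if the directory exists and $LIBGUESTFS_PATH was set,
    False to signal falling back to the normal cached appliance."""
    if not os.path.isdir(directory):
        return False
    # Must be set before the first libguestfs handle is launched; with
    # the C API the equivalent call is guestfs_set_path().
    os.environ["LIBGUESTFS_PATH"] = directory
    return True
```

Checking for the directory first means a missing or half-built fixed appliance degrades to the default (slower but working) behaviour instead of failing at launch time.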
REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED
       By far the most effective, though not always the simplest, way to get good performance is to ensure that the appliance is launched
       the minimum number of times.  This will probably involve changing your libguestfs application.

       Try to call "guestfs_launch" at most once per virtual machine.

       Instead of using a separate instance of guestfish(1) to make a series of changes to the same guest, use a single instance of
       guestfish and/or use the guestfish --listen option.

       Consider writing your program as a daemon which holds a guest open while making a series of changes.  Or marshal all the operations
       you want to perform before opening the guest.

       You can also try adding disks from multiple guests to a single appliance.  Before trying this, note the following points:

       1.  Adding multiple guests to one appliance is a security problem because it may allow one guest to interfere with the disks of
           another guest.  Only do it if you trust all the guests, or if you can group guests by trust.

       2.  In current qemu, there is a limit of around 26 disks that can be added to the appliance.  In future versions of qemu (and hence
           libguestfs) we hope to lift this limit.

       3.  Using libguestfs this way is complicated.  Disks can have unexpected interactions: for example, if two guests use the same UUID
           for a filesystem (because they were cloned), or have volume groups with the same name (but see "guestfs_lvm_set_filter").

       virt-df(1) adds multiple disks by default, so the source code for this program would be a good place to start.

SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs
       The main advice is obvious: do not perform inspection (which is expensive) unless you need the results.

       If you previously performed inspection on the guest, then it may be safe to cache and reuse the results from last time.

       Some disks don't need to be inspected at all: for example, if you are creating a disk image, or if the disk image is not a VM, or
       if the disk image has a known layout.
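One way to implement the "cache and reuse the results from last time" idea is to key the cached data on the image's path, size, and modification time, so a changed image is re-inspected automatically. A minimal sketch; the cache file location and the `inspect` callback (which would wrap your real guestfs inspection code) are illustrative, not part of the libguestfs API:

```python
import json
import os

def disk_key(path):
    """Cheap change detection: path plus size and mtime of the image."""
    st = os.stat(path)
    return "%s:%d:%d" % (path, st.st_size, st.st_mtime_ns)

def cached_inspect(path, inspect, cache_file="/var/tmp/inspect-cache.json"):
    """Return cached inspection results for `path`, calling `inspect(path)`
    only when the image is new or has changed since the last run."""
    try:
        with open(cache_file) as f:
            cache = json.load(f)
    except (OSError, ValueError):
        cache = {}              # missing or corrupt cache: start fresh
    key = disk_key(path)
    if key not in cache:
        cache[key] = inspect(path)
        with open(cache_file, "w") as f:
            json.dump(cache, f)
    return cache[key]
```

Size-plus-mtime is not tamper-proof, but for trusted images it avoids re-reading the disk just to decide whether the old inspection data is still valid.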
       Even when basic inspection ("guestfs_inspect_os") is required, auxiliary inspection operations may be avoided:

       o   Mounting disks is only necessary to get further filesystem information.

       o   Listing applications ("guestfs_inspect_list_applications") is an expensive operation on Linux, but almost free on Windows.

       o   Generating a guest icon ("guestfs_inspect_get_icon") is cheap on Linux but expensive on Windows.

TROUBLESHOOTING POOR PERFORMANCE
   ENSURE HARDWARE VIRTUALIZATION IS AVAILABLE
       Use "/proc/cpuinfo" and this page:

           http://virt-tools.org/learning/check-hardware-virt/

       to ensure that hardware virtualization is available.  Note that you may need to enable it in your BIOS.

       Hardware virt is not usually available inside VMs, and libguestfs will run slowly inside another virtual machine whatever you do.
       Nested virtualization does not work well in our experience, and is certainly no substitute for running libguestfs on baremetal.

   ENSURE KVM IS AVAILABLE
       Ensure that KVM is enabled and available to the user that will run libguestfs.  It should be safe to set 0666 permissions on
       "/dev/kvm", and most distributions now do this.

   PROCESSORS TO AVOID
       Avoid processors that don't have hardware virtualization, and some processors which are simply very slow (AMD Geode being a great
       example).

DETAILED TIMINGS USING SYSTEMTAP
       You can use SystemTap (stap(1)) to get detailed timings from libguestfs programs.

       Save the following script as "time.stap":

           global last;

           function display_time () {
                 now = gettimeofday_us ();
                 delta = 0;
                 if (last > 0)
                       delta = now - last;
                 last = now;

                 printf ("%d (+%d):", now, delta);
           }

           probe begin {
                 last = 0;
                 printf ("ready\n");
           }

           /* Display all calls to static markers. */
           probe process("/usr/lib*/libguestfs.so.0")
                     .provider("guestfs").mark("*") ? {
                 display_time();
                 printf ("\t%s %s\n", $$name, $$parms);
           }

           /* Display all calls to guestfs_* functions. */
           probe process("/usr/lib*/libguestfs.so.0")
                     .function("guestfs_[a-z]*") ? {
                 display_time();
                 printf ("\t%s %s\n", probefunc(), $$parms);
           }

       Run it as root in one window:

           # stap time.stap
           ready

       It prints "ready" when SystemTap has loaded the program.

       Run your libguestfs program, guestfish or a virt tool in another window.  For example:

           $ guestfish -a /dev/null run

       In the stap window you will see a large amount of output, with the time taken for each step shown (microseconds in parentheses).
       For example:

           xxxx (+0):       guestfs_create
           xxxx (+29):      guestfs_set_pgroup g=0x17a9de0 pgroup=0x1
           xxxx (+9):       guestfs_add_drive_opts_argv g=0x17a9de0 [...]
           xxxx (+8):       guestfs_safe_strdup g=0x17a9de0 str=0x7f8a153bed5d
           xxxx (+19):      guestfs_safe_malloc g=0x17a9de0 nbytes=0x38
           xxxx (+5):       guestfs_safe_strdup g=0x17a9de0 str=0x17a9f60
           xxxx (+10):      guestfs_launch g=0x17a9de0
           xxxx (+4):       launch_start
           [etc]

       You will need to consult, and even modify, the source to libguestfs to fully understand the output.

SEE ALSO
       febootstrap(8), febootstrap-supermin-helper(8), guestfish(1), guestfs(3), guestfs-examples(3), libguestfs-make-fixed-appliance(1),
       stap(1), <http://libguestfs.org/>.

AUTHORS
       Richard W.M. Jones ("rjones at redhat dot com")

COPYRIGHT
       Copyright (C) 2012 Red Hat Inc. <http://libguestfs.org/>

       This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License
       as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

       This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for more details.

       You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free
       Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

libguestfs-1.18.1                                           2013-12-07                                          guestfs-performance(1)