Full Discussion: vio server and vio client
Operating Systems > AIX > vio server and vio client — Post 302339999 by dig1tal on Saturday 1st of August 2009, 01:43:45 PM
You have a few options here, depending on your environment.

In the IVM, under View/Modify Virtual Storage, you can create virtual disks (logical volumes carved out of VIOS-owned storage) to back the AIX LPARs. The downside of this option is possible I/O contention, since the VIOS and the AIX LPAR share the same physical disk.

A better option (requiring a SAN) is to assign separate LUNs to the VIOS and to the AIX LPAR. That makes recovery easy: if you lose the VIOS, you can remap the LUN and migrate the AIX LPAR to another VIOS.
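For the SAN approach, the VIOS-side mapping can be sketched roughly as below (run from the padmin restricted shell; the hdisk number, the vhost adapter, and the virtual target device name are placeholders for whatever your environment actually shows):

```shell
# On the VIOS, as padmin: list physical volumes and existing mappings
lspv
lsmap -all

# Map a dedicated SAN LUN (assumed here to be hdisk2) to the client's
# virtual SCSI server adapter (assumed here to be vhost0)
mkvdev -vdev hdisk2 -vadapter vhost0 -dev aix_lpar1_rootvg

# Verify the new virtual target device
lsmap -vadapter vhost0
```

Because the LUN is dedicated to the client LPAR, recovering from a failed VIOS is just a matter of zoning the same LUN to a replacement VIOS and recreating the mapping there.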
 

10 More Discussions You Might Find Interesting

1. AIX

rebooting vio client

Hi, I would like to reboot a VIO client, but I am not able to access it (I cannot get a PuTTY session to it). I am able to get PuTTY to the VIO server. Is there a command I can use from the VIO server to reboot the VIO client? (3 Replies)
Discussion started by: manoj.solaris

2. AIX

Finding cpu information on vio client

Hi, I have a single p-series blade with a single physical dual-core CPU, with the VIO server installed on it. I created a VIO client and allocated 0.9 of each CPU. Now when I run the prtconf command on the VIO client it shows "2" as the number of processors. My query: which command will... (1 Reply)
Discussion started by: manoj.solaris

3. AIX

Unable to connect VIO client

Hi, I am facing a very strange issue on my VIO server. 5 VIO clients are configured; I can connect to 3 of them but am unable to connect to the other 2. The IP address, subnet mask, and gateway are correct. I have rebooted and reconfigured the IP address, but the issue persists. Kindly suggest how to... (0 Replies)
Discussion started by: manoj.solaris

4. AIX

how will i know if a lun has been already mapped to a vio client

Hi, I'm logged in to the VIO servers now. When I run # lspv | wc -l I get a count of 6246. How can I tell whether a LUN has already been mapped to a VIO client, or is still free and not mapped to any of the VIO clients? (1 Reply)
Discussion started by: newtoaixos

5. AIX

vio server ethernet to vio client ethernet(concepts confusing)

Hi, on the VIO server when I do # lsattr -El hdisk*, I get a PVID. The same PVID is also seen when I run the lspv command on the VIO client partition. This way I'm able to identify the LUN using the PVID. Similarly, how does the VIO client partition get the virtual Ethernet/SCSI client adapter... (1 Reply)
Discussion started by: newtoaixos

6. AIX

Mirroring vio server

Hi, I would like to know: if I install the VIO server on local disk, mirror rootvg, and create AIX VIO client LPARs, what happens if a single local hard disk fails? Will it affect the LPARs? Will the LPARs be able to boot? What needs to be done? (1 Reply)
Discussion started by: manoj.solaris

7. AIX

cdrom confusion on the vio client lpar

Hi, on my VIO server I have the output below: $ lsvopt | grep -i SAPSITGS sapsitgs_cdrom TL12UP.iso 3182. On my VIO client LPAR I have: root@sapsitgs:/ # lsdev -Cc cdrom cd0 Available Virtual SCSI Optical Served by VIO Server cd1... (1 Reply)
Discussion started by: newtoaixos

8. AIX

VIO Server

Hi, I am facing an issue on a VIO server. When I run bosboot -ad /dev/hdisk0 I get an error: trustchk: Verification of attributes failed: /usr/sbin/bootinfo : accessauths. Regards, vjm (8 Replies)
Discussion started by: vjm

9. AIX

Chef client on VIOs? How do you manage your VIO configs?

I know the VIOs are generally to be treated as an appliance and one should never drop down to oem_setup_env. In reality, however, oem_setup_env is a very useful tool for getting the job done. That leads me to the question of running the Chef client on a VIO. Currently there is a big push to manage all our *nix... (4 Replies)
Discussion started by: RecoveryOne

10. AIX

Need to replace a broken PV in a VIO VG used for client LPARs (and it won't release the old one)

I have a broken PV in a VIO VG that's used to support client LPARs via LVs. On the client LPAR, I removed all PVs from the relevant client VG and then deleted it, i.e. no client LPAR is using the VIO VG. Yet when I try to reducevg the VIO VG, it complains that the LV hosted on the PV is... (2 Replies)
Discussion started by: maraixadm
volrecover(8)						      System Manager's Manual						     volrecover(8)

NAME
       volrecover - Performs volume recovery operations

SYNOPSIS
       /sbin/volrecover [-g diskgroup] [-sb] [-o options] [volume | medianame...]

OPTIONS
       Options that can be specified to volrecover are:

       -s   Starts disabled volumes that are selected by the operation. Volumes are started before any other recovery actions are taken, using the -o delayrecover start option. This requests that any operations that can be delayed in starting a volume be delayed; in other words, only those operations necessary to make a volume available for use occur. Other operations, such as mirror resynchronization, attaching of stale plexes and subdisks, and recovery of stale RAID5 parity, are normally delayed.

       -b   Performs recovery operations in the background. With this option, volrecover puts itself in the background to attach stale plexes and subdisks and to resynchronize mirrored volumes and RAID5 parity. If used with -s, volumes are started before recovery begins in the background.

       -n   Performs no recovery operations. If used with -s, volumes are started but no other actions are taken. If used with -p, the only action of volrecover is to print a list of startable volumes.

       -p   Prints the list of selected volumes that are startable. For each startable volume, a line is printed containing the volume name, the disk group ID of the volume, the volume's usage type, and a list of state flags pertaining to mirrors of the volume. State flags and their meanings are:

            kdetach   One of the mirrors was detached by an I/O failure.
            stale     One of the mirrors needs recovery, but the recovery relates to an administrative operation, not an I/O failure.
            A third state indicates that neither kdetach nor stale is appropriate for the volume.

       -v   Displays information about each task started by volrecover. For recovery operations (as opposed to start operations), a completion status is printed when each task completes.

       -V   Displays the commands that volrecover would execute, without actually executing them.

       -g diskgroup
            Limits operation of the command to the given disk group, as specified by disk group ID or disk group name. If no volume or medianame operands are given, all disks in this disk group are recovered; otherwise, the volume and medianame operands are evaluated relative to the given disk group. Without the -g option, if no operands are given, all volumes in all imported disk groups are recovered; otherwise, the disk group for each medianame operand is determined by name uniqueness within all disk groups.

       -o options
            Passes the given option arguments to the -o options of the volplex att and volume start operations generated by volrecover. An option argument of the form prefix:options can be specified to restrict the set of commands the -o option applies to. The defined prefixes select, respectively: all invocations of the volume utility (volume starts, mirror resynchronizations, RAID5 parity rebuilds, and RAID5 subdisk recoveries); all invocations of the volplex utility (currently used only for attaching plexes); plex attach operations specifically; volume start operations specifically; subdisk recoveries; and mirror resynchronization and RAID5 parity recovery.

DESCRIPTION
       The volrecover program performs plex attach, RAID5 subdisk recovery, and resynchronization operations for the named volumes, or for volumes residing on the named disks (medianame). If no medianame or volume operands are specified, the operation applies to all volumes (or to all volumes in the specified disk group). If -s is specified, disabled volumes are started. With -s and -n, volumes are started but no other recovery takes place.

       Recovery operations are started in an order that prevents two concurrent operations from involving the same disk. Operations that involve unrelated disks run in parallel.

EXAMPLES
       To recover, in the background, any detached subdisks or plexes that resulted from replacement of a specified disk, use the command:

              # volrecover -b medianame

       To monitor the operations, use the command:

              # volrecover -v medianame

SEE ALSO
       volintro(8), volplex(8), volume(8)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.