Operating Systems > AIX
Rebooting redundant VIOs and mirroring of PVs they serve to client LPARs
Post 302968200 by MichaelFelt on Saturday 5th of March 2016 04:52:42 AM
What was not clear to me is where the storage was physically located.

It sounds like the VIOS was using local storage (local, i.e., physical disks in the machine, not from a SAN) and the client LPAR (aka virtual machine) was using LVM mirroring.

This was the preferred (read: almost the only) solution back in 2005, before SAN storage was common. HOWEVER, if you have SAN-backed storage, what is 'common' when you also have local storage (i.e., in the CEC, aka compute node) is to use the local disks for rootvg, DVD repositories, secure logs, etc., and SAN storage with MPIO for hosting VM (or client LPAR) storage; a sketch of such a mapping follows below.
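For illustration, here is a minimal sketch of mapping a SAN-backed MPIO LUN to a client LPAR from the VIOS restricted shell (padmin). The hdisk, vhost, and device names are made up for the example and will differ on your system:

Code:
$ lsdev -type disk                                   # list the disks the VIOS can see
$ chdev -dev hdisk4 -attr reserve_policy=no_reserve  # needed so both VIOS can open the LUN
$ mkvdev -vdev hdisk4 -vadapter vhost0 -dev lpar1_rootvg  # map the LUN to the client adapter
$ lsmap -vadapter vhost0                             # verify the mapping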

re: active/passive LVM mirroring: the consistency check was there to help 'guess' which mirror was last written to when recovering from a crash. In your case, while the VIOS was down, one path was down, so it is clear which disk is stale and needs to be synced.

Over time, the disks will sync automatically, but you can speed the process up by running syncvg (lsvg vgName will tell you whether there are stale disks and, if so, how many stale PPs). For example:
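A minimal sketch of checking for and clearing stale partitions on the client LPAR, assuming the mirrored volume group is rootvg (substitute your own VG name):

Code:
$ lsvg rootvg | grep -i stale   # shows the STALE PVs and STALE PPs counters
$ syncvg -v rootvg              # resynchronize all stale partitions in the volume group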
 

10 More Discussions You Might Find Interesting

1. IP Networking

to serve or be served??

I have two machines on my network - one OSX mac and one linux box. The mac is my main workhorse, and the linux box does occasional chores and webserving. Currently the mac shares (via NFS) files with the Linux box. Would it be less demanding on the mac if I made it a client, and moved my files... (2 Replies)
Discussion started by: mistafeesh

2. AIX

rebooting vio client

Hi, I would like to reboot a VIO client but I am not able to access it (I am not able to get a putty session). I am able to get putty on the VIO server; is there any command by which I can reboot the VIO client from the VIO server? (3 Replies)
Discussion started by: manoj.solaris

3. AIX

DUAL VIOS & Client LPAR hangs at 25b3

I have a dual VIO (IBM Virtual I/O) setup on a p570: two VIO servers (VIOS) and many LPAR clients. VIO (latest version + service pack + applied the fix) and AIX 6.1 ML2. When both VIOs are running and I turn on a client LPAR, the LPAR hangs at LED 25b3 for more than 1 hour, then it... (2 Replies)
Discussion started by: filosophizer

4. HP-UX

How to Mirror LV and umirror after to change PVs...

Greetings. I'm running HP-UX B.11.11 and I'm not sure how to handle this request: "mirror the current 5 LVs on vgSPAN to the new LUNs assigned to the VG, then unmirror the LVs and finally return the 12 LUNs to SAN storage". The existing LVs were extended to accommodate a user request to extend 2 FS on... (3 Replies)
Discussion started by: hedkandi

5. AIX

VIOS IP address - separate vlan for vios servers ?

Hello, Let's say for simplicity that I do not use any VLAN config inside my server: one LPAR group uses HEA physical port 1, another group HEA physical port 2. Physical port 1 is configured as vlan1 on the external switch, physical port 2 as vlan2. What is the common practice; should I isolate my vios... (0 Replies)
Discussion started by: vilius

6. AIX

Shared Disk in VIOS between two LPARs ?

Is there any way to create a shared virtual disk between two LPARs, like you can do it using storage through fiber on two servers? Trying to simulate HACMP between two LPARs. (1 Reply)
Discussion started by: filosophizer

7. HP-UX

PVS command in HP-UX

Dear Engineer, Is there any command in HP-UX that works similarly to the pvs command in Linux? With Best Regards, Md. Abdullah-Al Kauser (4 Replies)
Discussion started by: makauser

8. UNIX for Advanced & Expert Users

Unable to install client AIX LPAR to vscsi hdisk provided from VIOS

Hi everybody, I have a Power5 server with 4 internal hdisks of 70 GB each. The VIOS server was installed via the Virtual I/O Server Image Repository on the HMC. HMC release: 7.7.0. VIOS rootvg is installed on 2 disks (these disks were merged into one storage pool during the VIOS install process), and the 2 other hdisks... (2 Replies)
Discussion started by: Ravil Khalilov

9. AIX

Chef client on VIOs? How do you manage your VIO configs?

I know the VIOs are generally to be treated as appliances and one should never drop down to oem_setup_env. In reality, however, oem is a very useful tool to get the job done. So that leads me to the question of using the Chef client on a VIO. Currently there is a big push to manage all our *nix... (4 Replies)
Discussion started by: RecoveryOne

10. AIX

Need to replace a broken PV in a VIO VG used for client LPARs (and it won't release the old one)

I have a broken PV in a VIO VG that's used to support client LPARs using LVs. On the client LPAR, I reduced all PVs from the relevant client VG and thus deleted it. I.e. there is no client LPAR using the VIO VG. Yet when I try to reducevg the VIO VG, it complains that the LV hosted on the PV is... (2 Replies)
Discussion started by: maraixadm
vgmove(1M)

NAME
vgmove - move data from an old set of disks in a volume group to a new set of disks

SYNOPSIS
vgmove [-A autobackup] [-p] -m diskmapfile vg_name
vgmove [-A autobackup] [-p] -i diskfile -m diskmapfile vg_name

DESCRIPTION
The vgmove command migrates data from the existing set of disks in a volume group to a new set of disks. After the command completes successfully, the new set of disks will belong to the same volume group. The command is intended to migrate data on a volume group from old storage to new storage.

The diskmapfile specifies the list of source disks to move data from and the list of destination disks to move data to. The user may choose to list only a subset of the existing physical volumes in the volume group that need to be migrated to a new set of disks. The format of the diskmapfile is shown below:

     source_pv_1    destination_pv_1_1 destination_pv_1_2 ...
     source_pv_2    destination_pv_2_1 destination_pv_2_2 ...
     ...
     source_pv_n    destination_pv_n_1 destination_pv_n_2 ...

If a destination disk is not already part of the volume group, it will be added; see vgextend(1M). Upon successful completion, the source disk will be automatically removed from the volume group; see vgreduce(1M). After a successful migration, the destination disks are added to the LVM configuration files, namely /etc/lvmtab or /etc/lvmtab_p, and the source disks, along with their alternate links, are removed from the LVM configuration files. A sample diskmapfile is shown below:

     /dev/disk/disk1    /dev/disk/disk51 /dev/disk/disk52
     /dev/disk/disk2    /dev/disk/disk51
     /dev/disk/disk3    /dev/disk/disk53

The diskmapfile can be created manually, or it can be generated automatically using the -i diskfile and -m diskmapfile options. The diskfile argument contains a list of destination disks, one per line, such as the sample file below:

     /dev/disk/disk51
     /dev/disk/disk52
     /dev/disk/disk53

When the -i option is given, vgmove reads the list of destination disks from diskfile, generates the source-to-destination mapping, and saves it to diskmapfile.

The volume group must be activated before running the vgmove command. If the command is interrupted before it completes, the volume group is left in the same state it was in at the beginning of the command. The migration can be continued by running the command again with the same options and disk mapping file.

Options and Arguments
The vgmove command recognizes the following options and arguments:

vg_name          The path name of the volume group.

-A autobackup    Set automatic backup for this invocation of vgmove. autobackup can have one of the following values:

                 y    Automatically back up configuration changes made to the volume group. This is the default. After this command executes, vgcfgbackup is executed for the volume group; see vgcfgbackup(1M).

                 n    Do not back up configuration changes this time.

-m diskmapfile   Specify the name of the file containing the source-to-destination disk mapping. If the -i option is also given, vgmove will generate the disk mapping and save it to this file. (Note that if the diskmapfile already exists, it will be overwritten.) Otherwise, vgmove will perform the data migration using this diskmapfile.

-i diskfile      Specify the name of the file containing the list of destination disks. This option is used with the -m option to generate the diskmapfile. When the -i option is used, no volume group data is moved.

-p               Preview the actions to be taken but do not move any volume group data.

Shared Volume Group Considerations
For volume group versions 1.0 and 2.0, vgmove cannot be used if the volume group is activated in shared mode. For volume groups version 2.1 (or higher), vgmove can be performed when the volume group is activated in shared, exclusive, or standalone mode. Note that the lvmpud daemon must be running on all the nodes sharing a volume group activated in shared mode; see lvmpud(1M).

When a node wants to share the volume group, the user must first update that node's LVM configuration if physical volumes were moved in or out of the volume group while the volume group was not activated on that node. LVM shared mode is currently only available in Serviceguard clusters.

EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed. If LANG is not specified or is null, it defaults to "C" (see lang(5)). If any internationalization variable contains an invalid setting, all internationalization variables default to "C" (see environ(5)).

EXAMPLES
Move data in a volume group from the source disks listed in a disk mapping file to the corresponding destination disks, removing the source disks from the volume group after the migration:

     vgmove -m diskmapfile vg_name

Generate a source-to-destination disk map file for a volume group, where the destination disks are listed in diskfile:

     vgmove -i diskfile -m diskmapfile vg_name

SEE ALSO
lvmpud(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgextend(1M), vgreduce(1M), intro(7), lvm(7).
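Putting the man page together, a minimal usage sketch, assuming the -i/-m/-p options described above and hypothetical names (volume group /dev/vg01, map file /tmp/vg01.map, LUN list /tmp/newdisks):

Code:
# list the new destination LUNs, one per line (hypothetical device names)
cat > /tmp/newdisks <<EOF
/dev/disk/disk51
/dev/disk/disk52
EOF

# generate the source-to-destination mapping without moving any data
vgmove -i /tmp/newdisks -m /tmp/vg01.map /dev/vg01

# preview the migration, then run it for real
vgmove -p -m /tmp/vg01.map /dev/vg01
vgmove -m /tmp/vg01.map /dev/vg01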