Operating Systems > AIX
Rebooting redundant VIOs and mirroring of PVs they serve to client LPARs
Post 302968236 by maraixadm on Sunday 6th of March 2016 09:57:32 AM
Yep, we're serving disk local to the VIOs in chunks as LVs to the client LPARs. Thank you for your explanation of active/passive MWC ("in your case, ... one path was down and it is clear..."); that explanation, in that clear a form, was missing from the IBM docs. We had one good update with no complaints; headed for the next.
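For anyone landing here later, a minimal sketch of the checks that go with that kind of rolling VIOS update, assuming the client LPAR mirrors its rootvg across one hdisk from each VIOS (the VG and hdisk names are illustrative):

    # Before taking a VIOS down: both mirror copies should be clean
    lsvg rootvg | grep -i stale      # STALE PPs should read 0
    lsvg -p rootvg                   # both hdisks listed as "active"

    # While one VIOS is rebooting, its disk shows "missing" and writes made
    # in the meantime leave stale partitions that still have to be resynced.

    # After the VIOS is back:
    varyonvg rootvg                  # reactivate the returned (missing) disk
    syncvg -v rootvg                 # resynchronize the stale partitions
    lsvg rootvg | grep -i stale      # confirm STALE PPs are back to 0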
 

10 More Discussions You Might Find Interesting

1. IP Networking

to serve or be served??

I have two machines on my network - one OS X Mac and one Linux box. The Mac is my main workhorse, and the Linux box does occasional chores and webserving. Currently the Mac shares (via NFS) files with the Linux box. Would it be less demanding on the Mac if I made it a client, and moved my files... (2 Replies)
Discussion started by: mistafeesh
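No answer is preserved here, but swapping the roles is mostly two steps; a sketch assuming the Linux box exports /srv/files and the Mac mounts it (hostnames and paths are made up):

    # On the Linux box: /etc/exports entry for the Mac
    /srv/files   mac.example.lan(rw,sync,no_subtree_check)

    # apply the export table
    exportfs -ra

    # On the Mac: mount the export
    sudo mount -t nfs linuxbox.example.lan:/srv/files /Users/me/files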

2. AIX

rebooting vio client

Hi, I would like to reboot a VIO client, but I am not able to access it (I cannot get a PuTTY session to it). I can get a PuTTY session to the VIO server. Is there any command I can use from the VIO server to reboot the VIO client? (3 Replies)
Discussion started by: manoj.solaris
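The short version of the usual answer: the VIO server cannot reboot its clients directly; the restart is driven from the HMC. A hedged sketch (the managed system and partition names are placeholders):

    # From the HMC command line:
    # graceful OS reboot (needs a working RMC connection to the client)
    chsysstate -m Server-9117-570 -r lpar -n aixclient01 -o osshutdown --restart

    # hard restart if the OS is unreachable
    chsysstate -m Server-9117-570 -r lpar -n aixclient01 -o shutdown --immed --restart

    # confirm the partition came back
    lssyscfg -m Server-9117-570 -r lpar -F name,state | grep aixclient01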

3. AIX

DUAL VIOS & Client LPAR hangs at 25b3

I have a dual VIO (IBM Virtual I/O) setup on a p570: two VIO servers (VIOS) and many LPAR clients. The VIOS is at the latest version + service pack + the applied fix, and the clients run AIX 6.1 ML2. When both VIOs are running and I turn on a client LPAR, the LPAR hangs at LED 25b3 for more than 1 hour, then it... (2 Replies)
Discussion started by: filosophizer
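Not a diagnosis of LED 25b3, but the checks that usually come first are the vSCSI mappings on each of the two VIOS; vhost0 below is a placeholder for whichever adapter serves the hung LPAR:

    # On each VIOS, as padmin:
    lsmap -all                   # every vhost adapter with its backing devices
    lsmap -vadapter vhost0       # just the adapter serving the hung client
    lsdev -virtual               # virtual adapters and target devices on this VIOS
    # The vhost slot numbers must match the client profile on the HMC.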

4. HP-UX

How to Mirror LV and umirror after to change PVs...

Greetings. I'm running HP-UX B.11.11 and I'm not sure how to handle this request: "mirror the current 5 LVs on vgSPAN to the new LUNs assigned to the VG, then unmirror the LVs and finally return the 12 LUNs to SAN storage". The existing LVs were extended to accommodate a user request to extend 2 FS on... (3 Replies)
Discussion started by: hedkandi
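For the record, the usual MirrorDisk/UX sequence for that request looks roughly like the sketch below; the device files, LV name and LUN count are illustrative:

    # 1. Bring the new LUNs into the VG
    pvcreate /dev/rdsk/c10t0d1
    vgextend vgSPAN /dev/dsk/c10t0d1

    # 2. Mirror each LV onto the new LUNs (repeat per LV)
    lvextend -m 1 /dev/vgSPAN/lvol1 /dev/dsk/c10t0d1

    # 3. Once the copies are current, drop the copy on the old LUNs
    lvreduce -m 0 /dev/vgSPAN/lvol1 /dev/dsk/c4t0d2

    # 4. Remove the old LUNs so they can be handed back to the SAN
    vgreduce vgSPAN /dev/dsk/c4t0d2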

5. AIX

VIOS IP address - separate vlan for vios servers ?

Hello. Let's say for simplicity that I do not use any VLAN config inside my server: one LPAR group uses HEA physical port 1, another group HEA physical port 2. Physical port 1 is configured as VLAN 1 on the external switch, physical port 2 as VLAN 2. What is the common practice - should I isolate my vios... (0 Replies)
Discussion started by: vilius
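Whichever way the isolation question is answered, putting the VIOS management IP on its own tagged VLAN over the SEA is a common pattern; a rough sketch, with the adapter names, VLAN ID and addresses all made up:

    # On the VIOS, as padmin:
    mkvdev -vlan ent4 -tagid 99          # ent4 = the SEA, 99 = management VLAN
    # assuming the new VLAN device came up as ent5 / interface en5:
    mktcpip -hostname vios1 -inetaddr 192.168.99.10 -interface en5 \
            -netmask 255.255.255.0 -gateway 192.168.99.1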

6. AIX

Shared Disk in VIOS between two LPARs ?

Is there any way to create a shared virtual disk between two LPARs, like you can do with SAN storage over Fibre on two servers? Trying to simulate HACMP between two LPARs. (1 Reply)
Discussion started by: filosophizer
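The usual answer is yes, provided the backing device is a whole hdisk (not an LV) and SCSI reservations are released; a hedged sketch with the device and VTD names made up (whether the second mapping needs -f depends on the VIOS level):

    # On the VIOS, as padmin:
    chdev -dev hdisk4 -attr reserve_policy=no_reserve
    mkvdev -vdev hdisk4 -vadapter vhost0 -dev nodeA_shared
    mkvdev -vdev hdisk4 -vadapter vhost1 -dev nodeB_shared -f
    lsmap -all | grep hdisk4             # verify both mappings exist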

7. HP-UX

PVS command in HP-UX

Dear Engineer, Is there any command in HP-UX that works similar to the pvs command in Linux? With Best Regards, Md. Abdullah-Al Kauser (4 Replies)
Discussion started by: makauser
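There is no single drop-in equivalent, but the closest HP-UX reporting commands are sketched below (the device file is illustrative):

    pvdisplay /dev/dsk/c0t0d0      # details of one physical volume
    vgdisplay -v                   # every active VG with its PVs and LVs
    strings /etc/lvmtab            # quick list of VGs and their member PVs
    ioscan -fnC disk               # all disk devices the system can see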

8. UNIX for Advanced & Expert Users

Unable to install client AIX LPAR to vscsi hdisk provided from VIOS

Hi everybody, I have a Power5 server with 4 internal hdisks of 70 GB each. The VIOS was installed via the Virtual I/O Server Image Repository on the HMC (HMC release 7.7.0). The VIOS rootvg is installed on 2 disks (these disks were merged into one storage pool during the VIOS install process), and the 2 other hdisks... (2 Replies)
Discussion started by: Ravil Khalilov
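The first things to check in that situation are on the VIOS side; a sketch of the usual listing commands (nothing assumed beyond default device names):

    # On the VIOS, as padmin:
    lsmap -all          # which vhost has which backing device / VTD
    lsdev -virtual      # vhost adapters and virtual target devices
    lssp                # storage pools and their free space
    # The vhost slot reported by lsmap must match the virtual SCSI slot in
    # the client partition's HMC profile, or the install will see no disk.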

9. AIX

Chef client on VIOs? How do you manage your VIO configs?

I know the VIOs are generally to be treated as appliances and one should never drop down to oem_setup_env. In reality, however, oem_setup_env is a very useful tool to get the job done. So that leads me to the question of using the Chef client on a VIO. Currently there is a big push to manage all our *nix... (4 Replies)
Discussion started by: RecoveryOne
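For what it's worth, most tooling that touches a VIOS ends up wrapping oem_setup_env one way or another; the piped one-liner below is a widely used (though unsupported) way to run a single root command from the padmin shell:

    # As padmin on the VIOS:
    echo "lslpp -L | grep -i chef" | oem_setup_env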

10. AIX

Need to replace a broken PV in a VIO VG used for client LPARs (and it won't release the old one)

I have a broken PV in a VIO VG that's used to support client LPARs using LVs. On the client LPAR, I reduced all PVs from the relevant client VG and thus deleted it. I.e. there is no client LPAR using the VIO VG. Yet when I try to reducevg the VIO VG, it complains that the LV hosted on the PV is... (2 Replies)
Discussion started by: maraixadm
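A sketch of the usual way out, assuming the stuck LV is still mapped to a client vhost as a virtual target device; the VTD, LV, VG and hdisk names are placeholders:

    # On the VIOS, as padmin: find and remove the mapping that pins the LV
    lsmap -all                       # locate the VTD backed by the stuck LV
    rmvdev -vtd vtscsi3              # remove that virtual target device

    # Then as root (oem_setup_env), standard AIX LVM:
    rmlv -f clientlv01               # delete the now-unmapped LV
    reducevg datavg hdisk7           # the broken PV should now come out
    reducevg -d -f datavg hdisk7     # last resort: -d deallocates LVs, -f forces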
CCD(4)							   BSD Kernel Interfaces Manual 						    CCD(4)

NAME
     ccd -- Concatenated disk driver

SYNOPSIS
     pseudo-device ccd [count]

DESCRIPTION
     The ccd driver provides the capability of combining one or more
     disks/partitions into one virtual disk.

     This document assumes that you're familiar with how to generate kernels,
     how to properly configure disks and pseudo-devices in a kernel
     configuration file, and how to partition disks.

     Note that the 'raw' partitions of the disks must not be combined. Each
     component partition should be offset at least one cylinder from the
     beginning of the component disk. This avoids potential conflicts between
     the component disk's disklabel and the ccd's disklabel.

     The kernel will only allow component partitions of type FS_CCD. For now,
     however, it allows partitions of all types, since some ports lack
     support for an on-disk BSD disklabel. A partition of type FS_UNUSED may
     be rejected because the device driver of the component disk will refuse
     it.

     In order to compile in support for the ccd, you must add a line similar
     to the following to your kernel configuration file:

           pseudo-device ccd 4    # concatenated disk devices

     The count argument is how many ccds memory is allocated for at boot
     time. In this example, no more than 4 ccds may be configured.

     A ccd may be either serially concatenated or interleaved. To serially
     concatenate the partitions, specify an interleave factor of 0. If a ccd
     is interleaved correctly, a ``striping'' effect is achieved, which can
     increase performance. Since the interleave factor is expressed in units
     of DEV_BSIZE, one must account for sector sizes other than DEV_BSIZE in
     order to calculate the correct interleave. The kernel will not allow an
     interleave factor less than the size of the largest component sector
     divided by DEV_BSIZE.

     Note that best performance is achieved if all component disks have the
     same geometry and size. Optimum striping cannot occur with different
     disk types. Also note that the total size of the concatenated disk may
     vary depending on the interleave factor, even if the exact same
     components are concatenated, and an old on-disk disklabel may be read
     after the interleave factor is changed. As a result, the disklabel may
     contain wrong partition geometry and will cause an error when doing I/O
     near the end of the concatenated disk.

     There is a run-time utility that is used for configuring ccds. See
     ccdconfig(8) for more information.

WARNINGS
     If just one (or more) of the disks in a non-mirrored ccd fails, the
     entire file system will be lost.

FILES
     /dev/{,r}ccd*    ccd device special files.

SEE ALSO
     config(1), MAKEDEV(8), ccdconfig(8), fsck(8), mount(8), newfs(8)

HISTORY
     The concatenated disk driver was originally written at the University of
     Utah.

BSD                              March 5, 2004                             BSD
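Since the page above defers to ccdconfig(8) without showing it, here is a rough usage sketch; the interleave factor, component partitions and mount point are illustrative:

    ccdconfig ccd0 32 none /dev/sd0e /dev/sd1e    # interleave of 32 DEV_BSIZE blocks
    disklabel -e ccd0                             # label the new virtual disk
    newfs /dev/rccd0a                             # create a file system on it
    mount /dev/ccd0a /mnt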