Operating Systems > AIX: Inherited VIO server and LPARs
Post 302498186 by ross.mather on Sunday 20th of February 2011 07:39:09 AM
One or two VIO servers is largely a matter of the availability requirements of the server. In general, a production environment should have dual VIO servers. In this configuration, however, I expect the issue will be around the use of the disk adapters.

Each disk adapter can belong to exactly one LPAR (including a VIO server), so the existing arrangement probably has only one disk adapter, hence one VIO server.
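
You can see which LPAR owns each physical slot from the HMC command line. A rough sketch (the managed system name is a placeholder):

Code:
lshwres -m <managed-system> -r io --rsubtype slot -F drc_name,lpar_name

That lists every physical I/O slot and the partition it is currently assigned to, which quickly confirms whether there really is only the one disk adapter to play with.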

If you are buying a new CEC (or is it an I/O drawer?) you will add at least one more disk adapter, so you could go to twin VIO servers. That said, it's still recommended to mirror the rootvg even in that environment, so four disks would now be taken up by the VIO operating systems.
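
For reference, mirroring a VIO server's rootvg onto a second internal disk is only a couple of commands from the padmin shell. A minimal sketch, assuming hdisk1 is the spare internal disk:

Code:
$ extendvg rootvg hdisk1   # add the second internal disk to the VIOS rootvg
$ mirrorios hdisk1         # mirror rootvg onto it; the VIOS reboots when this completes
$ lsvg -lv rootvg          # after the reboot, each LV should show PPs = 2 x LPs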

Even so, you'd still be left with the headache of multiple LVs being allocated to each LPAR from each VIO server, and the inevitable resyncs you'd need after a failure or reboot of a VIO server.
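
That resync is a manual step on each client LPAR once the failed VIO server is back, along these lines (assuming the client mirrors its rootvg across an LV from each VIO server):

Code:
# lsvg -p rootvg              # one of the client disks will show as missing
# varyonvg rootvg             # re-activate it once its VIO server is back up
# syncvg -v rootvg            # resynchronise the stale partitions
# lsvg rootvg | grep -i stale # the STALE PPs count should drop back to 0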

The best solution in these kinds of environments is to use a Fibre Channel SAN storage array, with LUNs connected to each LPAR via one or both VIO servers (depending on how you implement it). That way it's the additional pathing that provides the redundancy, not an additional disk copy.
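
As a rough sketch of that setup (the disk, vhost and device names are only examples): present the same LUN through both VIO servers with the SCSI reservation released, and the client then sees a single MPIO disk with one path per VIO server.

Code:
(on each VIO server, padmin shell)
$ chdev -dev hdisk4 -attr reserve_policy=no_reserve
$ mkvdev -vdev hdisk4 -vadapter vhost0 -dev lpar1_lun0

(on the client LPAR)
# lspath -l hdisk0            # should show an Enabled path via vscsi0 and vscsi1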

If you are going to stick with one VIO server and all those new disks, try to get a RAID card so you can create a single RAID array across the disks. That at least would ease the management, as the RAID array takes care of keeping the two copies of data on separate disks. That said, the I/O performance of this environment will be very poor.
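
If you do end up there, a storage pool over the RAID array at least keeps the carving of client disks tidy. A sketch from the padmin shell (the pool, size and client names are made up for illustration):

Code:
$ mksp -f clientpool hdisk2                                   # storage pool on the RAID array device
$ mkbdsp -sp clientpool 40G -bd lpar1_rootvg -vadapter vhost0 # carve 40 GB and map it to the client's vhost
$ lsmap -vadapter vhost0                                      # confirm the backing device is mapped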

Quote:
I'm afraid there has been no training
This is criminal; drop me a PM and I may be able to point you in the right direction in the UK to close that skills gap.

cheers
Ross
 
