Operating Systems > AIX > Inherited VIO server and LPARs
Post 302498186 by ross.mather on Sunday 20th of February 2011 07:39:09 AM
Whether to run one or two VIO servers is largely a matter of the availability requirements of the server. In general a production environment should have dual VIO servers. In this configuration, however, I expect the deciding issue will be the disk adapters.
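
If you want to see what's on the box already, a quick check from the HMC command line lists the partitions and their types; vioserver in the lpar_env column marks a VIO server. (The managed-system name here is only a placeholder - substitute your own.)

Code:
lssyscfg -r lpar -m Server-8204-E8A-SN123456 -F name,lpar_env,state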

Each physical disk adapter can belong to exactly one LPAR (including a VIO server), so the existing arrangement probably has only one disk adapter - hence one VIO.
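
You can confirm which LPAR owns each physical slot from the HMC as well; a sketch, again with a placeholder managed-system name:

Code:
lshwres -r io --rsubtype slot -m Server-8204-E8A-SN123456 -F drc_name,description,lpar_name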

If you are buying a new CEC (or is it an I/O drawer?) you will add at least one more disk adapter, so you could go to twin VIO servers. That said, it's still recommended to mirror the rootvg even in that environment, so four disks would now be taken up with VIO operating systems.
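
For reference, mirroring a VIO server's rootvg is done from the padmin restricted shell; a minimal sketch, assuming hdisk1 is the spare internal disk:

Code:
# add the second disk to rootvg, then mirror - note that mirrorios
# reboots the VIO server when it finishes (newer levels accept -defer)
extendvg -f rootvg hdisk1
mirrorios hdisk1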

Even so, you'd still be left with the headache of multiple LVs being allocated to each LPAR from each VIO server, and the inevitable number of resyncs you'd need after a failure or reboot of a VIO.
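
And that resync is a manual step on each client: once the failed VIO server is back and its disk reappears, the client has to reactivate the disk and copy the stale partitions across. Roughly, assuming datavg is the mirrored client volume group:

Code:
# on the client LPAR, after the VIO server has returned
varyonvg datavg     # bring the returned disk back into the VG
syncvg -v datavg    # resynchronise stale physical partitions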

The best solution in these kinds of environments is a Fibre Channel SAN storage array, with LUNs connected to each LPAR via one or both VIO servers (depending on how you implement it). That way it's the additional path that is redundant, not an additional copy of the disk.
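
As a sketch of the dual-VIO arrangement (all device names here are illustrative, not taken from your system): the same LUN is mapped to the client through each VIO server, and the client's MPIO driver handles the failover.

Code:
# on each VIO server (padmin), clear the SCSI reserve and map the LUN:
chdev -dev hdisk4 -attr reserve_policy=no_reserve
mkvdev -vdev hdisk4 -vadapter vhost0 -dev app_lun1

# on the client LPAR, verify two paths (one per VIO server) and
# enable path health-checking so a failed path recovers automatically:
lspath -l hdisk1
chdev -l hdisk1 -a hcheck_interval=60 -P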

If you are going to stick with one VIO server and all those new disks, try to get a RAID card so you can create a single RAID array across all the disks. That would at least ease the management, since the RAID array takes care of keeping the two copies of data on separate disks. That said, the I/O performance of this environment will be very poor.
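
If you do go that route, the array itself is built with the adapter's own tooling (diag's RAID manager, or sissasraidmgr for the SAS RAID cards), but you can sanity-check the result from AIX with nothing more than:

Code:
lsdev -Cc adapter | grep -i raid   # the RAID adapter should show as Available
lspv                               # the whole array appears as a single hdisk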

Quote:
I'm afraid there has been no training
This is criminal - drop me a PM and I may be able to point you in the right direction in the UK to close that skills gap.

cheers
Ross