AIX: Inherited VIO server and LPARs
Posted by ross.mather, 20 February 2011
Whether to run one or two VIO servers is largely a matter of the availability requirements of the server. In general, a production environment should have dual VIO servers. In this configuration, however, I expect the issue will be around the use of the disk adapters.

Each disk adapter can belong to exactly one LPAR (including a VIO server), so the existing arrangement probably has only one disk adapter - hence one VIO.
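You can confirm that from the running system: the VIOS restricted shell will list the adapters the VIO server owns and the existing virtual SCSI mappings. A quick sketch (output and device names will vary):

Code:
$ lsdev -type adapter    # physical and virtual adapters owned by this VIOS
$ lsmap -all             # which backing devices are mapped to which vhost / client LPAR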

If you are buying a new CEC (or is it an I/O drawer?) you will add at least one more disk adapter, so you could go to twin VIO servers. That said, it's still recommended to mirror the rootvg even in that environment, so four disks would now be taken up by VIO operating systems.
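For reference, mirroring a VIOS rootvg onto a second internal disk is a two-step job from the padmin shell (hdisk1 here is just an example; mirrorios reboots the VIO server when it finishes unless you defer it):

Code:
$ extendvg rootvg hdisk1    # add the second internal disk to rootvg
$ mirrorios hdisk1          # mirror all rootvg LVs onto it, then reboot the VIOS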

Even so, you'd still be left with the headache of multiple LVs being allocated to each LPAR from each VIO server, and the inevitable resyncs you'd need after every failure or reboot of a VIO.
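On a client LPAR that mirrors its rootvg across LVs from two VIO servers, those resyncs are done by hand from the client side once the VIO is back, along these lines:

Code:
# lsvg -l rootvg     # look for LVs showing "stale" copies after the VIO returns
# syncvg -v rootvg   # resynchronise all stale partitions in the volume group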

The best solution in these kinds of environments is a Fibre Channel SAN storage array: LUNs can then be presented to each LPAR via one or both VIO servers (depending on how you implement it). That way it's the additional path that is redundant, not an additional copy of the disk.
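As a sketch of that dual-VIOS MPIO setup (the device and adapter names below are made up for the example): on each VIO server the LUN's SCSI reservation is released and the LUN is mapped to the client's vhost adapter; the client then sees a single hdisk with one path per VIOS.

Code:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve     # on each VIOS: allow both to serve the LUN
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev lpar1_lun0  # map the LUN to the client's virtual SCSI adapter

# lspath -l hdisk0                          # on the client: expect one Enabled path per VIOS
# chdev -l hdisk0 -a hcheck_interval=60 -P  # turn on path health checking (applied at next reboot)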

If you are going to stick with one VIO and all those new disks, try to get a RAID card so you can create a single RAID array across the disks. That would at least ease the management of it, as the RAID array takes care of keeping the two copies of data on separate disks. That said, the I/O performance of this environment will be very poor.
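If it does end up as one VIO sitting on one big array, carving the client disks out of a single storage pool at least keeps the LV administration in one place. A sketch, with pool, size and names invented for the example:

Code:
$ mksp clientsp hdisk2                                       # create an LV storage pool on the array
$ mkbdsp -sp clientsp 32G -bd lpar1_rootvg -vadapter vhost0  # carve a 32 GB backing device and map it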

Quote:
I'm afraid there has been no training
This is criminal; drop me a PM and I may be able to point you in the right direction in the UK to close that skills gap.

cheers
Ross