Adding a disk to my LPAR


 
# 1  
Old 02-27-2016

Hi all. I entered the AIX environment 4 months ago and have prior experience in Linux. What I am facing is that I am unable to do any sort of R&D with AIX on my own: installation, creating VGs, managing networks, the VIOS, storage, LPARs.

We have a setup here where almost everything is in live production, with 2 VIOS servers, storage and an HMC.

I got one LPAR for R&D, but it has only one hdisk, say "hdisk0", as rootvg.

I would like to have another hdisk, say hdisk1, to create a "datavg".

Now the problem is: how can I add that disk to my LPAR?
# 2  
Old 02-28-2016
Quote:
Originally Posted by vax
Hi all. I entered the AIX environment 4 months ago and have prior experience in Linux.
Hmm, your name suggests you know VMS too, LOL.

Quote:
Originally Posted by vax
Now the problem is: how can I add that disk to my LPAR?
There is no direct answer to this, because it depends on many details of your setup (which you haven't described in detail so far). I will try to give you an overview, and you may want to ask about details as we go along.

It is possible to put physical disks into POWER systems and use them, but this is commonly only done for VIOS systems. All the other LPARs usually get their disks from some external (SAN) storage. I will describe this here.

There are two common ways to attach external disks to an LPAR: vSCSI and NPIV. Both involve the VIOS, and the most common setup is to use both: vSCSI for the boot disks (that is, the rootvgs) of the LPARs and NPIV for the data/application disks.

vSCSI is the simplest method: first, a virtual SCSI server adapter is created on the VIOS and attached to the LPAR. Then it is possible to take SAN disks attached to the VIOS, turn them into virtual SCSI disks and attach these to a certain LPAR/adapter. When the LPAR boots, it sees such a virtual disk as a SCSI disk attached to a SCSI adapter.
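To make that concrete, a minimal sketch of the vSCSI part (the device names hdisk5, vhost0 and vtscsi_rnd are placeholders; yours will differ). On the VIOS, as padmin:

$ lsmap -all                                             # see the existing vhost adapters and their mappings
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi_rnd   # map SAN disk hdisk5 to the LPAR behind vhost0

Then, in the client LPAR as root, run cfgmgr and the new disk shows up as an ordinary hdisk.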

The advantage of doing it this way is that, from the LPAR, you can do everything you can do with a real SCSI disk, including booting from it. Still, because the gear the LPAR uses is in fact virtualised, you can use LPM (Live Partition Mobility): the VIOS of the source and target machines (="Managed System") will move the virtualised adapters and disks around so that on the target MS the LPAR runs the same way it did on the source MS. On the downside, the VIOS is relatively heavily involved in handling vSCSI disks, and when you have a lot of traffic on vSCSI disks the VIOS needs an increasing amount of (processor and memory) resources.

This is why disks you do not need to boot from ("data disks") are commonly not vSCSI but NPIV: the VIOS only creates a virtual FC adapter (the VIOS has the physical FC adapter attached) and exports it to the LPAR, which in turn uses it to connect to SAN LUNs directly. The LPAR needs to use the respective FC multipathing driver (MPIO for IBM storage, PowerPath for EMC, ...) to access the LUNs. The zoning gets a bit more complicated too, because the LUNs now need to be zoned to the LPAR. On the upside, the VIOS is less involved and some limitations of SCSI do not apply.
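For the NPIV side, a hedged sketch (the adapter names vfchost0 and fcs0 are placeholders). On the VIOS, as padmin:

$ lsnports                              # check which physical FC ports are NPIV-capable
$ vfcmap -vadapter vfchost0 -fcp fcs0   # bind the virtual FC server adapter to a physical FC port
$ lsmap -npiv -vadapter vfchost0        # verify the mapping and see the client's WWPNs

The client's virtual FC adapter then logs into the fabric with its own WWPNs, and these WWPNs are what you zone to the storage.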

There are a few Redbooks from IBM about their virtualisation techniques which I suggest you download and read.

I hope this helps.

bakunin
# 3  
Old 03-01-2016
@bakunin

Thanks for this detailed and prompt reply. I tried to summarize and understand what you explained, but I was unable to, since I have no experience in this field.

Please tell me what other information I can provide so that I can get a resolution and perform some R&D.

---------- Post updated at 04:30 PM ---------- Previous update was at 04:02 PM ----------

All I have is a rack with:
two p-Series servers, a 710 and a 750,
a Storwize V7000 with controller and expansions,
and SAN switches and an HMC. These are the physical components.

Now, when I log in to the HMC I see the two servers mentioned above.
One server, the 710, is used as a backup server,
and the other one has 25 LPARs :O, of which 22 are LIVE!!
I managed to get one LPAR; when I logged in and issued the lspv command I saw one hdisk0 as rootvg.
So I want to add another hdisk to that LPAR and make it datavg.

Note: among the 25 LPARs, two are VIO servers and the remaining 23 are AIX LPARs.
Will this info help to resolve my queries?
# 4  
Old 03-01-2016
Quote:
Originally Posted by vax
Thanks for this detailed and prompt reply. I tried to summarize and understand what you explained, but I was unable to, since I have no experience in this field.
OK. I think what you need first is to understand how virtualisation in general, and virtualisation the IBM way in particular, works. Only then can we turn to specifics. Let's start:

Take a look at your PC: you have a mainboard inside, onto which a SCSI adapter is attached. A disk (or several of them, maybe other devices like CD-ROM drives, etc.) is attached to this SCSI adapter. Further, there is some amount of memory and one or more processors.

Let us suppose for a moment that we want to virtualise this system so that we run several logical systems off this hardware. We could divide the memory up between the logical systems, and we could also divide the processors between them. It would even be possible to make these shares dynamic, so that we take resources away from a system which doesn't need them at the moment and give them to another which does, then reverse the process as the loads change.

This works well because processors and memory are "anonymous" resources: they have no content of their own and can therefore be easily attached and reattached to a system. This is not the case with disks, because disks have a certain content. You cannot add "more disk" or take away "some disk" the way you could add or remove some GB of memory. This is why virtualisation needs to treat disks differently from other resources.

For this, storage devices were invented. You no longer have physical disks locally installed in your computer (which is already virtualised); instead you have a specialised system, the "storage", which is basically a lot of disks with some logic on top to create virtualised disks of arbitrary size and attach them to external systems. Examples of such storage boxes are the DS8000 from IBM and the VMax from EMC (these are both enterprise-level systems), the IBM V7000 and EMC's VNX (the respective midrange category), and similar systems.

In most cases you have several hardware boxes, each with several virtualised systems needing disks, and one or more storage boxes providing these disks. To organize all this, it is common not to attach the storage directly to such a system but to connect all involved parties to a common network, built from specialised switches and utilizing special broadband communication paths: the "Storage Area Network" or "SAN". SANs usually run on fibre-optic cables and use FC (Fibre Channel) communication. Just as Cisco is the industry standard for network switches, Brocade is the industry standard for FC switches.

Note that this network connecting disks to their respective hosts needs to be a lot more reliable than a normal network: in a normal network, if a packet is lost it is simply retransmitted; IP has a lot of error checking and correcting built in. When a packet in a SAN is lost, it results in a disk read/write error. This is why communication paths are usually redundant, with two or even more parallel connections just in case, and why the whole system is often called a "fabric" rather than a "network".

Further, with so many logical disks (sometimes thousands of them) and systems involved, there needs to be some sort of security so that each host only sees the disks it is supposed to use. This is usually done on the switches of the SAN, and the process is called "zoning". A "zone" basically states that only adapter X is allowed to see disk Y. Because adapter X is a virtual adapter attached to system Z, only that system can see the disk. The identification works via a scheme similar to the MAC addresses of a network: WWNNs (World Wide Node Names) and WWPNs (World Wide Port Names). Read the Wikipedia article about FC for more details.
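For illustration only, on a Brocade switch a zone is typically created roughly like this (the zone and config names are invented and the WWPNs are placeholders):

> zonecreate "rnd_lpar_v7000", "c0:50:76:xx:xx:xx:xx:xx; 50:05:07:68:xx:xx:xx:xx"
> cfgadd "san_cfg", "rnd_lpar_v7000"
> cfgenable "san_cfg"

Here the first WWPN would belong to the LPAR's (virtual) FC port and the second to a port of the storage box.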

So, basically, to get a disk to your system: you need to create one on the storage box, get it zoned to your system, then start using it. I know that, while this is true, it is like explaining how to fly a plane with "get into the plane, take the pilot's seat and start flying": way too general. It was the best I could come up with right now, though. You need to understand some vital basics before you can even ask the right questions.
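To make this slightly more concrete for a V7000 like yours, the storage-side part of the sequence could look like this (a sketch; the pool, host and disk names are invented). On the V7000 CLI:

mkvdisk -mdiskgrp pool0 -size 20 -unit gb -name rnd_datavg01   # create a 20 GB virtual disk
mkvdiskhostmap -host rnd_lpar rnd_datavg01                     # map it to the host object for your LPAR

Once the zoning is in place, a cfgmgr in the LPAR should then make the new hdisk appear.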

I hope this helps.

bakunin
# 5  
Old 03-05-2016
Something that might help is a listing of the virtual devices; it does not need to be all of them.

So, the command you run (for yourself) as padmin on VIOS is:

$ lsdev -virtual

From the output I would like to know if you see any vfchost devices. If you do, that implies you may be using NPIV for your storage. The V7000 can certainly support this.
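For orientation, the output typically looks something like this (names and counts are illustrative):

name             status      description
ent2             Available   Virtual I/O Ethernet Adapter (l-lan)
vfchost0         Available   Virtual FC Server Adapter
vhost0           Available   Virtual SCSI Server Adapter
vhost1           Available   Virtual SCSI Server Adapter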

I assume you will also have some vhost devices.

Using the command

$ lsmap -all

look through the output to see if any hdisks or logical volumes are included. If there are, then you are (also) using vSCSI for storage.
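A vSCSI mapping in that output looks roughly like this (all values illustrative):

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8233.E8B.XXXXXXX-V1-C11                     0x00000003

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk5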

To add a disk to a partition using NPIV, you need to find its WWPN, zone a new LUN to it, and then run 'cfgmgr' in the partition; the partition should then see the disk. No change is needed on the VIOS, HMC, etc.
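To find those WWPNs, you can look in the partition itself (fcs0 is just an example adapter name):

# lscfg -vl fcs0 | grep "Network Address"

The "Network Address" field is the WWPN to hand to whoever does the zoning; the pair of WWPNs is also visible in the virtual FC adapter's properties on the HMC.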

If you know you are not using NPIV then you will need to find a free disk on a VIOS and 'attach' that to the correct vhost adapter.

Assuming hdisk93 is free and the R&D partition has vhost7 assigned to it, the command is:
$ mkvdev -vdev hdisk93 -vadapter vhost7

Now run cfgmgr in the client and the disk should appear as hdisk1.
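In the client, that last step plus creating the volume group you asked about would look like this (assuming the new disk arrives as hdisk1):

# cfgmgr                  # discover the new virtual SCSI disk
# lspv                    # hdisk1 should now be listed without a VG
# mkvg -y datavg hdisk1   # create datavg on it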

If you do not have a free hdisk on the VIOS, then you zone an additional disk to the VIOS, discover it there (as root, run cfgmgr on the VIOS; see the note below for the padmin equivalent) and then do the steps above (assume ... hdisk93 ...).
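On the VIOS, that discovery step is, as padmin (cfgdev is the padmin counterpart of cfgmgr):

$ cfgdev       # scan for newly zoned devices
$ lspv -free   # list disks that are not yet mapped or in use

Any new LUN should show up in the free list and can then be mapped with mkvdev as above.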

Hope this helps!