New IBM Power8 (S822) and Storwize V3700 SAN, best practices for production setup/config?


 
# 1  
Old 01-24-2015

Hello,

Got an IBM POWER8 box (S822) that I am configuring as a replacement for our existing IBM machine.

Wanted to touch base with the expert community here to ensure I don't miss anything critical in my setup/config of AIX.

Did a fresh AIX 7.1 install on the internal SCSI hdisk, mirrored the rootvg, and made sure it can boot from both hdisks in case one fails, using this guide.
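
For reference, the usual sequence for this is roughly the following (a sketch only; hdisk0/hdisk1 are assumed to be the two internal disks, adjust to your box):

Code:
# add the second internal disk to rootvg and mirror all LVs onto it
extendvg rootvg hdisk1
mirrorvg rootvg hdisk1
# recreate the boot image on both disks and allow booting from either
bosboot -ad /dev/hdisk0
bosboot -ad /dev/hdisk1
bootlist -m normal hdisk0 hdisk1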

The AIX oslevel is 7100-03, going to check for any updates.

What about firmware/microcode? How does one go about updating that for the head unit?

Anything else I should consider doing before I start adding my SAN FC LUNs and moving over our application?
# 2  
Old 01-25-2015
Quote:
Originally Posted by c3rb3rus
Did a fresh AIX 7.1 install on the internal SCSI hdisk, mirrored the rootvg, and made sure it can boot from both hdisks in case one fails, using this guide.
This is all fine, but be aware that this way you will never have the possibility of LPM (Live Partition Mobility). LPM is the ability to move an LPAR to another hardware box ("managed system" in IBM-speak) on the same HMC without even stopping it. For this to work you (obviously) must not have any non-virtualised components in the LPAR: disks, adapters, etc.

If you want LPM you need to create one (better: two) VIOS LPARs, give these all the physical adapters, then create virtual adapters and hand these out to the other LPARs. Instead of physical SCSI disks you usually create LUNs on the SAN, connect them to the VIOS and present them to the LPARs as virtual SCSI (vscsi) disks. Usually this is done for boot disks; for data disks you either use the same or FC-attached ("NPIV") disks. When you move an LPAR via LPM the vscsi disks are moved to the VIOS on the target managed system in the process.
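
A quick way to check what is physical and what is virtual, from the LPAR and the VIOS respectively, just as an illustration:

Code:
# on the LPAR: anything not listed as a virtual adapter/disk will block LPM
lsdev -Cc adapter
lsdev -Cc disk
# on the VIOS (as padmin): list the existing vscsi and NPIV mappings
lsmap -all
lsmap -all -npiv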


Quote:
Originally Posted by c3rb3rus
The AIX oslevel is 7100-03, going to check for any updates.
That is OK. In fact some applications will prescribe exact versions anyway, so as long as your version is supported (which is the case with 7.1) you are on the safe side.
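
To see exactly which technology level and service pack the box is on, and whether the installed filesets are consistent, something along these lines:

Code:
# show the full oslevel, e.g. 7100-03-<SP>-<build>
oslevel -s
# verify fileset consistency
lppchk -v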

Quote:
Originally Posted by c3rb3rus
What about firmware/microcode? How does one go about updating that for the head unit?
POWER8 is new hardware, so expect the microcode to change quite often in the near future. In general you only update when you must, not when you can. It is good practice to install the very latest revision before the system goes into production, because this way you might avoid a downtime later. Apart from that: wait until you have a support case and support advises you to update the microcode, or a software update makes it necessary. Only then install the (at that time) latest microcode. As long as you haven't got a problem related to it, leave it alone.
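
To see what firmware level the machine is currently at (before deciding whether an update is due), for example:

Code:
# short form of the current system firmware level
lsmcode -c
# the platform firmware level is also shown by
prtconf | grep -i firmware

IBM's FLRT (Fix Level Recommendation Tool) is the usual place to check which firmware level is recommended for a given AIX level.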

Do you have a NIM server? If so, make sure it is at the very latest AIX version available, because a NIM master can only serve systems at the same or a lower level than itself.
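
Checking the NIM master is straightforward, for instance:

Code:
# on the NIM master: its own level must be the same or higher than its clients
oslevel -s
# list the defined NIM objects and the master's attributes
lsnim
lsnim -l master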

I hope this helps.

bakunin
# 3  
Old 01-26-2015
Thanks, this helps. I will make sure the microcode/firmware is up to the latest available before I move this into production.

This is a standalone box, so no LPM.


Hey bakunin,

Here is a question for you on pv/vg in regards to my new system setup...

I have carved a 350 GB LUN from my Storwize SAN and presented it to the POWER8 box via FC.

The PV hdisk2 has the following attributes (hdisk2 is the LUN I carved and presented):

Code:
# lspv hdisk2
PHYSICAL VOLUME:    hdisk2                   VOLUME GROUP:     vg_usr2
PV IDENTIFIER:      00f9af9427d70816 VG IDENTIFIER     00f9af9400004c000000014b28233d7d
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            512 megabyte(s)          LOGICAL VOLUMES:  1
TOTAL PPs:          699 (357888 megabytes)   VG DESCRIPTORS:   2
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           699 (357888 megabytes)   MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  140..140..139..140..140
MIRROR POOL:        None

After some googling I understand that a normal VG is limited to 32512 physical partitions (32 physical volumes each with 1016 partitions) and 256 logical volumes.

Now I am already confused, hope you can demystify some stuff..

My "hdisk2" has 699 total physical partitions (Total PPs?) based on PP Size being 512mb chunks. So this is 699 out of 32,512 or 1016?

If I changed hdisk2 to have a PP size of 256 MB chunks, the total PPs would be 1398, correct? Are there any performance benefits of making the PP size smaller vs. larger? I am trying to decide what I want my PP size to be for the PVs.

In the end I will have four enhanced JFS2 filesystems, allocated 350 GB, 1 TB, 1 TB, and 2.45 TB respectively.

# 4  
Old 01-26-2015
Quote:
Originally Posted by c3rb3rus
After some googling I understand that a normal VG is limited to 32512 physical partitions (32 physical volumes each with 1016 partitions) and 256 logical volumes.

Now I am already confused, hope you can demystify some stuff..

My "hdisk2" has 699 total physical partitions (Total PPs?) based on PP Size being 512mb chunks. So this is 699 out of 32,512 or 1016?

If I changed hdisk2 to have a PP size of 256 MB chunks, the total PPs would be 1398, correct? Are there any performance benefits of making the PP size smaller vs. larger? I am trying to decide what I want my PP size to be for the PVs.
You should open different threads for different questions. This one deals with LVM concepts and has nothing to do with your previous question. Let us see where this takes us and maybe I will split the thread in two for the different topics. OK, so much for the organisational stuff, here is the LVM 101:

The following is about "classic" VGs; there are also "big" and "scalable" VGs, which lift several of the restrictions. Still, the basic concepts remain the same.

We start with raw disks. "Disk" here is anything with an hdisk device entry: physical disks, RAID sets, LUNs, whatever. One or more such disks (up to 32) form a "volume group" (VG). Every disk can be a member of exactly one VG. When the disk is assigned to a VG it becomes a "PV" (physical volume) and is formatted for use by the VG. Some information about the VG is written to it: the VGDA (volume group descriptor area) and the PVID, a unique serial number by which the LVM can recognize the disk even if it gets a different hdisk device name after a reboot.

The disk is also sliced into logical chunks: the PPs (physical partitions). How big such a physical partition is is up to you, but it cannot be changed once it is set. To change it you have to back up your data, delete the VG (and all data in it), recreate it and restore the data.
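
For illustration, creating a VG with a chosen PP size could look like this (names are made up; the -s value is the PP size in MB and is fixed at creation time):

Code:
# create vg_usr2 on hdisk2 with a PP size of 512 MB
mkvg -y vg_usr2 -s 512 hdisk2
# verify
lsvg vg_usr2
lspv hdisk2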

PPs are the smallest unit of disk space the LVM can deal with. On any single PV there can be up to 1016 PPs. This means that a small PP size will limit the size of the disks you can put into your VG. Roughly, the size of a single PP in MB is the maximum size in GB a disk can have: a 512 MB PP size means disks up to about 512 GB. Because a VG can hold only up to 32 PVs, this means that with a PP size of 512 MB your VG can grow to roughly 16 TB and not more.
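
Applied to your hdisk2 the arithmetic works out like this:

Code:
357888 MB / 512 MB per PP  =  699 PPs    (matches your lspv output)
1016 PPs  * 512 MB         ~= 508 GB     max disk size per PV at this PP size
32 PVs    * ~508 GB        ~= 16 TB      max size of a classic VG at this PP size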

It is therefore wise to plan how much data the VG is going to hold in the near and not-so-near future, because the above process (backup, delete, recreate, restore) is time-consuming.

Interlude: the "factor" is often mentioned as a remedy. Yes, it might help you in certain situations; no, it won't lift the size limit of the VG. Why? Because many VGs were planned badly, admins tried to put disks into their VGs which were too big to fit, since they had room for more than the 1016 PPs. IBM invented a workaround: you can introduce a "factor" (the command is "chvg -t <factor>") so that a multiple of 1016 PPs can reside on a PV. On the downside this reduces the number of PVs the VG can hold: with a factor of 2 a single PV can hold 2032 (2x1016) PPs (so you can put in a bigger disk) but the VG can only hold 16 (32/2) PVs. With a factor of 4 a single PV can hold 4064 PPs but the VG is reduced to 8 possible PVs.
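
For illustration, the command itself (VG name taken from your output):

Code:
# allow up to 2032 PPs per PV; the VG can now hold at most 16 PVs
chvg -t 2 vg_usr2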

This is why you should make your PP size rather on the big side than too small. Performance-wise this changes nothing. The only downside is that you will waste some space, because you have to deal with bigger chunks and a logical volume (LV) has to consist of at least one PP. Also, the log LV will be one PP, regardless of what the PP size is.

After creating a VG you can create logical volumes in it. Notice that LVs are raw disk space, not filesystems. You can create an FS on an LV but you do not have to; you can use the LV for all sorts of other things: swap space, raw devices, log LVs, etc. I will leave out the options to mirror or stripe the LV here; look them up in the documentation.
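
A minimal example of creating an LV intended for a JFS2 filesystem (name and size are illustrative; the last numeric argument is the number of PPs):

Code:
# create a jfs2-type LV of 699 PPs on hdisk2
mklv -y lv_usr2 -t jfs2 vg_usr2 699 hdisk2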

Once you are done with the LV you can create an FS on it. Notice that there are two options for JFS2 logs: a dedicated log LV (which is created automatically when you create the first FS without an inline log) or inline logs. Inline logs are somewhat faster in most circumstances, so they are preferable for DB filesystems. Speaking of DBs: stay away from raw devices, even if the DBA wants them! You get a microscopic gain in performance in exchange for a lot of trouble and a loss of manageability.
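
Creating the filesystem on such an LV with an inline log could look like this (mount point illustrative):

Code:
# JFS2 filesystem on the existing LV, inline log, mounted automatically at boot
crfs -v jfs2 -d lv_usr2 -m /usr2 -a logname=INLINE -A yes
mount /usr2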

Notice that inline logs are 0.5% of the FS's size by default. When you increase the size of the FS/LV, the log size is increased too unless you explicitly state that it shouldn't be. This is a slight waste of disk space (inline log sizes above 64 MB are pointless), but the amounts involved are just not worth any effort. Only tinker with this if you are really, really tight on space.

I hope this helps.

bakunin
