Quote:
Originally Posted by
c3rb3rus
After some googling I understand that a normal VG is limited to 32512 physical partitions (32 physical volumes each with 1016 partitions) and 256 logical volumes.
Now I am already confused, hope you can demystify some stuff..
My "hdisk2" has 699 total physical partitions (Total PPs?) based on PP Size being 512mb chunks. So this is 699 out of 32,512 or 1016?
If I changed the hdisk2 to have a PP Size of 256mb chunks, the Total PPs would be 1398 correct? Is there any performance benefits of making the PP Size smaller vs. larger? I am trying to decide what I want my PP Size to be for the PVs.
You should open different threads for different questions. This one deals with LVM concepts and has nothing to do with your previous question. Let us see where this takes us and maybe I will split the thread in two for the different topics. OK, so much for organisational stuff, here is the LVM 101:
The following will be about "classic" VGs; there are also "big" and "scalable" VGs, which lift several of these restrictions. Still, the basic concepts remain the same.
We start with raw disks. "Disk" here is anything with an hdisk device entry: physical disks, RAID sets, LUNs, whatever. One or more such disks (up to 32) make up a "Volume Group". Every disk can be a member of exactly one VG. When the disk is assigned to a VG it becomes a "PV" (physical volume) and is formatted to be used by the VG. Some information about the VG is written to it: the VGDA (volume group descriptor area) and the PVID, a unique serial number by which the LVM can recognize the disk even if it gets a different hdisk device during a reboot.
The disk is also sliced into logical chunks: the PPs (physical partitions). How big such a physical partition is is up to you, but it cannot be changed once it is set. To change it you will have to back up your data, delete the VG (and all data in it), recreate it with the new PP size and restore the data.
PPs are the smallest unit of disk space the LVM can deal with. On any single PV there can be up to 1016 PPs. This means that a small PP size will limit the size of the disks you can put into your VG. Roughly, the size of a single PP in MB is the maximum size in GB a disk can be: a 512MB PP size means disks up to about 512GB in size. Because a VG can hold only up to 32 PVs, this means that with a PP size of 512MB your VG can grow to ~16TB and no more.
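As a back-of-the-envelope check, the limits above can be computed directly. A quick sketch in shell arithmetic (1016 PPs per PV and 32 PVs per VG are the classic-VG limits from the text; the PP size is just an example value):

```shell
# Classic-VG limits: 1016 PPs per PV, 32 PVs per VG
pp_size_mb=512                            # chosen PP size in MB (example)

max_pv_mb=$((1016 * pp_size_mb))          # largest disk a single PV can be
max_vg_mb=$((32 * max_pv_mb))             # largest the whole VG can ever grow

echo "max PV size: $((max_pv_mb / 1024)) GB"          # 508 GB, i.e. "roughly 512 GB"
echo "max VG size: $((max_vg_mb / 1024 / 1024)) TB"   # 15 TB (integer math), i.e. "~16 TB"
```

The exact numbers (508GB, not 512GB) come from 1016 being slightly less than 1024; the rule of thumb "PP size in MB = max disk size in GB" is close enough for planning.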
It is therefore wise to plan how much data the VG is going to hold in the near and not-so-near future, because the above process - backup, delete, recreate, then restore - is time-consuming.
Interlude: the "factor" is often mentioned as a remedy. Yes, it might help you in certain situations; no, it will not lift the size limit of the VG. How come? Because many VGs were planned badly, admins tried to put disks into their VGs which were too big to fit, since they would need room for more than the 1016 PPs. IBM invented a workaround: you can introduce a "factor" (the command is "chvg -t <factor>") so that a multiple of 1016 PPs can reside on a PV. The downside is that this reduces the number of PVs the VG can hold: with a factor of 2 a single PV can hold 2032 (2x1016) PPs (so you can put in a bigger disk), but the VG can only hold 16 (32/2) PVs now. With a factor of 4 a single PV can hold 4064 PPs, but the VG is reduced to 8 possible PVs.
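The trade-off scales linearly both ways, which is easy to see in a small sketch (shell arithmetic, using the factors from the text):

```shell
# chvg -t <factor>: multiplies the PPs-per-PV limit, divides the PVs-per-VG limit
for factor in 1 2 4; do
    max_pps=$((factor * 1016))    # PPs a single PV may now hold
    max_pvs=$((32 / factor))      # PVs the VG may still hold
    echo "factor $factor: $max_pps PPs/PV, $max_pvs PVs/VG"
done
```

Notice that the product max_pps x max_pvs is always 32512: that is exactly why the factor lets you use bigger disks but never lifts the total size limit of the VG.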
This is why you should make your PP size rather too big than too small. Performance-wise this changes nothing. The only downside is that you will waste some space, because you have to deal with bigger chunks and a logical volume (LV) has to consist of at least one PP. Also, the log LV will be one PP, regardless of the PP size.
After creating a VG you can create logical volumes in it. Notice that LVs are raw disk space, not filesystems. You can create a FS on an LV, but you do not have to: you can use the LV for all sorts of other things, like swap space, raw devices, log LVs, etc. I will leave out the options to mirror or stripe the LV here; look them up in the documentation.
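To make the flow concrete, here is a minimal command sketch. This is AIX-only and the names (datavg, lv01, hdisk2) are made-up examples, not anything from your system; check the man pages for the full option sets:

```
# Sketch only -- AIX LVM commands, all names are hypothetical examples
mkvg -y datavg -s 512 hdisk2     # create VG "datavg" with a 512 MB PP size
mklv -y lv01 -t jfs2 datavg 20   # LV of 20 PPs (= 10 GB raw space) in datavg
lsvg datavg                      # shows PP size, total/free PPs, max PVs, ...
```

The "-s" flag of mkvg is where the PP-size decision discussed above gets cast in stone.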
Once you are done with the LV you can create a FS on it. Notice that there are two options for JFS logs: a dedicated log LV (which is created automatically when you create the first FS without an inline log) or inline logs. Inline logs are somewhat faster in most circumstances, so they are preferable for DB FSes. Speaking of DBs: stay away from raw devices, even if the DBA wants them! You get a microscopic gain in performance in exchange for a lot of trouble and a loss of manageability.
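Again a hedged sketch of the two log variants. AIX-only, the LV names and mount points are invented for the example; verify the attributes against your crfs man page:

```
# Sketch only -- AIX commands, names and mount points are hypothetical
crfs -v jfs2 -d lv01 -m /data -a logname=INLINE   # jfs2 FS with an inline log
crfs -v jfs2 -d lv02 -m /stuff                    # no inline log: the first such FS
                                                  # auto-creates a dedicated log LV
mount /data; mount /stuff
```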
Notice that inline logs are 0.5% of the FS's size by default. When you increase the size of the FS/LV, the log size is increased too, unless you explicitly state that it should not be. This is a slight waste of disk space (inline log sizes above 64MB are pointless), but the amounts involved are just not worth any effort. Only tinker with this if you are really, really tight on space.
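To see how quickly the 0.5% default runs past the useful range, a quick calculation (shell arithmetic, the FS size is an illustrative example):

```shell
fs_size_mb=$((20 * 1024))            # a 20 GB filesystem (example)
log_mb=$((fs_size_mb * 5 / 1000))    # default inline log: 0.5% of the FS size
echo "inline log: ${log_mb} MB"      # 102 MB -- already well past the 64 MB mark
```

Since 64MB / 0.5% = 12800MB, any FS larger than about 12.5GB already gets a bigger default inline log than it can meaningfully use; as the text says, it is still usually not worth tuning.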
I hope this helps.
bakunin