volume group lun sizes and no of file systems for optimal performance

# 1  
Old 02-04-2011

Hello,

It's been a while since I've done AIX, but I'm planning
a new disk-only TSM backup solution on AIX.

I'm planning to build an AIX volume group out of 40 LUNs of 1 TB each
and to create one big file system on it.
The purpose is to use it as a device class FILE for a TSM disk-only backup solution.

My question is:
would it make any difference, performance-wise, if I split this capacity up over multiple file systems, or even multiple VGs?

Right now this is the plan:
---------------------------------

  • volume group: VG_TSMFILECLASS -> 40 LUNs of 1 TB
-> one file system on it: /FILECLASS/ -> 40 TB
  • device class FILE has /FILECLASS/ specified as the directory to work in (at TSM level)
(all backup data will be written to the same file system and VG; see the command sketch below)
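A rough command sketch of that first layout, purely as an illustration - the hdisk names, PP size, LP count and TSM device class parameters are assumptions, not a tested recipe:

Code:
# AIX side: one scalable VG over all 40 LUNs, one JFS2 spread across them
mkvg -S -y VG_TSMFILECLASS -s 1024 hdisk4 hdisk5 hdisk6    # list all 40 hdisks here
mklv -y lv_fileclass -t jfs2 -e x VG_TSMFILECLASS 40000    # -e x = spread LPs over all PVs
crfs -v jfs2 -d lv_fileclass -m /FILECLASS -A yes
mount /FILECLASS

# TSM side: one FILE device class pointing at the single file system
define devclass fileclass devtype=file directory=/FILECLASS maxcapacity=50G mountlimit=40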

Would this be better?
----------------------------

  • volume group: VG_TSMFILECLASS -> 40 LUNs of 1 TB
-> two file systems on it: /FILECLASS1/ -> 20 TB
                           /FILECLASS2/ -> 20 TB
  • device class FILE has /FILECLASS1/ + /FILECLASS2/ specified as directories to work in
(when you specify multiple directories for a FILE device class in TSM, it spreads its volumes in
a balanced way over the directories that you specify,
so in this case the backups would land on 2 different file systems in the same VG; see the sketch below)
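The only TSM-side change for this second layout would be the directory list of the device class; as a hedged sketch (capacity and mount limit values are again just placeholders):

Code:
define devclass fileclass devtype=file directory=/FILECLASS1,/FILECLASS2 maxcapacity=50G mountlimit=40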

Would this be even better?
------------------------------------

  • volume group: VGTSMFILECLASS1 -> 20 LUNs of 1 TB
-> one file system on it: /FILECLASS1/
  • volume group: VGTSMFILECLASS2 -> 20 LUNs of 1 TB
-> one file system on it: /FILECLASS2/
  • device class FILE has /FILECLASS1/ + /FILECLASS2/ specified as directories to work in
(when you specify multiple directories for a FILE device class in TSM, it spreads its volumes in
a balanced way over the directories that you specify,
so in this case the backups would land on 2 different VGs; see the sketch below)
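On the AIX side, this third layout would simply split the volume group creation in two; a minimal sketch with placeholder hdisk names:

Code:
mkvg -S -y VGTSMFILECLASS1 -s 1024 hdisk4 hdisk5 hdisk6      # first 20 hdisks
mkvg -S -y VGTSMFILECLASS2 -s 1024 hdisk24 hdisk25 hdisk26   # remaining 20 hdisks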

Would this make a difference for performance, or might I just as well put everything on one file system in one VG?

Also, is there some kind of optimal LUN size for this (from an AIX OS perspective)?


Some extra info:

  • I would use JFS2
(without DIO, since the TSM device class FILE works in a sequential way).

  • all storage is on one dedicated storage box
(an EMC AX4 with 2 TB SATA II disks)

Any hints/thoughts would be appreciated!
# 2  
Old 02-06-2011
There will be no difference whether you have 1 or 10 file systems - and you should put all of your LUNs into the same VG to make sure you have the maximum number of buffers. As these are allocated on a per-disk basis, this will in any case require a good deal of file system buffer tuning. I think I would rather use 100 GB LUNs and correspondingly more adapters if I really wanted more performance. Apart from that, you can choose a smaller PP size, which will distribute your content more evenly across the disks.
Make sure that your file system is spread across the LUNs with maximum distribution. If you can, use more than one pair of HBAs. And I would mount your file system(s) with rbrw to save memory, and with noatime as well - DIO would be for JFS, and CIO would saturate the disks and make no sense anyway.
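For example, something along these lines - the VG and file system names are taken from your plan, and the pbuf value is only an illustration:

Code:
# mount the JFS2 file system with release-behind read/write and noatime
chfs -a options=rbrw,noatime /FILECLASS
mount /FILECLASS

# check whether the VG runs out of pbufs (per-disk LVM buffers) under load
lvmo -v VG_TSMFILECLASS -a
# if pervg_blocked_io_count keeps growing, raise the per-PV buffer count, e.g.
lvmo -v VG_TSMFILECLASS -o pv_pbuf_count=1024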
Regards
zxmaus
# 3  
Old 02-22-2011
Thanks a lot for your reply zxmaus,
very helpful information, much appreciated!

I think we will go ahead as planned, in the classical way;
however...
someone has also explained to me the possibility of meta-LUNs on this EMC disk system:

We have 3 RAID6 arrays on the storage system.
Instead of creating 40 LUNs of 1 TB,
we could also make 1 huge LUN per array,
so we would have 3 huge LUNs,
and then concatenate these 3 LUNs together into a meta-LUN.
The storage system then stripes over all the disks automatically this way.

This way, one giant disk of 40 TB would be presented to AIX,
and a VG could be created out of just 1 PV.

So all the striping would be handled by the storage system itself,
and there would be no parallelism left at the AIX level.

However,
I have never heard of anybody working like this,
and the thought of having one big 40 TB LUN is pretty scary.
Does anybody have any experience with this?


I guess we could also make a combination of both:
we could make 500 GB LUNs on the 3 arrays,
and concatenate three 500 GB LUNs at a time into 1.5 TB meta-LUNs, for example,
presenting about 26 disks of 1.5 TB to AIX in this way.
This way we would be using both parallelism mechanisms:
- in the AIX OS, with the VG
- on the storage box, with meta-LUNs

I'm starting to consider the combination of both mechanisms
as described above.

Any thoughts on this?
# 4  
Old 02-22-2011
Well, if you write to ONE LUN, it's slower than if you write to 10 LUNs in parallel, as AIX simply doesn't care about what is behind the adapters.

Real-life experience from about a year ago: moving my 124 LUNs of 33 GB each from one to 3 pairs of HBAs, and changing the 4 DB file systems from minimum to maximum distribution across the disks, improved our EOD batch (Oracle) from 3.5 hrs to 14 min.
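One way to see the per-LUN limit on the AIX side (the device name and value shown are just an example):

Code:
# every hdisk has its own queue of outstanding I/Os
lsattr -El hdisk4 -a queue_depth
# 40 separate LUNs give you 40 of these queues working in parallel;
# one giant meta-LUN funnels everything through a single queue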

Regards
zxmaus
# 5  
Old 02-23-2011
Thanks again for your quick reply zxmaus.

What exactly do you mean by
Quote:
changing the 4 DB filesystems from minimum to maximum distribution across the disks
?
Something with the PP size?
Could you clarify this a bit?

As for the HBAs, we are really limited here, as all HBAs on the server are in use...
* 2 for a VTL
* 2 for another disk system
* 2 for tape

So we'll have to share HBAs with one of the above (no other choice...);
it will be the 2 HBAs from the other disk system that will be shared with the new disk system.

We could also share the HBAs that are currently used for tape devices with our new disk system, but this seems like a much worse idea.
Although the device class FILE works in a sequential way at the TSM level,
it will be writing in the background into files on an AIX JFS2 file system,
so sharing the tape HBAs seemed like a no-go to me (even though the FILE device class is sequential I/O at the application (TSM) level).

Possibly we will try both (sharing the HBAs with the other disk system first, then with the tape HBAs, and see which gives the best performance).
# 6  
Old 02-23-2011
What I meant by maximum distribution is a chlv -e x logicalvolume - basically you make sure that the first physical partition goes onto disk 1, the next one onto disk 2, and so on ... if you have 10 disks, then the 11th physical partition goes onto disk 1 again and the 12th onto disk 2 ... as opposed to minimum distribution, where the first x physical partitions go onto disk 1 until it is full, and so on ...
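As a minimal sketch, with a placeholder LV name - reorgvg is what actually moves already-allocated partitions to match the new policy:

Code:
chlv -e x lv_fileclass                  # switch the LV to maximum inter-disk allocation
reorgvg VG_TSMFILECLASS lv_fileclass    # redistribute existing physical partitions
lslv -m lv_fileclass                    # verify the PP-to-hdisk mapping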
Hope that clarifies it a little more.

Kind regards
zxmaus