Advice on allocating SAN storage to a virtual database server on VMware


 
# 1  
Old 12-28-2018

I am relatively new to Linux, and we are getting ready to convert our current Oracle database servers from AIX to RHEL 7 servers on VMware. I would appreciate any advice on how best to allocate storage to these machines. I plan to use LVM to manage the disks and filesystems, but I am unsure how to initially assign the SAN storage to ensure the best possible performance.

In our current AIX environment we have separate disks/volume groups for the different types of data being stored, i.e. database files in a different volume group than O/S data, and so on. On the SAN side the disks are on different storage devices as well, so we can avoid any kind of bottleneck.
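On the RHEL side I picture the per-type layout looking roughly like this with LVM (the device and VG names are just placeholders I made up, not anything we have built yet):

    # assuming the SAN LUNs show up as /dev/sdb and /dev/sdc
    pvcreate /dev/sdb /dev/sdc
    vgcreate oradatavg /dev/sdb          # database files
    vgcreate oraredovg /dev/sdc          # redo / archive
    lvcreate -n data01 -l 100%FREE oradatavg
    mkfs.xfs /dev/oradatavg/data01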

When creating a VM would I take this same approach?

Any advice on this would be appreciated.
# 2  
Old 12-28-2018
Hi,

In general the same approach is probably the way to go; however, if you are going to be doing a significant amount of migration work, it is worth getting the planning right.

Much of the configuration depends on how the storage is configured and whether or not you are using Oracle ASM. ASM is worth having if you have significant I/O on the database, but with a high-performance backend the additional management overhead may not be worthwhile.

If the storage is tiered and you think it will deal efficiently with the I/O, I'd go for presenting a single LUN; this makes future migrations, DR and failover really simple. The drawback is that if you have to run fsck or some other tool, it has to work through the whole LUN rather than just a couple of small partitions or LUNs. Where you have SAN replication, the single-LUN approach makes life easy: you won't miss anything if it's all in one place.

If performance is key, then you may want to take advantage of the tiering in a different way. There is much to consider when doing migration work like this; it is well worthwhile running a proof of concept (if you can) just to check things out.

Regards

Gull04
# 3  
Old 12-28-2018
Thanks for the reply. We will not be using ASM. I guess, coming from an AIX environment, I'm having a hard time wrapping my head around having one large LUN (approx. 1 TB) on one storage device for all the different types of data.

Currently on AIX I have 5 volume groups, each with its own set of LUNs/disks, allocated across 6 SAN devices on the back end.
# 4  
Old 12-28-2018
Hi dkmartin,

From the way you describe the layout, I'm going to assume that you're running something like SVC to manage the LUNs. Are the AIX environments LPARs, standalone machines, or something else?

Managing LUNs across six individual SAN devices is very costly in resources, along with the planning headache when it comes to new system builds, unless you have plenty of disk. I currently have two devices, a 1.5PB VMAX and a 250TB VNX, and that provides enough confusion.

Wherever possible our approach is to go for a single LUN, but we do have quite a number of systems with large numbers of LUNs (some in excess of 200) on AIX, Solaris and RHEL. We also have some very old versions on VMware, but they are slowly being decommissioned.

So in reality, if time permits, I would look at load and I/O on a system-by-system basis and, where the results indicate there would be a benefit, replicate the existing model. If the results show it would be OK, go for the single LUN. If you're using Metro or Global Mirror you may have to delve a bit deeper and look at inter-system bandwidth.
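If you want a quick feel for the I/O on each system before deciding, something as simple as the sysstat tools on the Linux side is usually enough (the interval and file below are only examples):

    # extended per-device statistics every 5 seconds
    iostat -xz 5
    # or review a day already collected by sar
    sar -d -f /var/log/sa/sa28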

Regards

Gull04
# 5  
Old 12-28-2018
So let me ask this: when presenting a single LUN to a Linux server, it would show up as one physical device, correct? If I am assuming correctly, would you then carve that LUN up into smaller partitions, giving you multiple disks to allocate to different volume groups/filesystems?
# 6  
Old 12-28-2018
For both performance and availability reasons I would tend to keep the setup of different LUNs for the different parts of the database (data, redo, archive, duplex), unless it is a small database and/or only crash recovery is required. You can create a different volume group for each set of disks, and definitely a separate VG for OS data.
It may be beneficial to spread the data over several LUNs in the data VG, with or without a small stripe, depending on your workload and the underlying SAN storage, to overcome bottlenecks caused by the sequential nature of SAN connectivity (Fibre Channel, iSCSI). An alternative to the latter may be to enlarge the queue depth; it all depends. The other Oracle VGs mostly see sequential access, where a single disk (two LUN paths) will probably suffice.
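As a rough sketch of the striped data VG (the multipath device names, sizes and stripe size are only placeholders; tune them to your storage):

    # data VG built from three LUNs, LV striped across all of them
    pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
    vgcreate oradatavg /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
    lvcreate -n data01 -L 500G -i 3 -I 256k oradatavg   # 3 stripes, 256 KiB stripe size
    mkfs.xfs /dev/oradatavg/data01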

If you don't use ASM, you would need to determine whether you want to use raw or cooked (filesystem-backed) logical volumes within the VGs.
You also need to set up multipathing, and then there is the backup and recovery method to choose, etc.
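For the multipathing part on RHEL 7, the stock device-mapper-multipath route would be a starting point (defaults only; your array vendor will have its own recommended settings):

    yum install device-mapper-multipath
    mpathconf --enable --with_multipathd y   # writes a default /etc/multipath.conf and starts multipathd
    multipath -ll                            # each LUN should appear once, with all of its paths listed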

# 7  
Old 12-28-2018
Hi dkmartin,

In essence that is correct, but there are a number of caveats; as I said earlier, if you have tiered storage then, depending on how it is configured, things change.

If we take the example of a single LUN, it is likely to be sliced up along the lines of what you expect usage to be, with a single slice assigned to each VG. You have to bear in mind that this single LUN may be spread across many spindles at the back end; this is invisible to the system.
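To make that concrete, carving one presented LUN into slices for the different VGs might look something like this (the sizes and names are purely illustrative, and the partition device names can appear as mpatha1 or mpathap1 depending on your multipath settings):

    # GPT partitions on the single LUN, one per volume group
    parted -s /dev/mapper/mpatha mklabel gpt
    parted -s /dev/mapper/mpatha mkpart os 1MiB 100GiB
    parted -s /dev/mapper/mpatha mkpart oradata 100GiB 800GiB
    parted -s /dev/mapper/mpatha mkpart oraredo 800GiB 100%
    pvcreate /dev/mapper/mpatha1 /dev/mapper/mpatha2 /dev/mapper/mpatha3
    vgcreate rootvg    /dev/mapper/mpatha1
    vgcreate oradatavg /dev/mapper/mpatha2
    vgcreate oraredovg /dev/mapper/mpatha3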

As an example, our VNX has three tiers of disk at the end of 8 x 16Gb aggregated fibres (4 on fabric A and 4 on fabric B); the breakdown of the system build is 30TB of SSD for tier 1, 90TB of 15K RPM SAS for tier 2 and 120TB of SATA for tier 3. This has been carved into disk groups which are then sub-divided into LUNs; however, the 1:3:4 ratio of disk assigned to each LUN and the intelligence of the VNX mean that the parts of the system experiencing significant I/O are moved dynamically to the tier 1 disk allocated to that particular LUN.

Each of the disk groups mentioned above comprises a number of physical disks, strangely enough in the same 1:3:4 ratio of SSD, SAS and SATA; generally twenty-something disks, given the sizes of the disks in the VNX.

So following the logic of your 1TB LUN, you would have roughly 125GB of SSD available (one eighth of the LUN, from the 1:3:4 ratio) that you do not have to manage; the system does it for you. It may well be that the storage technology you have is much older and does not have this functionality; in that case you'll have to provide a bit more information about the setup.

Regards

Gull04