Advice on allocating SAN storage to a virtual database server on VMware
Post 303027927 by Scrutinizer on Friday 28th of December 2018 09:23:31 AM
For both performance and availability reasons I would tend to keep the setup of different LUNs for different parts of the database (data, redo, archive, duplex), unless it is a small database and/or only crash recovery is required. You can create a different volume group for each set of disks. Definitely also use a separate VG for the OS data.
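
For illustration, a minimal sketch of how such a layout could be carved out with LVM on Linux. The multipath device names and VG names below are placeholders; substitute the names from your own multipath -ll output:

Code:
# Placeholder multipath device names -- substitute your own
pvcreate /dev/mapper/mpatha /dev/mapper/mpathb    # data LUNs
pvcreate /dev/mapper/mpathc                       # redo LUN
pvcreate /dev/mapper/mpathd                       # archive LUN

vgcreate vg_oradata /dev/mapper/mpatha /dev/mapper/mpathb
vgcreate vg_oraredo /dev/mapper/mpathc
vgcreate vg_oraarch /dev/mapper/mpathd
# The OS remains in its own, separate VG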
It may be beneficial to spread the data over several LUNs in the data VG, with or without a small stripe; whether that helps depends on your workload and the underlying SAN storage, but it can overcome bottlenecks due to the sequential nature of SAN connectivity (Fibre Channel, iSCSI). An alternative to the latter may be to enlarge the queue depth; it all depends. The other Oracle VGs see mostly sequential access, where a single disk (with two LUN paths) will probably suffice.
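
As an illustration of the striping idea, something along these lines could be used; the stripe count must match the number of LUNs (PVs) in the data VG, and the 64k stripe size is an assumption to be tuned for your workload. The queue depth path is the generic SCSI sysfs location, and suitable values depend on your HBA driver and array:

Code:
# Stripe a data LV across the two LUNs in vg_oradata (64k stripe size assumed)
lvcreate --name lv_data --stripes 2 --stripesize 64k --extents 100%FREE vg_oradata

# Alternative: inspect/enlarge the queue depth of the underlying SCSI devices
cat /sys/block/sdb/device/queue_depth
echo 64 > /sys/block/sdb/device/queue_depth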

If you don't use ASM, you would need to determine whether you want to use raw or cooked (filesystem-based) logical volumes within the VGs.
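
A hypothetical example of the "cooked" variant (filesystem type, mount point and mount options depend on your platform and workload); for raw access you would skip the mkfs/mount and hand the LV to the database directly:

Code:
mkfs.xfs /dev/vg_oradata/lv_data
mkdir -p /u02/oradata
mount -o noatime /dev/vg_oradata/lv_data /u02/oradata
# Raw alternative: point the database at /dev/vg_oradata/lv_data directly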
You need to set up multipathing, and then there is the backup and recovery method you need to choose, etc.
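
A minimal multipathing sketch for a Red Hat style system; the defaults written by mpathconf should be reviewed against your array vendor's recommendations:

Code:
yum install -y device-mapper-multipath
mpathconf --enable --with_multipathd y    # creates a default /etc/multipath.conf
multipath -ll                             # verify each LUN shows the expected paths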

Last edited by Scrutinizer; 12-28-2018 at 10:31 AM..
 
