Advice on allocating SAN storage to a virtual database server on VMware
Post 303027927 by Scrutinizer, Friday 28th of December 2018, 09:23 AM
For both performance and availability reasons I would tend to set up different LUNs for the different parts of the database (data, redo, archive, duplex), unless it is a small database and/or only crash recovery is required. You can create a separate volume group for each set of disks, and definitely a separate VG for the OS data.
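For example, on a Red Hat guest that could look roughly like the sketch below; the multipath device and volume group names are just made up for illustration:

    # one PV per LUN presented by the SAN (hypothetical multipath device names)
    pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
    vgcreate vg_oradata /dev/mapper/mpatha /dev/mapper/mpathb    # data files
    vgcreate vg_oraredo /dev/mapper/mpathc                       # redo logs
    vgcreate vg_oraarch /dev/mapper/mpathd                       # archive logs
    # the OS stays in its own volume group on a separate LUN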
It may be beneficial to spread the data over several LUNs in the data VG, with or without a small stripe; whether that helps depends on your workload and the underlying SAN storage, but it can overcome bottlenecks due to the sequential nature of SAN connectivity (Fibre Channel, iSCSI). An alternative to the latter may be to enlarge the queue depth; it all depends. The other Oracle VGs see mostly sequential access, where a single disk (with two LUN paths) will probably suffice.
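To illustrate the two alternatives with the hypothetical names from above (the right stripe size and queue depth depend entirely on your array and workload):

    # a small stripe over the two data LUNs
    lvcreate -n lv_data -i 2 -I 1m -l 100%FREE vg_oradata

    # or: check and, if the array allows it, raise the queue depth of a path device
    cat /sys/block/sdc/device/queue_depth
    echo 64 > /sys/block/sdc/device/queue_depth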

If you don't use ASM, you will need to decide whether to use raw or cooked (with a file system) logical volumes within the VGs.
You also need to set up multipathing, and then there is the backup and recovery method you need to choose, etc.
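A minimal multipath setup on Red Hat could look like this (defaults only; check your storage vendor's recommended multipath.conf settings):

    yum install -y device-mapper-multipath
    mpathconf --enable --with_multipathd y     # writes a default /etc/multipath.conf and starts multipathd
    multipath -ll                              # verify that every LUN shows the expected number of paths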

Last edited by Scrutinizer; 12-28-2018 at 10:31 AM.
 

lvm(7)							 Miscellaneous Information Manual						    lvm(7)

NAME
       lvm - Logical Volume Manager (LVM)

DESCRIPTION
       The Logical Volume Manager (LVM) is a subsystem for managing disk space. The HP LVM subsystem offers value-added features, such as
       mirroring (with the optional HP MirrorDisk/UX software), high availability (with the optional HP Serviceguard software), and
       striping, that enhance availability and performance.

       Unlike earlier arrangements where disks were divided into fixed-sized sections, LVM allows the user to consider the disks, also known
       as physical volumes, as a pool (or volume) of data storage, consisting of equal-sized extents. The size of an extent can vary from
       1 MB to 256 MB.

       An LVM system consists of arbitrary groupings of physical volumes, organized into volume groups. A volume group can consist of one or
       more physical volumes, and there can be more than one volume group in the system. Once created, the volume group, and not the disk,
       is the basic unit of data storage. Thus, whereas earlier one would move disks from one system to another, with LVM one moves a volume
       group from one system to another. For this reason it is often convenient to have multiple volume groups on a system.

       Volume groups can be subdivided into virtual disks, called logical volumes. A logical volume can span a number of physical volumes or
       represent only a portion of one physical volume. The pool of disk space that is represented by a volume group can be apportioned into
       logical volumes of various sizes. The size of a logical volume is determined by its number of extents. Once created, logical volumes
       can be treated just like disk partitions: they can be assigned to file systems, used as swap or dump devices, or used for raw access.

   Commands
       LVM information can be created, displayed, and manipulated with the following commands:

       lvchange       Change logical volume characteristics
       lvcreate       Stripe, create logical volume in volume group
       lvdisplay      Display information about logical volumes
       lvextend       Increase space, increase mirrors for logical volume
       lvlnboot       Prepare logical volume to be root, primary swap, or dump volume
       lvmadm         Display limits associated with a volume group version
       lvreduce       Decrease number of physical extents allocated to logical volume
       lvremove       Remove one or more logical volumes from volume group
       lvrmboot       Remove logical volume link to root, primary swap, or dump volume
       pvchange       Change characteristics of physical volume in volume group
       pvcreate       Create physical volume for use in volume group
       pvdisplay      Display information about physical volumes within volume group
       pvmove         Move allocated physical extents from one physical volume to other physical volumes
       vgcfgbackup    Create or update volume group configuration backup file
       vgcfgrestore   Display or restore volume group configuration from backup file
       vgchange       Set volume group availability
       vgcreate       Create volume group
       vgdisplay      Display information about volume groups
       vgexport       Export a volume group and its associated logical volumes
       vgextend       Extend a volume group by adding physical volumes
       vgimport       Import a volume group onto the system
       vgmodify       Modify volume group attributes
       vgmove         Move data from old set of disks to a new set of disks
       vgreduce       Remove physical volumes from a volume group
       vgremove       Remove volume group definition from the system
       vgscan         Scan physical volumes for volume groups
       vgversion      Migrate a volume group from one volume group version to another

       The following commands are also available if the HP MirrorDisk/UX software is installed:

       lvmerge        Merge two logical volumes into one logical volume
       lvsplit        Split mirrored logical volume into two logical volumes
       lvsync         Synchronize stale mirrors in logical volumes
       vgsync         Synchronize stale logical volume mirrors in volume groups
   Device Special Files
       Starting with HP-UX 11i Version 3, the Mass Storage Stack supports two naming conventions for the device special files used to
       identify devices (see intro(7)). Devices can be represented using:

       o  Persistent device special files, or
       o  Legacy device special file names.

       While LVM supports the use of both conventions within the same volume group, the examples shown in the LVM man pages all use the
       legacy device special file convention.

   Alternate Links (PVLinks)
       In this release of HP-UX, LVM continues to support Alternate Links to a device to allow continued access to the device if the primary
       link fails. This multiple link or multipath solution increases data availability, but still does not allow the use of multiple paths
       simultaneously. A new feature introduced in the Mass Storage Subsystem on HP-UX 11i Version 3 supports multiple paths to a device and
       allows simultaneous access to these paths; the Mass Storage Subsystem balances the I/O load across the valid paths. Native
       multipathing is the default unless scsimgr is used to enable legacy multipathing and the active path is a legacy device special file.
       See scsimgr(1M) for details.

       Even though the Mass Storage Subsystem supports 32 paths per physical volume on this version of HP-UX, LVM does not support more than
       eight paths to any physical volume. As a result, commands that add paths to a volume group will not succeed in adding more than eight
       paths per physical volume, and no more than eight paths per physical volume can be recorded in the LVM configuration file. If users
       want to use a specific path other than these eight, they must first remove one of the alternate paths from the volume group and then
       add the desired path.

       It is no longer required or recommended to configure LVM with alternate links. However, it is possible to maintain the traditional
       LVM behavior. To do so, both of the following criteria must be met:

       o  Only the legacy device special file naming convention is used in the volume group configuration.
       o  The scsimgr command is used to enable the legacy multipath behavior for each physical volume in the volume group.

   LVM's Volume Group Versions 1.0, 2.0, and 2.1
       LVM now has three different volume group versions: 1.0, 2.0, and 2.1. The original version of an LVM volume group is 1.0. Version 2.0
       and 2.1 volume groups allow LVM to raise many of the limits constraining the size of volume groups, logical volumes, and so on. To
       see a comparison of limits for volume group versions 1.0, 2.0, and 2.1, use the lvmadm command (see lvmadm(1M)). The procedures and
       command syntax for managing version 1.0 volume groups are unchanged.

       To take advantage of the improvements in volume groups version 2.0 or higher, a volume group is declared to be version 2.0 or 2.1 at
       creation time using the new version option of the vgcreate command. The vgcreate command will create the volume group directory and
       group file if they do not already exist; this is independent of the volume group version. There are several differences in the
       procedure for creating a volume group that is to be version 2.0 or higher:

       o  The volume group directory and group file will have a different major/minor number combination. See vgcreate(1M) for details.
       o  It is no longer necessary to set maximums for physical volumes, logical volumes, or extents per physical volume. Instead the
          command expects a maximum size for the volume group. This size is the sum of the user data space on all physical volumes assigned
          to the volume group.
       o  Extent size is now a required parameter. For volume groups version 1.0, the default extent size is 4 MB. For volume groups version
          2.0 or higher, the extent size must be specified.

       Volume group versions 2.0 and higher do not support root, boot, swap, or dump volumes, nor do they support spare physical volumes.
       The maximum number of version 1.0 volume groups per system is 256. The maximum number of version 2.0 volume groups per system is 512,
       and the maximum combined number of 2.0 and 2.1 volume groups is 2048. The vgversion(1M) command allows migration between any two
       supported volume group versions, with the exception of moving back to version 1.0.
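       For example, the limits of each volume group version can be compared before deciding which to use; a quick sketch, assuming lvmadm's
       -t (tabular) and -V (version) options:

           lvmadm -t            # display the limits for all supported volume group versions
           lvmadm -t -V 2.1     # display the limits for version 2.1 only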
   Extent Sizing for Volume Group Version 2.0 and Higher
       In volume groups version 1.0, LVM metadata is required to fit into a single physical extent. If large values for maximum physical
       volumes, logical volumes, and extents per physical volume were chosen, then a large extent size is required. In volume groups version
       2.0 and higher, metadata is not restricted to an extent. There is an implementation limit on the number of extents in a volume group
       (see lvmadm(1M)), so the larger the extent size, the larger the maximum volume group size that can be supported. The amount of space
       taken up on each physical volume by LVM metadata depends on the physical extent size and the maximum volume group size specified when
       the volume group is created. LVM metadata for volume groups version 2.0 and higher may consume more space than on volume groups
       version 1.0. A new command option shows the relationship between extent size and maximum volume group size.

       A smaller extent size allows finer granularity in assigning space to logical volumes. It also means that smaller blocks of data are
       marked stale when I/Os to a mirror copy fail, and for small logical and physical volumes it may result in less wasted space. Since
       there are limits on the number of extents in a logical or physical volume, a small extent size limits the total size of a logical or
       physical volume. Conversely, a larger extent size allows creation of larger logical volumes and use of larger physical volumes.
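       As a rough illustration of the extent size / maximum volume group size relationship (the extent count limit used here is purely
       hypothetical; lvmadm(1M) reports the actual limits for each version):

           # maximum VG size is roughly (maximum extents per VG) x (extent size)
           max_extents=16777216                 # hypothetical per-VG extent limit for this sketch
           extent_size_mb=32
           echo "approx. max VG size: $(( max_extents * extent_size_mb / 1024 / 1024 )) TB"    # prints 512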
   Auto Boot Disk Migration
       This feature allows users to configure how LVM handles situations where the physical location of the boot disk changes between
       reboots. This can occur during hardware configuration changes or if boot disk images are cloned. In those situations, Auto Boot Disk
       Migration automatically updates stale configuration entries for the root volume group in the LVM configuration file and the Boot
       Data Reserved Areas for each bootable physical volume in the root volume group. The configuration files are synchronized with the
       information from the kernel at boot time.

       The Auto Boot Disk Migration feature (defined by the AUTO_BOOT_MIGRATE flag in the LVM configuration file) is turned on by default.
       When the feature is turned on, any mismatch between the configuration entries and the on-disk metadata structures for the root volume
       group in the kernel is automatically fixed during the boot process. The feature can be turned off by editing the configuration file
       and setting AUTO_BOOT_MIGRATE to 0. In that case, users need to check the log file after boot and follow the instructions logged
       there, if any.

EXAMPLES
       The basic steps to begin using LVM are as follows:

       o  Identify the disks to be used for LVM.
       o  Create an LVM data structure on each identified disk (see pvcreate(1M)).
       o  Collect the physical volumes to form a new volume group (see vgcreate(1M)).
       o  Create logical volumes from the space in the volume group (see lvcreate(1M)).
       o  Use each logical volume as if it were a disk section (create a file system, or use it for raw access).

       To configure a disk as part of a new version 1.0 volume group: first, initialize the disk for LVM with the pvcreate command. Then,
       create the group pseudo device file that is used by the LVM subsystem; the volume group directory and group file are created
       automatically, but these files can optionally be created before doing the vgcreate. The minor number of the group file must be unique
       among all the volume groups on the system. Create the volume group, containing the physical volume, with the vgcreate command, and
       view information about the newly created volume group with the vgdisplay command.

       Next, create a logical volume (for example, of size 100 MB) on this volume group with the lvcreate command. This creates two device
       files for the logical volume: a block device file and a character (raw) device file. Information about the newly created logical
       volume can be viewed with the lvdisplay command. Any operation allowed on a disk partition is allowed on the logical volume; thus the
       logical volume can be used to hold a file system.

       To use a volume group version 2.0 or higher in the above example, only a few changes are required. The volume group directory and
       group file are created automatically in all supported versions; only the vgcreate command line changes, for example to create the
       volume group with an extent size of 32 megabytes and a maximum volume group size of 32 terabytes (see vgcreate(1M)).
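       Putting the steps together, a sketch for a version 1.0 volume group (device names, the group file minor number, and sizes are
       hypothetical), followed by an equivalent version 2.0 creation assuming the -V, -s and -S options of vgcreate(1M):

           pvcreate /dev/rdsk/c0t1d0                  # initialize the disk as an LVM physical volume
           mkdir /dev/vg01
           mknod /dev/vg01/group c 64 0x010000        # group file; the minor number must be unique per volume group
           vgcreate /dev/vg01 /dev/dsk/c0t1d0         # create the volume group on the physical volume
           vgdisplay -v /dev/vg01                     # inspect the new volume group
           lvcreate -L 100 -n lvol_data /dev/vg01     # 100 MB logical volume
           lvdisplay /dev/vg01/lvol_data              # inspect the new logical volume
           newfs -F vxfs /dev/vg01/rlvol_data         # create a file system on the raw device file

           # version 2.0 volume group: 32 MB extents, 32 TB maximum volume group size
           vgcreate -V 2.0 -s 32m -S 32t /dev/vg02 /dev/disk/disk4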
SEE ALSO
       lvchange(1M), lvcreate(1M), lvdisplay(1M), lvextend(1M), lvlnboot(1M), lvmadm(1M), lvreduce(1M), lvremove(1M), lvrmboot(1M),
       pvchange(1M), pvcreate(1M), pvdisplay(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgchange(1M), vgcreate(1M), vgdisplay(1M),
       vgexport(1M), vgextend(1M), vgimport(1M), vgmodify(1M), vgmove(1M), vgreduce(1M), vgremove(1M), vgscan(1M), vgversion(1M), intro(7).

       If HP MirrorDisk/UX is installed: lvmerge(1M), lvsplit(1M), lvsync(1M), vgsync(1M).

       If HP Serviceguard is installed: cmcheckconf(1M), cmquerycl(1M).