Operating Systems > Linux > Red Hat - Advice on allocating SAN storage to a virtual database server on VMware
Post 303027928 by gull04, Friday 28th of December 2018, 09:26:22 AM
Hi dkmartin,

In essence that is correct, but there are a number of caveats - as I said earlier, if you have tiered storage then things change depending on how it is configured.

If we take the example of a single LUN, it is likely to be sliced up along the lines of what you expect usage to be, with a single slice assigned to each VG. You have to bear in mind that this single LUN may span many spindles at the back end - this is invisible to the system.
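On a Red Hat box, slicing a single presented LUN into per-VG pieces might look something like the sketch below. This is illustrative only - the device name (/dev/mapper/mpatha), VG names and sizes are all hypothetical, and it needs root on a host that actually has the LUN presented:

```shell
# Hypothetical example: one 1TB SAN LUN presented as the multipath
# device /dev/mapper/mpatha. Slice it with GPT partitions, one per VG.
parted -s /dev/mapper/mpatha mklabel gpt \
    mkpart data 1MiB 600GiB \
    mkpart logs 600GiB 800GiB \
    mkpart arch 800GiB 100%

# One slice per volume group, as described above. The spindle layout
# behind the LUN is handled entirely by the array and is invisible here.
pvcreate /dev/mapper/mpatha1 /dev/mapper/mpatha2 /dev/mapper/mpatha3
vgcreate datavg /dev/mapper/mpatha1
vgcreate logvg  /dev/mapper/mpatha2
vgcreate archvg /dev/mapper/mpatha3

# Carve logical volumes out of each VG as needed, for example:
lvcreate -n datalv -l 100%FREE datavg
```

Whether you partition one LUN or request several smaller LUNs is a judgement call; the array sees the I/O the same way either way.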

As an example, in our VNX, which has three tiers of disk behind 8*16Gb aggregated fibres (4 on fabric A and 4 on fabric B), the breakdown of the system build is 30TB of SSD for tier 1, 90TB of 15K RPM SAS for tier 2 and 120TB of SATA for tier 3. This has been carved into disk groups which are then sub-divided into LUNs. However, the 1:3:4 ratio of disk assigned to each LUN and the intelligence of the VNX mean that the parts of the system experiencing significant I/O are moved dynamically to the tier 1 disk allocated to that particular LUN.

Each of the disk groups mentioned above comprises a number of physical disks, strangely enough in the same 1:3:4 ratio of SSD, SAS and SATA - generally twenty-something disks, given the sizes of the disks in the VNX.

So following the logic of your 1TB LUN, you would have 125GB of SSD available that you do not have to manage - the system does it for you. It may well be that the storage technology you have is much older and does not have this functionality; in that case you'll have to provide a bit more information about the setup.
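To make that arithmetic explicit: with a 1:3:4 tier ratio, a LUN gets 1/8 of its capacity on SSD, 3/8 on SAS and 4/8 on SATA, so a 1Tb (1000GB here) LUN gets 125GB of SSD. A quick sketch - the ratio is from the VNX setup above, the helper function is just illustrative:

```python
def tier_split(lun_gb, ratio=(1, 3, 4)):
    """Split a LUN's capacity across tiers according to a fixed ratio.

    ratio is (SSD, SAS, SATA), as described for the VNX above.
    """
    total = sum(ratio)
    return tuple(lun_gb * part / total for part in ratio)

ssd, sas, sata = tier_split(1000)
print(ssd, sas, sata)  # 125.0 375.0 500.0
```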

Regards

Gull04
 

intro(7)						 Miscellaneous Information Manual						  intro(7)

NAME
     intro - introduction to device special files

DESCRIPTION
     This section describes the device special files (DSFs) and hardware paths used to access HP peripherals and device drivers. The names of the entries are generally derived from the type of device being described (disk, tape, terminal, and so on), not the names of the device special files or device drivers themselves. Characteristics of both the hardware device and the corresponding HP-UX device driver are discussed where applicable.

   Device Types
     Devices can be classified by two device access modes, raw and block. A raw or character-mode device, such as a line printer, transfers data in an unbuffered stream and uses a character device special file. A block-mode device, as the name implies, transfers data in blocks by means of the system's normal buffering mechanism. Block devices use block device special files and may have a character device interface too.

   Device File Naming Convention
     A device special file name becomes associated with a device when the file is created, either automatically by the special file daemon or explicitly with the mksf(1M) or mknod(1M) command. When creating device special files, it is recommended that the following standard naming convention be used:

     subdir   An optional subdirectory for the device class (for example, rdisk for raw device special files for disks, disk for block device special files for disks, rtape for raw tape devices).

     class    The class of device, such as disk or tape.

     #        The instance number assigned by the operating system to the device. Each class of device has its own set of instance numbers, so each combination of class and instance number refers to exactly one device.

     options  Further qualifiers, such as disk partition, tape density selection for a tape device, or surface specification for magneto-optical media.

     Naming conventions for each type of device are described in their respective manpage entries. Legacy mass storage device special files have a different naming convention that encodes the hardware path; this is described below.
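As an illustration of that convention (this helper is not an HP-UX tool, just a sketch), a persistent name such as /dev/disk/disk3 is simply /dev/ plus the subdirectory, class, instance number, and any options, concatenated:

```python
def dsf_name(subdir, cls, instance, options=""):
    """Compose a device special file name from the naming-convention
    parts described above: /dev/<subdir>/<class><instance><options>."""
    return f"/dev/{subdir}/{cls}{instance}{options}"

# Block and raw persistent disk DSFs for class "disk", instance 3:
print(dsf_name("disk", "disk", 3))   # /dev/disk/disk3
print(dsf_name("rdisk", "disk", 3))  # /dev/rdisk/disk3
```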
   Hardware Paths
     Hardware path information, as well as class names and instance numbers, can be derived from ioscan output; see ioscan(1M). There are three different types of paths to a device: the legacy hardware path, the lunpath hardware path, and the LUN hardware path. All three are numeric strings of hardware components, notated sequentially from the system bus address to the device address. Each number typically represents the location of a hardware component on the path to the device.

     The legacy hardware path is composed of a series of bus-nexus addresses separated by slash characters, leading to a host bus adapter (HBA). Beneath the HBA, additional address elements are separated by period characters. All the elements are represented in decimal. This is the format printed by default by the ioscan command for most devices.

     The lunpath hardware path is used for mass storage devices, also known as logical units (LUNs). It is identical in format to a legacy hardware path up to the HBA. Beneath the HBA, additional elements are printed in hexadecimal; the leading elements represent a transport-dependent target address, and the final element is a LUN address, which is a 64-bit representation of the LUN identifier reported by the target. This format is printed by the ioscan command when the -N option is specified. Note that the address elements beneath the HBA may not correspond to physical hardware addresses; instead, the lunpath hardware path should be considered a handle, not a physical path to the device.

     The LUN hardware path is a virtualized path that can represent multiple hardware paths to a single mass storage device. Instead of a series of bus-nexus addresses leading to the HBA, there is a virtual bus-nexus (known as the virtual root node) with an address of 64000. Addressing beneath that virtual root node consists of a virtual bus address and a virtual LUN identifier, delimited by slash characters.
     As a virtualized path, the LUN hardware path is only a handle to the LUN and does not represent the LUN's physical location; rather, it is linked to the LUN's World Wide Identifier (WWID). Thus, it remains the same if new physical paths to the device are added, if existing physical paths are removed, or if any of the physical paths changes. This LUN binding persists across reboots, but it is not guaranteed to persist across installations -- that is, reinstalling a system or installing an identically configured system may create a different set of LUN hardware paths.

   Device File Types (Mass Storage Devices)
     Mass storage devices, such as disk devices and tape devices, have two types of device files: persistent device special files and legacy device special files. Both can be used to access the mass storage device independently, and both can coexist on the same system.

     A persistent device special file is associated with a LUN hardware path, and thus transparently supports agile addressing and multipathing. In other words, a persistent device special file is unchanged if the LUN is moved from one HBA to another, moved from one switch/hub port to another, presented via a different target port to the host, or configured with multiple hardware paths. Like the LUN hardware path, the binding of device special file to device persists across reboots, but is not guaranteed to persist across installations. The persistent device special file name follows the standard naming convention above, and the minor number contains no hardware path information.

     A legacy device special file is locked to a particular physical hardware path, and does not support agile addressing. Such a device special file contains hardware path information such as SCSI bus, target, and LUN in the device file name and minor number.
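To make the legacy encoding concrete, here is a small Python sketch (not an HP-UX utility) that unpacks a legacy c#t#d# name such as c0t6d0 into the interface card instance, target address, and unit number it encodes:

```python
import re

def parse_legacy(name):
    """Parse a legacy device name of the form c<card>t<target>d<unit>,
    with an optional section suffix (e.g. s2), and return the parts.

    Card instance is validated against its documented 0-255 range;
    target (typically 0-15) and unit (typically 0-7) are only typical
    ranges, so they are not hard-checked here.
    """
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)(s\d+)?", name)
    if not m:
        raise ValueError(f"not a legacy c#t#d# name: {name}")
    card, target, unit = (int(m.group(i)) for i in (1, 2, 3))
    if not 0 <= card <= 255:
        raise ValueError("interface card instance out of range")
    return card, target, unit

print(parse_legacy("c0t6d0"))  # (0, 6, 0)
```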
     Specifically, the class and instance portions of a legacy device special file name indicate hardware path information in the form c#t#d#, as follows:

     c#  The instance number assigned by the operating system to the interface card, in decimal, with a range of 0 to 255. There is no direct correlation between instance number and physical slot number.

     t#  The target address on a remote bus (for example, a SCSI address), in decimal, with a typical range of 0 to 15.

     d#  The device unit number at the target address (for example, the LUN in a SCSI device), in decimal, with a typical range of 0 to 7.

     Note that the legacy naming convention supports a maximum of 256 external buses and a maximum of 32768 LUNs. Systems with mass storage devices beyond those limits will be unable to address them using legacy naming conventions. Legacy device special files are deprecated, and their support will be removed in a future release of HP-UX.

   Viewing Mass Storage
     With the advent of persistent and legacy device special files, commands dealing with mass storage can choose between two views of the I/O system. A command presenting the legacy view uses legacy device special files and legacy hardware paths. The agile view uses persistent device special files, lunpath hardware paths, and LUN hardware paths. Depending on the command, both views may be presented, or the choice of view may be controlled by a command option or an environment variable. For example, the ioscan command shows the legacy view by default, and switches to the agile view if the -N option is specified.

EXAMPLES
   Example 1
     The following is an example of a persistent device special file name:

          /dev/disk/disk3

     where disk indicates block disk access and disk3 indicates device class disk and instance number 3. The absence of options indicates access to the entire disk; see disk(7) for details.

   Example 2
     The following is an example of a legacy disk device special file name:

          /dev/dsk/c0t6d0s2

     where dsk indicates block disk access and c0t6d0 indicates logical disk access at interface card instance 0, target address 6, and unit 0. The s2 indicates access to section 2 of the disk.

   Example 3
     The following is an example of a persistent tape device special file name, where rtape indicates raw magnetic tape, tape4 indicates tape device instance number 4, and a QIC150 suffix identifies the tape format; see mt(7) for details.

WARNINGS
     The support of legacy device special files is deprecated and will be removed in a future release of HP-UX.

SEE ALSO
     insf(1M), ioscan(1M), lssf(1M), mksf(1M), mknod(1M), hier(5), introduction(9).
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.