Operating Systems: Solaris
Posted by myjess on Wednesday, 28 January 2009, 08:22 AM
6120 Array. Additional physical Disks and ZFS

Hi;

I have 4 new disks in a 6120 array attached to a Sun server running ZFS.

There are already two virtual disks on the array, each a 3-disk RAID 5 vdisk.

I need to add two more disks to each vdisk, making each a 5-disk RAID 5 vdisk.

If ZFS already has the original vdisk as a LUN in one of its pools, how does it know I have added extra storage capacity to the LUN it sees from the array?

This is not adding two new disks directly to the ZFS pool, but rather extending the hardware RAID 5 LUN in the background.

Thanks.
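For reference, ZFS does not necessarily grow a vdev by itself when the underlying LUN grows. A minimal sketch, assuming a Solaris release that has the zpool autoexpand property; the pool and device names below are placeholders:

    zpool set autoexpand=on tank     # grow the pool automatically when its LUN grows
    zpool online -e tank c2t1d0      # or expand one device explicitly after the rebuild
    zpool list tank                  # SIZE should now reflect the extra capacity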
 

10 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

How to assign the same mount point for file systems mounted on physical disks

We have 6 hard disks attached to the hardware. Two of these are 9 GB each. Now I want to combine both in such a way that I see a single combined entry in the output of df -k. The steps I follow are: 1. Create partitions on the hard disks (using format). 2. Run newfs -v for... (6 Replies)
Discussion started by: Hitesh Shah
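One way to present two disks as a single filesystem is a Solaris Volume Manager concatenation. A sketch with hypothetical device names, assuming a small slice is free for the state database:

    metadb -a -f -c 3 c0t0d0s7              # create state database replicas
    metainit d10 2 1 c1t0d0s2 1 c1t1d0s2    # concatenate the two disks into d10
    newfs /dev/md/rdsk/d10                  # one filesystem spanning both disks
    mount /dev/md/dsk/d10 /data             # df -k now shows a single entry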

2. UNIX for Dummies Questions & Answers

monitoring the state of physical disks

Hello, I would like to know if there are commands that can be used to monitor the state of physical disks (including RAID) under the AIX and Sun UNIX platforms. Thank you in advance. (4 Replies)
Discussion started by: VeroL
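A few standard commands on each platform, as a hedged starting point rather than a complete answer:

    # AIX
    lsdev -Cc disk      # list disks and their Available/Defined state
    errpt -d H          # hardware entries in the error log, including disk faults
    # Solaris
    iostat -En          # per-device soft/hard/transport error counters
    cfgadm -al          # attachment-point status for disks and controllers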

3. UNIX for Dummies Questions & Answers

Veritas (physical disk monitoring)

Can anyone tell me what Veritas is? Is it free? What is it used for exactly? Thank you in advance. (1 Reply)
Discussion started by: VeroL

4. Filesystems, Disks and Memory

How do I check for physical damage on Linux hard disks?

How do I check for physical damage on Red Hat Linux hard disks? I tried smartctl /dev/sdb but it came back so fast saying it was OK. Is there a better Linux command to check for bad sectors or physical damage? Is there a good way, such as with parted or something else? I normally in HP... (4 Replies)
Discussion started by: taekwondo
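A quick smartctl run only reads the drive's self-assessment; a surface scan takes much longer. A sketch using read-only options and /dev/sdb as in the post:

    smartctl -a /dev/sdb            # full SMART dump; watch the reallocated sector count
    smartctl -t long /dev/sdb       # start an extended surface self-test
    smartctl -l selftest /dev/sdb   # read the results once the test finishes
    badblocks -sv /dev/sdb          # read-only scan of every sector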

5. AIX

How to determine the physical volume of the disks

This is the report I got running the command prtconf, but I would like to know the capacity of the disks installed in our Power 6 server with AIX:

System Model: IBM,7778-23X
Machine Serial Number: 1066D5A
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6... (6 Replies)
Discussion started by: cucosss
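prtconf summarizes the system but not individual disk sizes; per-disk AIX commands look like this (a sketch, using hdisk0 as an example name):

    bootinfo -s hdisk0               # disk size in MB
    getconf DISK_SIZE /dev/hdisk0    # the same size via getconf
    lscfg -vl hdisk0                 # vendor, model, and FRU details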

6. Solaris

ZFS : Can arc size value exceed Physical RAM ?

Hi, kstat -p -m zfs -n arcstats -s size returns:

zfs:0:arcstats:size    8177310584

This value is approximately 7.61 GB, but my physical memory size is only 6144 MB. Can this happen? If yes, then how can I find free memory on the system? BTW, I ran the kstat commands from a non... (2 Replies)
Discussion started by: sapre_amit
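One plausible explanation, given the truncated mention of a non-global zone: kstat there still reports the global host's ARC statistics, so the figure can exceed the zone's own memory. Cross-checks from the global zone (a hedged sketch):

    prtconf | grep Memory                    # physical memory installed
    kstat -p unix:0:system_pages:freemem     # free pages; multiply by the page size
    echo ::memstat | mdb -k                  # kernel memory breakdown (root only)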

7. Red Hat

Create volume using LVM over 2 physical disks

I wanted to know how we can combine volumes over 2 physical drives.

# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1 ... (16 Replies)
Discussion started by: ikn3
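A minimal LVM sketch spanning both drives; the partition names are placeholders and should be matched to the actual fdisk output:

    pvcreate /dev/sda3 /dev/sdb1              # initialize both partitions as PVs
    vgcreate vgdata /dev/sda3 /dev/sdb1       # one volume group across both disks
    lvcreate -n lvdata -l 100%FREE vgdata     # a single LV over all free extents
    mkfs.ext4 /dev/vgdata/lvdata              # filesystem that spans both drives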

8. Solaris

ZFS rpool physical disk move

I'd like to finish setting up this system and then move the secondary or primary disk to another system with the exact same hardware. I've done things like this in the past with UFS and DiskSuite mirroring just fine, but I have yet to do it with a ZFS root pool mirror. Are there any... (1 Reply)
Discussion started by: Metasin
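On releases that support it, zpool split can turn one half of the mirror into a standalone pool for the move. A sketch with placeholder device names; a bootable root pool also needs boot blocks installed on the target disk:

    zpool split rpool rpool2 c0t1d0s0    # detach one mirror half into a new, exported pool
    # physically move the disk, then on the identical system:
    zpool import rpool2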

9. Solaris

Please help with Sun Array 6120

Hi, I am on a Sun StorEdge 6120 disk array. When I do port listmap, I get the output below. Does the failover status mean I have to take action? I have never used a Sun array before. Please advise.

port  targetid  addr_type  lun  volume  owner  access
u1p1  1         hard       0    v0      ... (0 Replies)
Discussion started by: samnyc

10. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do:
1. On the primary, export 2 LUNs (add-vdsdev).
2. On the primary, assign these disks to the ldom in question (add-vdisk).
3. On the ldom, create a mirrored zpool from these two disks.
On one server (which is older) we have: on the primary, create a mirrored zpool from the two LUNs... (4 Replies)
Discussion started by: psychocandy
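For reference, the per-LUN export described above looks roughly like this on the primary; the service, domain, and device names are placeholders:

    ldm add-vdsdev /dev/dsk/c2t0d0s2 lun0@primary-vds0
    ldm add-vdsdev /dev/dsk/c2t1d0s2 lun1@primary-vds0
    ldm add-vdisk vdisk0 lun0@primary-vds0 guest1
    ldm add-vdisk vdisk1 lun1@primary-vds0 guest1
    # then inside the guest:
    zpool create data mirror c0d1 c0d2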
raidtab(5)							File Formats Manual							raidtab(5)

NAME
       raidtab - configuration file for md (RAID) devices

DESCRIPTION
       /etc/raidtab is the default configuration file for the raid tools
       (raidstart and company). It defines how RAID devices are configured
       on a system.

FORMAT
       /etc/raidtab has multiple sections, one for each md device which is
       being configured. Each section begins with the raiddev keyword. The
       order of items in the file is important. Later raiddev entries can
       use earlier ones (which allows RAID-10, for example), and the
       parsing code isn't overly bright, so be sure to follow the ordering
       in this man page for best results.

       Here's a sample md configuration file:

       #
       # sample raiddev configuration file
       # 'old' RAID0 array created with mdtools.
       #
       raiddev /dev/md0
               raid-level              0
               nr-raid-disks           2
               persistent-superblock   0
               chunk-size              8
               device                  /dev/hda1
               raid-disk               0
               device                  /dev/hdb1
               raid-disk               1

       raiddev /dev/md1
               raid-level              5
               nr-raid-disks           3
               nr-spare-disks          1
               persistent-superblock   1
               parity-algorithm        left-symmetric
               device                  /dev/sda1
               raid-disk               0
               device                  /dev/sdb1
               raid-disk               1
               device                  /dev/sdc1
               raid-disk               2
               device                  /dev/sdd1
               spare-disk              0

       Here is more information on the directives which are in raid
       configuration files; the options are listed in this file in the
       same order they should appear in the actual configuration file.

       raiddev device
              This introduces the configuration section for the stated
              device.

       nr-raid-disks count
              Number of raid devices in the array; there should be count
              raid-disk entries later in the file. (The current maximum
              limit for RAID devices, including spares, is 12 disks. This
              limit is already extended to 256 disks in experimental
              patches.)

       nr-spare-disks count
              Number of spare devices in the array; there should be count
              spare-disk entries later in the file. Spare disks may only
              be used with RAID4 and RAID5, and allow the kernel to
              automatically build new RAID disks as needed. It is also
              possible to add/remove spares at runtime via
              raidhotadd/raidhotremove; care has to be taken that the
              /etc/raidtab configuration exactly follows the actual
              configuration of the array. (raidhotadd/raidhotremove does
              not change the configuration file.)

       persistent-superblock 0/1
              Newly created RAID arrays should use a persistent
              superblock. A persistent superblock is a small disk area
              allocated at the end of each RAID device; this helps the
              kernel to safely detect RAID devices even if disks have been
              moved between SCSI controllers. It can be used for
              RAID0/LINEAR arrays too, to protect against accidental disk
              mixups. (The kernel will either correctly reorder disks, or
              will refuse to start up an array if something has happened
              to any member disk. Of course for the 'fail-safe' RAID
              variants (RAID1/RAID5) spares are activated if any disk
              fails.) Every member disk/partition/device has a superblock,
              which carries all information necessary to start up the
              whole array. (For autodetection to work, all the 'member'
              RAID partitions should be marked type 0xfd via fdisk.) The
              superblock is not visible in the final RAID array and cannot
              be destroyed accidentally through usage of the md device
              files; all RAID data content is available for filesystem
              use.

       parity-algorithm which
              The parity algorithm to use with RAID5. It must be one of
              left-asymmetric, right-asymmetric, left-symmetric, or
              right-symmetric. left-symmetric is the one that offers
              maximum performance on typical disks with rotating platters.

       chunk-size size
              Sets the stripe size to size kilobytes. Has to be a power of
              2 and has a compile-time maximum of 4M. (MAX_CHUNK_SIZE in
              the kernel driver.) Typical values are anything from 4k to
              128k; the best value should be determined by experimenting
              on a given array. A lot depends on the SCSI and disk
              configuration.
       device devpath
              Adds the device devpath to the list of devices which
              comprise the raid system. Note that this command must be
              followed by one of raid-disk, spare-disk, or parity-disk.
              Also note that it's possible to recursively define RAID
              arrays, i.e. to set up a RAID5 array of RAID5 arrays (thus
              achieving two-disk failure protection, at the price of more
              disk space spent on RAID5 checksum blocks).

       raid-disk index
              The most recently defined device is inserted at position
              index in the raid array.

       spare-disk index
              The most recently defined device is inserted at position
              index in the spare disk array.

       parity-disk index
              The most recently defined device is moved to the end of the
              raid array, which forces it to be used for parity.

       failed-disk index
              The most recently defined device is inserted at position
              index in the raid array as a failed device. This allows you
              to create raid 1/4/5 devices in degraded mode, which is
              useful for installation. Don't use the smallest device in an
              array for this; put this after the raid-disk definitions!

NOTES
       The raidtools are derived from the md-tools and raidtools packages,
       which were originally written by Marc Zyngier, Miguel de Icaza,
       Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
       raidstart(8), raid0run(8), mkraid(8), raidstop(8)
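A minimal usage sketch for the tools this file drives (assumptions: the raidtools package is installed and /etc/raidtab matches the sample above; mkraid destroys any existing data on the member disks):

    mkraid /dev/md1        # initialize the RAID5 array defined in /etc/raidtab
    raidstart /dev/md0     # start the old RAID0 array
    cat /proc/mdstat       # confirm both arrays are active
    raidstop /dev/md0      # stop an array cleanly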