Operating Systems > Solaris — Resize LUNs and ZFS pool on Sun Cluster
Post 302285251 by houston, Sunday 8th of February 2009, 03:42 AM
Another thought, but try it first with a file-backed test pool before implementing it on the live system:
- Fail (detach) one side of the mirror, say c3t600A0B80001138280000A63C48183A82d0.
- Delete this LUN on the disk unit and recreate it with the bigger size.
- Attach the newly created LUN back to the same zpool, so it mirrors the surviving device.
- Wait until it finishes resilvering.
- Fail (detach) the other LUN (c3t600A0B800011384A00005A5548183AF1d0).
- Delete this LUN on the disk unit and recreate it with the bigger size.
- Attach that LUN back to the same pool and let it resilver.
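The steps above can be sketched as a zpool command sequence. This is only a sketch: the pool name ("mypool") is an assumption, the array-side LUN delete/recreate is vendor-specific and shown as a comment, and on the Solaris releases of that era you may need an export/import (or, on later releases, the autoexpand property / zpool online -e) before the pool reports the new size.

```shell
#!/bin/sh
# Hedged sketch of the mirror-grow procedure. Run each phase manually
# and verify "zpool status" between steps; do NOT script this blindly.

POOL=mypool    # assumed pool name -- substitute your own
LUN1=c3t600A0B80001138280000A63C48183A82d0
LUN2=c3t600A0B800011384A00005A5548183AF1d0

# Optional dry run with file-backed vdevs, as suggested above:
#   mkfile 100m /var/tmp/d1 /var/tmp/d2
#   zpool create testpool mirror /var/tmp/d1 /var/tmp/d2

# 1. Detach one side of the mirror.
zpool detach $POOL $LUN1

# 2. On the storage array: delete the LUN, recreate it larger,
#    and rescan so Solaris sees the new size (array-specific step).

# 3. Re-attach the bigger LUN so it mirrors the surviving device.
zpool attach $POOL $LUN2 $LUN1

# 4. Wait for the resilver to complete before touching the other side.
zpool status -x $POOL    # repeat until the resilver is done

# 5. Repeat for the second LUN.
zpool detach $POOL $LUN2
#    ... delete/recreate LUN2 larger on the array ...
zpool attach $POOL $LUN1 $LUN2
zpool status -x $POOL    # again, wait for the resilver
```

Note that detaching a mirror side leaves the pool with no redundancy until the resilver completes, so make sure a backup exists before starting.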
 
