Operating Systems / Solaris — Resize LUNs and ZFS pool on Sun Cluster
Post 302285249 by houston, Sunday 8th of February 2009, 03:34:26 AM
I had this problem too, and worked around it by creating a new LUN and adding it to the zpool (a safe approach, and it works). In your case, it looks like you are mirroring across two different controllers/disk units, so you have to create one LUN on each and add the pair to the zpool as a mirror.
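A minimal command sketch of that approach. The pool name `tank` and the device names `c1t5d0`/`c2t5d0` are placeholders, not from the original post; the LUNs themselves must first be created on each array:

```shell
# After presenting one new LUN from each controller/disk unit,
# add them to the pool as an additional mirrored vdev:
zpool add tank mirror c1t5d0 c2t5d0

# Verify the new vdev and the enlarged pool capacity:
zpool status tank
zpool list tank
```

Note that `zpool add` grows the pool by striping across the new mirror; it does not resize the existing vdevs, which is why adding a fresh LUN pair is the safe route here.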
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all I plan to install Solaris 10U6 on some SPARC server using ZFS as root pool, whereas I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root-volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slices for Live... (1 Reply)
Discussion started by: blicki
1 Reply

2. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

3. Solaris

ZFS pool question

I created a pool the other day. I created a 10 GB file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies

4. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6 x 36 GB disks (202 GB capacity) and I am migrating to 5 x 72 GB disks (340 GB capacity). (2 Replies)
Discussion started by: jac
2 Replies
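One common way to migrate a pool wholesale while preserving mount points and properties is a recursive snapshot replicated with zfs send/receive. A sketch, with `oldpool`/`newpool` and the device names as placeholders:

```shell
# Build the new raidz pool on the larger disks (placeholder devices):
zpool create newpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# Take a recursive snapshot of every dataset in the old pool:
zfs snapshot -r oldpool@migrate

# Replicate the whole hierarchy; -R carries snapshots and properties
# (including mountpoints), -F rolls back the target, -d maps dataset
# names under newpool, -u skips mounting during the transfer:
zfs send -R oldpool@migrate | zfs recv -Fdu newpool
```

After verifying the copy, the old pool can be exported and the new one imported in its place.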

5. Solaris

Installing Sun Cluster on ZFS root pools

Hi All! I have been tasked with creating a clustered file system for two systems running Sol 10 u8. These systems have 3 zones each and the global zone has ZFS on the boot disk. We currently have one system sharing an NFS mount to both of these systems. The root zfs pool status (on the... (2 Replies)
Discussion started by: bluescreen
2 Replies

6. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies
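To the best of my knowledge there is no `zpool rename` subcommand; export/import is the supported route, which the question already mentions. For completeness, a sketch with `a` and `b` as the old and new pool names:

```shell
zpool export a      # unmounts the pool's filesystems and releases the pool
zpool import a b    # reimports the pool under the new name "b"
```

The rename therefore cannot avoid a brief unmount of the pool's filesystems.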

7. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send ... receive. So I got the following: zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have the "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

9. Solaris

reassign zfs pool lun

I have a branded zone txdjintra that uses a pool named Pool_djintra that is no longer required. There is a 150 GB LUN assigned to the pool that I need to reassign to another branded zone, txpsrsrv07, with a pool named Pool_txpsrsrv07 on the same Sun blade. What is the process to do this? ... (0 Replies)
Discussion started by: jeffsr
0 Replies

10. Solaris

Need to remove a disk from zfs pool

I accidentally added a disk to a different zpool instead of the pool I wanted. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
1 Replies
LVCONVERT(8)                  System Manager's Manual                  LVCONVERT(8)

NAME
       lvconvert - convert a logical volume from linear to mirror or snapshot

SYNOPSIS
       lvconvert -m|--mirrors Mirrors [--mirrorlog {disk|core}] [--corelog]
              [-R|--regionsize MirrorLogRegionSize] [-A|--alloc AllocationPolicy]
              [-b|--background] [-i|--interval Seconds] [-h|-?|--help]
              [-v|--verbose] [--version]
              LogicalVolume[Path] [PhysicalVolume[Path]...]

       lvconvert -s|--snapshot [-c|--chunksize ChunkSize] [-h|-?|--help]
              [-v|--verbose] [-Z|--zero y|n] [--version]
              OriginalLogicalVolume[Path] SnapshotLogicalVolume[Path]

DESCRIPTION
       lvconvert will change a linear logical volume to a mirror logical volume,
       or to a snapshot of a linear volume, and vice versa. It is also used to
       add and remove disk logs from mirror devices.

OPTIONS
       See lvm for common options. Exactly one of the --mirrors or --snapshot
       arguments is required.

       -m, --mirrors Mirrors
              Specifies the degree of the mirror you wish to create. For
              example, "-m 1" would convert the original logical volume to a
              mirror volume with two sides; that is, a linear volume plus one
              copy.

       --mirrorlog {disk|core}
              Specifies the type of log to use. The default is disk, which is
              persistent and requires a small amount of storage space, usually
              on a separate device from the data being mirrored. Core may be
              useful for short-lived mirrors: it means the mirror is
              regenerated by copying the data from the first device again
              every time the device is activated - perhaps, for example, after
              every reboot.

       --corelog
              The optional argument "--corelog" is the same as specifying
              "--mirrorlog core".

       -R, --regionsize MirrorLogRegionSize
              A mirror is divided into regions of this size (in MB), and the
              mirror log uses this granularity to track which regions are in
              sync.

       -b, --background
              Run the daemon in the background.

       -i, --interval Seconds
              Report progress as a percentage at regular intervals.

       -s, --snapshot
              Create a snapshot from an existing logical volume, using another
              existing logical volume as its origin.

       -c, --chunksize ChunkSize
              Power-of-2 chunk size for the snapshot logical volume, between
              4k and 512k.

       -Z, --zero y|n
              Controls zeroing of the first KB of data in the snapshot. If the
              volume is read-only, the snapshot will not be zeroed.

Examples
       "lvconvert -m1 vg00/lvol1" converts the linear logical volume
       "vg00/lvol1" to a two-way mirror logical volume.

       "lvconvert --mirrorlog core vg00/lvol1" converts a mirror with a disk
       log to a mirror with an in-memory log.

       "lvconvert --mirrorlog disk vg00/lvol1" converts a mirror with an
       in-memory log to a mirror with a disk log.

       "lvconvert -m0 vg00/lvol1" converts a mirror logical volume to a linear
       logical volume.

       "lvconvert -s vg00/lvol1 vg00/lvol2" converts logical volume
       "vg00/lvol2" to a snapshot of original volume "vg00/lvol1".

SEE ALSO
       lvm(8), vgcreate(8), lvremove(8), lvrename(8), lvextend(8),
       lvreduce(8), lvdisplay(8), lvscan(8)

Red Hat, Inc              LVM TOOLS 2.02.44-cvs (02-17-09)             LVCONVERT(8)
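Tying the mirror options above together, a typical conversion might look like the following sketch (the volume `vg00/lvol1` is assumed to exist, as in the man page's own examples):

```shell
# Convert lvol1 to a two-way mirror with an in-memory log,
# running in the background and reporting progress every 5 seconds:
lvconvert -m 1 --corelog -b -i 5 vg00/lvol1

# Later, drop back to a plain linear volume by removing the copy:
lvconvert -m 0 vg00/lvol1
```

The --corelog form trades persistence for simplicity: with no disk log, the mirror must be resynchronized from the first device on every activation.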
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.