Using liveupgrade on single ZFS pool


 
# 1  
Old 11-05-2012

Hi Guys,

I have a single ZFS pool with two disks which are mirrored. If I create a new BE with lucreate, should I specify on which disk the new BE should be created?
# 2  
Old 11-05-2012
Hi,

No. lucreate can create a clone of your current boot environment on the same zpool (rpool by default).
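
For example (just a sketch; "newBE" is a placeholder name), on a ZFS root you don't have to name a disk or a pool at all:

Code:
# lucreate -n newBE
# lustatus

Without -p, lucreate builds the new BE as ZFS clones of the current BE's datasets inside the same root pool.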
# 3  
Old 11-05-2012
See http://docs.oracle.com/cd/E19253-01/...eff/index.html for full details on how to do it.
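
If I remember that guide correctly, once the new BE exists (see the lucreate example above) the remaining steps are roughly the following; the BE name and the install image path are placeholders:

Code:
# luupgrade -u -n newBE -s /mnt/solaris_image
# luactivate newBE
# init 6

Use init 6 (or shutdown), not reboot, so the activation steps set up by luactivate are actually run.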
# 4  
Old 11-12-2012
Thanks all for the reply,

So in a single mirrored pool there is no way to create a BE and choose which of the disks in the mirrored pool the BE should be created on? Something like -p, which lets you choose a different pool.

I think I read something about LU on mirrored disks, but that was SVM, and I can't find anything on mirrored ZFS.
# 5  
Old 11-12-2012
The question is: why would you want to do that?
# 6  
Old 11-12-2012
Quote:
Originally Posted by batas
.. and choose which of the disks in the mirrored pool the BE should be created on?
If you have a mirrored pool, whatever you create on it is on both sides of the mirror.

Perhaps you are thinking of breaking the mirror before creating the new boot environment, but with ZFS that makes no sense unless you do not have enough free space in the pool.
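
You can convince yourself with something like this (pool and dataset names are the usual defaults, adjust as needed):

Code:
# zpool status rpool
# zfs list -r rpool/ROOT

zpool status shows the single mirror vdev with both disks in it, and zfs list shows each BE as just another dataset under rpool/ROOT. Since a BE is simply a dataset in the pool, ZFS keeps it on both sides of the mirror automatically; there is nothing to choose.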
# 7  
Old 11-15-2012
Thanks all for the info.