Full Discussion: ZFS Pool Mix-up
Post 302317877 by blicki on Wednesday 20th of May 2009 06:49:47 AM
ZFS Pool Mix-up

Hi all

I plan to install Solaris 10U6 on a SPARC server using ZFS for the root pool, and I would like to keep the layout of the current setup done with VxVM:

- 2 internal disks: c0t0d0 and c0t1d0
- bootable root volume (mirrored across both disks)
- 1 non-mirrored swap slice
- 1 non-mirrored slice for Live Upgrade

Is this possible with ZFS? Can I modify the properties of each volume after creating a mirrored pool, or mirror individual volumes afterwards?
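From what I have read so far, the ZFS equivalent would be roughly the following (just a sketch on my part; the pool name "rpool" and the swap size are placeholders, and the slices sit on the two internal disks listed above - as far as I understand, the root pool has to live on SMI-labelled slices, hence s0 rather than the whole disks):

# zpool create rpool mirror c0t0d0s0 c0t1d0s0
# zfs create -V 4G rpool/swap

That is, swap would become a ZFS volume inside the pool rather than a separate slice (so it would end up mirrored along with everything else, which is part of what I am unsure about), a single-disk pool could apparently be mirrored later with "zpool attach rpool c0t0d0s0 c0t1d0s0", and Live Upgrade would create new boot environments as datasets in the same pool (lucreate -n newBE) instead of needing its own slice. Is that the right way to think about it?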

Thanks for any help.

Kind regards, Mike
 

10 More Discussions You Might Find Interesting

1. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

2. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

3. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, but the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies

4. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6x36GB disks (202GB capacity) and I am migrating to 5x72GB disks (340GB capacity). (2 Replies)
Discussion started by: jac
2 Replies

5. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies

6. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

7. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send...receive. So I got the following: zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

8. Solaris

Need to remove a disk from zfs pool

I accidentally added a disk to a different zpool instead of the pool I wanted. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
1 Replies

9. Solaris

Zfs send to compressed pool?

I have a newly created zpool, and I have set compression on for the whole pool: # zfs set compression=on newPool Now I have zfs send | zfs receive'd a lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
Discussion started by: kebabbert
0 Replies

10. UNIX for Beginners Questions & Answers

Opening up ZFS pool as writable

I have installed FreeBSD onto a raw image file using the QEMU emulator successfully. I have formatted the image file using the ZFS file system (ZFS pool). Using the commands below I have successfully mounted the image file ready to be opened by zpool: sudo losetup /dev/loop0 .img sudo... (1 Reply)
Discussion started by: alphatron150
1 Replies
vx_emerg_start(1M)

NAME
vx_emerg_start - start Veritas Volume Manager from recovery media

SYNOPSIS
vx_emerg_start [-m] [-r root_daname] hostname

DESCRIPTION
The vx_emerg_start utility can be used to start Veritas Volume Manager (VxVM) when a system is booted from alternate media, or when a system has been booted into Maintenance Mode Boot (MMB) mode. This allows a rootable VxVM configuration to be repaired in the event of a catastrophic failure. vx_emerg_start verifies that the /etc/vx/volboot file exists, and checks the command-line arguments against the contents of this file.

OPTIONS
-m
Mounts the root file system contained on the rootvol volume after VxVM has been started. Prior to being mounted, the rootvol volume is started and fsck is run on the root file system.

-r root_daname
Specifies the disk access name of one of the members of the root disk group that is to be imported. This option can be used to specify the appropriate root disk group when multiple generations of the same root disk group exist on the system under repair. If this option is not specified, the desired root disk group may not be imported if multiple disk groups with the same name exist on the system, and if one of these disk groups has a more recent timestamp.

ARGUMENTS
hostname
Specifies the system name (nodename) of the host system being repaired. This name is used to allow the desired root disk group to be imported. It must match the name of the system being repaired, as it is unlikely to be recorded on the recovery media from which you booted the system.

NOTES
HP-UX Maintenance Mode Boot (MMB) is intended for recovery from catastrophic failures that have prevented the target machine from booting. If a VxVM root volume is mirrored, only one mirror is active when the system is in MMB mode. Any writes that are made to the root file system in this mode can corrupt this file system when both mirrors are subsequently configured. The vx_emerg_start script allows VxVM to be started while a system is in MMB mode, and marks the non-boot mirror plexes as stale. This prevents corruption of the root volume or file system by forcing a subsequent recovery from the boot mirror to the non-boot mirrors to take place.

USAGE
After VxVM has been started, various recovery options can be performed depending on the nature of the problem. It is recommended that you use the vxprint command to determine the state of the configuration. One common problem is when all the plexes of the root disk are stale, as shown in the following sample output from vxprint:

v  rootvol        root        DISABLED  393216  -  ACTIVE  -
pl rootvol-01     rootvol     DISABLED  393216  -  STALE   -
sd rootdisk01-02  rootvol-01  ENABLED   393216  0  -       -
pl rootvol-02     rootvol     DISABLED  393216  -  STALE   -
sd rootdisk02-02  rootvol-02  ENABLED   393216  0  -       -

In this case, the volume can usually be repaired by using the vxvol command as shown here:

vxvol -g 4.1ROOT -f start rootvol

If the volume is mirrored, it is put in read-write-back recovery mode. As the command is run in the foreground, it does not exit until the recovery is complete. It is then recommended that you run fsck on the root file system, and mount it, before attempting to reboot the system:

fsck -F vxfs -o full /dev/vx/rdsk/4.1ROOT/rootvol
mkdir /tmp_mnt
mount -F vxfs /dev/vx/dsk/4.1ROOT/rootvol /tmp_mnt
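For instance, a typical invocation when booted from recovery media might look like the following (the disk access name c0t6d0 and the host name sunhost1 are placeholders for this sketch):

vx_emerg_start -m -r c0t6d0 sunhost1

Here -r selects the root disk group via one of its member disks, the host name must be the nodename of the system being repaired, and -m causes rootvol to be started, checked with fsck, and mounted.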
SEE ALSO
fsck(1M), mkdir(1M), mount(1M), vxintro(1M), vxprint(1M), vxvol(1M)

VxVM 5.0.31.1                              24 Mar 2008                              vx_emerg_start(1M)