Full Discussion: zfs pool migration
Operating Systems > Solaris
Post 302440010 by jac on Sunday 25th of July 2010, 08:52:19 PM
zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks, and I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6x 36GB disks (202GB capacity) and I am migrating to 5x 72GB disks (340GB capacity).
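A minimal sketch of one possible approach, using a recursive snapshot and zfs send -R / zfs receive so that mountpoints and other properties travel with the datasets. The pool names oldpool/newpool and the disks c2t0d0 through c2t4d0 are placeholders, and the -u flag on zfs receive (skip mounting) may not be available on older Solaris 10 updates:

# create the new raidz pool on the larger disks (example device names)
zpool create newpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# take a recursive snapshot of every dataset in the existing pool
zfs snapshot -r oldpool@migrate

# replicate all datasets, snapshots and properties to the new pool;
# -R carries mountpoints and other attributes, -u keeps the copies
# unmounted while the originals are still in use
zfs send -R oldpool@migrate | zfs receive -Fdu newpool

# once the copy is verified, retire the old pool and, if desired,
# import the new pool under the old name so the mountpoints stay the same
zpool destroy oldpool
zpool export newpool
zpool import newpool oldpool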
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all, I plan to install Solaris 10U6 on some SPARC server using ZFS as root pool, whereas I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slice for Live... (1 Reply)
Discussion started by: blicki
1 Reply

2. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

4. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, but the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies

5. Solaris

ZFS - list of disks used in a pool

Hi guys, we had created a pool as follows: zpool create filing_pool raidz c1t2d0 c1t3d0 ........ Due to some requirement, we need to destroy the pool and re-create another one. We now wish to know which disks have been included in filing_pool; how do we list the disks used to create... (2 Replies)
Discussion started by: frum
2 Replies
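For the question above about seeing which disks make up a pool, the vdev layout is shown by zpool status; a minimal check, using the filing_pool name from that post, would be:

# list the vdevs (and thus the disks) the pool was built from
zpool status filing_pool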

6. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies
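As far as I know there is no zpool rename subcommand, so the export/import pair mentioned above is the usual route; with the pool names a and b from that post it would look like:

# export the pool, then import it under the new name
zpool export a
zpool import a b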

7. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send ... receive. So I got the following: zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

9. Solaris

Need to remove a disk from zfs pool

I accidentally added a disk to a different zpool instead of the pool where I wanted it. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
1 Reply
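Whether an accidentally added disk can be taken back out depends on how it was added; a rough sketch of the cases, using the cvfdb2_app_pool name from that post and a hypothetical device c1t5d0:

# if the disk was attached as a mirror of an existing device:
zpool detach cvfdb2_app_pool c1t5d0
# if it was added as a cache, log or hot-spare device:
zpool remove cvfdb2_app_pool c1t5d0
# if it was added as a new top-level data vdev, ZFS of that vintage cannot
# remove it again; the pool has to be rebuilt from a backup or a send/receive copy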

10. Solaris

Zfs send to compressed pool?

I have a newly created zpool, and I have set compression on for the whole pool: # zfs set compression=on newPool Now I have used zfs send | zfs receive to copy a lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
Discussion started by: kebabbert
0 Replies
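One way to check what actually happened on the receiving side is to look at the compression property and the achieved ratio on the received datasets; this uses the newPool name from that post:

# show whether compression is enabled and how well it is compressing
zfs get -r compression,compressratio newPool

If the streams were sent with -R, they may have carried a compression=off setting from the source datasets, which would override the pool-level default set on newPool.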
pool(3erl)						     Erlang Module Definition							pool(3erl)

NAME
pool - Load Distribution Facility

DESCRIPTION
       pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes and includes the following features:

       * The slave nodes send regular reports to the master about their current load.

       * Queries can be sent to the master to determine which node will have the least load.

       The BIF statistics(run_queue) is used for estimating future loads. It returns the length of the queue of ready-to-run processes in the Erlang runtime system.

       The slave nodes are started with the slave module. This affects tty IO, file IO, and code loading.

       If the master node fails, the entire pool will exit.

EXPORTS
       start(Name) ->
       start(Name, Args) -> Nodes

              Types:
                 Name = atom()
                 Args = string()
                 Nodes = [node()]

              Starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. See section Files below. The start-up procedure fails if the file is not found.

              The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names, Args is used to specify command line arguments. See slave(3erl).

              Access rights must be set so that all nodes in the pool have the authority to access each other.

              The function is synchronous and all the nodes, as well as all the system servers, are running when it returns a value.

       attach(Node) -> already_attached | attached

              Types:
                 Node = node()

              This function ensures that a pool master is running and includes Node in the pool master's pool of nodes.

       stop() -> stopped

              Stops the pool and kills all the slave nodes.

       get_nodes() -> Nodes

              Types:
                 Nodes = [node()]

              Returns a list of the current member nodes of the pool.

       pspawn(Mod, Fun, Args) -> pid()

              Types:
                 Mod = Fun = atom()
                 Args = [term()]

              Spawns a process on the pool node which is expected to have the lowest future load.

       pspawn_link(Mod, Fun, Args) -> pid()

              Types:
                 Mod = Fun = atom()
                 Args = [term()]

              Spawn-links a process on the pool node which is expected to have the lowest future load.

       get_node() -> node()

              Returns the node with the expected lowest future load.

FILES
       .hosts.erlang is used to pick hosts where nodes can be started. See net_adm(3erl) for information about format and location of this file.

       $HOME/.erlang.slave.out.HOST is used for all additional IO that may come from the slave nodes on standard IO. If the start-up procedure does not work, this file may indicate the reason.

Ericsson AB                          stdlib 1.17.3                          pool(3erl)