Full Discussion: zfs pool migration
Operating Systems > Solaris > zfs pool migration
Post 302440010 by jac on Sunday, 25 July 2010, 08:52 PM
zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool built from larger disks, and the mount points and dataset attributes need to carry over as well. What is the best procedure to accomplish this? The current pool is 6x 36 GB disks (about 202 GB of capacity) and I am migrating to 5x 72 GB disks (about 340 GB of capacity).
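One commonly used approach for this kind of migration is a recursive snapshot followed by zfs send -R | zfs receive, which carries the datasets, their properties and their mountpoints across to the new pool. The sketch below is only an outline: the pool name "tank" and the disk names c2t0d0 through c2t4d0 are placeholders, not taken from the original post.

# Sketch only - pool name "tank" and disks c2t0d0..c2t4d0 are assumptions.

# 1. Build the new raidz pool on the five larger disks
zpool create newtank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# 2. Snapshot every dataset in the old pool in one consistent pass
zfs snapshot -r tank@migrate

# 3. Replicate the whole hierarchy; -R preserves descendant datasets,
#    properties and mountpoints, -F allows the existing target to be overwritten
zfs send -R tank@migrate | zfs receive -F -d newtank

# 4. Check the result, then retire the old pool and (optionally) take over its name
zfs list -r newtank
zpool destroy tank
zpool export newtank
zpool import newtank tank

If the datasets stay busy during the copy, a second, incremental pass (zfs send -R -i tank@migrate tank@migrate2 | zfs receive -F -d newtank) after quiescing the applications keeps the final cutover short.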
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all, I plan to install Solaris 10U6 on a SPARC server using ZFS as the root pool, while keeping the current layout that was done with VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slice for Live... (1 Reply)
Discussion started by: blicki
1 Reply

2. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

4. Solaris

ZFS pool question

I created a pool the other day. I created a 10 GB file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies
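For questions like this one, the usual first step is to compare pool-level and dataset-level accounting, since snapshots, raidz parity overhead and files still held open can all make zpool and zfs report different numbers. A minimal sketch, with a placeholder pool name:

zpool list mypool                   # raw pool space, includes raidz parity overhead
zfs list -r mypool                  # usable space as the filesystems see it
zfs list -t snapshot -r mypool      # snapshots quietly holding on to deleted data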

5. Solaris

ZFS - list of disks used in a pool

Hi guys, We had created a pool as follows: zpool create filing_pool raidz c1t2d0 c1t3d0 ........ Due to some requirement, we need to destroy the pool and re-create another one. We now wish to know which disks were included in filing_pool; how do we list the disks used to create... (2 Replies)
Discussion started by: frum
2 Replies
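The member disks of an existing pool can be read straight from the pool status; a minimal sketch using the pool name from the excerpt:

zpool status filing_pool      # shows the raidz vdev and every disk in it
zpool iostat -v filing_pool   # same layout, with per-disk I/O statistics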

6. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies
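ZFS has no zpool rename subcommand, so export followed by import under a new name remains the supported way to rename a pool. A sketch using the example names from the question:

zpool export a        # unmounts the datasets and releases the pool
zpool import a b      # re-imports pool "a" under the new name "b"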

7. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a raidz2 pool over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies
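A common precaution against this situation is to keep a small reservation in the pool that ordinary datasets cannot consume, so the pool can never be driven all the way to 100% full. A sketch with hypothetical names and sizes:

# Reserve ~2 GB of headroom that no other dataset can use up
zfs create -o reservation=2G -o mountpoint=none mypool/headroom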

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send ... receive. So I got the following: zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have the "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

9. Solaris

Need to remove a disk from zfs pool

I accidentally added a disk to a different zpool instead of the pool where I wanted it. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
1 Reply
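What can be done here depends on how the disk was added: on Solaris of that vintage only hot spares, cache/log devices, or one side of a mirror can be taken out again, while a plain top-level data vdev cannot be removed. A sketch, with a placeholder disk name:

zpool status cvfdb2_app_pool            # see how the disk ended up in the pool
zpool remove cvfdb2_app_pool c1t5d0     # only works for hot spares, cache and log devices
zpool detach cvfdb2_app_pool c1t5d0     # only works if the disk is half of a mirror
# if it went in as a normal data vdev, the pool has to be rebuilt,
# for example with zfs send/receive into a freshly created pool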

10. Solaris

Zfs send to compressed pool?

I have a newly created zpool, and I have set compression on for the whole pool: # zfs set compression=on newPool Now I have used zfs send | zfs receive to move a lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
Discussion started by: kebabbert
0 Replies