Posted by pupp on Thursday, 13 August 2009, 08:44 PM
zfs - migrate from pool to pool

Here are the details.

Code:
cnjr-opennms>root$ zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
openpool               20.6G  46.3G  35.5K  /openpool
openpool/ROOT          15.4G  46.3G    18K  legacy
openpool/ROOT/rds      15.4G  46.3G  15.3G  /
openpool/ROOT/rds/var   102M  46.3G   102M  /var
openpool/dump          1.00G  46.3G  1.00G  -
openpool/export         311M  46.3G    19K  /export
openpool/export/home    311M  46.3G   311M  /export/home
openpool/swap          3.91G  50.2G    16K  -
storage                 220K   134G    21K  /storage
storage/onms             57K   134G    21K  /storage/onms
storage/onms/log         18K   134G    18K  /storage/onms/log
storage/onms/rrd         18K   134G    18K  /storage/onms/rrd
storage/snap             18K   134G    18K  /storage/snap

I am running an OpenNMS server that stores performance-related data in RRDs under /opt/opennms/share/rrd/*. As I start to track more and more nodes with more specific SNMP OIDs, disk I/O is climbing steadily. What I am looking to do is move /opt/opennms/share/rrd off openpool and mount it from the storage pool (storage/onms/rrd). More specifically, I want to do something like this:
Code:
cnjr-opennms>root$ zfs set mountpoint=/opt/opennms/share/rrd storage/onms/rrd

The problem is that I get:
Code:
cannot mount '/opt/opennms/share/rrd': directory is not empty
property may be set but unable to remount filesystem

So yes, the truth is there are plenty of directories already under /opt/opennms/share/rrd/*. I'm just not sure how to migrate this directory off openpool onto storage.

Can anyone shed some light on this? I am not 100% up to speed with ZFS just yet, but getting there.

I was thinking about symbolic links, but I'm not sure that's the best option here.
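One common way around the "directory is not empty" error is to copy the existing data into the new dataset first, move the old directory aside so the target path is empty, and only then repoint the dataset's mountpoint. A rough sketch, assuming OpenNMS runs as an SMF service named "opennms" (the service name and paths here are assumptions; adjust for your system):

```shell
# Stop the writer so the RRD files are quiescent during the copy.
# (Service name "opennms" is an assumption.)
svcadm disable opennms

# Copy the existing data into the new dataset while it is still
# mounted at its current location (/storage/onms/rrd).
rsync -a /opt/opennms/share/rrd/ /storage/onms/rrd/

# Move the old directory aside so the mountpoint is empty.
mv /opt/opennms/share/rrd /opt/opennms/share/rrd.old

# Now the mountpoint change should succeed.
zfs set mountpoint=/opt/opennms/share/rrd storage/onms/rrd

# Restart, verify the data, then remove the old copy.
svcadm enable opennms
```

A symlink from /opt/opennms/share/rrd to /storage/onms/rrd would also work, but changing the dataset's mountpoint keeps the path transparent to OpenNMS and avoids surprises with tools that resolve symlinks.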
 

ZFS-FUSE(8)                                                        ZFS-FUSE(8)

NAME
       zfs-fuse - ZFS filesystem daemon

SYNOPSIS
       zfs-fuse [--pidfile filename] [--no-daemon] [--no-kstat-mount]
       [--disable-block-cache] [--disable-page-cache]
       [--fuse-attr-timeout SECONDS] [--fuse-entry-timeout SECONDS]
       [--log-uberblocks] [--max-arc-size MB]
       [--fuse-mount-options OPT,OPT,OPT...] [--min-uberblock-txg MIN]
       [--stack-size=size] [--enable-xattr] [--help]

DESCRIPTION
       This manual page briefly documents the zfs-fuse command. zfs-fuse is a
       daemon which provides support for the ZFS filesystem via FUSE.
       Ordinarily this daemon will be invoked from system boot scripts.

OPTIONS
       This program follows the usual GNU command-line syntax, with long
       options starting with two dashes (`--'). A summary of options is
       included below.

       -h, --help
              Show a summary of options.

       -p filename, --pidfile filename
              Write the daemon's PID to filename after daemonizing. Ignored
              if --no-daemon is passed. filename should be a fully-qualified
              path.

       -n, --no-daemon
              Stay in the foreground; do not daemonize.

       --no-kstat-mount
              Do not mount kstats in /zfs-kstat.

       --disable-block-cache
              Enable direct I/O for disk operations. Completely disables
              caching of reads and writes in the kernel block cache. Also
              breaks mmap() in ZFS datasets.

       --disable-page-cache
              Disable the page cache for files residing within ZFS
              filesystems. Not recommended, as it slows down I/O operations
              considerably.

       -a SECONDS, --fuse-attr-timeout SECONDS
              Sets the timeout for caching FUSE attributes in the kernel.
              Defaults to 0.0. Higher values give a 40% performance boost.

       -e SECONDS, --fuse-entry-timeout SECONDS
              Sets the timeout for caching FUSE entries in the kernel.
              Defaults to 0.0. Higher values give a 10000% performance boost
              but cause file-permission-checking security issues.

       --log-uberblocks
              Logs uberblocks of any mounted filesystem to syslog.

       -m MB, --max-arc-size MB
              Forces the maximum ARC size (in megabytes). Range: 16 to 16384.

       -o OPT,OPT,OPT..., --fuse-mount-options OPT,OPT,OPT...
              Sets FUSE mount options for all filesystems. Format:
              comma-separated string.

       -u MIN, --min-uberblock-txg MIN
              Skips uberblocks with a TXG < MIN when mounting any filesystem.

       -v MB, --vdev-cache-size MB
              Adjusts the size of the vdev cache. Default: 10.

       --zfs-prefetch-disable
              Disables the high-level prefetch cache in ZFS. This cache can
              use up to 150 MB of RAM, possibly more.

       --stack-size=size
              Limits the stack size of threads (in KB). Default: no limit
              (8 MB on Linux).

       -x, --enable-xattr
              Enables support for extended attributes. Not generally
              recommended, because it currently carries a significant
              performance penalty for many small IOPS.

REMARKS ON PRECEDENCE
       Parameters passed on the command line take precedence over those
       supplied through /etc/zfs/zfsrc.

BUGS/CAVEATS
       The path to the configuration file (/etc/zfs/zfsrc) cannot currently
       be configured. Most existing packages suggest settings at the top of
       their init script; these are frequently overridden by a
       (distribution-specific) /etc/default/zfs-fuse file, if it exists. Be
       sure to look in these places if you want your changes to options to
       take effect. /etc/zfs/zfsrc is going to be the recommended approach in
       the future, so packagers should refrain from passing command-line
       parameters within the init script (except for --pidfile).

SEE ALSO
       zfs(8), zpool(8), zdb(8), zstreamdump(8), /etc/zfs/zfsrc

AUTHOR
       This manual page was written by Bryan Donlan <bdonlan@gmail.com> for
       the Debian(TM) system (but may be used by others). Permission is
       granted to copy, distribute and/or modify this document under the
       terms of the GNU General Public License, Version 2 or any later
       version published by the Free Software Foundation, or the Common
       Development and Distribution License. Revised by Seth Heeren
       <zfs-fuse@sehe.nl>. On Debian systems, the complete text of the GNU
       General Public License can be found in /usr/share/common-licenses/GPL.
       The text of the Common Development and Distribution License may be
       found at /usr/share/doc/zfs-fuse/copyright.

COPYRIGHT
       Copyright (C) 2010 Bryan Donlan

                                  2010-06-09                       ZFS-FUSE(8)