Solaris: posted by sqa777 on 01-08-2009, 01:58 AM
How do I export a zfs filesystem that I created?

I created a zpool in OpenSolaris and am trying to create two ZFS volumes on it. I would like both volumes to be exportable to other hosts over NFS, but I don't know how to set that up.

These are the steps I followed:

1) Create the zpool using raidz1 across five disks.

I have six disks and created the pool across five of them; c4t0d0 is already being used by the default zpool, so I excluded it.

zpool create raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
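
# Side note, as a quick sanity check on the new pool (output omitted here):
# raidz1 across five disks keeps roughly one disk's worth of space for parity,
# which is why zpool list and zfs list report different sizes.

zpool list raidpool     # raw pool size, parity included
zfs list -r raidpool    # usable space as seen by datasets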


2) Do a zpool status -v.

zpool status -v raidpool
  pool: raidpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raidpool    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0



# Things look good so far....

3) Create the first volume with compression enabled and the mountpoint set to where I want this volume's export point to be:

zfs create -V 80G -o compression=lzjb -o mountpoint=/vol01 raidpool/vol01

cannot create 'raidpool/vol01': 'mountpoint' does not apply to datasets of this type


# Shouldn't the mountpoint option work, so that I could then mount raidpool/vol01 as /vol01 from another host via NFS? That is my understanding from reading the ZFS Administration Guide.
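
# For reference, a likely explanation of the error above: -V creates a ZFS volume,
# which is a raw block device under /dev/zvol rather than a filesystem, so the
# mountpoint and sharenfs properties simply don't apply to it. A minimal sketch of
# the difference (the fs01 name and /fs01 mountpoint are only illustrative):

# A volume: block device only, no mountpoint/sharenfs.
zfs create -V 80G -o compression=lzjb raidpool/vol01
ls -l /dev/zvol/dsk/raidpool/vol01

# A filesystem dataset: created without -V, it accepts mountpoint and sharenfs.
zfs create -o compression=lzjb -o mountpoint=/fs01 raidpool/fs01
zfs get mountpoint,sharenfs raidpool/fs01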


# I also tried using mountpoint with the sharenfs option:

zfs create -V 80G -o compression=lzjb -o mountpoint=/vol01 -o sharenfs=on raidpool/vol01

cannot create 'raidpool/vol01': 'mountpoint' does not apply to datasets of this type


# Still doesn't work. How can I make raidpool/vol01 an exportable volume for other hosts?

Thanks.
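
# One common way to get there, sketched on the assumption that the goal is NFS access
# rather than a raw block device: create a filesystem dataset instead of a volume,
# use quota in place of the 80G volume size, and turn on sharenfs. The client host
# name and mount point below are hypothetical.

# On the OpenSolaris host (assumes raidpool/vol01 does not already exist as a volume):
zfs create -o compression=lzjb -o mountpoint=/vol01 -o quota=80G -o sharenfs=on raidpool/vol01
share                                   # should now list /vol01

# On an NFS client:
mount -F nfs nfsserver:/vol01 /mnt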
 
