Full Discussion: Mount A ZFS volume
Operating Systems > Solaris > Mount A ZFS volume
Post 302384130 by jlliagre on Monday 4th of January 2010 06:06:14 AM
Please clarify. Filesystems are mounted by default when you import their pools.
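For reference, a minimal sketch of what that looks like (the pool name tank and the dataset tank/data are placeholders, not taken from this thread):

# zpool import tank                              # importing the pool mounts its ZFS filesystems by default
# zfs list -r -o name,mounted,mountpoint tank    # verify what got mounted, and where
# zfs mount -a                                   # mount any datasets that did not mount automatically
# zfs mount tank/data                            # or mount one specific dataset

The exception is a dataset with mountpoint=legacy, which is mounted through /etc/vfstab or mount -F zfs rather than by the import.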
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS and SVM - volume management

pupp, thanks for the information. But is its integrated volume management better than the SVM that we use (with UFS, I believe)? (2 Replies)
Discussion started by: StarSol

2. Ubuntu

cannot mount volume

Hi, I have recently installed Ubuntu on my laptop. I have tried to access my external drive, which is NTFS-formatted, but I get the following error: 'Cannot mount volume'. Can someone help me, please? (2 Replies)
Discussion started by: DDoS

3. Solaris

Remove the zfs snapshot keeping the original volume and clone

I created a snapshot and a subsequent clone of a ZFS volume, but now I'm not able to remove the snapshot. It gives me the following error: zfs destroy newpool/ldom2/zdisk4@bootimg cannot destroy 'newpool/ldom2/zdisk4@bootimg': snapshot has dependent clones use '-R' to destroy the following... (7 Replies)
Discussion started by: fugitive

4. Solaris

Ldom OS on SAN based zfs volume

Is it possible to use a zvol from a SAN LUN to install an LDom OS? I'm using the following VDS from my service domain: VDS NAME LDOM VOLUME DEVICE primary-vds0 primary iso sol-10-u6-ga1-sparc-dvd.iso cdrom ... (16 Replies)
Discussion started by: fugitive

5. Solaris

Please explain why ZFS is said to be a hybrid filesystem and a volume manager also

Hi guys! How come ZFS is said to be not just a filesystem but a hybrid filesystem and also a volume manager? Please explain. I would appreciate your replies. Hope you can help me figure this out. Thanks in advance! (1 Reply)
Discussion started by: Klyde

6. Solaris

Grow / expand a ZFS volume

Hi, I need to expand a ZFS volume from 500GB to 800GB. I'd like to ask for your help to confirm the following procedure: can I do it on the fly without disturbing the users working on this volume? Thank you in advance! (6 Replies)
Discussion started by: aixlover

7. Solaris

2540 volume expand and solaris zfs grow

Hello, I hope everyone is having a good day! Situation: a 2540 with 3.6 TB of usable space; volume A is 2.6 TB, volume B was 1 TB. Volume A is mounted via a single LUN on a Solaris server and is running out of space. Volume B was used on another server but is no longer; I deleted the volume in... (7 Replies)
Discussion started by: Metasin

8. Solaris

Delete zfs dump volume

Hi guys, how do you delete a ZFS dump volume? Thanks for your help. (2 Replies)
Discussion started by: cjashu

9. Shell Programming and Scripting

Mount a volume

Hi, I'm digging up an old topic because I couldn't find a solution with the shell, only with AppleScript: mount volume "smb://MyIP/itransfert/Public/1-Arrivees" as user name "MyIP\\itransfert_cs" with password "MyPassword". Otherwise I would like to know why it doesn't work with bash: I have a... (9 Replies)
Discussion started by: protocomm

10. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: on the primary, export 2 LUNs (add-vdsdev); on the primary, assign these disks to the ldom in question (add-vdisk); on the ldom, create a mirrored zpool from these two disks. On one server (which is older) we have: on the primary, create a mirrored zpool from the two LUNs.... (4 Replies) (See the command sketch just after this list.)
Discussion started by: psychocandy
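The steps listed in discussion 10 above can be sketched roughly as follows (the LUN device paths, volume names, and the domain name ldom1 are placeholders; adjust them to your environment):

primary# ldm add-vdsdev /dev/dsk/c2t0d0s2 lun1@primary-vds0    # export the first LUN through the virtual disk service
primary# ldm add-vdsdev /dev/dsk/c2t1d0s2 lun2@primary-vds0    # export the second LUN
primary# ldm add-vdisk vdisk1 lun1@primary-vds0 ldom1          # assign both volumes to the guest domain as virtual disks
primary# ldm add-vdisk vdisk2 lun2@primary-vds0 ldom1

ldom1# zpool create datapool mirror c0d1 c0d2                  # inside the guest, build the mirrored zpool across the two virtual disks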
pooladm(1M)

NAME
     pooladm - activate and deactivate the resource pools facility

SYNOPSIS
     /usr/sbin/pooladm [-n] [-s] [-c] [filename] | -x
     /usr/sbin/pooladm [-d | -e]

DESCRIPTION
     The pooladm command provides administrative operations on pools and sets. pooladm reads the
     specified filename and attempts to activate the pool configuration contained in it. Before
     updating the current pool run-time configuration, pooladm validates the configuration for
     correctness.

     Without options, pooladm prints out the current running pools configuration.

OPTIONS
     The following options are supported:

     -c        Instantiate the configuration at the given location. If a filename is not
               specified, it defaults to /etc/pooladm.conf.

     -d        Disable the pools facility so that pools can no longer be manipulated.

     -e        Enable the pools facility so that pools can be manipulated.

     -n        Validate the configuration without actually updating the current active
               configuration. Checks that there are no syntactic errors and that the
               configuration can be instantiated on the current system. No validation of
               application-specific properties is performed.

     -s        Update the specified location with the details of the current dynamic
               configuration. This option requires update permission to the configuration that
               you are going to instantiate. If you use this option with the -c option, the
               dynamic configuration is updated before the static location.

     -x        Remove the currently active pool configuration. Destroy all defined resources,
               and return all formerly partitioned components to their default resources.

OPERANDS
     The following operands are supported:

     filename  Use the configuration contained within this file.

EXAMPLES
     Example 1: Instantiating a Configuration

     The following command instantiates the configuration contained at /home/admin/newconfig:

     example# /usr/sbin/pooladm -c /home/admin/newconfig

     Example 2: Validating the Configuration Without Instantiating It

     The following command attempts to instantiate the configuration contained at
     /home/admin/newconfig. It displays any error conditions that it encounters, but does not
     actually modify the active configuration.

     example# /usr/sbin/pooladm -n -c /home/admin/newconfig

     Example 3: Removing the Current Configuration

     The following command removes the current pool configuration:

     example# /usr/sbin/pooladm -x

     Example 4: Enabling the Pools Facility

     The following command enables the pools facility:

     example# /usr/sbin/pooladm -e

     Example 5: Saving the Active Configuration to a Specified Location

     The following command saves the active configuration to /tmp/state.backup:

     example# /usr/sbin/pooladm -s /tmp/state.backup

FILES
     /etc/pooladm.conf

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     ATTRIBUTE TYPE         ATTRIBUTE VALUE
     Availability           SUNWpool
     Interface Stability    See below.

     The invocation is Evolving. The output is Unstable.

SEE ALSO
     poolcfg(1M), poolbind(1M), psrset(1M), pset_destroy(2), libpool(3LIB), attributes(5)

NOTES
     Resource bindings that are not presented in the form of a binding to a partitionable
     resource, such as the scheduling class, are not necessarily modified in a pooladm -x
     operation.

     The pools facility is not active by default when Solaris starts. pooladm -e explicitly
     activates the pools facility. The behavior of certain APIs related to processor partitioning
     and process binding is modified when pools is active. See libpool(3LIB).

     You cannot enable the pools facility on a system where processor sets have been created. Use
     the psrset(1M) command or pset_destroy(2) to destroy processor sets manually before you
     enable the pools facility.
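As an illustration of that last note, clearing processor sets before enabling the pools facility might look like this (a hedged sketch; the processor-set ID 1 is a placeholder):

# psrset            # list any existing processor sets
# psrset -d 1       # destroy processor set 1 so the pools facility can be enabled
# pooladm -e        # enable the resource pools facility
# pooladm           # print the now-active pool configuration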