09-04-2011
For a root pool, use an SMI label (relabel the disk with format -e).
For a non-root pool, you can create the pool right away using an EFI-labelled disk.
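A minimal sketch of the two cases above, assuming a hypothetical device c0t0d0 (the device names, pool names, and slice are illustrative, not from the thread):

```shell
# Root pool: the disk needs an SMI (VTOC) label, and the pool sits on a slice.
# format -e exposes the label-type choice when you run "label" at its prompt.
format -e c0t0d0              # at format> type "label", then pick "0. SMI label"
zpool create rpool c0t0d0s0   # root pool on slice 0, not the whole disk

# Non-root (data) pool: no relabelling needed; zpool can take the
# EFI-labelled disk whole.
zpool create datapool c0t1d0
```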
6 More Discussions You Might Find Interesting
1. Solaris
Hi Peeps,
Can anyone help me with an EFI label on a 3510 RAID array that I cannot get rid of? format -e and label just asks you if you want to label it. I want an SMI label written to it.
Anyone got any ideas on how to remove the EFI label?
Thanks in advance
Martin (2 Replies)
Discussion started by: callmebob
2. Solaris
Hello, I am new to Solaris, so I apologize upfront if my questions seem trivial.
I am trying to install a ZFS file system on a Solaris 10 machine with UFS already installed on it.
I want to run: # zpool create pool_zfs c0t0d0
then: # zfs create pool_zfs/fs
My question is more to... (3 Replies)
Discussion started by: mcdef
3. Solaris
Hi All!
I have been tasked with creating a clustered file system for two systems running Sol 10 u8. These systems have 3 zones each and the global zone has ZFS on the boot disk. We currently have one system sharing an NFS mount to both of these systems.
The root zfs pool status (on the... (2 Replies)
Discussion started by: bluescreen
4. Solaris
I accidentally added a disk to a different zpool instead of the pool where I wanted it.
root@prtdrd21:/# zpool status cvfdb2_app_pool
pool: cvfdb2_app_pool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
5. Solaris
Hi all,
I am using SPARC Solaris 11.1 with EFI labelled disks.
I am new to ZFS file systems and slightly stuck when trying to create a partition (slice) on one of my LUNs.
EFI labels use sectors and blocks, and I am not sure exactly how they work.
From here I can try and create a... (2 Replies)
Discussion started by: selectstar
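Since an EFI label addresses the disk in fixed-size sectors, slice sizes are just sector arithmetic. A minimal sketch, assuming 512-byte sectors and a hypothetical 100 GB slice (both numbers are illustrative):

```shell
# EFI labels count in sectors; on most disks one sector is 512 bytes.
SECTOR_BYTES=512
SIZE_GB=100

# Number of sectors needed for a 100 GB slice:
SECTORS=$(( SIZE_GB * 1024 * 1024 * 1024 / SECTOR_BYTES ))
echo "$SECTORS"    # sector count to enter at the partition> prompt
```

The same arithmetic works in reverse: multiply a slice's sector count by 512 to get its size in bytes.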
6. Solaris
Hi all,
I have an EFI disk and it is used in a zfs pool.
partition> p
Volume: rpool
Current partition table (original):
Total disk sectors available: 1172107117 + 16384 (reserved sectors)
Part Tag Flag First Sector Size Last Sector
0 usr wm ... (8 Replies)
Discussion started by: javanoob
LEARN ABOUT DEBIAN
dpm-addfs
DPM-ADDFS(1) DPM Administrator Commands DPM-ADDFS(1)
NAME
dpm-addfs - add a filesystem to a disk pool
SYNOPSIS
dpm-addfs --poolname pool_name --server fs_server --fs fs_name [ --st status ] [ --weight weight ] [ --help ]
DESCRIPTION
dpm-addfs adds a filesystem to a disk pool.
This command requires ADMIN privilege.
OPTIONS
pool_name
specifies the disk pool name previously defined using dpm-addpool.
server specifies the host name of the disk server where this filesystem is mounted.
fs specifies the mount point of the dedicated filesystem.
status Initial status of this filesystem. It can be set to 0 or DISABLED or RDONLY. This can be either alphanumeric or the corresponding
numeric value.
weight specifies the weight of the filesystem. This is used during the filesystem selection. The value must be positive. It is recommended
to use a value lower than 10. Default is 1.
EXAMPLE
dpm-addfs --poolname Volatile --server sehost --fs /data
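A second, illustrative invocation that also sets the optional status and weight at creation time (the server name and mount point are hypothetical; --st and --weight are the options described above):

```shell
dpm-addfs --poolname Volatile --server sehost --fs /data2 --st RDONLY --weight 5
```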
EXIT STATUS
This program returns 0 if the operation was successful or >0 if the operation failed.
SEE ALSO
dpm(1), dpm_addfs(3), dpm-addpool(1)
LCG
$Date$ DPM-ADDFS(1)