09-04-2011
For a root pool, the disk must carry an SMI label; apply one with format -e.
For a non-root data pool, you can create the pool right away on an EFI-labelled disk.
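As a sketch of the above (the device names c0t0d0 and c1t0d0 are hypothetical, and relabelling destroys the existing partition table):

# format -e c0t0d0        <- select the disk, run "label", choose SMI
# zpool create rpool c0t0d0s0   <- root pool goes on a slice of the SMI disk
# zpool create datapool c1t0d0  <- data pool on a whole disk; zpool writes an EFI label itself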
6 More Discussions You Might Find Interesting
1. Solaris
Hi Peeps,
Can anyone help me remove an EFI label on a 3510 RAID array? Running format -e and choosing label just asks whether you want to label the disk. I want an SMI label written to it instead.
Anyone got any ideas on how to remove the EFI label?
Thanks in advance
Martin (2 Replies)
Discussion started by: callmebob
2. Solaris
Hello, I am new to Solaris so I apologize upfront if my questions seem trivial.
I am trying to install a ZFS file system on a Solaris 10 machine with UFS already installed on it.
I want to run: # zpool create pool_zfs c0t0d0
then: # zfs create pool_zfs/fs
My question is more to... (3 Replies)
Discussion started by: mcdef
3. Solaris
Hi All!
I have been tasked with creating a clustered file system for two systems running Sol 10 u8. These systems have 3 zones each and the global zone has ZFS on the boot disk. We currently have one system sharing an NFS mount to both of these systems.
The root zfs pool status (on the... (2 Replies)
Discussion started by: bluescreen
4. Solaris
I accidentally added a disk to a different zpool instead of the pool I intended.
root@prtdrd21:/# zpool status cvfdb2_app_pool
pool: cvfdb2_app_pool
state: ONLINE
scan: none requested
config:
        NAME              STATE   READ WRITE CKSUM
        cvfdb2_app_pool   ONLINE     0     0     0... (1 Reply)
Discussion started by: solaris_1977
5. Solaris
Hi all,
I am using SPARC Solaris 11.1 with EFI labelled disks.
I am new to ZFS file systems and slightly stuck when trying to create a partition (slice) on one of my LUNs.
EFI labels use sectors and blocks and I am not sure how exactly it works.
From here I can try and create a... (2 Replies)
Discussion started by: selectstar
6. Solaris
Hi all,
I have an EFI-labelled disk and it is used in a ZFS pool.
partition> p
Volume: rpool
Current partition table (original):
Total disk sectors available: 1172107117 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0       usr     wm      ... (8 Replies)
Discussion started by: javanoob
LEARN ABOUT DEBIAN
dpm_addfs
DPM_ADDFS(3) DPM Library Functions DPM_ADDFS(3)
NAME
dpm_addfs - add a filesystem to a disk pool
SYNOPSIS
#include <sys/types.h>
#include "dpm_api.h"
int dpm_addfs (char *poolname, char *server, char *fs, int status, int weight)
DESCRIPTION
dpm_addfs adds a filesystem to a disk pool.
poolname
specifies the disk pool name previously defined using dpm_addpool.
server specifies the host name of the disk server where this filesystem is mounted.
fs specifies the mount point of the dedicated filesystem.
status Initial status of this filesystem. It can be set to 0 or FS_DISABLED or FS_RDONLY.
weight specifies the weight of the filesystem. This is used during the filesystem selection. The value must be positive. A negative value
will tell the server to allocate the default weight value (1). It is recommended to use a value lower than 10.
This function requires ADMIN privilege.
RETURN VALUE
This routine returns 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately.
ERRORS
ENOENT Filesystem does not exist.
EACCES The caller does not have ADMIN privilege.
EFAULT poolname, server or fs is a NULL pointer.
EEXIST this filesystem is already part of a pool.
ENOMEM Memory could not be allocated for storing the filesystem definition.
EINVAL The pool is unknown or the length of poolname exceeds CA_MAXPOOLNAMELEN or the length of server exceeds CA_MAXHOSTNAMELEN or
the length of fs exceeds 79.
SENOSHOST Host unknown.
SEINTERNAL Database error.
SECOMERR Communication error.
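The calling pattern described above might be sketched as follows. This is an illustrative fragment only: the pool name, disk server, and mount point are made up, and building it requires the DPM client library and headers (dpm_api.h, serrno), which are not part of the standard C library.

	/* Hypothetical sketch: add /storage1 on disk01.example.org to the
	 * pool "Permanent", enabled (status 0), with the default weight
	 * (a negative value asks the server to use the default, 1). */
	#include <stdio.h>
	#include <sys/types.h>
	#include "dpm_api.h"

	int main(void)
	{
		if (dpm_addfs("Permanent", "disk01.example.org",
		              "/storage1", 0, -1) < 0) {
			/* On failure dpm_addfs returns -1 and sets serrno */
			fprintf(stderr, "dpm_addfs failed, serrno=%d\n", serrno);
			return 1;
		}
		printf("filesystem added\n");
		return 0;
	}

Note that the caller must hold ADMIN privilege, as stated above, or the call fails with EACCES.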
LCG
$Date$ DPM_ADDFS(3)