I would go with ZFS personally. It is much easier to manage. I'll put it this way. Take note of the native Solaris device files:
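The command itself appears to have been lost from this post; under Solaris 10 it would be a single zpool invocation along these lines (the pool name newpool and the c1t0d0/c1t1d0 device names are assumed for illustration):

```shell
# Create a two-way mirrored pool named "newpool"; ZFS creates the
# filesystem and mounts it at /newpool in the same step.
zpool create newpool mirror c1t0d0 c1t1d0
```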
This creates a two-way mirror, creates a filesystem on it, and mounts it at /newpool.
Here is how easy it is to create a RAID5-style volume (RAID-Z in ZFS terms), create a filesystem on it, and mount it:
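Again the command seems to be missing from the post; a single-parity RAID-Z pool over three assumed devices would look like this:

```shell
# Create a RAID-Z (single-parity, RAID5-like) pool; the filesystem is
# created and mounted at /newpool by the same command.
zpool create newpool raidz c1t0d0 c1t1d0 c1t2d0
```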
I am not kidding. It only takes one simple command to create a ZFS volume. You can also move external storage devices between different machines with ZFS, create disk quotas and reservations, integrate with Solaris containers, take snapshots, track I/O statistics, and many other cool things. ZFS is very simple to use, flexible, and cheap. In my opinion its only shortcoming is that it doesn't have built-in backup and restore tools like ufsdump and ufsrestore (I don't count zfs send and receive).
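To give a flavour of the features mentioned above, here is roughly what quotas, reservations, snapshots, and I/O statistics look like (the dataset names are assumed for illustration):

```shell
# Create a child filesystem, cap it at 10 GB, and reserve 2 GB for it
zfs create newpool/home
zfs set quota=10G newpool/home
zfs set reservation=2G newpool/home

# Take a read-only snapshot and watch pool I/O statistics every 5 seconds
zfs snapshot newpool/home@monday
zpool iostat newpool 5
```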
And one really big advantage of using ZFS: it is delivered with the default Solaris 10 installation, so it is free.
All Solaris rescue gurus out there ...
I have a Solaris 2.6 E450 on which my sysadmin has deleted every file (but not the sub-directories) from the /etc directory.
The machine is (was) running VxVM with the root volume encapsulated.
I've tried booting from CDROM and mounting the root volume... (3 Replies)
I've got a Linux box that I'm pretty sure is having some disk issues. iostat isn't installed, but vmstat is, so I've been trying to use that to do some initial diagnostics while I go through our company's change control process to get iostat installed.
The problem I'm having is that the disks... (4 Replies)
Hi,
Quick question if anyone knows this. Is there a command I can use in Veritas Volume Manager on Solaris that will tell me the name of the SAN I am connected to? We have a number of SANs, so I am unsure which one my servers are connected to. Thanks. (13 Replies)
Hi Guys,
I have a doubt about whether to reboot the server after replacing disk0.
I have two disks under a VxVM root mirror, and I had a problem with the primary disk, so I replaced disk0, the failed primary disk, and then mirrored it. After mirroring, is a reboot required? (7 Replies)
Last week I read that VxVM won't work with MPxIO (I don't recall the link) and that it should be unconfigured when installing VxVM. Today I read that VxVM works in "pass-thru" mode with MPxIO and that DMP uses the devices presented by MPxIO.
If I create disks with MPxIO and use VxVM to... (1 Reply)
I have VxVM 5.1 running on Solaris 10. I have to increase an application file-system, and the storage team gave me a LUN. After scanning the SCSI port with cfgadm, I can see it in the format output. I labelled it, but I am not able to see it in "vxdisk list".
I already tried these commands:
vxdctl enable... (4 Replies)
I have created a VxVM disk group in AIX 7.1. I have tried to add this VxVM disk group in PowerHA 6.1, but the VxVM DGs are not listed in the cluster. Is there any other procedure for adding a VxVM disk group to HACMP?
Please share the steps for adding a VxVM disk group to HACMP. (6 Replies)
Discussion started by: sunnybee
GPTZFSBOOT(8)                BSD System Manager's Manual               GPTZFSBOOT(8)
NAME
gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers
DESCRIPTION
gptzfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS pool. gptzfsboot is installed in a freebsd-boot partition of a
GPT-partitioned disk with gpart(8).
IMPLEMENTATION NOTES
The GPT standard allows a variable number of partitions, but gptzfsboot only boots from tables with 128 partitions or less.
BOOTING
gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible hard disks or partitions on them. gptzfsboot looks for ZFS device
labels on all visible disks and in discovered supported partitions for all supported partition scheme types. The search starts with the disk
from which gptzfsboot itself was loaded. Other disks are probed in BIOS defined order. After a disk is probed and gptzfsboot determines
that the whole disk is not a ZFS pool member, the individual partitions are probed in their partition table order. Currently GPT and MBR
partition schemes are supported. With the GPT scheme, only partitions of type freebsd-zfs are probed. The first pool seen during probing is
used as a default boot pool.
The filesystem specified by the bootfs property of the pool is used as a default boot filesystem. If the bootfs property is not set, then
the root filesystem of the pool is used as the default. zfsloader(8) is loaded from the boot filesystem. If /boot.config or /boot/config is
present in the boot filesystem, boot options are read from it in the same way as boot(8).
The ZFS GUIDs of the first successfully probed device and the first detected pool are made available to zfsloader(8) in the
vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.
USAGE
Normally gptzfsboot will boot in fully automatic mode. However, like boot(8), it is possible to interrupt the automatic boot process and
interact with gptzfsboot through a prompt. gptzfsboot accepts all the options that boot(8) supports.
The filesystem specification and the path to zfsloader(8) are different from boot(8). The format is
[zfs:pool/filesystem:][/path/to/loader]
Both the filesystem and the path can be specified. If only a path is specified, then the default filesystem is used. If only a pool and
filesystem are specified, then /boot/zfsloader is used as a path.
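For example, entering the following at the interactive prompt would boot an assumed zroot/ROOT/default filesystem with the default loader path:

```
zfs:zroot/ROOT/default:/boot/zfsloader
```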
Additionally, the status command can be used to query information about discovered pools. The output format is similar to that of zpool
status (see zpool(8)).
The configured or automatically determined ZFS boot filesystem is stored in the zfsloader(8) loaddev variable, and also set as the initial
value of the currdev variable.
FILES
/boot/gptzfsboot boot code binary
/boot.config parameters for the boot block (optional)
/boot/config alternative parameters for the boot block (optional)
EXAMPLES
gptzfsboot is typically installed in combination with a ``protective MBR'' (see gpart(8)). To install gptzfsboot on the ada0 drive:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gptzfsboot can also be installed without the PMBR:
gpart bootcode -p /boot/gptzfsboot -i 1 ada0
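After installation, the resulting partition layout can be inspected with gpart (the ada0 device name is carried over from the examples above):

```shell
# Show the GPT layout of ada0, including the freebsd-boot partition
# that now holds gptzfsboot
gpart show ada0
```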
SEE ALSO
boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)
HISTORY
gptzfsboot appeared in FreeBSD 7.3.
AUTHORS
This manual page was written by Andriy Gapon <avg@FreeBSD.org>.
BUGS
gptzfsboot looks for ZFS meta-data only in MBR partitions (known on FreeBSD as slices). It does not look into BSD disklabel(8) partitions that are traditionally called partitions. If a disklabel partition happens to be placed so that ZFS meta-data can be found at the fixed offsets relative to a slice, then gptzfsboot will recognize the partition as a part of a ZFS pool, but this is not guaranteed to happen.
BSD September 15, 2014 BSD