Operating Systems > Solaris
SPARC T4-1 / Solaris 11 / Add 2 new HDDs in RAID 0 configuration
Post 302696993 by DukeNuke2, Thursday 6th of September 2012, 01:56:03 AM
Why so hard, and not just a ZFS mirror? It's one command to create a new filesystem with the two disks, and there is no need for a reboot...
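
For reference, a minimal sketch of that one-liner, assuming the two new disks show up as c0t1d0 and c0t2d0 (device names here are illustrative; check format(1M) or cfgadm for the real ones):

     # create a mirrored pool named "datapool" from the two new disks; no reboot required
     zpool create datapool mirror c0t1d0 c0t2d0

     # verify the pool and its default mountpoint (/datapool)
     zpool status datapool
     zfs list datapool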
 

8 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Solaris 8 SPARC kernel configuration guide

I've scoured the net and haven't found too many items. I found one at Princeton and a few things at Sun's site; however, I don't find them to be at my level. They seem to be written for someone who is very comfortable doing what they do. Does anyone know of any good tutorial that is written similar... (1 Reply)
Discussion started by: xyyz

2. AIX

Regarding RAID 0 configuration

Hey all, thanks for the help you gave me yesterday; I need your help again. I have an RS/6000 server with AIX 4.3.3 and external storage (multipack) with 12 hard disk drives connected to the RS/6000 SCSI controller, and I don't have any RAID card. Now I have to... (0 Replies)
Discussion started by: solaris8in

3. Shell Programming and Scripting

A script for converting RAID configuration log messages to ChangeLog files...

Hi all, I am new to shell scripting and this is very urgent. When I execute the command metastat (RAID configuration info), it displays some information: #metastat d1: submirror status: okay, pass: 1; d2: submirror status: okay; d3: submirror status: error. If the status is okay, no problem; once I... (0 Replies)
Discussion started by: arjunreddy3

4. Solaris

T5240 and Internal SAS RAID Configuration help

Dear friends, I need to configure a T5240 with the internal SAS RAID HBA (SG-XPCIESAS-R-INT-Z). The T5240 uses 8 hard disks... From the RAID card documentation I have found that I need to create a JumpStart server that includes three packages (SUNWaac, StorMan, SUNWgccruntime) if I'm using Solaris 10 5/08... (5 Replies)
Discussion started by: nicktrix

5. SCO

RAID 1 configuration in SCO Open UNIX

Dear team, how can I configure RAID 1 (mirroring) using IDE HDDs in SCO Open UNIX 5? I have two identical 80 GB HDDs (same make/model). Thanks. (0 Replies)
Discussion started by: sudhir69

6. Red Hat

Software RAID configuration

We have configured software-based RAID 5 with LVM on our RHEL 5 servers. Please let us know if it is good practice to configure software RAID on live environment servers. What are the disadvantages of software RAID compared to hardware RAID? (4 Replies)
Discussion started by: mitchnelson

7. UNIX for Dummies Questions & Answers

Can I emulate Solaris/SPARC on VirtualBox, or use another emulator to run Solaris for SPARC on my Windows 7 PC?

Hi gurus, can I emulate Solaris/SPARC on VirtualBox, or is there another emulator to run Solaris for SPARC on my Windows 7 PC? Regards, Israel. (9 Replies)
Discussion started by: iga3725

8. Red Hat

RAID Configuration for IBM Serveraid-7k SCSI RAID Controller

Hello, I want to delete the RAID configuration an old server has. Since I haven't had the chance to work with this specific RAID controller in the past, can you please help me perform the configuration? I downloaded the IBM ServeRAID Support CD, but I wasn't able to configure the video card, so I... (0 Replies)
Discussion started by: @dagio
GPTZFSBOOT(8)						    BSD System Manager's Manual 					     GPTZFSBOOT(8)

NAME
gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers

DESCRIPTION
gptzfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS pool.  gptzfsboot is installed in a freebsd-boot partition of a GPT-partitioned disk with gpart(8).
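
For illustration, a minimal sketch of creating such a partition on a disk named ada0 (the disk name and the 512k size are assumptions for this example; see the EXAMPLES section below for installing the bootcode itself):

     gpart add -t freebsd-boot -s 512k ada0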

IMPLEMENTATION NOTES
The GPT standard allows a variable number of partitions, but gptzfsboot only boots from tables with 128 partitions or less.

BOOTING
gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible hard disks or partitions on them.  gptzfsboot looks for ZFS device labels on all visible disks and in discovered supported partitions for all supported partition scheme types.

The search starts with the disk from which gptzfsboot itself was loaded.  Other disks are probed in BIOS defined order.  After a disk is probed and gptzfsboot determines that the whole disk is not a ZFS pool member, the individual partitions are probed in their partition table order.  Currently GPT and MBR partition schemes are supported.  With the GPT scheme, only partitions of type freebsd-zfs are probed.

The first pool seen during probing is used as a default boot pool.  The filesystem specified by the bootfs property of the pool is used as a default boot filesystem.  If the bootfs property is not set, then the root filesystem of the pool is used as the default.  zfsloader(8) is loaded from the boot filesystem.  If /boot.config or /boot/config is present in the boot filesystem, boot options are read from it in the same way as boot(8).

The ZFS GUIDs of the first successfully probed device and the first detected pool are made available to zfsloader(8) in the vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.
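
For example, the pool's bootfs property can be set with zpool(8) so that a specific filesystem is used as the default boot filesystem (the pool and dataset names here are illustrative only):

     zpool set bootfs=zroot/ROOT/default zroot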

USAGE
Normally gptzfsboot will boot in fully automatic mode.  However, like boot(8), it is possible to interrupt the automatic boot process and interact with gptzfsboot through a prompt.  gptzfsboot accepts all the options that boot(8) supports.

The filesystem specification and the path to zfsloader(8) are different from boot(8).  The format is

     [zfs:pool/filesystem:][/path/to/loader]

Both the filesystem and the path can be specified.  If only a path is specified, then the default filesystem is used.  If only a pool and filesystem are specified, then /boot/zfsloader is used as a path.

Additionally, the status command can be used to query information about discovered pools.  The output format is similar to that of zpool status (see zpool(8)).

The configured or automatically determined ZFS boot filesystem is stored in the zfsloader(8) loaddev variable, and also set as the initial value of the currdev variable.
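
For instance, to load the loader from an alternative boot environment, one might enter at the prompt (the pool and dataset names are illustrative):

     zfs:zroot/ROOT/previous:/boot/zfsloader

or list the discovered pools with:

     status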

FILES
     /boot/gptzfsboot    boot code binary
     /boot.config        parameters for the boot block (optional)
     /boot/config        alternative parameters for the boot block (optional)

EXAMPLES
gptzfsboot is typically installed in combination with a ``protective MBR'' (see gpart(8)).  To install gptzfsboot on the ada0 drive:

     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

gptzfsboot can also be installed without the PMBR:

     gpart bootcode -p /boot/gptzfsboot -i 1 ada0

SEE ALSO
boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)

HISTORY
gptzfsboot appeared in FreeBSD 7.3.

AUTHORS
This manual page was written by Andriy Gapon <avg@FreeBSD.org>.

BUGS
gptzfsboot looks for ZFS meta-data only in MBR partitions (known on FreeBSD as slices).  It does not look into BSD disklabel(8) partitions that are traditionally called partitions.  If a disklabel partition happens to be placed so that ZFS meta-data can be found at the fixed offsets relative to a slice, then gptzfsboot will recognize the partition as a part of a ZFS pool, but this is not guaranteed to happen.
BSD                             September 15, 2014                             BSD