dual boot solaris/solaris zfs file system

Hi,

I am running into some problems creating a dual-boot system of two Solaris instances using the ZFS file system, and I was wondering if someone could help me out.

First, some background. I have been asked to change the file system of our server from UFS to ZFS. Currently we are running the Solaris 10 10/09 release (x86) with an Areca RAID controller. The controller is configured as one raid5 array plus one single passthrough disk on the same controller. The file system currently used is UFS, which is fine, and the GRUB boot configuration at start-up is fairly simple and basic.
The main raid5 system boots from hd0 (c1t0d0s0) and the passthrough from hd1 (c1t1d0s0). Setting this up in menu.lst is simple.
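For illustration, a UFS entry per disk looks roughly like this (a sketch only; the kernel and module paths are the standard Solaris 10 x86 ones, and the titles are made up):

title Solaris 10 (main, raid5 on first disk)
root (hd0,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive

title Solaris 10 (backup, passthrough on second disk)
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive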
The reason for having the passthrough disk is so that this instance can act as an emergency backup, carrying the latest available backup of the main raid5 instance.
Twice a day a backup script runs on the raid5 system and copies everything over to the passthrough disk (including server name, IP, routing, etc.).
With UFS and mount points, this is simple.
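As a rough sketch of the idea (not our actual script; the mount point /backup, the ufsdump/ufsrestore approach and the cron times are only illustrative):

# mount the passthrough root slice and clone the live root file system onto it
mount /dev/dsk/c1t1d0s0 /backup
cd /backup && ufsdump 0f - / | ufsrestore rf -
umount /backup

# scheduled twice a day from root's crontab, e.g.
# 0 6,18 * * * /usr/local/bin/sync_to_passthrough.sh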

However, I have been asked to switch our new servers from UFS to ZFS, so a clean install of the Solaris OS, etc.
This I have done, and I have created several file systems under the main rpool.
I did this for both the raid5 system and the passthrough disk.
Both disks look like this:

NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      13.4G   132G    36K  /rpool
rpool/ROOT                 4.39G   132G    21K  legacy
rpool/ROOT/s10x_u8wos_08a  4.39G   132G  4.39G  /
rpool/dump                 2.00G   132G  2.00G  -
rpool/export                 44K   132G    23K  /export
rpool/export/home            21K   132G    21K  /export/home
rpool/swap                     2G   134G  98.9M  -
rpool/usr3                    24K  10.0G    24K  /usr3
rpool/wtllog                 122K  72.0G   122K  /wtllog
rpool/wtlsw                  239M  82.8G   239M  /wtlsw
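(For reference, the extra file systems were simply created under rpool with zfs create, roughly like this; the quota on rpool/usr3 is assumed from the AVAIL column above:)

zfs create rpool/usr3
zfs create rpool/wtllog
zfs create rpool/wtlsw
zfs set quota=10G rpool/usr3    # assumed, matching the 10.0G AVAIL shown above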

The problem I have now is that I don't know how to configure menu.lst to get the three options into the GRUB boot menu.
I need the following menu:
1- boot main production
2- boot from passthrough disk
3- boot fail safe

Options 1 and 3 are created automatically by the OS installation, but how do I get the proper settings for booting entirely from the passthrough disk?
(With UFS, building the boot order and entries in menu.lst was easy: just point at hd0 or hd1.)
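For comparison, the entry the installer created for the main pool looks roughly like this (based on typical Solaris 10 10/09 ZFS-root examples, not copied verbatim from my box), and my best guess for the passthrough entry is sketched underneath; the pool signature pool_rpool2 and the boot environment name on the second pool are placeholders I made up, which is exactly the part I am unsure about:

title Solaris 10 10/09 (main, raid5 pool)
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module$ /platform/i86pc/boot_archive

# guess for the passthrough disk -- signature and bootfs are placeholders
title Solaris 10 10/09 (backup, passthrough pool)
findroot (pool_rpool2,0,a)
bootfs rpool2/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module$ /platform/i86pc/boot_archive

As far as I understand, findroot also needs a matching bootsign file under /boot/grub/bootsign/ on the target pool, but I am not sure how that should be set up for the second disk.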

I have already gone through the forum and done a Google search, but what I find is mainly about dual booting between Solaris and Windows or Linux.

Any help, or a pointer to an online article about this problem, is highly appreciated.

Thanks in advance and best regards,

Epco
 
