Operating Systems Solaris Local Full Backup and Restore ZFS Post 302389841 by tien86 on Tuesday 26th of January 2010 06:10:08 AM
Local Full Backup and Restore ZFS

Hi all,

I'm testing a backup and restore procedure with ZFS.

My server has two disks, and I want a backup/restore workflow similar to the ufsdump utility:

Disk 0 holds rpool (the root zpool), and disk 1 (the backup zpool) will store full backup replicas. If the rpool zpool ever has a problem, I can "boot cdrom -s" and use my full backup replica to restore the OS on disk 0.

Here is an example I found:
Code:
For example:
# zfs send tank/gozer@0830 > /bkups/gozer.083006
# zfs receive tank/gozer2@today < /bkups/gozer.083006
# zfs rename tank/gozer tank/gozer.old
# zfs rename tank/gozer2 tank/gozer
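To replicate an entire root pool rather than a single dataset, one approach (a sketch only — the dataset and snapshot names here are assumptions, and the exact flags should be checked against your Solaris release) is a recursive snapshot followed by a recursive send into the backup pool:

```shell
# Take a recursive snapshot of the whole root pool
zfs snapshot -r rpool@full-0126

# Replicate the entire hierarchy into the backup pool:
#   -R  sends all descendant datasets, properties, and snapshots
#   -F  rolls back the target if needed, -d strips the source pool name,
#   -u  leaves the received datasets unmounted
zfs send -R rpool@full-0126 | zfs receive -Fdu backup/rpool-replica
```

This keeps a complete, property-preserving copy of rpool inside the backup pool, which can later be sent back in the other direction during recovery.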

How can I apply the commands above in my situation? I ran "zpool import", but the pools' mount points cannot be mounted:

Code:
ok>boot cdrom -s
# zpool import backup
cannot mount '/backup': failed to create mountpoint
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
backup  8.90G  58.0G  8.90G  /backup
# zpool import rpool
cannot mount '/rpool': failed to create mountpoint

# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
backup                         8.90G  58.0G  8.90G  /a/backup
rpool                          9.79G  57.1G    94K  /rpool
rpool@0602                         0      -    94K  -
rpool/ROOT                     4.79G  57.1G    18K  legacy
rpool/ROOT@0602                    0      -    18K  -
rpool/ROOT/s10s_u7wos_08       4.79G  57.1G  4.78G  /
rpool/ROOT/s10s_u7wos_08@0602  3.26M      -  4.78G  -
rpool/dump                     4.00G  57.1G  4.00G  -
rpool/dump@0602                    0      -  4.00G  -
rpool/swap                     1.00G  58.1G    16K  -
rpool/swap@0602                    0      -    16K  -
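The "failed to create mountpoint" errors occur because the miniroot's / is read-only, so ZFS cannot create /backup or /rpool when the pools are imported. A common workaround (a sketch, not verified on this exact setup; the dataset and snapshot names are placeholders) is to import each pool under an alternate root so every mountpoint lands beneath a writable directory such as /a:

```shell
# Import with an alternate root; a pool normally mounted at /backup
# will instead appear under /a/backup
zpool import -f -R /a backup
zpool import -f -R /a rpool

# A previously saved replica can then be sent back over the damaged root pool
zfs send -R backup/rpool-replica@full-0126 | zfs receive -Fdu rpool
```

After restoring a root pool this way, the boot block may also need to be reinstalled (installboot on SPARC); the root-pool recovery procedure in the Solaris ZFS Administration Guide covers the full sequence.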

I'm getting muddled now; ZFS is quite complex. Please advise on my case — or should I try another approach to full backup and restore?

Thanks!
