How to backup ZFS filesystems to files on USB drive?


 
# 15  
Old 02-16-2014
Quote:
Originally Posted by gjackson123
So I don’t understand where the issue is with mount /export which is preventing the system from booting into multi-user mode.
Just read this thread again from the beginning.

Creating snapshots cannot have any adverse effects.
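For example, a recursive snapshot like the one discussed in this thread can be listed and, if unwanted, destroyed at will (a sketch, assuming it was named 001):

Code:
# snapshots are read-only and essentially free until data diverges
zfs list -t snapshot -r rpool
# remove the snapshot from rpool and all its descendants
zfs destroy -r rpool@001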

Importing a root pool without taking specific precautions will create trouble, which is what you are experiencing.

In post 9 I wrote:
Quote:
There are also specific options you need to use to import a root pool, as otherwise some of the properties, especially mount points, will collide with your current root pool.
In post #13 I wrote:
Quote:
This is dubious. As I already wrote, you need specific options to import a root pool, at least a different root mountpoint.
and
Quote:
Yes, but I suspect there will be errors because you'll end up with multiple file systems with the same mount point.
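To make this concrete, here is a minimal sketch of a collision-free import, using an alternate root so the pool's file systems mount under /a instead of on top of the running system (the pool name rpool2 is illustrative):

Code:
# mount everything belonging to the imported pool under /a
zpool import -f -R /a rpool2
# inspect it, then detach it cleanly when done
zfs list -r rpool2
zpool export rpool2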

# 16  
Old 02-17-2014
Hi jlliagre,

Can you offer any advice on what to troubleshoot to get out of this hole? Fortunately, I managed to boot from another partition on the same box which also has an rpool, and it is working. However, it is a different boot environment, and I still couldn't get in as root on it. Nevertheless, I need to find out more about how this rpool setup differs from the previous one that I backed up, which ended up unable to boot to multi-user mode due to something in the /export folder.

Some people have advised me to remove the home folder under /export (copying it somewhere first) before rebooting, but I am hesitant for fear of being locked out of the system altogether.

I am very much dependent on your guidance from here.

Many thanks again,

George
# 17  
Old 02-17-2014
Quote:
Originally Posted by gjackson123
Can you offer any advice on what to troubleshoot to get out of this hole?
It is not easy without knowing precisely what you did. I would probably start by destroying backups/usbdrive, if this is where you imported the stream.
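Something along these lines, assuming the stream really ended up in backups/usbdrive and nothing else lives there:

Code:
# recursively destroy the file system with its children and snapshots
zfs destroy -r backups/usbdrive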
# 18  
Old 02-18-2014
Update changes made on multi-boot server

Ok jlliagre,
I am so glad you haven't deserted me now that we have come to a crunch. Let me list as many of the commands as I can recall using to carry out the backup:

Code:
 
# zfs list -f rpool
# zfs snapshot -r rpool@001
# zpool create -f -R backups c0t0d0s2 (USB)
# zfs create -f  backups/usbdrive
# zpool import -R backups
# zfs list -r backups
# zpool status -R -v backups
# zfs send -R -p rpool  > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1

Below is output from the alternative multi-boot environment (servername_srs_1) that I have ended up booting into on the same box, the only one I can boot successfully, though I still haven't worked out what its root password is:

Code:
 
-bash-3.2$ hostname
servername_sr02
-bash-3.2$ uname -a
SunOS servername_sr02 5.10 Generic_147441-09 i86pc i386 i86pc
-bash-3.2$ df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/servername_srs_1  228G   6.4G   204G     4%    /
/devices                0K     0K     0K     0%    /devices
ctfs                    0K     0K     0K     0%    /system/contract
proc                    0K     0K     0K     0%    /proc
mnttab                  0K     0K     0K     0%    /etc/mnttab
swap                   8.2G   1.0M   8.2G     1%    /etc/svc/volatile
objfs                   0K     0K     0K     0%    /system/object
sharefs                 0K     0K     0K     0%    /etc/dfs/sharetab
rpool/ROOT/servername_srs_1/var 228G   1.6G   204G     1%    /var
swap                   8.2G   1.9M   8.2G     1%    /tmp
swap                   8.2G   376K   8.2G     1%    /var/run
rpool/export           228G    32K   204G     1%    /export
rpool/export/home      228G   4.6M   204G     1%    /export/home
rpool                  228G    49K   204G     1%    /rpool
/vol/dev/dsk/c0t0d0/sol_10_811_x86
-bash-3.2$ /sbin/zpool status -v rpool
  pool: rpool
 state: ONLINE
 scan: resilvered 9.59G in 0h12m with 0 errors on Wed Mar 14 12:09:12 2012
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
errors: No known data errors
-bash-3.2$ ./zfs list -r rpool
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
rpool                                          24.4G   204G    49K  /rpool
rpool@001                                        30K      -    49K  -
rpool/ROOT                                     10.7G   204G    31K  legacy
rpool/ROOT@001                                     0      -    31K  -
rpool/ROOT/10                                  14.1M   204G  4.37G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/10@001                               299K      -  4.37G  -
rpool/ROOT/10/var                              6.50M   204G  1011M  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/10/var@001                             1K      -  1011M  -
rpool/ROOT/base_srs_install                    38.2M   204G  4.48G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/base_srs_install@001                 299K      -  4.48G  -
rpool/ROOT/base_srs_install/var                14.4M   204G  1.47G  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/base_srs_install/var@001               1K      -  1.47G  -
rpool/ROOT/servername_srs_1                     189M   204G  6.37G  /
rpool/ROOT/servername_srs_1@001                3.29M      -  6.37G  -
rpool/ROOT/servername_srs_1/var                 112M   204G  1.56G  /var
rpool/ROOT/servername_srs_1/var@001            10.5M      -  1.48G  -
rpool/ROOT/servername_srs_2                    10.5G   204G  6.47G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/servername_srs_2@base_srs_install   75.0M      -  4.37G  -
rpool/ROOT/servername_srs_2@servername_srs_1   71.3M      -  4.47G  -
rpool/ROOT/servername_srs_2@servername_srs_2   67.2M      -  6.37G  -
rpool/ROOT/servername_srs_2@001                13.2M      -  6.38G  -
rpool/ROOT/servername_srs_2/var                3.74G   204G  3.33G  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/servername_srs_2/var@base_srs_install  23.8M      -  1011M  -
rpool/ROOT/servername_srs_2/var@servername_srs_1  14.8M      -  1.46G  -
rpool/ROOT/servername_srs_2/var@servername_srs_2  34.7M      -  1.48G  -
rpool/ROOT/servername_srs_2/var@001               3.36M      -  3.33G  -
rpool/dump                                     1.00G   204G  1.00G  -
rpool/dump@001                                   16K      -  1.00G  -
rpool/export                                   4.83M   204G    32K  /export
rpool/export@001                                   0      -    32K  -
rpool/export/home                              4.80M   204G  4.61M  /export/home
rpool/export/home@001                           192K      -  4.61M  -
rpool/swap                                     12.7G   213G  4.16G  -
rpool/swap@001                                     0      -  4.16G  -
-bash-3.2$ ls -lt /export
total 3
drwxr-xr-x   3 root     root           3 Nov 16 00:08 home
-bash-3.2$ cd /export
-bash-3.2$ ls
home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x   3 root     root           3 Nov 16 00:08 home
-bash-3.2$ cd home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x  15 support  sys           23 Nov 16 19:02 support
-bash-3.2$ cd support
-bash-3.2$ ls -lt
total 6
drwxr-xr-x   2 support  sys            3 Mar 10  2012 Desktop
drwxr-xr-x   2 support  sys            2 Mar 10  2012 Documents

I have taken the following action in an attempt to resolve the mount issue of /export, by removing home under the /export mount point, which was preventing /export from mounting for servername_sr02:

Code:
 
- Boot up in single-user mode & log in as root
- Unmount /export/home
- Check the /export directory (mount point)
- Remove the /export/home directory
- Reboot, but found another set of similar errors as follows:
cannot mount '/export': directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 15 10:50:17 svc.startd[10]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
svc.startd[10]: system/filesystem/local:default failed fatally: transitioned to maintenance
ERROR: svc:/system/filesystem/minimal:default failed to mount /var/run (see 'svcs -x' for details)
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: Method "/lib/svc/method/fs-minimal" failed with exit status 95.
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: failed fatally, transitioned to maintenance...
Requesting System Maintenance Mode
Console login service(s) cannot run
Root password for system maintenance (Control-d to bypass):
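From what I have read, the generic recipe for clearing a non-empty mount point would be something like the following (only a sketch, and I would copy anything valuable aside first):

Code:
# with the pool's file systems unmounted, see what is hiding
# underneath the /export mount point
zfs umount rpool/export/home
zfs umount rpool/export
ls -la /export
# remove the stray entries blocking the mount, then retry
rm -rf /export/*
zfs mount -a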

( 1 ) Why am I encountering yet another svc error when booting into servername_sr02? I am still not clear whether the same rpool is used by both of the following boot entries in Grub:

Code:
 
title servername_srs_1
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_1 failsafe
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
title servername_srs_2
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_2 failsafe
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe

( 2 ) Can you confirm whether I am still using the same rpool in both boot environments? They appear to have different root '/' filesystems, which perhaps explains why I am not able to su to root with the original password on the failed servername_srs_2.
( 3 ) I wouldn't mind staying in the current servername_srs_1 boot environment, provided that I can hack in as root.
( 4 ) Also, I can no longer boot from the Solaris 10 x86 installation DVD after having inadvertently reset the boot firmware. Now the system just prints "..........................." and goes no further.
Hope this update hasn't confused you completely.
As always, thank you so much for sticking around,
George
# 19  
Old 02-18-2014
The third command is syntactically incorrect:
Code:
zpool create –f –R backups c0t0d0s2

Beyond the incorrect hyphens, the -R option is missing a parameter.
Did you really use the -R option and, if yes, with what parameter? It is missing here.
Are you sure c0t0d0 was the USB drive?
Why did you use the -f option?
# 20  
Old 02-19-2014
Explanation on commands used to create alternative backups zpool on same system

Hi Jlliagre,

I think the missing parameter was /mnt. The full syntax would be:

Code:
zpool create -f -R /mnt backups c0t0d0s2

Where -R is to create the zpool under an alternative root on the same system, according to my understanding.

I am absolutely sure that disk c0t0d0 is the USB drive. I used format -e and found it to be 1.9 TB in size, as opposed to the other two at around 239 GB.

I used -f with the intention of forcing the zpool creation after it failed without it. Nevertheless, this is only my recollection and may not be the exact command used. It was unfortunate that I couldn't get into the text-mode console at the time to copy-paste every step.

Btw, what do I specify in Grub to boot up in text mode (ttyb.....) so I can SSH to the console screen? I normally use the following sequence of steps to get into single-user mode, for instance when booting from the Solaris 10 x86 installation DVD:

Code:
( i ) e (edit)
( ii ) e (edit)
( iii ) append -s after CDROM followed by ENTER
( iv ) b (boot up)

Btw, this Sun Ray server has never had a backup done in the past, so I am hesitant to tamper with it beyond hoping to crack its password, which would save a lot of headaches.

I suspect it would be a similar sequence, but I am not sure of the syntax to boot from the local disk and into text (ttyb....) mode.
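My guess is that the kernel line would need to be edited to something like this, but I am not sure (a sketch only; console=ttyb should move the console to the second serial port and -s requests single-user mode):

Code:
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttyb -s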

Thanks again,

George
# 21  
Old 02-19-2014
Quote:
Originally Posted by gjackson123
Hi Jlliagre,

I think the missing parameter was /mnt. The full syntax would be:

Code:
zpool create -f -R /mnt backups c0t0d0s2

Where -R is to create the zpool under an alternative root on the same system, according to my understanding.
The -R option here is dubious. Its only effect is to have the backups pool mounted on /mnt instead of the default location /backups. That might lead to confusion, as /mnt is meant for traditional temporary mounts.
Quote:
I am absolutely sure that disk c0t0d0 is the USB drive. I used format -e and found it to be 1.9 TB in size, as opposed to the other two at around 239 GB.
Granted. This is surprising, as one would expect the first drive on the first controller to be an internal disk, but why not.
Quote:
I used -f with the intention of forcing the zpool creation after it failed without it. Nevertheless, this is only my recollection and may not be the exact command used. It was unfortunate that I couldn't get into the text-mode console at the time to copy-paste every step.
It looks like you created a pool on a device that contained an already mounted file system. This usually leads to disaster.
Quote:
Btw, what do I specify in Grub to boot up in text mode (ttyb.....) so I can SSH to the console screen? I normally use the following sequence of steps to get into single-user mode, for instance when booting from the Solaris 10 x86 installation DVD:

Code:
( i ) e (edit)
( ii ) e (edit)
( iii ) append -s after CDROM followed by ENTER
( iv ) b (boot up)

I am not sure what you mean: ttyb is for serial console access, not ssh, and in single-user mode the ssh service is disabled anyway.
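For instance, once the box is back in multi-user mode, the state of the ssh service can be checked with:

Code:
svcs -l svc:/network/ssh:default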

Back to the commands you entered:

You created:
- a recursive snapshot of the root pool named 001
- a zpool on your USB disk, while that disk might already have been in use by something else.
- a file system named usbdrive on this pool

Then, you imported this pool.

Can you explain why, as it should already have been imported?

What did the "zfs list" and "zpool status" commands report ?

The "zfs send" command syntax is incorrect: you cannot send a file system, only a snapshot, and the -r option is not supported here, so I'm assuming you used this command:
Code:
zfs send -R -p rpool@001 > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1

- Did you check that the /mnt/usbdrive/servername_sr02_rpool@001.snapshot1 file was created? What was its size?

- What did you do and what happened after that ?
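Regarding the first question, something like this would confirm it (a sketch; the dry run reads the stream back without writing anything, assuming the backups pool is imported):

Code:
# check the stream file exists and has a plausible size
ls -lh /mnt/usbdrive/servername_sr02_rpool@001.snapshot1
# verify the stream is readable without actually receiving it
zfs receive -nv backups/verify < /mnt/usbdrive/servername_sr02_rpool@001.snapshot1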