Migration of system having UFS root FS with zones root to ZFS root FS


# 1  

Hi All,

Following the ZFS documentation from the Oracle site, I was able to migrate a UFS root file system without zones to a ZFS root file system successfully. On a system whose UFS root file system contains zones, however, only the global zone migrates to the ZFS root; the non-global zone stays on the UFS root file system. I have not been able to migrate the zone itself to ZFS.

Please let me know how this can be done.
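For reference, the overall sequence I followed (from the Oracle Live Upgrade documentation; the pool, BE, and device names are simply the ones I chose) can be sketched as:

```shell
# Sketch of the documented UFS-with-zones -> ZFS root migration.
# Pool/BE/device names are the ones used on my system, not fixed values.

# 1. Create the root pool on a slice with an SMI (VTOC) label.
zpool create rootpool c1t1d0s0

# 2. Create a ZFS boot environment from the running UFS BE;
#    lucreate carries the non-global zones over with it.
lucreate -c ufsBE -n zfsBE -p rootpool

# 3. Activate the new BE and reboot with init/shutdown
#    (never reboot/halt/uadmin after luactivate).
luactivate zfsBE
init 6

# 4. Once booted from the ZFS BE, create one more BE so the
#    zone roots become ZFS clones rather than plain copies.
lucreate -n zfsBE2
luactivate zfsBE2
init 6
```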
# 2  
The activity logs for this are below; please let me know where I went wrong.




Code:
bash-3.2# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p
WARNING - This disk may be in use by an application that has
          modified the fdisk table. Ensure that this disk is
          not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
y
format> 
format> 
format> p


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]: 
Enter new starting cyl[1]: 
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: $
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 2084       15.96GB    (2084/0/0) 33479460
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition> l
Ready to label disk, continue? y

partition> q


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> q
bash-3.2# 
bash-3.2# 
bash-3.2# echo |format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number): 
bash-3.2# 
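As a non-interactive sanity check (not part of the migration itself), the slice table can also be read with prtvtoc before the disk is handed to zpool:

```shell
# Print the VTOC of the target disk; slice 0 should span the disk
# apart from the boot slice, matching what format displayed above.
prtvtoc /dev/rdsk/c1t1d0s2

# With two disks of identical geometry, a layout can also be
# copied instead of typed in interactively:
#   prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
```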
bash-3.2# 
bash-3.2# zpool create rootpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s2
bash-3.2# zpool create -f rootpool c1t1d0s0
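The overlap warning is expected: slice 2 is the traditional whole-disk "backup" slice, so any data slice overlaps it, and -f is the normal way past this for a root pool. Before running lucreate it is worth confirming the pool is healthy, e.g.:

```shell
# -x prints only unhealthy pools, so a clean pool gives a
# one-line confirmation.
zpool status -x rootpool
# pool 'rootpool' is healthy
```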
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0      7.8G   4.5G   3.2G    59%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   668M   964K   667M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       7.8G   4.5G   3.2G    59%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   667M    40K   667M     1%    /tmp
swap                   667M    36K   667M     1%    /var/run
/dev/dsk/c1t0d0s7      3.5G   3.6M   3.5G     1%    /export/home
rootpool                16G    31K    16G     1%    /rootpool
bash-3.2# 
bash-3.2# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
rootpool  77.5K  15.6G    31K  /rootpool
bash-3.2# zpool list
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rootpool  15.9G  78.5K  15.9G     0%  ONLINE  -
bash-3.2# 
bash-3.2# zoneadm list
global
zone1
bash-3.2# 
bash-3.2# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   1 zone1            running    /export/zone1                  native   shared
bash-3.2# 
bash-3.2# zlogin -C zone1
[Connected to zone 'zone1' console]

zone1 console login: root
Password: 
Jun 24 17:17:26 zone1 login: ROOT LOGIN /dev/console

Last login: Sun Jun 24 15:04:01 on console
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# 

zone1 console login: ~.
Password: 

~.
[Connection to zone 'zone1' console closed]
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# lucreate -c ufsBE -n zfsBE -p rootpool -D /var
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>.
Creating <zfs> file system for </var> in zone <global> on <rootpool/ROOT/zfsBE/var>.
Populating file systems on boot environment <zfsBE>.
Temporarily mounting zones in PBE <ufsBE>.
Analyzing zones.
Mounting ABE <zfsBE>.
Cloning mountpoint directories.
Generating file list.
Copying data from PBE <ufsBE> to ABE <zfsBE>.

  0% of filenames transferred
  [ ... 1% through 99% progress lines trimmed ... ]
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfsBE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <ufsBE>.
Making boot environment <zfsBE> bootable.
Updating bootenv.rc on ABE <zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE> in GRUB menu
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         
bash-3.2# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rootpool                 5.74G  9.88G    36K  /rootpool
rootpool/ROOT            4.70G  9.88G    31K  legacy
rootpool/ROOT/zfsBE      4.70G  9.88G  4.62G  /
rootpool/ROOT/zfsBE/var  85.9M  9.88G  85.9M  /var
rootpool/dump             528M  10.4G    16K  -
rootpool/swap             535M  10.4G    16K  -
bash-3.2# 
bash-3.2# luactivate zfsBE
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufsBE>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

Generating boot-sign for ABE <zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfsBE>
Generating partition and slice information for ABE <zfsBE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE> successful.
bash-3.2# 
bash-3.2# 
bash-3.2# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.2# 
Connection closed by foreign host.





bash-3.2# 
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
rootpool/ROOT/zfsBE     16G   4.6G   9.9G    32%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   397M   972K   396M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        15G   4.6G   9.9G    32%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
rootpool/ROOT/zfsBE/var
                        16G    86M   9.9G     1%    /var
swap                   396M    40K   396M     1%    /tmp
swap                   396M    36K   396M     1%    /var/run
/dev/dsk/c1t0d0s7      3.5G   3.6M   3.5G     1%    /export/home
rootpool                16G    42K   9.9G     1%    /rootpool
bash-3.2# 
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -         
bash-3.2# 
bash-3.2# 
bash-3.2# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   5 zone1            running    /export/zone1                  native   shared
bash-3.2# 
bash-3.2# 
bash-3.2# zlogin zone1
[Connected to zone 'zone1' pts/3]
Last login: Sun Jun 24 17:17:26 on console
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.2# 
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/                       15G   4.6G   9.9G    32%    /
/dev                    15G   4.6G   9.9G    32%    /dev
/lib                    15G   4.6G   9.9G    32%    /lib
/platform               15G   4.6G   9.9G    32%    /platform
/sbin                   15G   4.6G   9.9G    32%    /sbin
/usr                    15G   4.6G   9.9G    32%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                   393M   332K   393M     1%    /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
                        15G   4.6G   9.9G    32%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   393M    36K   393M     1%    /tmp
swap                   393M    20K   393M     1%    /var/run
bash-3.2# 
bash-3.2# 
bash-3.2# exit
# 
[Connection to zone 'zone1' pts/3 closed]
bash-3.2# lucreate zfsBE2
ERROR: command line argument(s) <zfsBE2> not recognized
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ] 
        [ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ] 
        [ -l error_log-file ] [ -M slice_list-file [ -M ... ] ] 
        [ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ] 
        [-p rootPool ] [-D datasetMountPoint [ -D ... ] ]
        [ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ] 
        [ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ] 
        [ -z filter_list-file ] 
bash-3.2# 
bash-3.2# lucreate -n zfsBE2
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsBE2>.
Source boot environment is <zfsBE>.
Creating file systems on boot environment <zfsBE2>.
Populating file systems on boot environment <zfsBE2>.
Temporarily mounting zones in PBE <zfsBE>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rootpool/ROOT/zfsBE> on <rootpool/ROOT/zfsBE@zfsBE2>.
Creating clone for <rootpool/ROOT/zfsBE@zfsBE2> on <rootpool/ROOT/zfsBE2>.
Creating snapshot for <rootpool/ROOT/zfsBE/var> on <rootpool/ROOT/zfsBE/var@zfsBE2>.
Creating clone for <rootpool/ROOT/zfsBE/var@zfsBE2> on <rootpool/ROOT/zfsBE2/var>.
Mounting ABE <zfsBE2>.
Generating file list.
Copying data from PBE <zfsBE> to ABE <zfsBE2>.

  0% of filenames transferred
  [ ... 1% through 99% progress lines trimmed ... ]
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfsBE2>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <zfsBE>.
Making boot environment <zfsBE2> bootable.
Updating bootenv.rc on ABE <zfsBE2>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE2> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE2> in GRUB menu
Population of boot environment <zfsBE2> successful.
Creation of boot environment <zfsBE2> successful.
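This second lucreate is the step that turns the zone roots into ZFS datasets: it snapshots and clones the source BE instead of copying files. Whether the zone actually landed on ZFS can be checked from the global zone, for example:

```shell
# List every dataset under the new BE; zone roots converted to ZFS
# appear here as clones of the source BE's datasets.
zfs list -r rootpool/ROOT/zfsBE2

# Confirm the filesystem type backing the zonepath
# (should report zfs, not ufs).
df -n /export/zone1
```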
bash-3.2# 
bash-3.2# 
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -         
zfsBE2                     yes      no     no        yes    -         
bash-3.2# luactivate zfsBE2
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <zfsBE>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE2>.

Generating boot-sign for ABE <zfsBE2>
Saving existing file </etc/bootsign> in top level dataset for BE <zfsBE2> as <mount-point>//etc/bootsign.prev.
Generating partition and slice information for ABE <zfsBE2>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rootpool
     zfs inherit -r mountpoint rootpool/ROOT/zfsBE
     zfs set mountpoint=<mountpointName> rootpool/ROOT/zfsBE 
     zfs mount rootpool/ROOT/zfsBE

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE2> successful.
bash-3.2# 
bash-3.2# 
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    no        no     -         
zfsBE2                     yes      no     yes       no     -         
bash-3.2# 
bash-3.2# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE2> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.2# 
Connection closed by foreign host.


bash-3.2# 
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
rootpool/ROOT/zfsBE2    16G   4.6G   9.6G    33%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   426M   968K   425M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        14G   4.6G   9.6G    33%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
rootpool/ROOT/zfsBE2/var
                        16G    87M   9.6G     1%    /var
swap                   425M    40K   425M     1%    /tmp
swap                   425M    36K   425M     1%    /var/run
/dev/dsk/c1t0d0s7      3.5G   3.6M   3.5G     1%    /export/home
rootpool                16G    44K   9.6G     1%    /rootpool
bash-3.2# 
bash-3.2# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   5 zone1            running    /export/zone1                  native   shared
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      no     no        yes    -         
zfsBE2                     yes      yes    yes       no     -         
bash-3.2# 
bash-3.2# 
bash-3.2# zlogin zone1
[Connected to zone 'zone1' pts/3]
Last login: Sun Jun 24 20:04:26 on pts/3
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# df -h
Filesystem             size   used  avail capacity  Mounted on
/                       14G   4.6G   9.6G    33%    /
/dev                    14G   4.6G   9.6G    33%    /dev
/lib                    14G   4.6G   9.6G    33%    /lib
/platform               14G   4.6G   9.6G    33%    /platform
/sbin                   14G   4.6G   9.6G    33%    /sbin
/usr                    14G   4.6G   9.6G    33%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                   423M   328K   422M     1%    /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
                        14G   4.6G   9.6G    33%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   423M    36K   422M     1%    /tmp
swap                   422M    20K   422M     1%    /var/run
# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.127 netmask ffffff00 broadcast 192.168.1.255
# 
[Connection to zone 'zone1' pts/3 closed]
bash-3.2# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zone1
        inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.126 netmask ffffff00 broadcast 192.168.1.255
        ether 0:c:29:ff:4a:9e 
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone zone1
        inet 192.168.1.127 netmask ffffff00 broadcast 192.168.1.255
bash-3.2# 
bash-3.2# zlogin zone1
[Connected to zone 'zone1' pts/3]
Last login: Sun Jun 24 20:21:46 on pts/3
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# zfs list
no datasets available
# 
[Connection to zone 'zone1' pts/3 closed]
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# 
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
rootpool/ROOT/zfsBE2    16G   4.6G   9.6G    33%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   425M   968K   424M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        14G   4.6G   9.6G    33%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
rootpool/ROOT/zfsBE2/var
                        16G    87M   9.6G     1%    /var
swap                   424M    40K   424M     1%    /tmp
swap                   424M    36K   424M     1%    /var/run
/dev/dsk/c1t0d0s7      3.5G   3.6M   3.5G     1%    /export/home
rootpool                16G    44K   9.6G     1%    /rootpool
bash-3.2# 
bash-3.2# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   5 zone1            running    /export/zone1                  native   shared
bash-3.2# zlogin zone1
[Connected to zone 'zone1' pts/3]
Last login: Sun Jun 24 20:26:03 on pts/3
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# df -h
Filesystem             size   used  avail capacity  Mounted on
/                       14G   4.6G   9.6G    33%    /
/dev                    14G   4.6G   9.6G    33%    /dev
/lib                    14G   4.6G   9.6G    33%    /lib
/platform               14G   4.6G   9.6G    33%    /platform
/sbin                   14G   4.6G   9.6G    33%    /sbin
/usr                    14G   4.6G   9.6G    33%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                   422M   328K   422M     1%    /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
                        14G   4.6G   9.6G    33%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   422M    36K   422M     1%    /tmp
swap                   422M    20K   422M     1%    /var/run
#poweroff


[Connection to zone 'zone1' pts/3 closed]
bash-3.2# poweroff
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful

Connection closed by foreign host.
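Two observations on the output above. First, the empty "zfs list" inside zone1 is expected: a non-global zone only sees datasets that have been explicitly delegated to it, so it proves nothing about where the zone root lives. Second, if the goal is for the zone root itself to sit in its own ZFS dataset under the pool, one approach (a sketch, untested in this session; the dataset name rootpool/zones is an assumption) is to halt the zone and relocate it with zoneadm move:

Code:
# zoneadm -z zone1 halt
# zfs create -p rootpool/zones
# zoneadm -z zone1 move /rootpool/zones/zone1
# zoneadm -z zone1 boot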

# 3  
Can you post the output of:
Code:
df -h /export/zone1


