Hi Duke, I reran the lucreate command and it seemed to work fine:
Code:
# newfs /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0
newfs: /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0 last mounted as /.alt.tmp.b-s3b.mnt
newfs: construct a new file system /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0: (y/n)? y
Warning: 2048 sector(s) in last cylinder unallocated
/dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0: 245825536 sectors in 40011 cylinders of 48 tracks, 128 sectors
120032.0MB in 2501 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
.................................................
super-block backups for last 10 cylinder groups at:
244882848, 244981280, 245079712, 245178144, 245276576, 245366816, 245465248,
245563680, 245662112, 245760544
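(Side note: before handing the slice to lucreate, a quick read-only fsck is one way to confirm the newfs completed cleanly; the -n flag answers "no" to every prompt so nothing gets modified. This is just the check I'd normally run, nothing Live-Upgrade-specific.)
Code:
# fsck -F ufs -n /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0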
Code:
# lucreate -c solenv1 -m /:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:ufs -n solenv2
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <solenv2>.
Source boot environment is <solenv1>.
Creating file systems on boot environment <solenv2>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0>.
Mounting file systems for boot environment <solenv2>.
Calculating required sizes of file systems for boot environment <solenv2>.
Populating file systems on boot environment <solenv2>.
Analyzing zones.
Mounting ABE <solenv2>.
Generating file list.
Copying data from PBE <solenv1> to ABE <solenv2>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <solenv2>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <solenv1>.
Making boot environment <solenv2> bootable.
Population of boot environment <solenv2> successful.
Creation of boot environment <solenv2> successful.
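At this point lustatus is a quick way to confirm the new BE was registered and shows as complete before pointing luupgrade at it (it should list both solenv1 and solenv2; output omitted here):
Code:
# lustatus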
Code:
# luupgrade -u -k /var/tmp/LDMupgrade/autoreg -n solenv2 -s /sol_dvd
miniroot filesystem is <lofs>
Mounting miniroot at </sol_dvd/Solaris_10/Tools/Boot>
#######################################################################
NOTE: To improve products and services, Oracle Solaris communicates
configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data
for your use, or the data is sent anonymously.
For information about what configuration data is communicated and how
to control this facility, see the Release Notes or
www.oracle.com/goto/solarisautoreg.
INFORMATION: After activated and booted into new BE <solenv2>,
Auto Registration happens automatically with the following Information
autoreg=disable
#######################################################################
Validating the contents of the media </sol_dvd>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <solenv2>.
Determining packages to install or upgrade for BE <solenv2>.
Performing the operating system upgrade of the BE <solenv2>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
May 7 17:33:01 <server> ufs: NOTICE: /a: bad dir ino 1729 at offset 0: mangled entry
Upgrading Solaris: 14% completed
Is the "bad dir ino" message something to worry about?
I've had a look on Google but can't seem to find anything that matches it.
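In case it helps, the check I was planning to run once the upgrade finishes (and assuming luupgrade leaves the BE unmounted, as lucreate did) is a read-only fsck of the same slice, just to see whether that directory entry is actually damaged:
Code:
# fsck -F ufs -n /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0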