Post 302422731 by notreallyhere on Thursday 20th of May 2010, 03:51 AM
Problem with live upgrade creation: Telling me metadevices do not exist

Hi

I am having a problem creating my live upgrade environment.
Here is the error I get:

Code:
root@server:/# lucreate -c SOL10_18May -n SOL10_19May -z /lu_excludelist -m /:dev/md/dsk/d0:ufs -m /var:/dev/md/dsk/d4:ufs -m /export/home:/dev/md/dsk/d6:ufs -m -:/dev/md/dsk/d3:swap -C /dev/dsk/c1t3d0s0
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
ERROR: device <dev/md/dsk/d0> does not exist
ERROR: device <dev/md/dsk/d0> is not available for use with mount point </>
ERROR: cannot create new boot environment using file systems as configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided

I have created all those metadevices prior to running the lucreate command.
I have also installed all the required packages and patches.
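
For reference, a quick way to confirm that the standard Solaris 10 Live Upgrade packages are in place is something like the following (SUNWlucfg, SUNWlur and SUNWluu are the usual package names; adjust to your release):

Code:
# Sketch: verify the Live Upgrade packages are installed
pkginfo SUNWlucfg SUNWlur SUNWluu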

Can anyone please help me find a solution?

Thanks

---------- Post updated 05-20-10 at 09:51 AM ---------- Previous update was 05-19-10 at 02:46 PM ----------

Oh my word, I was such a fool. I left a slash out....

dev/md/dsk/d0 does not exist, but /dev/md/dsk/d0 does...
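
For anyone who hits the same error: verifying each device path before running lucreate catches this kind of typo straight away. A minimal check, reusing the metadevices from the command above:

Code:
# Sketch: confirm every device path passed to -m actually exists
for d in /dev/md/dsk/d0 /dev/md/dsk/d3 /dev/md/dsk/d4 /dev/md/dsk/d6; do
    ls -lL "$d" || echo "missing: $d"
done
# metastat -p also confirms the metadevices themselves are configured
metastat -p d0 d3 d4 d6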

Last edited by pludi; 05-19-2010 at 09:57 AM. Reason: code tags, please...
 

8 More Discussions You Might Find Interesting

1. Solaris

Live upgrade Issue

Hi, I upgraded Solaris 10 x86 from update 3 to update 7 with zones installed on a UFS file system. The global zone was updated, but the non-global zone still shows update 3. What could be the reason for this, and how can I update the local zones to update 7? (0 Replies)
Discussion started by: fugitive

2. Solaris

Live upgrade question

I want to basically update an ABE that someone created a few months back. I'm sure stuff has changed since it was made, and I was going to delete it and create a new one. But from what I'm looking at, lumake appears to be a faster approach. I want to use live upgrade to... (0 Replies)
Discussion started by: BG_JrAdmin

3. Solaris

Live Upgrade Issue

I tried a live upgrade on one of my Solaris 10u8 servers, which didn't go successfully, and I now have the following mounts left in memory. df: cannot statvfs /.alt.sol10u8_2/var: No such file or directory df: cannot statvfs /.alt.sol10u8_2/var/run: No such file or directory df: cannot statvfs... (0 Replies)
Discussion started by: fugitive

4. Solaris

Solaris 10 Live Upgrade Issue

Hi guys, I'm having an issue running Live Upgrade on a T5240 running Solaris 10 5/08. The system has the required patch 121430, and Live Upgrade was updated from the install media sol-10-u10-ga2-sparc-dvd.iso. The following boot environments were created, solenv1 and solenv2, with the... (8 Replies)
Discussion started by: Revo

5. Emergency UNIX and Linux Support

Live Upgrade Query

I am upgrading Solaris-9 Update-7 to Solaris-9 Update-9 through live upgrade. I am able to create another boot environment and have the OS DVD inside the server, but I am confused about what command/path I should give to luupgrade. The OS DVD is mounted on /mnt. Boot environments are Solaris9old (Active now)... (2 Replies)
Discussion started by: solaris_1977

6. Solaris

Live upgrade query

Hi all, is it possible to use an external SAN disk to create an alternate boot environment and boot from it? My root disk is about 70 GB, and I want to use an external 272 GB SAN disk to create the alternate boot environment. If this is possible, can you please point me to some good documents? I had... (1 Reply)
Discussion started by: sahil_shine

7. Solaris

Problem in SVM after live upgrade

Hi, I am new to live upgrade. I would like to tell you about my new setup, where my boot disk (c0d0) is mirrored with a secondary disk (c0d1). I have removed the whole secondary disk (c0d1) from the mirror so that I can do a live upgrade on this secondary disk. I have done a live upgrade on the s0 partition... (3 Replies)
Discussion started by: amity

8. Solaris

Live upgrade first steps

Hello Guys, I am a little confused about the first step in the live upgrade process. I will be glad if someone can clarify this for me. The pre-live upgrade patch, when do you add this patch to the OS you want to upgrade? 1. before creating the new boot environment? or 2. after creating... (1 Reply)
Discussion started by: cjashu
cmdisklock(1m)

NAME
     cmdisklock - manage Serviceguard cluster lock devices.

SYNOPSIS
     cmdisklock check path
     cmdisklock [-f] reset path

DESCRIPTION
     cmdisklock is a tool to check the current state of a Serviceguard cluster lock device. It can also be used to reset the state of the cluster lock device. The need to reset the cluster lock device state could arise if the cluster lock device is replaced or becomes corrupt.

     A cluster lock device can be either an HP-UX LVM cluster lock or a cluster lock LUN device. HP-UX LVM cluster locks exist only on a disk in an LVM volume group. Cluster lock LUNs exist only on disks dedicated to cluster lock. cmdisklock is useful for checking either type of cluster lock and for re-initializing cluster lock LUN devices after a failure or corruption.

     NOTE: To restore an HP-UX LVM cluster lock, use vgcfgrestore. cmdisklock will fail until vgcfgrestore is run, and cmdisklock is unnecessary as long as vgcfgbackup was done after the cluster lock was initialized. See the Managing Serviceguard manual for details.

     The syntax of the path option depends on the type of lock. For HP-UX LVM cluster lock disks, the syntax is VG:PV (for example: /dev/vglock:/dev/dsk/c0t0d2). For cluster lock LUN disks, the path is the disk device path, for example /dev/sdd1 (on Linux) or /dev/dsk/c0t1d2 (on HP-UX).

  Options
     cmdisklock supports the following options:

     check     Check the current state of the cluster lock device and report the results.

     reset     Reset (initialize) the state of the cluster lock device. This operation should only be performed on a cluster lock LUN device. For HP-UX LVM cluster lock, use vgcfgrestore as documented in the Managing Serviceguard manual. After performing a reset, a check can be used to verify that the lock is cleared.

EXAMPLES
     If the cluster lock LUN device becomes corrupted and the cluster is up, messages like the following will appear in syslog:

     Mar 15 12:20:41 usb cmdisklockd[17599]: WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is corrected, a single failure could cause all nodes in the cluster to crash.
     Mar 15 12:20:41 usb cmdisklockd[17599]: After ensuring that all active nodes in the cluster have logged this message, run 'cmdisklock reset /dev/dsk/c0t1d2' to repair
     Mar 15 12:20:41 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is inaccessible

     Once the above messages appear in syslog on all running nodes, the following command will re-initialize the cluster lock LUN:

     ucd:/> cmdisklock reset /dev/dsk/c0t1d2
     WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is corrected, a single failure could cause all nodes in the cluster to crash. After ensuring that all active nodes in the cluster have logged this message, run 'cmdisklock reset /dev/dsk/c0t1d2' to repair
     /dev/dsk/c0t1d2 is inaccessible
     Resetting cluster lock device /dev/dsk/c0t1d2
     Cluster lock reset completed
     /dev/dsk/c0t1d2 is accessible cleared

     After the lock is restored, a message like the following appears in syslog:

     Mar 15 12:23:11 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is accessible

WARNINGS
     CAUTION: For cluster lock LUN, reset is a potentially destructive operation. While cmdisklock checks for known volume manager and file system use (overridden by -f), it does not validate that the device to be reset is actually used by any cluster. If -f is used on the wrong device file, loss of data may result.

     CAUTION: Care should be taken when doing a reset while the cluster is active, as there is a remote possibility that the cluster will partition right when this command is run and both nodes could end up thinking they have successfully acquired the lock. To avoid this situation, make sure cmcld has logged a message in syslog on all running nodes saying the device is inaccessible before performing a reset. Note that it is safe to run cmdisklock when the cluster is down.

RETURN VALUE
     cmdisklock returns the following values:
     0     Successful completion.
     1     The disk is inaccessible or is not recognized as a cluster lock.

AUTHOR
     cmdisklock was developed by HP.

SEE ALSO
     cmapplyconf(1m), cmviewcl(1m), vgcfgbackup(1m), vgcfgrestore(1m)

Requires Optional Serviceguard Software                                          cmdisklock(1m)
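
As a quick reference for the check form documented above, using the same example device paths given in the DESCRIPTION (substitute your own cluster lock paths):

Code:
# Check an HP-UX LVM cluster lock (VG:PV syntax)
cmdisklock check /dev/vglock:/dev/dsk/c0t0d2
# Check a cluster lock LUN (plain device path)
cmdisklock check /dev/dsk/c0t1d2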