01-14-2005
VxVM
All solaris rescue gurus out there ....
I have a Solaris 2.6 E450 on which my sysadmin guy has deleted every file (but not the sub-directories) from the /etc directory.
The machine is (was) running VxVM with the root volume encapsulated.
I've tried booting from CDROM, mounting the root volume directly, copying the contents of the etc directory back into place, and then editing the /etc/system file to forceload drivers and put in the VxVM entries to get the root volume up, but the system fails to forceload the drivers and then panics with a mutex error.
Am I going about this the right way, or is there a better way?
I need to get some data off one of the other VxVM-managed volumes.
The problem is that /usr, /var and /opt were all separate volumes and have been encapsulated under Volume Manager. The / volume is not big enough to hold the whole OS and ...
We've patched enough of the etc directory back together to get the machine running off rootvol, but then the machine has no /usr contents and hence no commands to do anything.
The boot process falls apart every time I try to add the VxVM config back into the /etc/system file to allow booting with Volume Manager enabled, because it can't do the forceload of the vx drivers.
Does anyone know how, when I'm booted from CD, to eject the OS CD so that I can insert the Volume Manager install CD and try the tools on there?
thanks for your help guys/gals
andy
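For anyone hitting this later, a minimal sketch of the CD-boot de-encapsulation approach described above. The device c0t0d0s0 and the mount point /a are examples, not the actual layout of Andy's E450 — substitute the real underlying slices of the root disk:

```shell
# Boot single-user from the Solaris install CD:
#   ok boot cdrom -s

# Check and mount the raw slice that underlies rootvol (example device):
fsck -y /dev/rdsk/c0t0d0s0
mount /dev/dsk/c0t0d0s0 /a

# Restore the missing files into /a/etc from backup, then disable VxVM
# startup so the machine can boot straight from the plain slice:
touch /a/etc/vx/reconfig.d/state.d/install-db

# In /a/etc/system, comment out the VxVM root entries, e.g.:
#   rootdev:/pseudo/vxio@0:0
#   set vxio:vol_rootdev_is_volume=1
# In /a/etc/vfstab, point /, /usr, /var and /opt back at their
# underlying /dev/dsk/... slices instead of /dev/vx/dsk/... volumes.

umount /a
```

On the eject question: a miniroot booted from CD normally keeps the CD as its root device, so as far as I know it cannot simply be ejected; copying the needed Volume Manager tools onto disk first, or net-booting, is the usual workaround.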
10 More Discussions You Might Find Interesting
1. Filesystems, Disks and Memory
I've got a Linux box that I'm pretty sure is having some disk issues. iostat isn't installed, but vmstat is, so I've been trying to use that to do some initial diagnostics while I go through our company's change control process to get iostat installed.
The problem I'm having is that the disks... (4 Replies)
Discussion started by: kknigga
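While waiting on iostat, vmstat can give a first read on disk trouble. A sketch of a typical invocation (column names are from Linux vmstat):

```shell
# Sample every 5 seconds; watch the b (processes blocked on I/O),
# bi/bo (blocks in/out) and wa (CPU time waiting on I/O) columns
# for signs of a struggling disk:
vmstat 5
```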
2. Solaris
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 sliced rootdisk rootdg online
c1t1d0s2 sliced disk01 rootdg online
c2t0d0s2 sliced actsvr101 actsvr1dg online
c2t2d0s2 sliced actsvr102 actsvr1dg online
c2t3d0s2 ... (13 Replies)
Discussion started by: incredible
3. Solaris
Hi,
Quick question if anyone knows this. Is there a command I can use in Veritas Volume Manager on Solaris that will tell me the name of the SAN I am connected to? We have a number of SANs, so I am unsure which one my servers are connected to. Thanks. (13 Replies)
Discussion started by: gwhelan
4. Solaris
Hi Guys,
I have a doubt about whether to reboot the server after replacing disk0.
I have two disks under a VxVM root mirror, and I had a problem with the primary disk, so I replaced the failed primary disk (disk0) and then re-mirrored. Is a reboot required after mirroring? (7 Replies)
Discussion started by: kurva
5. Filesystems, Disks and Memory
Last week I read that VxVM won't work with MPxIO (I don't recall the link) and that MPxIO should be unconfigured when installing VxVM. Today I read that VxVM works in "pass-thru" mode with MPxIO and that DMP uses the devices presented by MPxIO.
If I create disks with MPxIO and use VxVM to... (1 Reply)
Discussion started by: bluescreen
6. Solaris
Hi community,
I've a hard question for you.
1) What are the differences between ZFS and Veritas Volume Manager on Solaris 10?
2) What is the difference in managing the internal disks (mirroring)?
3) What is the difference in managing the external disks?
4) What is the difference in managing... (5 Replies)
Discussion started by: Sunb3
7. Solaris
Does anyone know how many volumes can be created in a disk group?
Thanks in advance... (1 Reply)
Discussion started by: bpsunadm
8. Solaris
hi all,
How can we check whether VxVM is installed on our Solaris system?
Thanks in advance
dinu (4 Replies)
Discussion started by: dinu
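For the question in (8), a sketch of the checks commonly used on Solaris (the package name VRTSvxvm is the standard Veritas package; adjust for your release):

```shell
# Is the VxVM package installed?
pkginfo -l VRTSvxvm

# Are the VxVM kernel modules loaded and the config daemon running?
modinfo | grep -i vx
ps -ef | grep vxconfigd

# If VxVM is installed and enabled, this reports the daemon state:
vxdctl mode
```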
9. Emergency UNIX and Linux Support
I have VxVM 5.1 running on Solaris 10. I have to increase an application file system, and the storage team gave me a LUN. After scanning the SCSI port with cfgadm, I can see it in the format output. I labelled it, but I am not able to see it in "vxdisk list".
I already tried commands -->
vxdctl enable... (4 Replies)
Discussion started by: solaris_1977
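For the new-LUN situation in (9), the usual rescan sequence looks roughly like this (a sketch assuming VxVM 5.x on Solaris 10; device naming will vary):

```shell
# Rebuild /dev entries for the newly presented LUN:
devfsadm

# Make vxconfigd re-read its view of the disks:
vxdctl enable

# Scan only newly added disks (VxVM 4.x/5.x):
vxdisk scandisks new

# The LUN should now appear, typically with status auto:none
# until it is initialized with vxdisksetup:
vxdisk list
```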
10. Solaris
Hi,
When we run fsck on a VxVM file system, it completes within a few seconds, even if the data is more than 500 GB or in the TB range.
Compared to that, fsck on a UFS file system takes much longer. UFS checks the file system at block level; at what level does VxVM check the file system?
Please explain. (1 Reply)
Discussion started by: tiger09
LEARN ABOUT OSF1
volencap
volencap(8) System Manager's Manual volencap(8)
NAME
volencap, volreconfig - Encapsulates disks, partitions, or AdvFS domains
SYNOPSIS
/usr/sbin/volencap [-g diskgroup] [-p volprefix] [privlen=nn] [nconfig=nn] { diskname | partition_name | domain_name }...
/usr/sbin/volencap -k { diskname | partition_name | domain_name }...
/usr/sbin/volencap -k -a
/usr/sbin/volencap -s
/sbin/volreconfig
OPTIONS
-g diskgroup
       Puts the encapsulated disk into the disk group specified by disk group ID or disk group name.
-p volprefix
       Instructs volencap to use the specified volprefix as the prefix for LSM volume names generated. This is useful if the LSM volume names generated already exist.
-k     Instructs volencap to remove any pending encapsulation requests for the specified disk(s), partition(s), or domain(s).
-k -a  Instructs volencap to remove all pending encapsulation requests.
-s     Displays all pending encapsulation requests.
DESCRIPTION
Disk partitions that have data can be added to LSM using the encapsulation process. This enables users with existing data in disk partitions to easily migrate to the use of LSM volumes instead of disk partitions.
The /usr/sbin/volencap command is used to encapsulate one or many disk partitions or AdvFS domains.
If an individual partition is specified for encapsulation, the volencap command creates a nopriv disk and a volume for that partition.
If an entire disk is specified, the volencap command creates a nopriv disk and volume for each in-use partition on the disk.
If an AdvFS domain is specified, all disk partitions in the domain are converted to LSM volumes.
You can use the volencap command to encapsulate the partitions on the boot disk, including the root file system, primary swap, usr and var
partitions, to place them under LSM control. See the Logical Storage Manager manual for more information on encapsulating these partitions.
The volencap command can be run directly or through the voldiskadm menus. The volencap command generates LSM commands that are later executed by the /sbin/volreconfig command to perform the actual conversion of disk partitions to LSM volumes.
If the disk partition contains a mounted file system or is otherwise open or in use, the volreconfig command will reboot the system to complete the encapsulation process. If you cannot reboot the system at the time you encapsulate the disk partition, either defer running the volreconfig command until you can reboot the system, or unmount or close the disk partition, then run the volreconfig command.
See the Logical Storage Manager manual for more details.
ATTRIBUTES
privlen=nn
       Specifies the size of the LSM private region in bytes, for example, privlen=1024.
nconfig=nn
       Specifies the number of log copies and copies of the configuration database, for example, nconfig=0.
EXAMPLES
To encapsulate all disk partitions on the boot disk called dsk0 to LSM volumes, enter:
# volencap dsk0
# volreconfig
The volreconfig command will request confirmation prior to rebooting the system. The LSM command scripts created by the volencap
command will execute as the system reboots. To encapsulate only the root file system (dsk0a) and swap (dsk0b) partitions on the
boot disk, enter:
# volencap dsk0a dsk0b
# volreconfig
The volreconfig command will request confirmation prior to rebooting the system. The LSM command scripts created by the volencap
command will execute as the system reboots. To convert all disk partitions in the AdvFS domain called dom1 to LSM volumes, enter:
# volencap dom1
# volreconfig
The volreconfig command will determine if the domain is active, and if so, request confirmation prior to rebooting the system. The
LSM command scripts created by the volencap command will execute as the system reboots.
If the AdvFS filesets in domain dom1 are not mounted, then /sbin/volreconfig can be executed on the command line. To encapsulate
the specific partition dsk1g to an LSM volume, enter:
# volencap dsk1g
# volreconfig
The volreconfig command will determine if the partition is open, and if so, request confirmation prior to rebooting the system. The
LSM command scripts created by the volencap command will be executed as the system reboots.
SEE ALSO
volintro(8)