Full Discussion: Disk expansion on LDOM Guest
Operating Systems > Solaris: Post 302981789 by pressy on Monday 19th of September 2016, 01:13:08 PM
That's not much information...

You will need to create a vdsdev in your vds service. That vdsdev can then be added as a vdisk to your LDOM. Within the LDOM you will need to add this disk to the volume(s) behind your /u02... and how you do that depends on your volume manager... ZFS? VxVM?
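For the ZFS case, a minimal sketch of the steps might look like this (the backing LUN path, the volume name vol_u02_2, the disk name u02disk2, the service primary-vds0, the guest name myguest, and the pool name u02pool are all assumptions; substitute your own):

       # control domain: back a new volume with a spare LUN and hand it to the guest
       ldm add-vdsdev /dev/dsk/c0t5000CCA012345678d0s2 vol_u02_2@primary-vds0
       ldm add-vdisk u02disk2 vol_u02_2@primary-vds0 myguest

       # inside the guest: grow the pool behind /u02 with the new vdisk
       zpool add u02pool c0d2

       # or, if the existing LUN was instead grown on the array:
       zpool set autoexpand=on u02pool
       zpool online -e u02pool c0d1

With VxVM the guest-side steps would instead be along the lines of vxdg adddisk followed by vxresize on the /u02 volume.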

gP
 

9 More Discussions You Might Find Interesting

1. Solaris

Help needed - trying to run commands in Guest LDoms from Control LDOM

Hi Folks, I am used to writing scripts that get info by running commands at the local-zone level from the respective global zone, using zlogin <localzone> "<command>" while remaining at the global zone level. Can the same be done with Guest LDoms while remaining at the control LDOM level? ... (4 Replies)
Discussion started by: momin
4 Replies

2. Solaris

Installing Solaris OS on LDOM SAN Disk

I have viewed a few previous posts regarding this, but none of them quite described or worked with my issue. I am out of local disk space on my LDOM Manager but still have plenty of SAN, vCPU, and memory available, so I am trying to install a new LDOM OS on SAN. I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP
0 Replies

3. Solaris

Network Config on Zone in a Guest LDOM

Solaris for SPARC 11.1 with the latest patches. Created a Guest LDOM with two vnets, net0 and net1, and installed a whole-root, ip-exclusive zone in the guest that I want to be able to use DHCP. I have been able to create the zone but am unable to boot it because I cannot assign an anet to it.... (4 Replies)
Discussion started by: os2mac
4 Replies

4. Solaris

Increase disk size of guest domain

Host System: SPARC S7-2 Server; 2x8-core CPUs; 128 GB RAM; 2x600 GB HDD, running Solaris 11.3. Last login: Tue Sep 19 14:42:42 2017 from xxx.xxx.xxx Oracle Corporation SunOS 5.11 11.3 June 2017 $ uname -a SunOS sog01 5.11 11.3 sun4v sparc sun4v $ Original physical systems: Sun... (0 Replies)
Discussion started by: apmcd47
0 Replies

5. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: On primary, export 2 LUNs (add-vdsdev). On primary, assign these disks to the ldom in question (add-vdisk). On the ldom, create a mirrored zpool from these two disks. On one server (which is older) we have: On primary, create a mirrored zpool from the two LUNs.... (4 Replies)
Discussion started by: psychocandy
4 Replies
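For reference, a rough sketch of the sequence described in the excerpt above, with assumed names (the two LUN paths, the volumes lun0/lun1, the service primary-vds0, the guest myldom, and the pool datapool are all placeholders):

       # primary: export the two LUNs and assign them to the ldom
       ldm add-vdsdev /dev/dsk/c0t5000CCA000000001d0s2 lun0@primary-vds0
       ldm add-vdsdev /dev/dsk/c0t5000CCA000000002d0s2 lun1@primary-vds0
       ldm add-vdisk disk1 lun0@primary-vds0 myldom
       ldm add-vdisk disk2 lun1@primary-vds0 myldom

       # ldom: create the mirrored zpool from the two vdisks
       zpool create datapool mirror c0d1 c0d2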

6. Solaris

Disk alignment inside of an LDOM

Hi! Quick background for the question... I have a Solaris 11.4 control/primary domain with some LDOMs on top of it. I have some raw iSCSI LUNs presented to the control/primary domain from a NetApp, which I then pass up to the LDOMs via the VDS/vdisk. So basically the LDOM VMs see the disk as... (1 Reply)
Discussion started by: rtmg
1 Replies

7. Solaris

LDOM guest volume problem on T8 Solaris 11

Hello everyone, I'm a new member here. I have a problem with a guest LDOM on Solaris 11 SPARC on a T8. I need to access the vds disk assigned to the guest domain, but from the control domain. I want to modify a parameter in the inittab of the guest domain, because starting the guest domain gives me problems... (2 Replies)
Discussion started by: Liam_
2 Replies

8. UNIX for Beginners Questions & Answers

Solaris 11 LDOM guest network not working

I'm really stuck here. I've created an LDOM on a SPARC T4-1 with Solaris 11.4 to run a copy of Linux for SPARC. I got the Linux ISO installed and Linux itself installed and booted OK. The only thing is that there's no networking available in the Linux guest. This question is basically the... (7 Replies)
Discussion started by: Michele31416
7 Replies

9. Solaris

Sharing a physical disk with an LDOM

I have a guest LDOM running Solaris 10U11 on a Sun T4-1 host running Solaris 11.4. The host has a zpool named bkpool that I'd like to share with the LDOM so both can read and write it. The host is hemlock, the guest is sol10. root@hemlock:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP ... (3 Replies)
Discussion started by: Michele31416
3 Replies
vx_emerg_start(1M)														vx_emerg_start(1M)

NAME
       vx_emerg_start - start Veritas Volume Manager from recovery media

SYNOPSIS
       vx_emerg_start [-m] [-r root_daname] hostname

DESCRIPTION
       The vx_emerg_start utility can be used to start Veritas Volume Manager (VxVM) when a system is booted from alternate media, or when a system has been booted into Maintenance Mode Boot (MMB) mode. This allows a rootable VxVM configuration to be repaired in the event of a catastrophic failure. vx_emerg_start verifies that the /etc/vx/volboot file exists, and checks the command-line arguments against the contents of this file.

OPTIONS
       -m     Mounts the root file system contained on the rootvol volume after VxVM has been started. Prior to being mounted, the rootvol volume is started and fsck is run on the root file system.

       -r root_daname
              Specifies the disk access name of one of the members of the root disk group that is to be imported. This option can be used to specify the appropriate root disk group when multiple generations of the same root disk group exist on the system under repair. If this option is not specified, the desired root disk group may not be imported if multiple disk groups with the same name exist on the system, and if one of these disk groups has a more recent timestamp.

ARGUMENTS
       hostname
              Specifies the system name (nodename) of the host system being repaired. This name is used to allow the desired root disk group to be imported. It must match the name of the system being repaired, as it is unlikely to be recorded on the recovery media from which you booted the system.
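A hypothetical invocation, assuming the repaired host's nodename is hemlock and its root disk has the disk access name c1t0d0 in the recovery environment:

       vx_emerg_start -m -r c1t0d0 hemlock

With -m, rootvol is started, fsck is run on it, and the root file system is mounted once VxVM is up.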
NOTES
       HP-UX Maintenance Mode Boot (MMB) is intended for recovery from catastrophic failures that have prevented the target machine from booting. If a VxVM root volume is mirrored, only one mirror is active when the system is in MMB mode. Any writes that are made to the root file system in this mode can corrupt this file system when both mirrors are subsequently configured. The vx_emerg_start script allows VxVM to be started while a system is in MMB mode, and marks the non-boot mirror plexes as stale. This prevents corruption of the root volume or file system by forcing a subsequent recovery from the boot mirror to the non-boot mirrors to take place.

USAGE
       After VxVM has been started, various recovery options can be performed depending on the nature of the problem. It is recommended that you use the vxprint command to determine the state of the configuration. One common problem is when all the plexes of the root disk are stale, as shown in the following sample output from vxprint:

       v  rootvol        root        DISABLED  393216  -  ACTIVE  -
       pl rootvol-01     rootvol     DISABLED  393216  -  STALE   -
       sd rootdisk01-02  rootvol-01  ENABLED   393216  0  -       -
       pl rootvol-02     rootvol     DISABLED  393216  -  STALE   -
       sd rootdisk02-02  rootvol-02  ENABLED   393216  0  -       -

       In this case, the volume can usually be repaired by using the vxvol command as shown here:

       vxvol -g 4.1ROOT -f start rootvol

       If the volume is mirrored, it is put in read-write-back recovery mode. As the command is run in the foreground, it does not exit until the recovery is complete. It is then recommended that you run fsck on the root file system, and mount it, before attempting to reboot the system:

       fsck -F vxfs -o full /dev/vx/rdsk/4.1ROOT/rootvol
       mkdir /tmp_mnt
       mount -F vxfs /dev/vx/dsk/4.1ROOT/rootvol /tmp_mnt
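The disk group name 4.1ROOT is taken from the sample above; a state check along these lines (exact flags vary by VxVM release) would typically precede the repair:

       vxprint -g 4.1ROOT -ht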
SEE ALSO
       fsck(1M), mkdir(1M), mount(1M), vxintro(1M), vxprint(1M), vxvol(1M)

VxVM 5.0.31.1                           24 Mar 2008                           vx_emerg_start(1M)