Solaris: Move root disk to new identical hardware
Reply posted by hicksd8 on 9 November 2019
You don't say which version of Solaris this is. Please post your OS version.
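
For example, any of these will tell you (standard commands on every Solaris release):

    cat /etc/release     # exact Solaris release and update
    uname -a             # kernel version and architecture (sparc vs i86pc)
    isainfo -kv          # whether the kernel is running 32-bit or 64-bit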

First, whilst it's up and running, and before you do anything else with the existing system, make sure you know the hostid exactly. If you move the root disk(s) and the new box doesn't come up with the original hostid, you'll need to inject the required value into the kernel to set it, and if you don't know what hostid you need at that point you are screwed.
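
For example, capture it now and keep a copy off the box (the file path here is just a suggestion):

    hostid                                 # prints the hex hostid
    hostid > /var/tmp/hostid.before-move   # then copy this file somewhere safe off the machine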

Also, blatantly obvious thing to say, I know, but you need to make sure you also move the SVM state database (metadb) replicas, which usually sit on very small disk slices; otherwise SVM won't start on the new machine. If any replicas are on a non-root disk then you'll have to move that disk too.
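
You can check exactly where the replicas live before you pull anything:

    metadb -i    # lists every SVM state database replica and the slice it occupies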

If the root disk is SVM mirrored then I guess you mean to move the pair of disks?
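
Worth confirming the mirror layout and sync state before pulling the disks; a quick sketch (the metadevice names will be whatever your configuration uses):

    metastat -p                          # concise dump of the whole SVM configuration
    metastat | egrep -i 'state|resync'   # submirrors should show Okay, not resyncing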

There's likely no removable EPROM on this hardware that would let you carry the NIC address, etc, across, and the hostid is hashed from the NIC address on SPARC hardware. So it's quite likely, IMHO, that you will need to forcibly change the hostid on the new box. I posted the code to do that on this forum a long time ago, so you can search for it. REMEMBER, though, that the code is different for SPARC vs x86, and both methods are published on here. Make sure you use the right one.
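
I won't reproduce it here (search the forum for the full thing), but as a read-only illustration: on Solaris 10 the kernel holds the hostid as the decimal string hw_serial, so you can at least inspect what the kernel thinks before and after the move (read-only; don't open the kernel writable with -w unless you know exactly what you're doing):

    echo "hw_serial/s" | mdb -k    # read-only: prints the kernel's current hostid as a decimal string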

If you've been professional and have a full backup, I don't see any harm in giving it a try.
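
A minimal sketch of what that backup might look like for a UFS root, assuming a tape drive at /dev/rmt/0 (substitute a file or another device to suit):

    ufsdump 0uf /dev/rmt/0 /    # level-0 dump of the root filesystem; the u flag updates /etc/dumpdates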

(If you cannot find the relevant hostid discussion on this forum, drop me (hicksd8) or moderator gull04 a PM and prompt one of us to post back on this thread. We know how to do this.)

 
