08-13-2019
Hi,
You've stated that the storage team did the migration from IBM storage to Nimble storage; I'm guessing that is where the problem lies. Alternatively, there could be an issue between the vpath software and the Nimble storage. Does your Solaris version support Nimble?
I would have tackled this a different way:
- Added the new disks to the running system as normal.
- Extended the metadevice into a four-way mirror.
- Removed the original disks.
- Recreated the metadb.
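As a rough sketch, the steps above would look something like the following. All device and metadevice names here are hypothetical (d10 as the mirror, d11/d12 as the existing submirrors, c2/c3 targets as the new Nimble LUNs, s7 as the replica slice); substitute your own layout.

```shell
# Hypothetical names: d10 is the mirror with existing submirrors d11/d12;
# c2t0d0s0 and c3t0d0s0 are slices on the new Nimble LUNs.
metainit d13 1 1 c2t0d0s0
metainit d14 1 1 c3t0d0s0

# Attach both new submirrors, giving a four-way mirror;
# SVM resyncs them in the background.
metattach d10 d13
metattach d10 d14

# When metastat shows the resync is complete, detach and clear the old halves.
metadetach d10 d11
metadetach d10 d12
metaclear d11 d12

# Move the state database replicas off the old disks onto the new ones.
metadb -d c1t0d0s7 c1t1d0s7
metadb -a -c 2 c2t0d0s7 c3t0d0s7
```

Everything stays online throughout; the only waiting is for the resync to finish before you detach the old submirrors.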
I'm not sure why you would attempt to do this using SAN replication; the only exception I would make is where you can replicate the device at block level, ensuring that the boot block and everything else comes over.
You may get away with installing the boot block and re-labelling the disks, but I'd rather take the add-the-disks-and-extend-the-mirror approach.
Given the current situation, you may want to try the metarecover options to recover the individual devices.
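If you do go down the metarecover route, something like the following would be the cautious way in. The device name is hypothetical, and the -n flag previews what would be done without changing anything:

```shell
# Preview the recovery first (-n = no-op, -v = verbose).
metarecover -n -v c2t0d0s0 -p

# If the preview looks sane, recover the soft partition configuration
# from the extent headers on the device itself.
metarecover -v c2t0d0s0 -p -d
```

Always run the -n pass first; on a half-migrated disk the on-disk metadata may not be what you expect.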
Regards
Gull04
--- Post updated at 03:15 PM ---
Hi,
Just out of curiosity, have you made any required changes to /etc/vfstab? You don't mention it.
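For reference, a root entry in /etc/vfstab has to point at the metadevice rather than the physical slice, along these lines (d0 is a hypothetical root metadevice):

```
/dev/md/dsk/d0   /dev/md/rdsk/d0   /   ufs   1   no   -
```

If it still references the old physical device after the migration, the box won't come up cleanly.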
Regards
Gull04
Last edited by gull04; 08-13-2019 at 10:20 AM..
Reason: Additional Information.
metaroot(1M)
NAME
metaroot - setup system files for root (/) metadevice
SYNOPSIS
/usr/sbin/metaroot -h
/usr/sbin/metaroot [-n] [-k system-name] [-v vfstab-name] [-c mddb.cf-name] [-m md.conf-name] [-R root-path] device
DESCRIPTION
The metaroot command edits the /etc/vfstab and /etc/system files so that the system may be booted with the root file system (/) on an
appropriate metadevice. The only metadevices that support the root file system are a stripe with only a single slice or a mirror on a
single-slice stripe.
If necessary, the metaroot command can reset a system that has been configured to boot the root file system (/) on a metadevice so that it
uses a physical slice.
Root privileges are required for all of the following options except -h.
OPTIONS
The following options are supported:
-c mddb.cf-name Use mddb.cf-name instead of the default /etc/lvm/mddb.cf file as a source of metadevice database locations.
-h Display a usage message.
-k system-name Edit a user-supplied system-name instead of the default /etc/system system configuration information file.
-m md.conf-name Edit the configuration file specified by md.conf-name rather than the default, /kernel/drv/md.conf.
-n Print what would be done without actually doing it.
-R root-path When metaroot modifies system files, it accesses them in their relative location under root-path.
The -R option cannot be used in combination with the -c, -k, -m, or -v options.
Note - The root file system of any non-global zones must not be referenced with the -R option. Doing so might damage the
global zone's file system, might compromise the security of the global zone, and might damage the non-global zone's
file system. See zones(5).
-v vfstab-name Edit vfstab-name instead of the default /etc/vfstab table of file system defaults.
OPERANDS
The following operands are supported:
device Specifies either the metadevice or the conventional disk device (slice) used for the root file system (/).
EXAMPLES
Example 1: Specifying Root File System on Metadevice
The following command edits /etc/system and /etc/vfstab to specify that the root file system is now on metadevice d0.
# metaroot d0
Example 2: Specifying Root File System on SCSI Disk
The following command edits /etc/system and /etc/vfstab to specify that the root file system is now on the SCSI disk device
/dev/dsk/c0t3d0s0.
# metaroot /dev/dsk/c0t3d0s0
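Combining the -n and -R options described above, a cautious workflow is to preview the edits first, and to target a mounted alternate root when repairing a system from rescue media (the /a mount point below is an assumed example):

```shell
# Preview the changes without modifying /etc/system or /etc/vfstab.
metaroot -n d0

# Apply the edits to the copies under an alternate root, e.g. a boot
# environment mounted at /a from single-user rescue media.
metaroot -R /a d0
```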
FILES
/etc/system System configuration information file. See system(4).
/etc/vfstab File system defaults.
/etc/lvm/mddb.cf Metadevice state database locations.
/kernel/drv/md.conf Configuration file for the metadevice driver, md.
EXIT STATUS
The following exit values are returned:
0 Successful completion.
>0 An error occurred.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWmdu |
+-----------------------------+-----------------------------+
SEE ALSO
mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M),
metarecover(1M), metarename(1M), metareplace(1M), metaset(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M), md.tab(4),
md.cf(4), mddb.cf(4), attributes(5), md(7D)
6 Apr 2005 metaroot(1M)