08-14-2019
Hi prvnrk,
I'd suggest that your best approach here is to take a couple of steps back with this one if you can; the way I see it, you have two options.
Option 1.
Regress the changes to the system and go with the original configuration, then bring the system up and attempt to bring in the Nimble storage with your existing IBM storage still available, using host mirroring to bring the Nimble disks into the mirror sets. This would prove that all is well with the system and that the Nimble storage is compatible with the installed system. Here you would create new metadevices with metainit d53 and metainit d54; once created, you would then use metattach d5 d53 and metattach d5 d54, which would give you a four-way mirror. Once everything is mirrored up you can detach the original d51 and d52 submirrors and clear the storage from the OS.
Once you have the new mirrors resilvered and working you can remove the old IBM disks from the mirrors; the result is that the system has been migrated to the new storage with no outage to the application.
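As a rough sketch, the Option 1 sequence would look something like the following. The metadevice names (d5, d51-d54) come from this thread; the slice names (c3t0d0s0, c3t1d0s0) are placeholders for whatever your Nimble LUNs actually enumerate as, so check with format or metastat first.

```shell
# Create one-way concats on the new Nimble slices (1 stripe, 1 slice each)
metainit d53 1 1 c3t0d0s0
metainit d54 1 1 c3t1d0s0

# Attach them to the existing mirror d5 - this gives a four-way mirror
# and kicks off the resync automatically
metattach d5 d53
metattach d5 d54

# Watch the resync; wait until both new submirrors report "Okay"
metastat d5

# Once fully resynced, detach and clear the old IBM submirrors
metadetach d5 d51
metadetach d5 d52
metaclear d51
metaclear d52
```

The resync runs while the filesystem stays mounted, which is what makes this approach outage-free for the application.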
Option 2.
Remove the Nimble storage and bring the system up - obviously without starting the application - and remediate any issues to give you a clean running server. Then, following the standard process, add the Nimble storage and configure the device tree and metadb with the new devices (d5, d51 and d52); you'll then have to restore your original data and restart the application.
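For Option 2 the build-from-scratch sequence would look roughly like this. Again the slice names are placeholders for your actual Nimble devices, and the metadb step assumes you want replicas on the new storage (drop -f if replicas already exist elsewhere).

```shell
# Rescan so the OS picks up the new LUNs in the device tree
devfsadm

# Add state database replicas on a small dedicated slice of the new storage
metadb -a -f c3t0d0s7

# Build the two submirrors and the mirror, then attach the second side
# (attaching rather than creating a two-way mirror outright forces a resync)
metainit d51 1 1 c3t0d0s0
metainit d52 1 1 c3t1d0s0
metainit d5 -m d51
metattach d5 d52

# Make the filesystem, then mount and restore your data before
# restarting the application
newfs /dev/md/rdsk/d5
```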
Of the two options I would definitely go with the first; it's less disruptive to the system and follows a more logical progression. I'm making the assumption here that your SAN technology is switched fabric and not direct connect.
If you are using direct connect on the SAN then you would have to go a slightly different way, but both options are still open to you if you have sufficient fibre connections available.
Solaris Volume Manager (SVM), whilst fairly old, is a well-proven and robust tool; in some respects its simplicity is both its greatest strength and its biggest limitation. In this case it would have made the migration to new disks a simple task - I have successfully used it to migrate storage platforms several times without significant issues.
Regards
Gull04
Last edited by gull04; 08-14-2019 at 04:06 AM..
Reason: More Info
LEARN ABOUT SUNOS
metareplace
metareplace(1M) System Administration Commands metareplace(1M)
NAME
metareplace - enable or replace components of submirrors or RAID5 metadevices
SYNOPSIS
/usr/sbin/metareplace -h
/usr/sbin/metareplace [-s setname] -e mirror component
/usr/sbin/metareplace [-s setname] mirror component-old component-new
/usr/sbin/metareplace [-s setname] -e RAID component
/usr/sbin/metareplace [-s setname] [-f] RAID component-old component-new
DESCRIPTION
The metareplace command is used to enable or replace components (slices) within a submirror or a RAID5 metadevice.
When you replace a component, the metareplace command automatically starts resyncing the new component with the rest of the metadevice.
When the resync completes, the replaced component becomes readable and writable. If the failed component has been hot spare replaced, the
hot spare is placed in the available state and made available for other hot spare replacements.
Note that the new component must be large enough to replace the old component.
A component may be in one of several states. The Last Erred and the Maintenance states require action. Always replace components in the
Maintenance state first, followed by a resync and validation of data. After components requiring maintenance are fixed, validated, and
resynced, components in the Last Erred state should be replaced. To avoid data loss, it is always best to back up all data before replacing
Last Erred devices.
OPTIONS
Root privileges are required for all of the following options except -h.
-e Transitions the state of component to the available state and resyncs the failed component. If the failed component has
been hot spare replaced, the hot spare is placed in the available state and made available for other hot spare replace-
ments. This command is useful when a component fails due to human error (for example, accidentally turning off a disk), or
because the component was physically replaced. In this case, the replacement component must be partitioned to match the
disk being replaced before running the metareplace command.
-f Forces the replacement of an errored component of a metadevice in which multiple components are in error. The component
determined by the metastat display to be in the "Maintenance" state must be replaced first. This option may cause data to
be fabricated since multiple components are in error.
-h Display help message.
-s setname Specifies the name of the diskset on which metareplace will work. Using the -s option will cause the command to perform its
administrative function within the specified diskset. Without this option, the command will perform its function on local
metadevices.
mirror The metadevice name of the mirror.
component The logical name for the physical slice (partition) on a disk drive, such as /dev/dsk/c0t0d0s2.
component-old The physical slice that is being replaced.
component-new The physical slice that is replacing component-old.
RAID The metadevice name of the RAID5 device.
EXAMPLES
Example 1: Recovering from Error Condition in RAID5 Metadevice
This example shows how to recover when a single component in a RAID5 metadevice is errored.
# metareplace d10 c3t0d0s2 c5t0d0s2
In this example, a RAID5 metadevice d10 has an errored component, c3t0d0s2, replaced by a new component, c5t0d0s2.
Example 2: Use of -e After Physical Disk Replacement
This example shows the use of the -e option after a physical disk in a submirror (a submirror of mirror d11, in this case) has been
replaced.
# metareplace -e d11 c1t4d0s2
Note: The replacement disk must be partitioned to match the disk it is replacing before running the metareplace command.
EXIT STATUS
The following exit values are returned:
0 Successful completion.
>0 An error occurred.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWmdu |
+-----------------------------+-----------------------------+
SEE ALSO
mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M),
metarecover(1M), metarename(1M), metaroot(1M), metaset(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M), md.tab(4), md.cf(4),
mddb.cf(4), attributes(5), md(7D)
Solaris Volume Manager Administration Guide
SunOS 5.10 8 Aug 2003 metareplace(1M)