Solaris: Metastat shows state "needs maintenance" - Post 303037807 by gull04, Wednesday 14th of August 2019, 02:57 AM
Hi prvnrk,

I'd suggest that your best approach here is to take a couple of steps back with this one if you can; the way I see it, you have two options.

Option 1.

Regress the changes to the system and go back to the original configuration, bring the system up, and attempt to bring in the Nimble storage while your existing IBM storage is still available, then use host mirroring to bring the Nimble disks into the mirror sets. This would prove that all is well with the system and that the Nimble storage is compatible with it. Here you would create new metadevices with metainit d53 and metainit d54; once they are created you would use metattach d5 d53 and metattach d5 d54, giving you a four-way mirror. Once everything is mirrored up you can remove the original d51 and d52 submirrors and clear the old storage from the OS.
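A minimal sketch of those commands, assuming the new Nimble LUNs present as c3t0d0 and c4t0d0 (hypothetical device names) and that the existing mirror is d5 with submirrors d51 and d52:

     # create one-slice submirrors on the new Nimble LUNs (device names are examples only)
     metainit d53 1 1 c3t0d0s0
     metainit d54 1 1 c4t0d0s0

     # attach them to the existing mirror d5; the resync starts automatically
     metattach d5 d53
     metattach d5 d54

     # watch the resync progress
     metastat d5 | grep -i resync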

Once you have the new submirrors silvered and working you can remove the old IBM disks from the mirrors; the result is that the system has been migrated to the new storage with no outage to the application.
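A rough sketch of that removal step, again assuming d51 and d52 are the submirrors sitting on the old IBM storage and that any state database replicas on those LUNs live on slice 7 of a hypothetical c1t0d0:

     # detach the old submirrors once the new sides are fully in sync
     metadetach d5 d51
     metadetach d5 d52

     # clear the detached metadevices
     metaclear d51
     metaclear d52

     # remove any state database replicas still held on the old IBM LUN (slice is an example)
     metadb -d c1t0d0s7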

Option 2.

Remove the Nimble storage and bring the system up - obviously without starting the application - and remediate any issues to give you a cleanly running server. Then, following the standard process, add the Nimble storage, configure the device tree and the metadevices ( d5, d51 and d52 ) along with the state database replicas, then restore your original data and restart the application.
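A rough outline of that rebuild, assuming (purely for illustration) two Nimble LUNs at c3t0d0 and c4t0d0, replicas on slice 7, and the data going back onto a UFS filesystem on d5; the mount point and restore method are placeholders:

     # recreate the state database replicas on the new storage
     metadb -a -f -c 3 c3t0d0s7 c4t0d0s7

     # build the submirrors, create a one-way mirror, then attach the second side
     metainit d51 1 1 c3t0d0s0
     metainit d52 1 1 c4t0d0s0
     metainit d5 -m d51
     metattach d5 d52

     # make the filesystem and restore the data with whatever backup tool you use
     newfs /dev/md/rdsk/d5
     mount /dev/md/dsk/d5 /a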

Of the two options I would definitely go with the first; it's less disruptive to the system and follows a more logical progression. I'm making the assumption here that your SAN technology is switched fabric and not direct connect.

If you are using direct connect on the SAN then you would have to go a slightly different way, but both options are still open to you if you have sufficient fibre connections available.

Solaris Volume Manager (SVM), whilst fairly old, is a well proven and robust tool; in some respects its simplicity is both its greatest strength and its biggest limitation. In this case it would have made the migration to new disk a simple task; I have successfully used it to migrate storage platforms several times without significant issues.
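Whichever route you take, it's worth confirming that the metadevices and replicas are healthy before and after the work; a quick check might look like this (d5 is the mirror used in the sketches above):

     # look for any component that needs attention
     metastat | grep -i "needs maintenance"

     # detail for the mirror, checking each submirror state (Okay vs Needs maintenance)
     metastat d5

     # confirm the state database replicas are present and error free
     metadb -i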

Regards

Gull04

 

md.tab(4)							   File Formats 							 md.tab(4)

NAME
     md.tab, md.cf - Solaris Volume Manager utility files

SYNOPSIS
     /etc/lvm/md.tab
     /etc/lvm/md.cf

DESCRIPTION
The file /etc/lvm/md.tab can be used by metainit(1M) and metadb(1M) to configure metadevices, hot spare pools, and metadevice state database replicas in a batch-like mode.

Solaris Volume Manager does not store configuration information in the /etc/lvm/md.tab file. You can use:

     metastat -p > /etc/lvm/md.tab

to create this file. Edit it by hand using the instructions in this md.tab(4) man page. Similarly, if no hot spares are in use, the cp md.cf md.tab command generates an acceptable version of the md.tab file, with the editing caveats previously mentioned.

When using the md.tab file, each metadevice, hot spare pool, or state database replica in the file must have a unique entry. Entries can include the following: simple metadevices (stripes, concatenations, and concatenations of stripes); mirrors, soft partitions, and RAID5 metadevices; hot spare pools; and state database replicas. Because md.tab contains only entries that you enter in it, do not rely on the file for the current configuration of metadevices, hot spare pools, and replicas on the system at any given time.

Tabs, spaces, comments (by using a pound sign, #), and continuation of lines (by using a backslash-newline) are allowed.

Typically, you set up metadevices according to information specified on the command line by using the metainit command. Likewise, you set up state database replicas with the metadb command. An alternative to the command line is to use the md.tab file. Metadevices and state database replicas can be specified in the md.tab file in any order, and then activated in a batch-like mode with the metainit and metadb commands.

If you edit the md.tab file, you specify one complete configuration entry per line. Metadevices are defined using the same syntax as required by the metainit command. You then run the metainit command with either the -a option, to activate all metadevices in the md.tab file, or with the metadevice name corresponding to a specific configuration entry.

metainit does not maintain the state of the volumes that would have been created when metainit is run with both the -a and -n flags. If a device d0 is created in the first line of the md.tab file, and a later line in md.tab assumes the existence of d0, the later line will fail when metainit -an runs (even if it would succeed with metainit -a).

State database replicas are defined in the /etc/lvm/md.tab file as follows:

     mddb number options [ slice... ]

Where mddb number is the characters mddb followed by a number of two or more digits that identifies the state database replica. slice is a physical slice. For example: mddb05 /dev/dsk/c0t1d0s2.

The file /etc/lvm/md.cf is a backup of the configuration used for disaster recovery. Whenever the Volume Manager configuration is changed, this file is automatically updated (except when hot sparing occurs). You should not directly edit this file.

EXAMPLES
Example 1: Concatenation

All drives in the following examples have the same size of 525 Mbytes.

This example shows a metadevice, /dev/md/dsk/d7, consisting of a concatenation of four disks.

     #
     # (concatenation of four disks)
     #
     d7 4 1 c0t1d0s0 1 c0t2d0s0 1 c0t3d0s0 1 c0t4d0s0

The number 4 indicates there are four individual stripes in the concatenation. Each stripe is made of one slice, hence the number 1 appears in front of each slice.

Note that the first disk sector in all of the above devices contains a disk label. To preserve the labels on devices /dev/dsk/c0t2d0s0, /dev/dsk/c0t3d0s0, and /dev/dsk/c0t4d0s0, the metadisk driver must skip at least the first sector of those disks when mapping accesses across the concatenation boundaries. Since skipping only the first sector would create an irregular disk geometry, the entire first cylinder of these disks will be skipped. This allows higher level file system software to optimize block allocations correctly.

Example 2: Stripe

This example shows a metadevice, /dev/md/dsk/d15, consisting of two slices.

     #
     # (stripe consisting of two disks)
     #
     d15 1 2 c0t1d0s2 c0t2d0s2 -i 32k

The number 1 indicates that one stripe is being created. Because the stripe is made of two slices, the number 2 follows next. The optional -i followed by 32k specifies the interlace size will be 32 Kbytes. If the interlace size were not specified, the stripe would use the default value of 16 Kbytes.

Example 3: Concatenation of Stripes

This example shows a metadevice, /dev/md/dsk/d75, consisting of a concatenation of two stripes of three disks.

     #
     # (concatenation of two stripes, each consisting of three disks)
     #
     d75 2 3 c0t1d0s2 c0t2d0s2 c0t3d0s2 -i 16k \
          3 c1t1d0s2 c1t2d0s2 c1t3d0s2 -i 32k

On the first line, the -i followed by 16k specifies that the stripe's interlace size is 16 Kbytes. The second set specifies the stripe interlace size will be 32 Kbytes. If the second set did not specify 32 Kbytes, the set would use the default interlace value of 16 Kbytes. The blocks of each set of three disks are interlaced across three disks.

Example 4: Mirroring

This example shows a three-way mirror, /dev/md/dsk/d50, consisting of three submirrors. This mirror does not contain any existing data.

     #
     # (mirror)
     #
     d50 -m d51
     d51 1 1 c0t1d0s2
     d52 1 1 c0t2d0s2
     d53 1 1 c0t3d0s2

In this example, a one-way mirror is first defined using the -m option. The one-way mirror consists of submirror d51. The other two submirrors, d52 and d53, are attached later using the metattach command. The default read and write options in this example are a round-robin read algorithm and parallel writes to all submirrors. The order in which mirrors appear in the /etc/lvm/md.tab file is unimportant.

Example 5: RAID5

This example shows a RAID5 metadevice, d80, consisting of three slices:

     #
     # (RAID devices)
     #
     d80 -r c0t1d0s1 c1t0d0s1 c2t0d0s1 -i 20k

In this example, a RAID5 metadevice is defined using the -r option with an interlace size of 20 Kbytes. The data and parity segments will be striped across the slices, c0t1d0s1, c1t0d0s1, and c2t0d0s1.

Example 6: Soft Partition

This example shows a soft partition, d85, that reformats an entire 9 GB disk. Slice 0 occupies all of the disk except for the few Mbytes taken by slice 7, which is space reserved for a state database replica. Slice 7 will be a minimum of 4 Mbytes, but could be larger, depending on the disk geometry. d85 sits on c3t4d0s0.

Drives are repartitioned when they are added to a diskset only if Slice 7 is not set up correctly.
A small portion of each drive is reserved in Slice 7 for use by Volume Manager. The remainder of the space on each drive is placed into Slice 0. Any existing data on the disks is lost after repartitioning. After adding a drive to a diskset, you can repartition the drive as necessary. However, Slice 7 should not be moved, removed, or overlapped with any other partition.

Manually specifying the offsets and extents of soft partitions is not recommended. This example is included to provide a better understanding of the file if it is automatically generated, and for completeness.

     #
     # (Soft Partitions)
     #
     d85 -p -e c3t4d0 9g

In this example, creating the soft partition and required space for the state database replica occupies all 9 GB of disk c3t4d0.

Example 7: Soft Partition

This example shows the command used to re-create a soft partition with two extents, the first one starting at offset 20483 and extending for 20480 blocks, and the second extent starting at 135398 and extending for 20480 blocks:

     #
     # (Soft Partitions)
     #
     d1 -p c0t3d0s0 -o 20483 -b 20480 -o 135398 -b 20480

Example 8: Hot Spare

This example shows a three-way mirror, /dev/md/dsk/d10, consisting of three submirrors and three hot spare pools.

     #
     # (mirror and hot spare)
     #
     d10 -m d20
     d20 1 1 c1t0d0s2 -h hsp001
     d30 1 1 c2t0d0s2 -h hsp002
     d40 1 1 c3t0d0s2 -h hsp003
     hsp001 c2t2d0s2 c3t2d0s2 c1t2d0s2
     hsp002 c3t2d0s2 c1t2d0s2 c2t2d0s2
     hsp003 c1t2d0s2 c2t2d0s2 c3t2d0s2

In this example, a one-way mirror is first defined using the -m option. The submirrors are attached later using the metattach(1M) command. The hot spare pools to be used are tied to the submirrors with the -h option. In this example, there are three disks used as hot spares, defined in three separate hot spare pools. The hot spare pools are given the names hsp001, hsp002, and hsp003. Setting up three hot spare pools rather than assigning just one hot spare with each component helps to maximize the use of hardware. This configuration enables the user to specify that the most desirable hot spare be selected first, and improves availability by having more hot spares available. At the end of the entry, the hot spares to be used are defined.

Note that, when using the md.tab file to associate hot spares with metadevices, the hot spare pool does not have to exist prior to the association. Volume Manager takes care of the order in which metadevices and hot spares are created when using the md.tab file.

Example 9: State Database Replicas

This example shows how to set up an initial state database and three replicas on a server that has three disks.

     #
     # (state database and replicas)
     #
     mddb01 -c 3 c0t1d0s0 c0t2d0s0 c0t3d0s0

In this example, three state database replicas are stored on each of the three slices. Once the above entry is made in the /etc/lvm/md.tab file, the metadb command must be run with both the -a and -f options. For example, typing the following command creates three state database replicas on each of the three slices:

     # metadb -a -f mddb01

FILES
     /etc/lvm/md.tab
     /etc/lvm/md.cf

SEE ALSO
mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M), metassist(1M), metaset(1M), metastat(1M), metasync(1M), metattach(1M), md.cf(4), mddb.cf(4), attributes(5), md(7D)

Solaris Volume Manager Administration Guide

LIMITATIONS
Recursive mirroring is not allowed; that is, a mirror cannot appear in the definition of another mirror. Recursive logging is not allowed. Stripes and RAID5 metadevices must contain slices or soft partitions only. Mirroring of RAID5 metadevices is not allowed. Soft partitions can be built directly on slices or can be the top level (accessible by applications directly), but cannot be in the middle, with other metadevices above and below them.

NOTES
Trans metadevices have been replaced by UFS logging. Existing trans devices are not logging--they pass data directly through to the underlying device. See mount_ufs(1M) for more information about UFS logging.

SunOS 5.10                        15 Dec 2004                         md.tab(4)