Solaris: Metastat shows state needs maintenance - Post 303037807 by gull04, Wednesday 14th of August 2019, 02:57:32 AM
Hi prvnrk,

I'd suggest that your best approach here is to take a couple of steps back with this one if you can. The way I see it, you have two options.

Option 1.

Regress the changes to the system and go back to the original configuration, bring the system up, and then bring in the Nimble storage with your existing IBM storage still available, using host mirroring to pull the Nimble disks into the existing mirror sets. This would prove that all is well with the system and that the Nimble storage is compatible with the installed system. Here you would create new metadevices with metainit d53 and metainit d54; once created, you would attach them with metattach d5 d53 and metattach d5 d54, which would give you a four-way mirror. Once everything is mirrored up you can remove the original d51 and d52 submirrors and clear the old storage from the OS.
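Something along these lines - a minimal sketch only; the d5x names are taken from your existing layout, and the c3t... slices are just placeholders for wherever the Nimble LUNs present on your host:

    # create the new submirrors on the Nimble slices (slice names are examples only)
    metainit d53 1 1 c3t0d0s0
    metainit d54 1 1 c3t1d0s0

    # attach both to the existing mirror d5 - this gives the four-way mirror
    metattach d5 d53
    metattach d5 d54

    # watch the resync until both new submirrors show Okay
    metastat d5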

Once the new submirrors have resynced and are working, you can remove the old IBM disks from the mirrors; the result is that the system has been migrated to the new storage with no outage to the application.
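Again a sketch rather than a recipe - once metastat reports the new submirrors as Okay, the old IBM-backed submirrors can be dropped:

    # detach the old IBM-backed submirrors from the mirror
    metadetach d5 d51
    metadetach d5 d52

    # clear the unused metadevices so the old LUNs can be unconfigured from the OS
    metaclear d51
    metaclear d52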

Option 2.

Remove the Nimble storage and bring the system up - obviously without starting the application - and remediate any issues to give you a clean running server. Then, following the standard process, add the Nimble storage and configure the device tree and the metadb with the new devices (d5, d51 and d52); you'll then have to restore your original data and restart the application.
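By the standard process I mean roughly the following - again only a sketch, and the slice names are placeholders for wherever the Nimble LUNs appear on your system:

    # state database replicas - three copies on a dedicated slice (example slice)
    metadb -a -f -c 3 c3t0d0s7

    # submirrors on the new storage (example slices)
    metainit d51 1 1 c3t0d0s0
    metainit d52 1 1 c3t1d0s0

    # one-way mirror on d51, then attach d52 as the second submirror
    metainit d5 -m d51
    metattach d5 d52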

Of the two options I would definitely go with the first; it's less disruptive to the system and follows a more logical progression. I'm making the assumption here that your SAN technology is switched fabric and not direct connect.

If you are using direct connect on the SAN then you would have to go a slightly different way, but both options are still open to you if you have sufficient fibre connections available.

Solaris Volume Manager (SVM), whilst fairly old, is a well proven and robust tool; in some respects its simplicity is both its greatest strength and its biggest limitation. In this case it would have made the migration to the new disks a simple task; I have successfully used it to migrate storage platforms several times without significant issues.

Regards

Gull04
