Solaris 10 Volume Manager - adding slice to metadb
Post 302980557 by javanoob on Tuesday 30th of August 2016 08:15:33 AM

Hi all,

I added a new disk slice to the current metadb.
Below is what I see

Code:
bash-3.2# metadb -i
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c0t1d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t1d0s7
     a    p  luo        16400           8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c0t2d0s7
     a        u         8208            8192            /dev/dsk/c0t2d0s7
     a        u         16400           8192            /dev/dsk/c0t2d0s7
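
The exact command used to add the slice is not shown in the post; a command of roughly the following shape (the -c 3 count is an assumption inferred from the three replicas visible on c0t2d0s7 above) would produce that layout:

Code:
# metadb -a -c 3 c0t2d0s7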

The replicas on the new slice (c0t2d0s7) are missing the following flags:

p - replica's location was patched in kernel
l - locator for this replica was read successfully
o - replica active prior to last mddb configuration change

q1) Can anyone elaborate on the meaning of the 'p' and 'l' flags in particular?

q2) How do I make these flags appear on the new replicas?

Regards,
Noob
 

metadb(1M)						  System Administration Commands						metadb(1M)

NAME
       metadb - create and delete replicas of the metadevice state database

SYNOPSIS
       /sbin/metadb -h
       /sbin/metadb [-s setname]
       /sbin/metadb [-s setname] -a [-f] [-k system-file] mddbnn
       /sbin/metadb [-s setname] -a [-f] [-k system-file] [-c number] [-l length] slice...
       /sbin/metadb [-s setname] -d [-f] [-k system-file] mddbnn
       /sbin/metadb [-s setname] -d [-f] [-k system-file] slice...
       /sbin/metadb [-s setname] -i
       /sbin/metadb [-s setname] -p [-k system-file] [mddb.cf-file]

DESCRIPTION
       The metadb command creates and deletes replicas of the metadevice state database. State database replicas can be created on dedicated slices, or on slices that will later become part of a simple metadevice (concatenation or stripe) or RAID5 metadevice. Do not place state database replicas on fabric-attached storage, SANs, or other storage that is not directly attached to the system and available at the same point in the boot process as traditional SCSI or IDE drives. See NOTES.

       The metadevice state database contains the configuration of all metadevices and hot spare pools in the system. Additionally, the metadevice state database keeps track of the current state of metadevices and hot spare pools, and their components. Solaris Volume Manager automatically updates the metadevice state database when a configuration or state change occurs. A submirror failure is an example of a state change. Creating a new metadevice is an example of a configuration change.

       The metadevice state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a replica, is subject to strict consistency checking to ensure correctness.

       Replicated databases have an inherent problem in determining which database has valid and correct data. To solve this problem, Volume Manager uses a majority consensus algorithm. This algorithm requires that a majority of the database replicas be available before any of them are declared valid. This algorithm strongly encourages the presence of at least three initial replicas, which you create. A consensus can then be reached as long as at least two of the three replicas are available. If there is only one replica and the system crashes, it is possible that all metadevice configuration data can be lost.

       The majority consensus algorithm is conservative in the sense that it will fail if a majority consensus cannot be reached, even if one replica actually does contain the most up-to-date data. This approach guarantees that stale data will not be accidentally used, regardless of the failure scenario.

       The majority consensus algorithm accounts for the following: the system will stay running with exactly half or more replicas; the system will panic when less than half the replicas are available; the system will not reboot without one more than half the total replicas.

       When used with no options, the metadb command gives a short form of the status of the metadevice state database. Use metadb -i for an explanation of the flags field in the output.

       The initial state database is created using the metadb command with both the -a and -f options, followed by the slice where the replica is to reside. The -a option specifies that a replica (in this case, the initial) state database should be created. The -f option forces the creation to occur, even though a state database does not exist. (The -a and -f options should be used together only when no state databases exist.)

       Additional replicas beyond those initially created can be added to the system. They contain the same information as the existing replicas, and help to prevent the loss of the configuration information. Loss of the configuration makes operation of the metadevices impossible. To create additional replicas, use the metadb -a command, followed by the name of the new slice(s) where the replicas will reside. All replicas that are located on the same slice must be created at the same time.
       To delete all replicas that are located on the same slice, the metadb -d command is used, followed by the slice name.

       When used with the -i option, metadb displays the status of the metadevice state databases. The status can change if a hardware failure occurs or when state databases have been added or deleted. To fix a replica in an error state, delete the replica and add it back again.

       The metadevice state database (mddb) also contains a list of the replica locations for this set (local or shared diskset). The local set mddb can also contain host and drive information for each of the shared disksets of which this node is a member. Other than the diskset host and drive information stored in the local set mddb, the local and shared diskset mddbs are functionally identical.

       The mddbs are written to during the resync of a mirror or during a component failure or configuration change. A configuration change or failure can also occur on a single replica (removal of an mddb or a failed disk), and this causes the other replicas to be updated with this failure information.
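       As a hedged illustration of the error-recovery step described above (the slice name is purely illustrative), a replica reported in an error state on /dev/dsk/c0t2d0s3 could be repaired by deleting it and adding it back:

              # metadb -d c0t2d0s3
              # metadb -a c0t2d0s3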
OPTIONS
       Root privileges are required for all of the following options except -h and -i.

       The following options can be used with the metadb command. Not all the options are compatible on the same command line. Refer to the SYNOPSIS to see the supported use of the options.

       -a    Attach a new database device. The /kernel/drv/md.conf file is automatically updated with the new information and the /etc/lvm/mddb.cf file is updated as well. An alternate way to create replicas is by defining them in the /etc/lvm/md.tab file and specifying the assigned name at the command line in the form, mddbnn, where nn is a two-digit number given to the replica definitions. Refer to the md.tab(4) man page for instructions on setting up replicas in that file.

       -c number
             Specifies the number of replicas to be placed on each device. The default number of replicas is 1.

       -d    Deletes all replicas that are located on the specified slice. The /kernel/drv/md.conf file is automatically updated with the new information and the /etc/lvm/mddb.cf file is updated as well.

       -f    The -f option is used to create the initial state database. It is also used to force the deletion of replicas below the minimum of one. (The -a and -f options should be used together only when no state databases exist.)

       -h    Displays a usage message.

       -i    Inquire about the status of the replicas. The output of the -i option includes characters in front of the device name that represent the status of the state database. Explanations of the characters are displayed following the replica status and are as follows:

             d   replica does not have an associated device ID
             o   replica active prior to last mddb configuration change
             u   replica is up to date
             l   locator for this replica was read successfully
             c   replica's location was in /etc/lvm/mddb.cf
             p   replica's location was patched in kernel
             m   replica is master, this is replica selected as input
             r   replica does not have device relocation information
             t   tagged data is associated with the replica
             W   replica has device write errors
             a   replica is active, commits are occurring to this replica
             M   replica had problem with master blocks
             D   replica had problem with data blocks
             F   replica had format problems
             S   replica is too small to hold current database
             R   replica had device read errors
             B   tagged data associated with the replica is not valid

       -k system-file
             Specifies the name of the kernel file where the replica information should be written. The default system-file is /kernel/drv/md.conf. This option is for use with the local diskset only.

       -l length
             Specifies the size of each replica. The default length is 8192 blocks, which should be appropriate for most configurations. Replica sizes of less than 128 blocks are not recommended.

       -p    Specifies updating the system file (/kernel/drv/md.conf) with entries from the /etc/lvm/mddb.cf file. This option is normally used to update a newly built system before it is booted for the first time. If the system has been built on a system other than the one where it will run, the location of the mddb.cf on the local machine can be passed as an argument. The system file to be updated can be changed using the -k option. This option is for use with the local diskset only.

       -s setname
             Specifies the name of the diskset on which the metadb command will work. Using the -s option will cause the command to perform its administrative function within the specified diskset. Without this option, the command will perform its function on local database replicas.
       slice
             Specifies the logical name of the physical slice (partition), such as /dev/dsk/c0t0d0s3.
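       A minimal sketch of the -d and -f combination described above, assuming a system being decommissioned whose last remaining replicas sit on an illustrative slice (this should never be done while metadevices still exist):

              # metadb -d -f c0t0d0s7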
EXAMPLES
       Example 1: Creating Initial State Database Replicas

       The following example creates the initial state database replicas on a new system.

              # metadb -a -f c0t0d0s7 c0t1d0s3 c1t0d0s7 c1t1d0s3

       The -a and -f options force the creation of the initial database and replicas. You could then create metadevices with these same slices, making efficient use of the system.

       Example 2: Adding Two Replicas on Two New Disks

       This example shows how to add two replicas on two new disks that have been connected to a system currently running Volume Manager.

              # metadb -a c0t2d0s3 c1t1d0s3

       Example 3: Deleting Two Replicas

       This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.

              # metadb -d c0t2d0s3 c1t1d0s3

       Although you can delete all replicas, you should never do so while metadevices still exist. Removing all replicas causes existing metadevices to become inoperable.
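       In addition to the examples above, the -a description notes that replicas can be predefined in /etc/lvm/md.tab and added by the assigned mddbnn name. A minimal sketch, assuming a hypothetical entry named mddb01 and an illustrative slice; see md.tab(4) for the authoritative entry format:

              # state database replica entry in /etc/lvm/md.tab (illustrative)
              mddb01 -c 3 c0t2d0s7

              # metadb -a mddb01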
FILES
       /etc/lvm/mddb.cf
              Contains the location of each copy of the metadevice state database.

       /etc/lvm/md.tab
              Workspace file for metadevice database configuration.

       /kernel/drv/md.conf
              Contains database replica information for all metadevices on a system. Also contains Solaris Volume Manager configuration information.
EXIT STATUS
       The following exit values are returned:

       0     successful completion

       >0    an error occurred
ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWmdr                      |
       +-----------------------------+-----------------------------+
SEE ALSO
       mdmonitord(1M), metaclear(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M), metaset(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M), md.tab(4), md.cf(4), mddb.cf(4), attributes(5), md(7D)

       Solaris Volume Manager Administration Guide
NOTES
       Replicas cannot be stored on fabric-attached storage, SANs, or other storage that is not directly attached to the system. Replicas must be on storage that is available at the same point in the boot process as traditional SCSI or IDE drives.

       A replica can be stored on a:

           o  Dedicated local disk partition

           o  Local partition that will be part of a volume

           o  Local partition that will be part of a UFS logging device

SunOS 5.10                        14 Sep 2004                        metadb(1M)