Metadb


 
# 1  
03-30-2012
Metadb

Hi,

I have inherited a system with two mirrored disks and only one metadb replica on each. I deleted the replica on c1t1 and recreated three, which worked fine.

But when it comes to c1t0, metadb -d /dev/dsk/c1t0d0s5 does not remove the existing replica, and metadb -a -f -c3 /dev/dsk/c1t0d0s5 will not create more.
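For reference, the full sequence I am trying on c1t0 is roughly the following (a sketch only; slice 5 is assumed to hold nothing but the replicas):

    # metadb -i                              # list existing replicas and their status flags
    # metadb -d /dev/dsk/c1t0d0s5            # remove the old replica (-f is only needed when removing the last replicas on the system)
    # metadb -a -f -c3 /dev/dsk/c1t0d0s5     # recreate three replicas on the slice
    # metadb -i                              # confirm the three new replicas are listed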

Please advise, many thanks in advance.


10 More Discussions You Might Find Interesting

1. Solaris

Solaris cannot boot because metadb is lost

How do I recover the metadb? Thanks! (1 Reply)
Discussion started by: dzung
1 Reply

2. Shell Programming and Scripting

Creating metadb

Hi all, I need help with this command. I tried to create a metadb on a live system and got this message: metadb: fintest: c0t0d0s7: overlaps with c0t0d0s0 which is mounted as '/'. Any idea how to do the creation with such a problem? (3 Replies)
Discussion started by: semaan
3 Replies

3. Solaris

How to create metadb with zpool in Solaris 11

Hi, my root pool is as follows. How can I create a metadb if I want to create SVM volumes? zpool status pool: rpool1 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool1 ONLINE 0 0 0 c4t1d0s0 ... (10 Replies)
Discussion started by: incredible
10 Replies

4. Solaris

Meta Devices ( Metadb )

Hi All, I have two disks which have already been configured for RAID 1. I am adding 6 more disks to the system and configuring RAID 1 for all of them. In this case, how should I create a metadb? Thanks and regards, Rj (3 Replies)
Discussion started by: jegaraman
3 Replies

5. Solaris

Creating MetaDB

I'm trying to create metadbs. I'm telling them to live on mounted partitions. Is this acceptable? (3 Replies)
Discussion started by: adelsin
3 Replies

6. Solaris

metadb problem - server unable to boot

Hi everyone, I have a problem. My configuration is as follows: Sun 280R server, 2 x internal disks, 3 x state databases on disk 1, 3 x state databases on disk 2. Disk 1 was giving errors, so I cleared the mirrors on it, deleted the state databases and replaced the disk. Before attaching... (5 Replies)
Discussion started by: soliberus
5 Replies

7. UNIX for Dummies Questions & Answers

metadb errors

Hi All. Been out of Unix admin work for about 10 years and I just took over a new position... I'm seeing the 'use metadb to delete databases which are broken' message on reboot. The system comes up, but I'm not sure what's on this disk that's failing... metadb -i shows: excalibur# metadb... (1 Reply)
Discussion started by: jamie_collins
1 Reply

8. Solaris

How to create metadb when there is no free slice

Hi All, I have to do disk mirroring, and for that I have to create a metadb, since the mirroring will be done with SVM. However, I do not have any free slice for the metadb. What are the options? Please suggest. (4 Replies)
Discussion started by: kumarmani
4 Replies

9. Solaris

solaris with metadb

Hi All, if Solaris has metadb services on a disk, does that mean it has a HW RAID controller, or something else? Thanks in advance. (1 Reply)
Discussion started by: itik
1 Reply

10. Solaris

help on metadb

Hi All, please help me with metastat and metadb. I don't know where to start with querying all these Solaris 8 disks. Give me an idea and I will do the rest. What's the command for verifying RAID 1 or 5? How do I verify which adapter is used (SCSI or SAN)? Thanks in advance, itik (5 Replies)
Discussion started by: itik
5 Replies
md.tab(4)							   File Formats 							 md.tab(4)

NAME
md.tab, md.cf - Solaris Volume Manager utility files

SYNOPSIS
    /etc/lvm/md.tab
    /etc/lvm/md.cf

DESCRIPTION
The file /etc/lvm/md.tab can be used by metainit(1M) and metadb(1M) to configure metadevices, hot spare pools, and metadevice state database replicas in a batch-like mode. Solaris Volume Manager does not store configuration information in the /etc/lvm/md.tab file. You can use:

    metastat -p > /etc/lvm/md.tab

to create this file. Edit it by hand using the instructions in the md.tab(4) man page. Similarly, if no hot spares are in use, the cp md.cf md.tab command generates an acceptable version of the md.tab file, with the editing caveats previously mentioned.

When using the md.tab file, each metadevice, hot spare pool, or state database replica in the file must have a unique entry. Entries can include the following: simple metadevices (stripes, concatenations, and concatenations of stripes); mirrors, soft partitions, and RAID5 metadevices; hot spare pools; and state database replicas. Because md.tab contains only entries that you enter in it, do not rely on the file for the current configuration of metadevices, hot spare pools, and replicas on the system at any given time.

Tabs, spaces, comments (by using a pound sign, #), and continuation of lines (by using a backslash-newline) are allowed.

Typically, you set up metadevices according to information specified on the command line by using the metainit command. Likewise, you set up state database replicas with the metadb command. An alternative to the command line is to use the md.tab file. Metadevices and state database replicas can be specified in the md.tab file in any order, and then activated in a batch-like mode with the metainit and metadb commands.

If you edit the md.tab file, you specify one complete configuration entry per line. Metadevices are defined using the same syntax as required by the metainit command. You then run the metainit command with either the -a option, to activate all metadevices in the md.tab file, or with the metadevice name corresponding to a specific configuration entry.

metainit does not maintain the state of the volumes that would have been created when metainit is run with both the -a and -n flags. If a device d0 is created in the first line of the md.tab file, and a later line in md.tab assumes the existence of d0, the later line will fail when metainit -an runs (even if it would succeed with metainit -a).

State database replicas are defined in the /etc/lvm/md.tab file as follows:

    mddbnumber options [ slice... ]

where mddbnumber is the characters mddb followed by a number of two or more digits that identifies the state database replica, and slice is a physical slice. For example: mddb05 /dev/dsk/c0t1d0s2.

The file /etc/lvm/md.cf is a backup of the configuration used for disaster recovery. Whenever the Volume Manager configuration is changed, this file is automatically updated (except when hot sparing occurs). You should not directly edit this file.
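As a rough sketch, the batch workflow described above might look like this (the replica name mddb01 matches Example 9 below; which editor you use and when you run each step is up to you):

    # metastat -p > /etc/lvm/md.tab      # capture the current configuration in md.tab syntax
    # vi /etc/lvm/md.tab                 # add or adjust entries by hand
    # metainit -a -n                     # dry run: check the syntax of the entries without creating anything
    # metainit -a                        # activate all metadevices defined in md.tab
    # metadb -a mddb01                   # activate the state database replica entry (add -f for the first replicas on a system)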
EXAMPLES
Example 1: Concatenation

All drives in the following examples have the same size of 525 Mbytes.

This example shows a metadevice, /dev/md/dsk/d7, consisting of a concatenation of four disks.

    #
    # (concatenation of four disks)
    #
    d7 4 1 c0t1d0s0 1 c0t2d0s0 1 c0t3d0s0 1 c0t4d0s0

The number 4 indicates there are four individual stripes in the concatenation. Each stripe is made of one slice, hence the number 1 appears in front of each slice. Note that the first disk sector in all of the above devices contains a disk label. To preserve the labels on devices /dev/dsk/c0t2d0s0, /dev/dsk/c0t3d0s0, and /dev/dsk/c0t4d0s0, the metadisk driver must skip at least the first sector of those disks when mapping accesses across the concatenation boundaries. Since skipping only the first sector would create an irregular disk geometry, the entire first cylinder of these disks will be skipped. This allows higher-level file system software to optimize block allocations correctly.

Example 2: Stripe

This example shows a metadevice, /dev/md/dsk/d15, consisting of two slices.

    #
    # (stripe consisting of two disks)
    #
    d15 1 2 c0t1d0s2 c0t2d0s2 -i 32k

The number 1 indicates that one stripe is being created. Because the stripe is made of two slices, the number 2 follows next. The optional -i followed by 32k specifies that the interlace size will be 32 Kbytes. If the interlace size were not specified, the stripe would use the default value of 16 Kbytes.

Example 3: Concatenation of Stripes

This example shows a metadevice, /dev/md/dsk/d75, consisting of a concatenation of two stripes of three disks.

    #
    # (concatenation of two stripes, each consisting of three disks)
    #
    d75 2 3 c0t1d0s2 c0t2d0s2 c0t3d0s2 -i 16k 3 c1t1d0s2 c1t2d0s2 c1t3d0s2 -i 32k

On the first line, the -i followed by 16k specifies that the stripe's interlace size is 16 Kbytes. The second set specifies that the stripe interlace size will be 32 Kbytes. If the second set did not specify 32 Kbytes, the set would use the default interlace value of 16 Kbytes. The blocks of each set of three disks are interlaced across three disks.

Example 4: Mirroring

This example shows a three-way mirror, /dev/md/dsk/d50, consisting of three submirrors. This mirror does not contain any existing data.

    #
    # (mirror)
    #
    d50 -m d51
    d51 1 1 c0t1d0s2
    d52 1 1 c0t2d0s2
    d53 1 1 c0t3d0s2

In this example, a one-way mirror is first defined using the -m option. The one-way mirror consists of submirror d51. The other two submirrors, d52 and d53, are attached later using the metattach command. The default read and write options in this example are a round-robin read algorithm and parallel writes to all submirrors. The order in which mirrors appear in the /etc/lvm/md.tab file is unimportant.

Example 5: RAID5

This example shows a RAID5 metadevice, d80, consisting of three slices:

    #
    # (RAID devices)
    #
    d80 -r c0t1d0s1 c1t0d0s1 c2t0d0s1 -i 20k

In this example, a RAID5 metadevice is defined using the -r option with an interlace size of 20 Kbytes. The data and parity segments will be striped across the slices c0t1d0s1, c1t0d0s1, and c2t0d0s1.

Example 6: Soft Partition

This example shows a soft partition, d85, that reformats an entire 9 GB disk. Slice 0 occupies all of the disk except for the few Mbytes taken by slice 7, which is space reserved for a state database replica. Slice 7 will be a minimum of 4 Mbytes, but could be larger, depending on the disk geometry. d85 sits on c3t4d0s0. Drives are repartitioned when they are added to a diskset only if Slice 7 is not set up correctly. A small portion of each drive is reserved in Slice 7 for use by Volume Manager. The remainder of the space on each drive is placed into Slice 0. Any existing data on the disks is lost after repartitioning. After adding a drive to a diskset, you can repartition the drive as necessary. However, Slice 7 should not be moved, removed, or overlapped with any other partition. Manually specifying the offsets and extents of soft partitions is not recommended. This example is included to provide a better understanding of the file if it is automatically generated, and for completeness.

    #
    # (Soft Partitions)
    d85 -p -e c3t4d0 9g

In this example, creating the soft partition and required space for the state database replica occupies all 9 GB of disk c3t4d0.

Example 7: Soft Partition

This example shows the command used to re-create a soft partition with two extents, the first one starting at offset 20483 and extending for 20480 blocks, and the second extent starting at 135398 and extending for 20480 blocks:

    #
    # (Soft Partitions)
    #
    d1 -p c0t3d0s0 -o 20483 -b 20480 -o 135398 -b 20480

Example 8: Hot Spare

This example shows a three-way mirror, /dev/md/dsk/d10, consisting of three submirrors and three hot spare pools.

    #
    # (mirror and hot spare)
    #
    d10 -m d20
    d20 1 1 c1t0d0s2 -h hsp001
    d30 1 1 c2t0d0s2 -h hsp002
    d40 1 1 c3t0d0s2 -h hsp003
    hsp001 c2t2d0s2 c3t2d0s2 c1t2d0s2
    hsp002 c3t2d0s2 c1t2d0s2 c2t2d0s2
    hsp003 c1t2d0s2 c2t2d0s2 c3t2d0s2

In this example, a one-way mirror is first defined using the -m option. The submirrors are attached later using the metattach(1M) command. The hot spare pools to be used are tied to the submirrors with the -h option. In this example, there are three disks used as hot spares, defined in three separate hot spare pools. The hot spare pools are given the names hsp001, hsp002, and hsp003. Setting up three hot spare pools rather than assigning just one hot spare to each component helps to maximize the use of hardware. This configuration enables the user to specify that the most desirable hot spare be selected first, and improves availability by having more hot spares available. At the end of the entry, the hot spares to be used are defined. Note that, when using the md.tab file to associate hot spares with metadevices, the hot spare pool does not have to exist prior to the association. Volume Manager takes care of the order in which metadevices and hot spares are created when using the md.tab file.

Example 9: State Database Replicas

This example shows how to set up an initial state database and three replicas on a server that has three disks.

    #
    # (state database and replicas)
    #
    mddb01 -c 3 c0t1d0s0 c0t2d0s0 c0t3d0s0

In this example, three state database replicas are stored on each of the three slices. Once the above entry is made in the /etc/lvm/md.tab file, the metadb command must be run with both the -a and -f options. For example, typing the following command creates the state database replicas on the three slices:

    # metadb -a -f mddb01
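Following on from Example 4, the activation and attach steps that the mirror entry relies on would look roughly like this (a sketch; it assumes the d50-d53 entries are in /etc/lvm/md.tab and the slices hold no data):

    # metainit -a            # create d51, d52, d53 and the one-way mirror d50
    # metattach d50 d52      # attach the second submirror; a resync of d52 starts
    # metattach d50 d53      # attach the third submirror
    # metastat d50           # verify the mirror and watch the resync progress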
FILES
    /etc/lvm/md.tab
    /etc/lvm/md.cf

SEE ALSO
mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M), metassist(1M), metaset(1M), metastat(1M), metasync(1M), metattach(1M), md.cf(4), mddb.cf(4), attributes(5), md(7D)

Solaris Volume Manager Administration Guide

LIMITATIONS
Recursive mirroring is not allowed; that is, a mirror cannot appear in the definition of another mirror. Recursive logging is not allowed. Stripes and RAID5 metadevices must contain slices or soft partitions only. Mirroring of RAID5 metadevices is not allowed. Soft partitions can be built directly on slices or can be the top level (accessible by applications directly), but cannot be in the middle, with other metadevices above and below them.

NOTES
Trans metadevices have been replaced by UFS logging. Existing trans devices are not logging; they pass data directly through to the underlying device. See mount_ufs(1M) for more information about UFS logging.

SunOS 5.10                          15 Dec 2004                          md.tab(4)