Operating Systems / Solaris: Metadevices in mirroring? Post 302360302 by aggadtech08 on Thursday 8th of October 2009 02:35:59 PM
Metadevices in mirroring?

Hi guys,

I have the following disk mapping.

My doubt is this: the configuration shows the root filesystem in a mirror, but I don't know whether being in a mirror means the disks are in RAID.

In short: looking at this configuration, can I say that the / filesystem is in RAID?

Thanks in advance,
AGAD
Code:
root@sap02 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        7.9G   7.2G   599M    93%    /
/proc                    0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d40        2.0G   1.6G   295M    85%    /var
swap                   2.4G    48K   2.4G     1%    /var/run
swap                   3.3G    49M   2.4G    26%    /tmp
/dev/dsk/c0t0d0s5      8.9G   6.1G   2.6G    70%    /oracle
/dev/dsk/c0t0d0s6       12G    10G   1.5G    88%    /data

 
root@sap02 # metastat
d40: Mirror
    Submirror 0: d41
      State: Okay
    Submirror 1: d42
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks (2.0 GB)
d41: Submirror of d40
    State: Okay
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t0d0s4          0     No            Okay   Yes

d42: Submirror of d40
    State: Okay
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t1d0s4          0     No            Okay   Yes

d20: Mirror
    Submirror 0: d21
      State: Okay
    Submirror 1: d22
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks (2.0 GB)
d21: Submirror of d20
    State: Okay
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t0d0s1          0     No            Okay   Yes

d22: Submirror of d20
    State: Okay
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t1d0s1          0     No            Okay   Yes

d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks (8.0 GB)
d11: Submirror of d10
    State: Okay
    Size: 16779312 blocks (8.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t0d0s0          0     No            Okay   Yes

d12: Submirror of d10
    State: Okay
    Size: 16779312 blocks (8.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0t1d0s0          0     No            Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c0t1d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA73PMN000074299YRL
c0t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA73RKG00007430WFD2
root@sap02 #
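For reference, the metastat output above can be checked mechanically. Below is a minimal, hypothetical shell sketch, using canned sample text in place of live `metastat` output so it runs anywhere; the device names are the ones from this thread:

```shell
# Hypothetical sketch: decide whether the device backing / is an SVM
# mirror (RAID-1) by matching against metastat's "dNN: Mirror" lines.
# Canned sample text stands in for `metastat` so this runs off-box.
metastat_out='d10: Mirror
d40: Mirror
d20: Mirror'

root_dev=d10    # from: df -h  ->  / is on /dev/md/dsk/d10

if printf '%s\n' "$metastat_out" | grep -q "^${root_dev}: Mirror"; then
    echo "${root_dev} is a mirror (RAID-1)"
else
    echo "${root_dev} is not a mirror"
fi

# metastat reports sizes in 512-byte blocks; 16779312 blocks works out to:
size_blocks=16779312
echo "$((size_blocks / 2097152)) GB"    # 2097152 512-byte blocks per GiB
```

On a real system, `metastat -p` prints a terser, script-friendlier one-line-per-device listing that marks mirrors the same way.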



CCDCONFIG(8)						    BSD System Manager's Manual 					      CCDCONFIG(8)

NAME
ccdconfig -- configuration utility for the concatenated disk driver

SYNOPSIS
     ccdconfig [-cv] ccd ileave [flags] dev ...
     ccdconfig -C [-v] [-f config_file]
     ccdconfig -u [-v] ccd ...
     ccdconfig -U [-v] [-f config_file]
     ccdconfig -g [ccd ...]

DESCRIPTION
The ccdconfig utility is used to dynamically configure and unconfigure concatenated disk devices, or ccds. For more information about the ccd, see ccd(4).

The options are as follows:

     -c              Configure a ccd. This is the default behavior of ccdconfig.
     -C              Configure all ccd devices listed in the ccd configuration file.
     -f config_file  When configuring or unconfiguring all devices, read the file
                     config_file instead of the default /etc/ccd.conf.
     -g              Dump the current ccd configuration in a format suitable for use
                     as the ccd configuration file. If no arguments are specified,
                     every configured ccd is dumped. Otherwise, the configuration of
                     each listed ccd is dumped.
     -u              Unconfigure a ccd.
     -U              Unconfigure all ccd devices listed in the ccd configuration file.
     -v              Cause ccdconfig to be verbose.

A ccd is described on the command line and in the ccd configuration file by the name of the ccd, the interleave factor, the ccd configuration flags, and a list of one or more devices. The flags may be represented as a decimal number, a hexadecimal number, a comma-separated list of strings, or the word ``none''. The flags are as follows:

     CCDF_UNIFORM    0x02    Use uniform interleave
     CCDF_MIRROR     0x04    Support mirroring
     CCDF_NO_OFFSET  0x08    Do not use an offset
     CCDF_LINUX      0x0A    Linux md(4) compatibility

The format in the configuration file appears exactly as if it were entered on the command line. Note that on the command line and in the configuration file, the flags argument is optional.

     #
     # /etc/ccd.conf
     # Configuration file for concatenated disk devices
     #
     # ccd   ileave  flags   component devices
     ccd0    16      none    /dev/da2s1 /dev/da3s1

The component devices need to name partitions of type FS_BSDFFS (or ``4.2BSD'' as shown by disklabel(8)). If you want to use the Linux md(4) compatibility mode, please be sure to read the notes in ccd(4).

FILES
/etc/ccd.conf    default ccd configuration file

EXAMPLES
A number of ccdconfig examples are shown below. The arguments passed to ccdconfig are exactly the same as you might place in the /etc/ccd.conf configuration file.

The first example creates a 4-disk stripe out of four SCSI disk partitions, using a 64 sector interleave. The second example is a complex stripe/mirror combination: a two disk stripe of da4 and da5 which is mirrored to a two disk stripe of da6 and da7. The last example is a simple mirror: the 2nd slice of /dev/da8 is mirrored with the 3rd slice of /dev/da9 and assigned to ccd0.

     # ccdconfig ccd0 64 none /dev/da0s1 /dev/da1s1 /dev/da2s1 /dev/da3s1
     # ccdconfig ccd0 128 CCDF_MIRROR /dev/da4 /dev/da5 /dev/da6 /dev/da7
     # ccdconfig ccd0 128 CCDF_MIRROR /dev/da8s2 /dev/da9s3

The following are matching commands in Linux and FreeBSD to create a RAID-0 in Linux and read it from FreeBSD.

     # Create a RAID-0 on Linux:
     mdadm --create --chunk=32 --level=0 --raid-devices=2 /dev/md0 /dev/hda1 /dev/hdb1
     # Make the RAID-0 just created available on FreeBSD:
     ccdconfig -c /dev/ccd0 32 linux /dev/ada0s1 /dev/ada0s2

When you create a new ccd disk you generally want to fdisk(8) and disklabel(8) it before doing anything else. Once you create the initial label you can edit it, adding additional partitions. The label itself takes up the first 16 sectors of the ccd disk. If all you are doing is creating file systems with newfs, you do not have to worry about this, as newfs will skip the label area. However, if you intend to dd(1) to or from a ccd partition it is usually a good idea to construct the partition such that it does not overlap the label area. For example, if you have a ccd disk with 10000 sectors you might create a 'd' partition with offset 16 and size 9984.

     # disklabel ccd0 > /tmp/disklabel.ccd0
     # disklabel -Rr ccd0 /tmp/disklabel.ccd0
     # disklabel -e ccd0

The disklabeling of a ccd disk is usually a one-time affair.
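Two bits of arithmetic implicit in the sections above can be sketched in shell, purely as illustration: combining flag bits numerically, and sizing a partition to skip the 16-sector label area.

```shell
# Sketch: ccd flags may also be given numerically. Combining the bits
# from the DESCRIPTION table, uniform interleave plus mirroring is:
flags=$(( 0x02 | 0x04 ))            # CCDF_UNIFORM | CCDF_MIRROR
printf 'flags=0x%02x\n' "$flags"    # flags=0x06

# Sketch: the label occupies the first 16 sectors, so a partition on a
# 10000-sector ccd that avoids it gets offset 16 and size 10000 - 16,
# matching the 'd' partition example above.
total_sectors=10000
label_sectors=16
printf 'offset=%d size=%d\n' "$label_sectors" "$((total_sectors - label_sectors))"
```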
If you reboot the machine and reconfigure the ccd disk, the disklabel you had created before will still be there and will not require reinitialization. Beware that changing any ccd parameters (interleave, flags, or the device list making up the ccd disk) will usually destroy any prior data on that ccd disk. If this occurs it is usually a good idea to reinitialize the label before [re]constructing your ccd disk.

RECOVERY
An error on a ccd disk is usually unrecoverable unless you are using the mirroring option. But mirroring has its own perils: it assumes that both copies of the data at any given sector are the same. This holds true until a write error occurs or until you replace either side of the mirror. This is a poor man's mirroring implementation. It works well enough that if you begin to get disk errors you should be able to back up the ccd disk, replace the broken hardware, and then regenerate the ccd disk. If you need more than this you should look into external hardware RAID SCSI boxes, RAID controllers (see GENERIC), or software RAID systems such as geom(8) and gvinum(8).

SEE ALSO
dd(1), ccd(4), disklabel(8), fdisk(8), gvinum(8), rc(8)

HISTORY
The ccdconfig utility first appeared in NetBSD 1.0A.

BUGS
The initial disklabel returned by ccd(4) specifies only 3 partitions. One needs to change the number of partitions to 8 using ``disklabel -e'' to get the usual BSD expectations.

BSD                                October 1, 2013                                BSD