02-08-2006
Problem with DiskSuite replicas
Good morning,
I have Solstice DiskSuite installed on my server.
One disk failed, so I replaced it.
A state database replica was present on that disk.
I deleted and then recreated it with the commands metadb -d and metadb -a.
Now when I query the replica status, I see this:
stp11# metadb -i
flags first blk block count
a m p luo 16 1034 /dev/dsk/c0t0d0s5
a u 16 1034 /dev/dsk/c1t9d0s3
a p luo 16 1034 /dev/dsk/c1t11d0s0
The replica on slice c1t9d0s3 does not have the "l" and "o" flags like the others.
I tried deleting and recreating it again, but nothing changed.
Does anyone know how I can get the "luo" flags set on this replica as well?
Thanks
Christian
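For reference, a sketch of the delete-and-recreate sequence described in the post (slice name taken from the metadb -i output above; this must run on the Solaris host itself, and the flag meanings are quoted from the metadb -i legend):

```shell
# Delete the stale replica on the replaced disk
metadb -d c1t9d0s3
# Recreate it on the same slice
metadb -a c1t9d0s3
# Check status. In metadb -i output, 'u' means the replica is up to
# date, 'l' that its locator was read successfully, and 'o' that it
# was active prior to the last mddb configuration change. These flags
# are maintained by the md driver itself, so a freshly added replica
# may not show 'l' and 'o' until it has been read again, typically at
# the next reboot.
metadb -i
```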
i2o_bs(7D) Devices i2o_bs(7D)
NAME
i2o_bs - Block Storage OSM for I2O
SYNOPSIS
disk@local target id#:a through u
disk@local target id#:a through u raw
DESCRIPTION
The I2O Block Storage OSM abstraction (BSA, which also is referred to as block storage class) layer is the primary interface that Solaris
operating environments use to access block storage devices. A block storage device provides random access to a permanent storage medium.
The i2o_bs device driver uses I2O Block Storage class messages to control the block device and provides the same functionality (ioctls,
for example) found in Solaris disk drivers such as cmdk and dadk on x86. The maximum disk size supported by i2o_bs is the same as
what is available on x86.
The i2o_bs driver currently implements version 1.5 of the Intelligent I/O (I2O) specification.
The block files access the disk using the system's normal buffering mechanism and are read and written without regard to physical disk
records. There is also a "raw" interface that provides for direct transmission between the disk and the user's read or write buffer. A
single read or write call usually results in one I/O operation; raw I/O is therefore considerably more efficient when many bytes are
transmitted. The names of the block files are found in /dev/dsk; the names of the raw files are found in /dev/rdsk.
I2O associates each block storage device with a unique ID called a local target ID that is assigned by I2O hardware. This information can
be acquired by the block storage OSM through I2O Block Storage class messages. For the Block Storage OSM, nodes are created in
/devices/pci#/pci#, which include the local target ID as one component of the device name that the node refers to. However, the /dev names
and the names in /dev/dsk and /dev/rdsk do not encode the local target ID in any part of the name.
For example, you might have the following:
/devices/ /dev/dsk name
---------------------------------------------------------------
/devices/pci@0,0/pci101e,0@10,1/disk@10:a /dev/dsk/c1d0s0
I/O requests to the disk must have an offset and transfer length that is a multiple of 512 bytes or the driver returns an EINVAL error.
Slice 0 is normally used for the root file system on a disk, slice 1 is used as a paging area (for example, swap), and slice 2 for backing
up the entire fdisk partition for Solaris software. Other slices may be used for usr file systems or system reserved area.
Fdisk partition 0 refers to the entire disk and is generally used by the fdisk(1M) program.
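As a minimal sketch of the 512-byte granularity described above, dd with bs=512 issues aligned requests; a scratch file stands in for the raw device here so the arithmetic can be shown anywhere, and the device name in the comment is illustrative:

```shell
# Against a real raw device this would be e.g. /dev/rdsk/c1d0s0;
# a scratch file is used here so the example runs on any system.
dd if=/dev/zero of=/tmp/sector_demo bs=512 count=4 2>/dev/null
# Four 512-byte sectors: 2048 bytes total. A request whose offset or
# transfer length is not a multiple of 512 would be rejected by the
# driver with EINVAL.
wc -c < /tmp/sector_demo
```

The same bs/seek arithmetic applies against /dev/rdsk names: with bs=512, seeking to sector n is seek=n on output or skip=n on input.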
FILES
/dev/dsk/cndn[s|p]n block device
/dev/rdsk/cndn[s|p]n raw device
where:
cn controller n
dn instance number
sn UNIX system slice n (0-15)
pn fdisk partition(0)
/kernel/drv/i2o_bs i2o_bs driver
/kernel/drv/i2o_bs.conf Configuration file
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
|ATTRIBUTE TYPE               |ATTRIBUTE VALUE              |
+-----------------------------+-----------------------------+
|Architecture                 |x86                          |
+-----------------------------+-----------------------------+
SEE ALSO
fdisk(1M), format(1M), mount(1M), lseek(2), read(2), write(2), readdir(3C), vfstab(4), acct.h(3HEAD), attributes(5), dkio(7I)
SunOS 5.10 21 Jul 1998 i2o_bs(7D)