Dear all,
I have created a shared metaset (500 GB) with 3 hosts, of which 2 hosts are in a cluster and 1 is non-clustered. I have taken ownership on a cluster node from the non-cluster node, but the problem is that I am unable to mount the file system; it gives the error "/dev/md/eccdb-ds/d100 or /eccdb-ds... (1 Reply)
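A minimal sketch of the usual take-and-mount sequence, assuming the set is named eccdb-ds as in the post; note that metadevices in a named set live under /dev/md/<setname>/dsk/, and /mnt here is a placeholder mount point:

metaset -s eccdb-ds -t                  # -t takes ownership of the set on this node
mount /dev/md/eccdb-ds/dsk/d100 /mnt    # full block-device path; /mnt is a placeholder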
Dear Cluster Guru,
I have installed SC 3.2 on two Sun Fire 240 boxes (Solaris 10 Update 7, 2 GB RAM per node) and use a SCSI jukebox as shared storage.
I have tested these cluster applications:
1. Apache HA
2. Oracle HA (Oracle 9i)
All are running well (switchover between... (3 Replies)
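As a reference point, a switchover test in SC 3.2 can be driven from the command line with clresourcegroup(1CL); the resource-group name apache-rg and node name node2 below are hypothetical:

clresourcegroup switch -n node2 apache-rg   # move the group to the other node
clresourcegroup status apache-rg            # confirm it came online there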
Hi Guys
I have two servers in a cluster, prd1 and prd2, and I already have a metaset configured on them. My job was to create a 1.8 TB LUN on my storage, which has hardware RAID 5 pre-configured. I created the 1.8 TB RAID 5 LUN on the storage and it got detected on both servers, prd1... (4 Replies)
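Worth noting for a LUN of this size: a disk larger than 1 TB must carry an EFI label before Solaris Volume Manager will accept it, and in a cluster the disk is normally added to the set by its DID name. A sketch, where the set name prdset and DID device d15 are hypothetical:

format -e                               # select the new LUN, then label it EFI
metaset -s prdset -a /dev/did/rdsk/d15  # prdset and d15 are placeholder names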
Hi all,
I am using Solaris 5.10 on a Sun Blade 150 and am trying to configure a diskset in Solaris Volume Manager. When I run the following command, it reports an RPC-related error:
bash-3.00# metaset -s kingston -a -h u15_9
metaset: u15_9: metad client create: RPC: Program not registered
How to... (4 Replies)
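"RPC: Program not registered" usually means the rpc.metad daemon is not answering on the host being added. On Solaris 10 the SVM RPC daemons are managed by SMF, so a first check might be (run on the host u15_9 from the post above):

svcs network/rpc/meta network/rpc/metamed network/rpc/metamh   # are they online?
svcadm enable network/rpc/meta   # enable metad if disabled, then retry metaset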
Yesterday my customer told me to expect a VCS upgrade in the future. He also plans to stop using HDS and move to EMC.
I am thinking about how to migrate to a Sun Cluster setup instead.
My plan is as follows: leave the existing VCS intact as a fallback plan,
then install and build Sun Cluster on... (5 Replies)
Hi,
Is it possible to have a Solaris cluster of 2 nodes at SITE-A using SVM, creating a metaset from, say, 2 LUNs (on SAN), then replicating these 2 LUNs to a remote SITE-B via storage-based replication, and then using these LUNs by importing them as a metaset on a server at SITE-B, which is... (0 Replies)
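For the SITE-B side, Solaris 10 SVM does ship a metaimport(1M) command for bringing a disk set created elsewhere onto another host; whether it works here depends on the replication keeping the SVM metadata consistent. A sketch, with hypothetical set and disk names:

metaimport -r -v                       # report disk sets importable on this host
metaimport -s siteb-ds c3t0d0 c3t1d0   # siteb-ds and the disks are placeholders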
Hello everyone,
Can you please help me understand what happened; here is the problem:
We have a Sun cluster composed of two nodes (E4900, Solaris 9). Suddenly one of them became inaccessible via PuTTY, although we were still able to ping it. The cluster did not fail over, and after 20 minutes the... (0 Replies)
Hello experts,
I am planning to install a Sun Cluster 4.0 zone-cluster failover and have a few basic doubts:
(1) Where should I install the cluster software binaries? (In the global zone, or in the container zone where I am planning to set up the zone failover?)
(2) Or should I perform the installation on... (0 Replies)
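On (1): the cluster framework is installed in the global zone of each node, and a zone cluster is then configured on top of it with clzonecluster(1CL). A sketch, where the zone-cluster name zc1 is hypothetical:

clzonecluster configure zc1   # interactive configuration; zc1 is a placeholder
clzonecluster install zc1     # install the zone cluster on the configured nodes
clzonecluster boot zc1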
LEARN ABOUT SUNOS
i2o_bs
i2o_bs(7D)                        Devices                        i2o_bs(7D)

NAME
i2o_bs - Block Storage OSM for I2O
SYNOPSIS
disk@local target id#:a through u
disk@local target id#:a through u,raw
DESCRIPTION
The I2O Block Storage OSM abstraction (BSA, which also is referred to as block storage class) layer is the primary interface that Solaris
operating environments use to access block storage devices. A block storage device provides random access to a permanent storage medium.
The i2o_bs device driver uses I2O Block Storage class messages to control the block device and provides the same functionality (ioctls,
for example) that is present in existing Solaris x86 disk drivers such as cmdk and dadk. The maximum disk size supported by i2o_bs is
the same as what is available on x86.
The i2o_bs driver currently implements version 1.5 of the Intelligent I/O (I2O) specification.
The block files access the disk using the system's normal buffering mechanism and are read and written without regard to physical disk
records. There is also a "raw" interface that provides for direct transmission between the disk and the user's read or write buffer. A
single read or write call usually results in one I/O operation; raw I/O is therefore considerably more efficient when many bytes are
transmitted. The names of the block files are found in /dev/dsk; the names of the raw files are found in /dev/rdsk.
I2O associates each block storage device with a unique ID called a local target id that is assigned by I2O hardware. This information can
be acquired by the Block Storage OSM through I2O Block Storage class messages. For the Block Storage OSM, nodes are created under
/devices/pci#/pci#, with the local target ID encoded as one component of the device name the node refers to. However, the /dev names
and the names in /dev/dsk and /dev/rdsk do not encode the local target ID in any part of the name.
For example, you might have the following:
/devices/ name                                /dev/dsk name
---------------------------------------------------------------
/devices/pci@0,0/pci101e,0@10,1/disk@10:a     /dev/dsk/c1d0s0
I/O requests to the disk must have an offset and transfer length that is a multiple of 512 bytes or the driver returns an EINVAL error.
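As an illustration of that constraint, a transfer on the raw device must be a multiple of 512 bytes, while the block device goes through the normal buffering mechanism; the slice c1d0s0 is taken from the example above:

dd if=/dev/rdsk/c1d0s0 of=/dev/null bs=512 count=8   # raw: 512-byte multiples required
dd if=/dev/dsk/c1d0s0  of=/dev/null bs=512 count=8   # block: buffered by the system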
Slice 0 is normally used for the root file system on a disk, slice 1 is used as a paging area (for example, swap), and slice 2 for backing
up the entire fdisk partition for Solaris software. Other slices may be used for usr file systems or system reserved area.
Fdisk partition 0 is used to access the entire disk and is generally used by the fdisk(1M) program.
FILES
/dev/dsk/cndn[s|p]n       block device
/dev/rdsk/cndn[s|p]n      raw device
where:
cn    controller n
dn    instance number
sn    UNIX system slice n (0-15)
pn    fdisk partition (0)
/kernel/drv/i2o_bs i2o_bs driver
/kernel/drv/i2o_bs.conf Configuration file
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
|ATTRIBUTE TYPE               |ATTRIBUTE VALUE              |
+-----------------------------+-----------------------------+
|Architecture                 |x86                          |
+-----------------------------+-----------------------------+
SEE ALSO
fdisk(1M), format(1M), mount(1M), lseek(2), read(2), write(2), readdir(3C), vfstab(4), acct.h(3HEAD), attributes(5), dkio(7I)

SunOS 5.10                      21 Jul 1998                      i2o_bs(7D)