09-20-2010
Normally a logical volume (LUN) is spread across many, or all, of the disks in a given SAN device. You do not tell the SAN where to physically store a LUN. Period.
The SAN software can be configured for different RAID levels, and it does all sorts of tuning and caching of its own. If what you saw is WAY out of line, ask your storage admin to monitor what is going on; he or she can make adjustments.
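If you want a rough first look yourself before involving the storage admin, here is a minimal sketch (Linux assumed; on Solaris or AIX you would use iostat instead). Tools like iostat ultimately read the kernel's raw per-device counters from /proc/diskstats, so sampling that file directly shows the same activity:

```shell
# Sketch: dump per-device I/O counters from /proc/diskstats (Linux only).
# Field 3 is the device name, field 6 is sectors read, field 10 is
# sectors written; iostat computes its rates by sampling these twice
# and diffing.
awk '{printf "%-12s read_sectors=%s written_sectors=%s\n", $3, $6, $10}' /proc/diskstats
```

Two samples a few seconds apart, diffed, give you per-device throughput without any extra tooling installed.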
The "disk" you see in df -h or iostat is the OS's logical presentation of a filesystem, almost never a single physical disk. Don't conflate logical volumes with physical disks, which is what you seem to be doing.
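To see the distinction concretely, here is a minimal sketch (Linux assumed; device names will vary): the "device" df reports is a logical block-device node, and sysfs shows which underlying members, if any, compose it:

```shell
# The device column df prints is a logical volume or partition node,
# not a spindle:
df -P / | awk 'NR==2 {print $1}'

# On Linux, sysfs exposes how each block device is composed; for a
# device-mapper device (multipath, LVM) the slaves/ directory lists
# the member devices underneath it:
for dev in /sys/block/*; do
  members=$(ls "$dev/slaves" 2>/dev/null | tr '\n' ' ')
  echo "$(basename "$dev") -> members: ${members:-none}"
done
```

A multipath SAN LUN will typically show several sd* members under one dm-* device, which makes the logical-vs-physical layering visible at a glance.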
10 More Discussions You Might Find Interesting
1. AIX
Hi All,
I have a mirrored SAN volume on my B80 rootvg. Can I just remove the mirror and "Remove a Physical Volume from a Volume Group", and it will be a diskless AIX?
Is that going to boot on SAN rootvg volume?
Thanks in advance,
itik (3 Replies)
Discussion started by: itik
2. Filesystems, Disks and Memory
So we have a Solaris 9 system attached to an HP SAN.
We are using sssu to do snap clones every hour.
The only problem is that we get write errors on the Solaris system every time we do a snap.
from /var/adm/messages
Apr 21 14:37:48 svr001 scsi: WARNING:... (0 Replies)
Discussion started by: robsonde
3. AIX
Hello everyone
I have several AIX boxes running AIX 5.3 and an IBM DS4500 SAN.
My question is:
How can I match the disks on AIX to the LUNs on the SAN?
I try to match by LUN, but, for example, on my SAN I have several LUNs and on one of my AIX boxes I get this:
If I type lscfg... (4 Replies)
Discussion started by: lo-lp-kl
4. Solaris
hi all,
I have a Solaris 9 system, and a SAN disk that used to work fine is no longer being picked up by my machine. Can anyone point out things to check in order to troubleshoot this?
thanks in advance. (3 Replies)
Discussion started by: cesarNZ
5. Filesystems, Disks and Memory
Scenario:
I've got two M5000s connected to a 9985 SAN storage array. I have configured the SAN disks with stmsboot, format, and newfs. I can access the same SAN space from both systems and have created files from both systems on the SAN space.
Question:
Why can't I see the file created... (3 Replies)
Discussion started by: bluescreen
6. Solaris
Hi,
I have a production Solaris 10 SPARC system (portal). Yesterday Legato NetWorker gave an I/O error on one of the files on its SAN-mounted disk.
I went to that particular file on the system and did an ls, which showed the file. However, ls -l did not work and reported an I/O error.
... (6 Replies)
Discussion started by: Mack1982
7. Solaris
I have viewed a few previous posts regarding this, but none of them quite described or worked with my issue.
I am out of local disk space on my LDOM manager but still have plenty of SAN space, vCPUs, and memory available, so I am trying to install a new LDOM OS on the SAN.
I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP
8. Solaris
Hi All,
I have a server, a Sun-Fire-V490, configured with Solaris 10 and ZFS,
and I have configured a three-way mirror, the third side of which is on EMC
storage.
root@server # zpool status -v
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
... (8 Replies)
Discussion started by: top.level
9. Red Hat
Hi ,
I have a requirement to share a SAN disk between two RHEL servers. I am planning to discover the same disk on both RHEL nodes and mount it. Is that a feasible solution, and what kind of issues might we encounter mounting the same disk on two OSes in parallel? (2 Replies)
Discussion started by: nanduri
10. UNIX for Beginners Questions & Answers
I am in the market to purchase a new E950 server and am trying to decide between local SSD drives and SSD-based SAN. The application that will run on this server is read-intensive, so I am looking for the optimal configuration to support it. There are no... (2 Replies)
Discussion started by: ikx
LEARN ABOUT NETBSD
bioctl
BIOCTL(8) BSD System Manager's Manual BIOCTL(8)
NAME
bioctl -- RAID management interface
SYNOPSIS
bioctl device command [arg [...]]
DESCRIPTION
RAID device drivers which support management functionality can register their services with the bio(4) driver. bioctl then can be used to
manage the RAID controller's properties.
COMMANDS
The following commands are supported:
show [disks | volumes]
Without any argument by default bioctl will show information about all volumes and the logical disks used on them. If
disks is specified, only information about physical disks will be shown. If volumes is specified, only information about
the volumes will be shown.
alarm [disable | enable | silence | test]
Control the RAID card's alarm functionality, if supported. By default if no argument is specified, its current state
will be shown. Optionally the disable, enable, silence, or test arguments may be specified to enable, disable, silence,
or test the RAID card's alarm.
blink start channel:target.lun | stop channel:target.lun
Instruct the device at channel:target.lun to start or cease blinking, if there's ses(4) support in the enclosure.
hotspare add channel:target.lun | remove channel:target.lun
Create or remove a hot-spare drive at location channel:target.lun.
passthru add DISKID channel:target.lun | remove channel:target.lun
Create or remove a pass-through device. The DISKID argument specifies the disk that will be used for the new device, and it will be created at the location channel:target.lun. NOTE: Removing a pass-through device that has a mounted filesystem will lead to undefined behaviour.
check start VOLID | stop VOLID
Start or stop a consistency check on the volume with index VOLID. NOTE: Not many RAID controllers support this feature.
create volume VOLID DISKIDs [SIZE] STRIPE RAID_LEVEL channel:target.lun
Create a volume at index VOLID. The DISKIDs argument specifies the first and last disk, i.e. 0-3 will use disks 0, 1, 2, and 3. The SIZE argument is optional and may be specified if not all available disk space is wanted (also depending on the RAID_LEVEL). The volume will have the stripe size given in the STRIPE argument and will be located at channel:target.lun.
remove volume VOLID channel:target.lun
Remove a volume at index VOLID and located at channel:target.lun. NOTE: Removing a RAID volume that has a mounted
filesystem will lead to undefined behaviour.
EXAMPLES
The following command, executed from the command line, shows the status of the volumes and their logical disks on the RAID controller:
$ bioctl arcmsr0 show
Volume Status Size Device/Label RAID Level Stripe
=================================================================
0 Building 468G sd0 ARC-1210-VOL#00 RAID 6 128KB 0% done
0:0 Online 234G 0:0.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:1 Online 234G 0:1.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:2 Online 234G 0:2.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:3 Online 234G 0:3.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
To create a RAID 5 volume on the SCSI 0:15.0 location on the disks 0, 1, 2, and 3, with stripe size of 64Kb on the first volume ID, using all
available free space on the disks:
$ bioctl arcmsr0 create volume 0 0-3 64 5 0:15.0
To remove the volume 0 previously created at the SCSI 0:15.0 location:
$ bioctl arcmsr0 remove volume 0 0:15.0
SEE ALSO
arcmsr(4), bio(4), cac(4), ciss(4), mfi(4)
HISTORY
The bioctl command first appeared in OpenBSD 3.8; it was rewritten for NetBSD 5.0.
AUTHORS
The bioctl interface was written by Marco Peereboom <marco@openbsd.org> and was rewritten with multiple features by
Juan Romero Pardines <xtraeme@NetBSD.org>.
BSD                                March 16, 2008                                BSD