Top Forums UNIX for Dummies Questions & Answers monitoring the state of physical disks Post 46533 by VeroL on Monday 19th of January 2004 09:53:02 AM
monitoring the state of physical disks

Hello,

I would like to know whether there are commands that can be used to monitor the state of physical disks (including RAID) on AIX and Sun Solaris platforms.

Thank you in advance.
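As a starting point, the usual tools are lspv, lsdev, and errpt on AIX, and iostat -En, format, metastat (SVM), and zpool status (ZFS) on Solaris; exact flags vary by OS release. A minimal sketch, assuming those standard tool names, that maps the platform reported by uname -s to the commands you would run there (the helper function itself is illustrative, not a real utility):

```shell
#!/bin/sh
# Illustrative helper only: maps a `uname -s` value to the disk-health
# commands commonly used on that platform. The command strings are the
# standard tools (AIX: lspv, lsdev, errpt; Solaris: iostat -En, format,
# metastat, zpool status); exact flags vary by OS release.
disk_check_cmds() {
    case "$1" in
        AIX)   echo "lspv; lsdev -Cc disk; errpt -d H" ;;
        SunOS) echo "iostat -En; format; metastat; zpool status" ;;
        *)     echo "no suggestion for $1" ;;
    esac
}

# Print the suggested commands for the current platform.
disk_check_cmds "$(uname -s)"
```

On AIX, errpt -d H filters the error log to hardware entries, which is where failing disks usually show up first; on Solaris, iostat -En prints per-device soft/hard/transport error counters.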
 

9 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

how to assign same mount point for file systems mounted on physical disks

We have 6 hard disks attached to the hardware. Two of these are 9 GB each. Now I want to combine the two so that I see a single combined entry in the output of df -k. The steps I follow are 1. Create a partition on each hard disk (using format partition) 2. Run newfs -v for... (6 Replies)
Discussion started by: Hitesh Shah

2. UNIX for Dummies Questions & Answers

Veritas (physical disks monitoring)

Can anyone tell me what Veritas is? Is it free? What is it used for exactly? Thank you in advance. (1 Reply)
Discussion started by: VeroL

3. Solaris

6120 Array. Additional physical Disks and ZFS

Hi; I have 4 new disks in a 6120 Array attached to a Sun server running ZFS. There are already two virtual disks on the array, each comprising a 3-disk RAID 5 Vdisk. I need to add two more disks to each Vdisk, making each a 5-disk RAID 5 Vdisk. If ZFS already has the original... (3 Replies)
Discussion started by: myjess

4. Filesystems, Disks and Memory

How do I check for physical damage on Linux hard disks?

How do I check for physical damage on Red Hat Linux hard disks? I tried smartctl /dev/sdb but it came back so fast saying it was OK. Is there a better Linux command to check for bad sectors or physical disk damage? Is there a good way such as with parted or something else? I normally in HP... (4 Replies)
Discussion started by: taekwondo

5. AIX

Maximum Limit of HMC to handle Physical Power Virtualization Physical Machine

Hello All, Can anybody please tell me the maximum number of physical IBM Power machines that can be handled by a single HMC at a single point in time? Thanks, Jenish (1 Reply)
Discussion started by: jenish_shah

6. AIX

How to determine the physical volume of the disks

This is the report I got running the command prtconf, but I would like to know the capacity of the disks installed in our Power 6 server with AIX. System Model: IBM,7778-23X Machine Serial Number: 1066D5A Processor Type: PowerPC_POWER6 Processor Implementation Mode: POWER 6... (6 Replies)
Discussion started by: cucosss

7. Solaris

Reboot causes disks in Resync State

Dear Team, this time I am facing some new problems that are beyond me, and I need some expert advice. We have 4 servers (2 Sun SPARC Enterprise T5220 and 2 SF E2900 servers). The two T5220 servers are termed Node A and Node B. The same things are followed with... (1 Reply)
Discussion started by: sudhansu

8. Red Hat

Create volume using LVM over 2 physical disks

I wanted to know how we can combine volumes over 2 physical drives.

# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1 ... (16 Replies)
Discussion started by: ikn3

9. AIX

Open firmware state to running state

Hi Admins, I have a whole-system LPAR in the Open Firmware state on the HMC. How can I bring it to the Running state? Let me know. Thanks. (2 Replies)
Discussion started by: snchaudhari2
BIOCTL(8)						    BSD System Manager's Manual 						 BIOCTL(8)

NAME
     bioctl -- RAID management interface

SYNOPSIS
     bioctl device command [arg [...]]

DESCRIPTION
     RAID device drivers which support management functionality can register
     their services with the bio(4) driver.  bioctl can then be used to
     manage the RAID controller's properties.

COMMANDS
     The following commands are supported:

     show [disks | volumes]
             Without any argument, bioctl will show information about all
             volumes and the logical disks used on them.  If disks is
             specified, only information about physical disks will be shown.
             If volumes is specified, only information about the volumes
             will be shown.

     alarm [disable | enable | silence | test]
             Control the RAID card's alarm functionality, if supported.  If
             no argument is specified, the alarm's current state will be
             shown.  Optionally the disable, enable, silence, or test
             arguments may be specified to disable, enable, silence, or test
             the RAID card's alarm.

     blink start channel:target.lun | stop channel:target.lun
             Instruct the device at channel:target.lun to start or cease
             blinking, if there is ses(4) support in the enclosure.

     hotspare add channel:target.lun | remove channel:target.lun
             Create or remove a hot-spare drive at location
             channel:target.lun.

     passthru add DISKID channel:target.lun | remove channel:target.lun
             Create or remove a pass-through device.  The DISKID argument
             specifies the disk that will be used for the new device, and it
             will be created at the location channel:target.lun.  NOTE:
             Removing a pass-through device that has a mounted filesystem
             will lead to undefined behaviour.

     check start VOLID | stop VOLID
             Start or stop a consistency check in the volume with index
             VOLID.  NOTE: Not many RAID controllers support this feature.

     create volume VOLID DISKIDs [SIZE] STRIPE RAID_LEVEL channel:target.lun
             Create a volume at index VOLID.  The DISKIDs argument specifies
             the first and last disk, i.e. 0-3 will use the disks 0, 1, 2,
             and 3.  The SIZE argument is optional and may be specified if
             not all available disk space is wanted (also dependent on the
             RAID_LEVEL).  The volume will have a stripe size defined in the
             STRIPE argument and will be located at channel:target.lun.

     remove volume VOLID channel:target.lun
             Remove the volume at index VOLID, located at
             channel:target.lun.  NOTE: Removing a RAID volume that has a
             mounted filesystem will lead to undefined behaviour.

EXAMPLES
     The following command, executed from the command line, shows the status
     of the volumes and their logical disks on the RAID controller:

     $ bioctl arcmsr0 show
     Volume Status     Size Device/Label            RAID Level Stripe
     =================================================================
          0 Building   468G sd0 ARC-1210-VOL#00     RAID 6     128KB  0% done
        0:0 Online     234G 0:0.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
        0:1 Online     234G 0:1.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
        0:2 Online     234G 0:2.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
        0:3 Online     234G 0:3.0 noencl <WDC WD2500YS-01SHB1 20.06C06>

     To create a RAID 5 volume at the SCSI 0:15.0 location on the disks 0,
     1, 2, and 3, with a stripe size of 64KB on the first volume ID, using
     all available free space on the disks:

     $ bioctl arcmsr0 create volume 0 0-3 64 5 0:15.0

     To remove the volume 0 previously created at the SCSI 0:15.0 location:

     $ bioctl arcmsr0 remove volume 0 0:15.0

SEE ALSO
     arcmsr(4), bio(4), cac(4), ciss(4), mfi(4)

HISTORY
     The bioctl command first appeared in OpenBSD 3.8; it was rewritten for
     NetBSD 5.0.

AUTHORS
     The bioctl interface was written by Marco Peereboom <marco@openbsd.org>
     and was rewritten with multiple features by Juan Romero Pardines
     <xtraeme@NetBSD.org>.

BSD                             March 16, 2008                             BSD
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.