Hardware RAID on Solaris-10 disk — posted by solaris_1977 in the Solaris forum on Wednesday, 22nd of April 2020, 04:03:21 PM
Hardware RAID on Solaris-10 disk

Hello,

I am not able to figure out whether this disk is part of a hardware RAID mirror or not. c1t1d0s0 is the one I need to replace, since it is failing.
Code:
solaris-10-priv#df -h /export/u02
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t1d0s0      275G   266G   6.4G    98%    /export/u02
solaris-10-priv#echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <Sun-STKRAIDINT-V1.0 cyl 17831 alt 2 hd 255 sec 63>
          /pci@0/pci@0/pci@9/scsi@0/disk@0,0
       1. c1t1d0 <Sun-STKRAIDINT-V1.0 cyl 36417 alt 2 hd 255 sec 63>
          /pci@0/pci@0/pci@9/scsi@0/disk@1,0
       2. c1t2d0 <Sun-STKRAIDINT-V1.0 cyl 36418 alt 2 hd 255 sec 126>
          /pci@0/pci@0/pci@9/scsi@0/disk@2,0
Specify disk (enter its number): Specify disk (enter its number):
solaris-10-priv#iostat -En | grep Hard
c1t0d0           Soft Errors: 2 Hard Errors: 1 Transport Errors: 59
c1t1d0           Soft Errors: 0 Hard Errors: 342 Transport Errors: 0
c1t2d0           Soft Errors: 2 Hard Errors: 1 Transport Errors: 10
c0t0d0           Soft Errors: 2 Hard Errors: 0 Transport Errors: 0
solaris-10-priv#

solaris-10-priv#raidctl -l
Controller: 2
solaris-10-priv#raidctl -l 2
Controller      Type            Version
----------------------------------------------------------------
c2              LSI_1068E       1.27.00.00
solaris-10-priv#
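As an aside, the `iostat -En` output above can be filtered programmatically to flag a failing disk. A minimal Python sketch; the sample text is copied from the output above, and the threshold of 10 hard errors is an arbitrary assumption, not a Solaris rule:

```python
import re

# Sample `iostat -En | grep Hard` output, taken from the session above.
SAMPLE = """\
c1t0d0           Soft Errors: 2 Hard Errors: 1 Transport Errors: 59
c1t1d0           Soft Errors: 0 Hard Errors: 342 Transport Errors: 0
c1t2d0           Soft Errors: 2 Hard Errors: 1 Transport Errors: 10
c0t0d0           Soft Errors: 2 Hard Errors: 0 Transport Errors: 0
"""

# One error-summary line: device name, then soft/hard/transport counts.
LINE = re.compile(
    r"^(?P<dev>\S+)\s+Soft Errors:\s*(?P<soft>\d+)\s+"
    r"Hard Errors:\s*(?P<hard>\d+)\s+Transport Errors:\s*(?P<xport>\d+)"
)

def failing_disks(text, hard_threshold=10):
    """Return device names whose hard-error count exceeds the threshold."""
    flagged = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m and int(m.group("hard")) > hard_threshold:
            flagged.append(m.group("dev"))
    return flagged

print(failing_disks(SAMPLE))  # c1t1d0 stands out with 342 hard errors
```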

raidctl against that disk doesn't work, though I thought it should:
Code:
solaris-10-priv#raidctl -l c1t1d0
Controller device can not be found.

solaris-10-priv#

Please suggest.

Thanks
---------UPDATE--------------
Didn't realize I was supposed to use "/opt/StorMan/arcconf getconfig 1" — this internal STK RAID controller is Adaptec-based, so raidctl (which manages LSI controllers) doesn't see it.
I am good now :-)
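For anyone landing here later, a minimal sketch of inspecting the array with arcconf. This assumes the StorMan package is installed in its default location, as in the command above; the AL/LD/PD arguments are arcconf's getconfig selectors for the full config, logical devices, and physical devices:

```shell
# Default install path for Adaptec's arcconf CLI (StorMan package).
ARCCONF=/opt/StorMan/arcconf

if [ -x "$ARCCONF" ]; then
    # Controller 1: full configuration, then logical and physical devices.
    "$ARCCONF" getconfig 1 AL
    "$ARCCONF" getconfig 1 LD
    "$ARCCONF" getconfig 1 PD
else
    echo "arcconf not found at $ARCCONF -- install the StorMan package"
fi
```

The PD listing is the one that shows per-disk state, which is what tells you whether the failing c1t1d0 member is degraded inside the hardware mirror.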

Last edited by solaris_1977; 04-22-2020 at 05:52 PM..
 

Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.