Raidctl - Sun T5240 Solaris 10 Problem
Posted by ilikecows on 04-28-2010, 09:43 PM
I would download a SPARC version of Linux and use one of its disk tools to format the disk before reinstalling Solaris. I would also double-check that raidctl supports your RAID controller before attempting another hardware mirror. Is there any reason you are shying away from RAID1 using a zpool or an SVM volume?
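For completeness, here is a rough sketch of both software-mirror approaches. The disk, slice, metadevice, and pool names (c1t0d0s0, c1t1d0s0, c1t0d0s7, c1t1d0s7, d10-d12, datapool) are assumptions for illustration only; adjust them to your layout. The SVM steps assume a UFS root already installed on the first disk.

    # ZFS: mirror a data pool across both disks (assumed device names)
    zpool create datapool mirror c1t0d0 c1t1d0
    zpool status datapool

    # SVM: mirror the existing UFS root slice after the fact
    metadb -a -f -c 3 c1t0d0s7 c1t1d0s7   # state database replicas on small dedicated slices
    metainit -f d11 1 1 c1t0d0s0          # submirror over the existing (mounted) root slice
    metainit d12 1 1 c1t1d0s0             # submirror on the second disk
    metainit d10 -m d11                   # one-way mirror containing the live root
    metaroot d10                          # update /etc/vfstab and /etc/system, then reboot
    metattach d10 d12                     # after the reboot, attach the second side and resync

With a ZFS root install, the usual equivalent is zpool attach rpool c1t0d0s0 c1t1d0s0 after installing to the first disk; ZFS then resilvers the second disk in the background.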
 

9 More Discussions You Might Find Interesting

1. Solaris

boot problem in Installation i86 sun solaris

Hello everybody, I installed Sun Solaris i86. The installer puts down the "Mini Root" and then shuts down; the computer comes back up but it can't boot. How can I resolve this problem? Thank you in advance... (2 Replies)
Discussion started by: yanly
2 Replies

2. UNIX for Dummies Questions & Answers

Sun Solaris 10: How do I create a bootup disc? The Sun website confuses me

Hey there, I am starting a Computer Science Foundation year at the end of this month and am trying to get a little bit ahead of the game. I have always wanted to learn Unix and am currently struggling with creating a boot disc to run Solaris from (I have chosen to study this), as opposed to... (0 Replies)
Discussion started by: Jupiter
0 Replies

3. Solaris

Problem after Install SUN solaris x86

After installing Solaris x86 on my computer successfully, it can't boot. When the machine starts and I choose option 1 - default, it shows "W" on screen and the system restarts. Can anyone help me? My computer: dual core - 1 GB RAM - x86 (7 Replies)
Discussion started by: quan0509
7 Replies

4. Solaris

raidctl on SUN T5240

Setting up a T5240 with two disks, c1t0d0 and c1t1d0. I am trying to use raidctl, but when I issue raidctl -l I get: Controller 1, Disk: 0.0.0, Disk: 0.1.0. So I try raidctl -c '0.0.0 0.1.0' -r 1 1 and I get "Array in use." I try... (4 Replies; a hedged command sketch follows this entry)
Discussion started by: photon
4 Replies
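"Array in use" often means the disks are already part of an existing volume or have mounted slices. A rough sketch only (the device and volume names are assumptions, and deleting a volume destroys the data on it):

    raidctl -l                       # list controllers and any existing volumes
    raidctl -l c1t0d0                # details of the existing volume, if one is defined
    raidctl -d c1t0d0                # delete that volume (destructive)
    raidctl -c -r 1 c1t0d0 c1t1d0    # recreate the hardware RAID1 mirror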

5. UNIX for Advanced & Expert Users

Solaris 10 Raidctl

Hello World: Recently I ran into an issue where a colleague had installed a Sun T5140 with two 136GB disks. However, he forgot to execute the raidctl command first to mirror c1t0d0 to c1t1d0, boo hoo :) So along I come and try to mirror the disks by booting to single user... (1 Reply)
Discussion started by: rambo15
1 Replies

6. Solaris

SUN T5240 vs M3000

Hi, We are planning to buy a new server for our data center. Sun T5240 or M3000: which one has better performance? We are going to create many dt sessions on this server, so I need your suggestions. RJS (4 Replies)
Discussion started by: rajasekg
4 Replies

7. UNIX for Dummies Questions & Answers

Problem in GUI based program on Sun Solaris

Hello! I am trying to run a program that uses Xlib for its graphical user interface on Solaris through the Common Desktop Environment (CDE). All I get is my three required windows open, but all blank. They are supposed to show some symbols, pictures and buttons. In the command terminal the following... (1 Reply)
Discussion started by: asif92
1 Replies

8. Solaris

SUN SPARC rebooting problem, after Solaris 10 installation

Hi All, I installed Solaris 10 on a SunFire V890 server; after the install, it keeps rebooting. I would like to get back to the OK prompt and reinstall Solaris 10. I have tried the "break" key and "Stop+A", but both fail. Kindly advise... (4 Replies)
Discussion started by: SmartAntz
4 Replies

9. Solaris

Bizarre Sun T5240 behavior

Hi - I have a T5240 with 7 LDoms configured. One night, network communication was somehow broken. Nobody was doing anything on the machine at the time. Here is what I saw in messages: WARNING: nxge3 : nxge_dma_mem_alloc: ddi_dma_mem_alloc kmem alloc failed WARNING: nxge3 : nxge_alloc_rx_buf_dma:... (2 Replies)
Discussion started by: pyroman
2 Replies
GRAID(8)						    BSD System Manager's Manual 						  GRAID(8)

NAME
     graid -- control utility for software RAID devices

SYNOPSIS
     graid label [-f] [-o fmtopt] [-S size] [-s strip] format label level prov ...
     graid add [-f] [-S size] [-s strip] name label level
     graid delete [-f] name [label | num]
     graid insert name prov ...
     graid remove name prov ...
     graid fail name prov ...
     graid stop [-fv] name ...
     graid list
     graid status
     graid load
     graid unload

DESCRIPTION
     The graid utility is used to manage software RAID configurations, supported by the GEOM RAID class.  The GEOM RAID class uses on-disk metadata to provide access to software-RAID volumes defined by different RAID BIOSes.  Depending on RAID BIOS type and its metadata format, different subsets of configurations and features are supported.  To allow booting from a RAID volume, the metadata format should match the RAID BIOS type and its capabilities.  To guarantee that these match, it is recommended to create volumes via the RAID BIOS interface, while experienced users are free to do it using this utility.

     The first argument to graid indicates an action to be performed:

     label    Create an array with a single volume.  The format argument specifies the on-disk metadata format to use for this array, such as "Intel".  The label argument specifies the label of the created volume.  The level argument specifies the RAID level of the created volume, such as "RAID0", "RAID1", etc.  The subsequent list enumerates providers to use as array components.  The special name "NONE" can be used to reserve space for absent disks.  The order of components can be important, depending on the specific RAID level and metadata format.

              Additional options include:

              -f         Enforce specified configuration creation if it is officially unsupported, but technically can be created.
              -o fmtopt  Specifies metadata format options.
              -S size    Use size bytes on each component for this volume.  Should be used if several volumes per array are planned, or if smaller components are going to be inserted later.  Defaults to the size of the smallest component.
              -s strip   Specifies strip size in bytes.  Defaults to 131072.

     add      Create another volume on the existing array.  The name argument is the name of the existing array, reported by the label command.  The rest of the arguments are the same as for the label command.

     delete   Delete volume(s) from the existing array.  When the last volume is deleted, the array is also deleted and its metadata erased.  The name argument is the name of the existing array.  Optional label or num arguments allow specifying the volume for deletion.

              Additional options include:

              -f         Delete volume(s) even if it is still open.

     insert   Insert specified provider(s) into the specified array instead of the first missing or failed components.  If there are no such components, mark disk(s) as spare.

     remove   Remove the specified provider(s) from the specified array and erase metadata.  If there are spare disks present, the removed disk(s) will be replaced by spares.

     fail     Mark the given disk(s) as failed, removing them from active use unless absolutely necessary due to exhausted redundancy.  If there are spare disks present, failed disk(s) will be replaced with one of them.

     stop     Stop the given array.  The metadata will not be erased.

              Additional options include:

              -f         Stop the given array even if some of its volumes are opened.

     list     See geom(8).

     status   See geom(8).

     load     See geom(8).

     unload   See geom(8).

     Additional options include:

     -v       Be more verbose.
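     As an illustrative sketch (not part of the original manual; the device names ada0/ada1 and the volume label gm0 are assumptions), a single Intel-format RAID1 volume could be created and inspected along these lines:

          graid load                              # load the GEOM RAID class if it is not loaded
          graid label Intel gm0 RAID1 ada0 ada1   # one RAID1 volume labelled gm0, Intel metadata
          graid status                            # volume and component state
          graid list                              # full details, including the array name expected
                                                  # by add, delete, insert, remove, fail and stop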
SUPPORTED METADATA FORMATS
     The GEOM RAID class follows a modular design, allowing different metadata formats to be used.  Support is currently implemented for the following formats:

     DDF      The format defined by the SNIA Common RAID Disk Data Format v2.0 specification.  Used by some Adaptec RAID BIOSes and some hardware RAID controllers.  Because of high format flexibility, different implementations support different sets of features and have different on-disk metadata layouts.  To provide compatibility, the GEOM RAID class mimics capabilities of the first detected DDF array.  Respecting that, it may support a different number of disks per volume, volumes per array, partitions per disk, etc.  The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks), RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).

              The format supports two options, "BE" and "LE", meaning the big-endian byte order defined by the specification (default) and the little-endian byte order used by some Adaptec controllers.

     Intel    The format used by Intel RAID BIOS.  Supports up to two volumes per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks).  Configurations not supported by Intel RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).

     JMicron  The format used by JMicron RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID10 (4 disks), CONCAT (2+ disks).  Configurations not supported by JMicron RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).

     NVIDIA   The format used by NVIDIA MediaShield RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks).

     Promise  The format used by Promise and AMD/ATI RAID BIOSes.  Supports multiple volumes per array.  Each disk can be split to be used by up to two arbitrary volumes.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by RAID BIOSes, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).

     SiI      The format used by SiliconImage RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by SiliconImage RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).

SUPPORTED RAID LEVELS
     The GEOM RAID class follows a modular design, allowing different RAID levels to be used.  Full support for the following RAID levels is currently implemented: RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT.  The following RAID levels are supported as read-only for volumes in optimal state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF.

RAID LEVEL MIGRATION
     The GEOM RAID class has no support for RAID level migration, allowed by some metadata formats.  If you started migration using BIOS or in some other way, make sure to complete it there.  Do not run GEOM RAID class on migrating volumes under pain of possible data corruption!

2TiB BARRIERS
     NVIDIA metadata format does not support volumes above 2TiB.

SYSCTL VARIABLES
     The following sysctl(8) variables can be used to control the behavior of the RAID GEOM class.

     kern.geom.raid.aggressive_spare: 0
             Use any disks without metadata connected to controllers of the vendor matching the volume metadata format as spare.  Use it with much care to not lose data if connecting an unrelated disk!

     kern.geom.raid.clean_time: 5
             Mark volume as clean when idle for the specified number of seconds.

     kern.geom.raid.debug: 0
             Debug level of the RAID GEOM class.

     kern.geom.raid.enable: 1
             Enable on-disk metadata taste.

     kern.geom.raid.idle_threshold: 1000000
             Time in microseconds to consider a volume idle for rebuild purposes.

     kern.geom.raid.name_format: 0
             Providers name format: 0 -- raid/r{num}, 1 -- raid/{label}.

     kern.geom.raid.read_err_thresh: 10
             Number of read errors equated to disk failure.  Write errors are always considered as disk failures.

     kern.geom.raid.start_timeout: 30
             Time to wait for missing array components on startup.

     kern.geom.raid.X.enable: 1
             Enable taste for specific metadata or transformation module.

     kern.geom.raid.legacy_aliases: 0
             Enable geom raid emulation of legacy /dev/ar%d devices.  This should aid the upgrade of systems from legacy to modern releases.
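     For illustration only (the chosen variables and values are assumptions, not from the manual), these are queried and set with sysctl(8); some may also need to be set as loader tunables in /boot/loader.conf to take effect at boot:

          sysctl kern.geom.raid.name_format       # show the current provider naming scheme
          sysctl kern.geom.raid.name_format=1     # prefer raid/{label} over raid/r{num}
          sysctl kern.geom.raid.clean_time=10     # mark volumes clean after 10 idle seconds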
EXIT STATUS
     Exit status is 0 on success, and non-zero if the command fails.

SEE ALSO
     geom(4), geom(8), gvinum(8)

HISTORY
     The graid utility appeared in FreeBSD 9.0.

AUTHORS
     Alexander Motin <mav@FreeBSD.org>
     M. Warner Losh <imp@FreeBSD.org>

BSD                                April 4, 2013                                BSD