Operating Systems > Solaris: Solaris not recognizing RAID 5 disks
Post 302501704 by Celtic_Monkey on Friday 4th of March 2011, 10:16:11 AM
Quote:
Originally Posted by goose25
The format command just shows the two drives in my RAID 1 config.
Goose - there are others here better placed to help you. At the moment I don't have a system on which to double-check what I'm saying, so take this (possibly incorrect) information in the spirit it is given.

Q. Is it hardware or software RAID?
Q. How exactly did you create the RAID 1 array?
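One way to answer the two questions above: Solaris exposes controller-based hardware RAID through raidctl and SVM (DiskSuite) software RAID through metastat. The sketch below parses an assumed sample of `raidctl -l` output just to illustrate what to look for; on the real host you would simply run the commands themselves:

```shell
# On the affected host, the distinction is usually visible via:
#   raidctl -l     # hardware RAID volumes managed by the controller
#   metastat -p    # SVM (DiskSuite) software metadevices
#
# The sample output below is an assumption for illustration only.
sample_raidctl_output='Controller: 1
Volume:c1t0d0
Disk: 0.0.0
Disk: 0.1.0'

# Pull the hardware RAID volume name out of the assumed output format.
volume=$(printf '%s\n' "$sample_raidctl_output" | awk -F: '/^Volume/ {print $2}')
echo "$volume"    # the volume name, e.g. c1t0d0
```

If `raidctl -l` lists a volume like this, the mirror is hardware; if `metastat` reports metadevices (d10, d20, ...), it is SVM software RAID.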

Normally when you mirror disks under Solaris, the second device is hidden from view and all access goes through the first device.

That would be consistent with you seeing two devices: one RAID 1 device (its second member hidden) and one other RAID device.

Post the output of the format command.
Post the prtvtoc output for the two devices that format finds.
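A sketch of how the requested output could be collected in one go. The device names are placeholders (not taken from your system); substitute whatever format actually lists:

```shell
# Sketch: gather the information requested above on the Solaris host.
# c1t0d0s2 / c1t1d0s2 are placeholder device names, assumed for illustration.
gather_raid_info() {
  echo '--- format ---'
  echo | format                      # piping echo makes format print the disk list and exit
  for disk in c1t0d0s2 c1t1d0s2; do
    echo "--- prtvtoc /dev/rdsk/$disk ---"
    prtvtoc "/dev/rdsk/$disk"        # print the VTOC (partition table) of each disk
  done
}
# On the affected system: gather_raid_info > /tmp/raid-info.txt 2>&1
```

Slice 2 (s2) is used because it conventionally covers the whole disk, so prtvtoc can read the full label.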
 
