MDADM Failure - where it came from? Post 302926370 by achenle on Sunday 23rd of November 2014 06:07:49 PM
Looks like sdh2 failed, and you got a NULL pointer dereference immediately afterwards. So much for robustness in the event of a disk failure...

Check all your disks.
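A minimal sketch of how you might check them, assuming the array is /dev/md0 and smartmontools is installed (the device names here are assumptions; adjust them to your layout):

Code:
# Overall array state and the kernel's view of the failure
cat /proc/mdstat
mdadm --detail /dev/md0
# Examine the RAID superblock on the kicked member
mdadm --examine /dev/sdh2
# SMART health summary for every member disk
for d in /dev/sd[a-h]; do
    echo "== $d =="
    smartctl -H "$d"
done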
 

10 More Discussions You Might Find Interesting

1. Programming

ld failure

Hi, I am using gmake to compile a C program with a makefile. The makefile runs ld. I get the following error: jsh1035c:/users/egate453/admegate/kapil/samples $ gmake -e -f GNUmakefile queue_c gmake -f ./GNUmakefile queue_c in_objdir=1 build_root=/users/egate453/admegate/kapil/samples... (2 Replies)
Discussion started by: handak9

2. Linux

mdadm - Swapping 500GB disks for 1TB

Hi, I have a three-disk RAID 5 with 500GB disks. This is close to being full, and whilst I could just add another disk and rebuild to add another 500GB, I would prefer to replace the disks with 1TB ones. So I have some questions. Can I replace these disks one by one with bigger disks? I... (1 Reply)
Discussion started by: snoop2048
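A hedged sketch of the one-at-a-time swap being asked about, assuming the array is /dev/md0 and the member being replaced is /dev/sdb1 (both names are assumptions):

Code:
# Repeat for each member, waiting for the resync to finish in between
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically swap in the 1TB disk and partition it, then:
mdadm /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat              # wait for the rebuild to complete
# Once every member is 1TB, grow the array and then the filesystem
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0                  # assumes an ext2/3/4 filesystem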

3. Virtualization and Cloud Computing

is mdadm --incremental --rebuild --run --scan destructive?

Hello Unix Community: My task is to figure out how to add a 20G volume to an existing EBS array (RAID0) at AWS. I haven't been told that growing the existing volumes isn't an option, or that adding another, larger volume to the existing array is the way to go. The client's existing data-store is... (0 Replies)
Discussion started by: Habitual

4. Emergency UNIX and Linux Support

mdadm unable to fail a resyncing drive?

Hi All, I have a RAID 5 array consisting of 4 drives that had a partial drive failure in one of the drives. Rebooting shows the faulty drive as background rebuilding, and mdadm /dev/ARRAYID shows three drives as in sync with the fourth drive as a spare rebuilding. However, the array won't come... (9 Replies)
Discussion started by: Bashingaway
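For reference, manually failing and removing a member usually looks like the sketch below (array and device names are assumptions; a member that is mid-rebuild may refuse until the array is stopped):

Code:
mdadm /dev/md0 --fail /dev/sde1
mdadm /dev/md0 --remove /dev/sde1
mdadm --detail /dev/md0     # confirm the slot now shows as removed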

5. UNIX for Advanced & Expert Users

mdadm question

Hello, I have 4 drives (500G each) in a RAID 10. I had a power failure, and this is the result: cat /proc/mdstat Personalities : md126 : inactive sdb sdc sdd sde 1953536528 blocks super external:-md127/0 md127 : inactive sdd(S) sde(S) sdb(S) sdc(S) 9028 blocks super... (3 Replies)
Discussion started by: rmokros
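Inactive md126/md127 pairs like this are typical of firmware (IMSM) RAID, where one md device is a metadata container. A hedged sketch of reassembly after a power loss (device names are assumptions, and --force should be a last resort):

Code:
mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm --assemble --scan
# only if event counts diverged during the power loss:
# mdadm --assemble --force /dev/md126 /dev/sd[b-e]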

6. UNIX for Advanced & Expert Users

mdadm container! How does it work

Hi everyone, I am not sure that I understand how mdadm --create /dev/md0 --level=container works. A device called /dev/md0 appears in /proc/mdstat, but I am not sure how to use that device. I have 2 blank drives with one 500GB partition on each. I would like to set up mirroring, but not in the... (0 Replies)
Discussion started by: hytron
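A container only holds metadata; the usable array is created inside it with a second mdadm --create. A minimal sketch using IMSM metadata (device names and the metadata format are assumptions; for a plain mirror, --level=1 without a container also works):

Code:
# Create the metadata container from the two blank drives
mdadm --create /dev/md0 --level=container --metadata=imsm --raid-devices=2 /dev/sdb /dev/sdc
# Create the actual RAID1 volume inside the container
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0
cat /proc/mdstat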

7. Red Hat

mdadm for / and /boot

Had this RHEL 5 installation with /dev/sda1 and /dev/sda2 running. Created two more partitions, /dev/sdj1 and /dev/sdj2, the same size as the partitions on /dev/sda, trying to use mdadm to create RAID1. I cannot even do it in "rescue" mode, and I wonder if it can be done. It kept... (2 Replies)
Discussion started by: ppchu99
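One common migration path is to build degraded mirrors on the new disk first, copy the system across, and only then add the original partitions. A hedged sketch (device names follow the post; --metadata=0.90 is the classic choice here because the old GRUB in RHEL 5 expects the superblock at the end of the /boot partition):

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 missing /dev/sdj1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdj2
# copy data over, fix /etc/fstab and grub.conf, reinstall grub, then:
# mdadm /dev/md0 --add /dev/sda1
# mdadm /dev/md1 --add /dev/sda2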

8. UNIX for Dummies Questions & Answers

boot up failure unix sco after power failure

Hi, the power went out. The next day UNIX SCO won't boot up, error code 303. Any help appreciated as we are clueless. (11 Replies)
Discussion started by: fredthayer

9. UNIX for Advanced & Expert Users

USB RAID 5 Problem on Joli OS 1.2 (Ubuntu) using mdadm

Hi All, I have been trying to create a USB RAID 5 using the mdadm tool on Joli OS 1.2 (Ubuntu), but with no luck. I cannot even get past the creation of the array device (/dev/md0) and superblock. I am using 3 USB keys (two 16.4GB Kingston and one 16GB SanDisk). My steps are: ... (5 Replies)
Discussion started by: powelltallen
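For comparison, creating a three-member RAID5 on partitions normally looks like this (a sketch; device names are assumptions, and with mismatched key sizes the array is limited by its smallest member):

Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat        # watch the initial parity sync
mkfs.ext4 /dev/md0      # put a filesystem on it once it is created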

10. UNIX for Advanced & Expert Users

How to fix mistake on raid: mdadm create instead of assemble?

Hi guys, I'm new to RAID, although I've had a server running RAID5 for a while. It was delivered preinstalled like this and I never really wondered how to monitor and maintain it. This quick introduction is just to let you understand why I'm such an idiot asking such a silly question. Now what... (0 Replies)
Discussion started by: chebarbudo
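For anyone who lands here: the non-destructive counterpart is --assemble, which reads the existing superblocks instead of rewriting them as --create does. A hedged sketch (array and device names are assumptions):

Code:
# Inspect what the superblocks say before doing anything else
mdadm --examine /dev/sdb1
# Bring an existing array back up
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# or let mdadm find the members itself:
mdadm --assemble --scan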
vxdarestore(1M) 														   vxdarestore(1M)

NAME
       vxdarestore - restore simple or nopriv disk access records

SYNOPSIS
       /etc/vx/bin/vxdarestore

DESCRIPTION
       The vxdarestore utility restores persistent simple or nopriv disk access (da) records that have failed due to changing the naming scheme used by vxconfigd from c#t#d#-based to enclosure-based. The use of vxdarestore is required if you use the vxdiskadm command to change from the c#t#d#-based to the enclosure-based naming scheme: as a result of the change, some existing persistent simple or nopriv disks go into the "error" state and the VxVM objects on those disks fail. vxdarestore may be used to restore the disk access records that have failed; the utility also recovers the VxVM objects on the failed disk access records.

       Note: vxdarestore may only be run when vxconfigd is using the enclosure-based naming scheme.

       Note: You can use the command vxdisk list da_name to discover whether a disk access record is persistent. The record is non-persistent if the flags field includes the flag autoconfig; otherwise it is persistent.

       The following sections describe how to use the vxdarestore utility under various conditions.

   Persistent Simple/Nopriv Disks in the rootdg Disk Group
       If all persistent simple or nopriv disks in the rootdg disk group go into the "error" state, use the following procedure:

       1. Use the vxdiskadm command to change back to the c#t#d#-based naming scheme.

       2. Either shut down and reboot the host, or run the following command:

          vxconfigd -kr reset

       3. If you want to use the enclosure-based naming scheme, add a non-persistent simple disk to the rootdg disk group, use vxdiskadm to change to the enclosure-based naming scheme, and then run vxdarestore.

       Note: If not all the disks in rootdg go into the error state, simply running vxdarestore restores those disks in the error state and the objects that they contain.

   Persistent Simple/Nopriv Disks in Disk Groups other than rootdg
       If all disk access records in an imported disk group consist only of persistent simple and/or nopriv disks, the disk group is put in the "online dgdisabled" state after changing to the enclosure-based naming scheme. For such disk groups, perform the following steps:

       1. Deport the disk group using the following command:

          vxdg deport diskgroup

       2. Run the vxdarestore command.

       3. Re-import the disk group using the following command:

          vxdg import diskgroup
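       For illustration, a minimal sketch of the persistence check and the non-rootdg procedure described above (the disk name disk01 and disk group name mydg are assumptions):

          # a record is non-persistent if its flags include "autoconfig"
          vxdisk list disk01 | grep '^flags'

          # restore failed records in a disk group other than rootdg
          vxdg deport mydg
          /etc/vx/bin/vxdarestore
          vxdg import mydg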
NOTES
       Use of the vxdarestore command is not required in the following cases:

       o If there are no persistent simple or nopriv disk access records on an HP-UX host.

       o If all devices on which simple or nopriv disks are present are not automatically configurable by VxVM. For example, third-party drivers export devices that are not automatically configured by VxVM. VxVM objects on simple/nopriv disks created from such devices are not affected by switching to the enclosure-based naming scheme.

       The vxdarestore command does not handle the following cases:

       o If the enclosure-based naming scheme is in use and the vxdmpadm command is used to change the name of an enclosure, the disk access names of all devices in that enclosure are also changed. As a result, any persistent simple/nopriv disks in the enclosure are put into the "error" state, and VxVM objects configured on those disks fail.

       o If the enclosure-based naming scheme is in use and the system is rebooted after making hardware configuration changes to the host. This may change the disk access names and cause some persistent simple/nopriv disks to be put into the "error" state.

       o If the enclosure-based naming scheme is in use, the device discovery layer claims some disks under the JBOD category, and the vxddladm rmjbod command is used to remove support for the JBOD category for disks from a particular vendor. As a result of the consequent name change, disks with persistent disk access records are put into the "error" state, and VxVM objects configured on those disks fail.
EXIT CODES
       A zero exit status is returned if the operation is successful or if no actions were necessary. An exit status of 1 is returned if vxdarestore is run while vxconfigd is using the c#t#d# naming scheme. An exit status of 2 is returned if vxconfigd is not running.
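       A small sketch of scripting against these documented exit codes:

          /etc/vx/bin/vxdarestore
          case $? in
              0) echo "records restored, or nothing needed doing" ;;
              1) echo "vxconfigd is still using the c#t#d# naming scheme" ;;
              2) echo "vxconfigd is not running" ;;
          esac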
SEE ALSO
       vxconfigd(1M), vxdg(1M), vxdisk(1M), vxdiskadm(1M), vxdmpadm(1M), vxintro(1M), vxreattach(1M), vxrecover(1M)

VxVM 5.0.31.1                     24 Mar 2008                     vxdarestore(1M)