[SOLVED] Error with trying to resize software raid device
Hello,
I have a failed RAID array whose member drives are different sizes; the partitions they share are, of course, the same size. The smaller drive is the one that failed. Here is the layout of the working drive (a picture is worth 1000 words).
As you can see, I would like to grow sdb6 into the unallocated area.
This is my /proc/mdstat file
Code:
Personalities : [raid1]
md0 : active raid1 sdb6[1]
717040448 blocks super 1.2 [2/1] [_U]
unused devices: <none>
This is my fdisk -l
Code:
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00011318
Device Boot Start End Blocks Id System
/dev/sdb1 2048 41945087 20971520 83 Linux
/dev/sdb2 41945088 1953523711 955789312 5 Extended
/dev/sdb5 41947136 54530047 6291456 82 Linux swap / Solaris
/dev/sdb6 519178240 1953523711 717172736 fd Linux raid autodetect
Disk /dev/md0: 734.2 GB, 734249418752 bytes
2 heads, 4 sectors/track, 179260112 cylinders, total 1434080896 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
And this is the saved error from gparted when I try to resize that partition.
Code:
GParted 0.12.1 --enable-libparted-dmraid
Libparted 2.3
Move /dev/sdb6 to the left and grow it from 683.95 GiB to 905.50 GiB 00:00:00 ( ERROR )
calibrate /dev/sdb6 00:00:00 ( SUCCESS )
path: /dev/sdb6
start: 519,178,240
end: 1,953,523,711
size: 1,434,345,472 (683.95 GiB)
check file system on /dev/sdb6 for errors and (if possible) fix them 00:00:00 ( ERROR )
e2fsck -f -y -v /dev/sdb6
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
e2fsck: Group descriptors look bad... trying backup blocks...
e2fsck: Bad magic number in super-block while using the backup blocks
e2fsck: going back to original superblock
Linux_HomeR2: ********** WARNING: Filesystem still has errors **********
e2fsck 1.42.5 (29-Jul-2012)
e2fsck: A block group is missing an inode table while checking ext3 journal for Linux_HomeR2
========================================
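For anyone hitting the same "corrupt group descriptor" / "bad magic number" errors: e2fsck can be pointed at a backup superblock explicitly. A hedged sketch, read-only except for the final fsck (the location 32768 is only a typical value for a 4 KiB-block filesystem; mke2fs -n prints the real backup locations for your filesystem without writing anything, provided you give it the same block size the filesystem was made with):

```shell
# Dry run: -n makes mke2fs only PRINT what it would do, including the
# backup superblock locations. It does not write to the partition.
mke2fs -n /dev/sdb6

# Then hand e2fsck one of the listed backup superblocks, e.g. 32768:
e2fsck -b 32768 /dev/sdb6
```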
I had also gotten an error that it couldn't fail the drive because the device or resource was busy. However, the device wasn't in use at all, so I'm not sure why it said that.
I tried the resize after successfully doing:
Code:
mdadm -S /dev/md0
This is when I tried the resize using gparted with the above error.
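In case anyone needs to get back to a running array after the mdadm -S above: a one-member RAID1 can normally be reassembled and started even though it is degraded. A minimal sketch, assuming the device names from my setup:

```shell
# Reassemble the array from its single surviving member; --run starts
# it even though only 1 of the 2 mirrors is present.
mdadm --assemble --run /dev/md0 /dev/sdb6

# Confirm it came back up.
cat /proc/mdstat
```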
It turns out that the directions I was following left out steps. For anyone reading this thread: remove the offending drive from the array by failing it first with:
Code:
# mdadm /dev/md0 --fail /dev/sda6
where /dev/md0 and /dev/sda6 need to be changed to match your devices.
Then remove it with:
Code:
# mdadm /dev/md0 --remove /dev/sda6
again changing the devices to match your system.
What I'm sure the directions leave out is that the partition then needs to be erased and recreated at the larger size. Only after that is it added back to the array:
Code:
# mdadm /dev/md0 --add /dev/sda6
Let it resync, then repeat on the other array member. I was trying to resize the partition in place and keep the data, which was a dumb idea.
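Putting the whole procedure together as a sketch (device names are from my system, so adjust them to yours; the parted commands are just one way to redo the partition, and the start sector shown is the one from my fdisk output above):

```shell
# 1. Fail and remove the member whose partition is too small.
mdadm /dev/md0 --fail /dev/sda6
mdadm /dev/md0 --remove /dev/sda6

# 2. Delete the partition and recreate it at the larger size,
#    keeping the type "Linux raid autodetect" (fd).
parted /dev/sda rm 6
parted /dev/sda mkpart logical 519178240s 100%
parted /dev/sda set 6 raid on

# 3. Add it back to the array and let it resync.
mdadm /dev/md0 --add /dev/sda6
watch cat /proc/mdstat

# 4. Repeat steps 1-3 for the other member, then grow the array
#    into the new space and resize the filesystem on top of it.
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```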
In the end I didn't get to follow this procedure, because my array wouldn't resync due to an unrecoverable I/O error from a bad sector. So I backed up the data and copied it onto a brand-new array that I created with the larger partition from the start.