Posted by tatxo in Solaris on 28 September 2012, 10:05 AM
13 disk raidz2 pool lost

Hi guys, I'd appreciate any help with this; we have lost sensitive data at the company.

One box with a 2-disk mirror and a 3ware controller handling 13 disks in a raidz2 pool. Suddenly the box restarted and then sat at "Reading ZFS config" for hours.

Unplugging disks one by one, we isolated the disk that was preventing the system from restarting, and we executed 'zpool clear -F' as suggested by the 'zpool status' command. After hours of processing we got a console error from the controller and the system hung, so we decided to replace that disk, which took the pool from DEGRADED to FAULTED. After one 'zpool clear' we got the pool back to DEGRADED, but with no access to the data, so we tried to roll back to the previous disks (we never committed any 'zpool replace').
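
For reference, this is roughly the sequence we ran on the original box. It is only a sketch, not the exact invocations; the pool name zsan08rz2 is taken from the import output further down.

Code:
# Rough sketch of what we ran on the original box (illustrative, not verbatim)
zpool status -x                 # reported the pool as DEGRADED and suggested 'zpool clear -F'
zpool clear -F zsan08rz2        # recovery-mode clear, as suggested by zpool status
zpool status zsan08rz2          # re-check the pool state after the clear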

The box kept restarting, freezing and failing to boot, so we decided to plug the original 13 disks into another box with the same hardware.

Now we are trying to import the pool there; after hours of processing and heavy disk activity the box hangs and the import doesn't succeed. This is the output of the 'zpool import' command:

Code:
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        zsan08rz2     DEGRADED
          raidz2-0    DEGRADED
            c10t2d0   FAULTED  corrupted data
            c10t2d0   ONLINE
            c10t5d0   ONLINE
            c10t9d0   ONLINE
            c10t0d0   ONLINE
            c10t1d0   ONLINE
            c10t4d0   ONLINE
            c10t8d0   ONLINE
            c10t12d0  ONLINE
            c10t11d0  ONLINE
            c10t3d0   ONLINE
            c10t7d0   ONLINE
            c10t6d0   ONLINE
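
Based on that output, this is roughly how we are approaching the import on the second box. Only the plain 'zpool import' scan has actually produced the listing above; the other flags are things we are considering and may not be supported on every release.

Code:
# Import attempts on the second box (the flagged variants are assumptions, not confirmed to work here)
zpool import                              # scan for importable pools; produced the listing above
zpool import -f zsan08rz2                 # -f because the pool was last accessed by the other box
zpool import -f -o readonly=on zsan08rz2  # read-only attempt, if this release supports read-only import
zpool import -fFn zsan08rz2               # dry run of recovery mode: report what discarding the last transactions would do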

Any ideas? Note that c10t2d0 is listed twice, and that during the last import attempt we got this error from the controller on the console:

Code:
zsan08 tw: WARNING: tw0: tw_aen_task AEN 0x000a Drive error detected unit=7 port=13

That drive seems to be different from c10t2d0.
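
To correlate the controller's unit/port numbers with the Solaris device names, something like the following might help, assuming the 3ware CLI (tw_cli) is installed on the box; the controller id /c0 is an assumption.

Code:
# Assumes tw_cli is installed; /c0 is the first 3ware controller
tw_cli /c0 show            # list units and ports, including unit 7 / port 13 from the warning
tw_cli /c0/p13 show all    # model, serial and status of the drive on port 13
iostat -En                 # Solaris view: per-device error counters and serials to cross-check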

Suggestions? Thanks!
 

ZFSBOOT(8)                  BSD System Manager's Manual                  ZFSBOOT(8)

NAME
     zfsboot -- bootcode for ZFS on BIOS-based computers

DESCRIPTION
     zfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS
     pool.  zfsboot is installed in two parts on a disk or a partition used by a
     ZFS pool.  The first part, a single-sector starter boot block, is installed
     at the beginning of the disk or partition.  The second part, a main boot
     block, is installed at a special offset within the disk or partition.  Both
     areas are reserved by the ZFS on-disk specification for boot use.  If
     zfsboot is installed in a partition, then that partition should be made
     bootable using appropriate configuration and boot blocks described in
     boot(8).

BOOTING
     The zfsboot boot process is very similar to that of gptzfsboot(8).  One
     significant difference is that zfsboot does not currently support the GPT
     partitioning scheme.  Thus only whole disks and MBR partitions,
     traditionally referred to as slices, are probed for ZFS disk labels.  See
     the BUGS section in gptzfsboot(8) for some limitations of the MBR scheme
     support.

USAGE
     zfsboot supports all the same prompt and configuration file arguments as
     gptzfsboot(8).

FILES
     /boot/zfsboot     boot code binary
     /boot.config      parameters for the boot block (optional)
     /boot/config      alternative parameters for the boot block (optional)

EXAMPLES
     zfsboot is typically installed using dd(1).  To install zfsboot on the
     ada0 drive:

           dd if=/boot/zfsboot of=/dev/ada0 count=1
           dd if=/boot/zfsboot of=/dev/ada0 iseek=1 oseek=1024

     If the drive is currently in use, the GEOM safety will prevent writes and
     must be disabled before running the above commands:

           sysctl kern.geom.debugflags=0x10

     zfsboot can also be installed in an MBR slice:

           gpart create -s mbr ada0
           gpart add -t freebsd ada0
           gpart create -s BSD ada0s1
           gpart bootcode -b /boot/boot0 ada0
           gpart set -a active -i 1 ada0
           dd if=/boot/zfsboot of=/dev/ada0s1 count=1
           dd if=/boot/zfsboot of=/dev/ada0s1 iseek=1 oseek=1024

     Note that commands to create and populate a pool are not shown in the
     example above.

SEE ALSO
     dd(1), boot.config(5), boot(8), gptzfsboot(8), loader(8), zfsloader(8),
     zpool(8)

HISTORY
     zfsboot appeared in FreeBSD 7.3.

AUTHORS
     This manual page was written by Andriy Gapon <avg@FreeBSD.org>.

BUGS
     Installing zfsboot with dd(1) is a hack.  ZFS needs a command to properly
     install zfsboot onto a ZFS-controlled disk or partition.

BSD                             September 15, 2014                             BSD
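
As the EXAMPLES section notes, the commands to create and populate a pool are not shown there. A minimal sketch of creating a pool on the slice from that example follows; the partition type, the resulting device ada0s1a, and the pool name zroot are illustrative assumptions, not part of zfsboot(8).

Code:
# Sketch only (names are illustrative): add a ZFS partition inside the BSD label,
# create a pool on it, and mark the pool's root dataset as the boot filesystem.
gpart add -t freebsd-zfs ada0s1     # creates ada0s1a inside the BSD label
zpool create zroot ada0s1a          # pool on the new partition
zpool set bootfs=zroot zroot        # tell the boot blocks which dataset to boot from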