Hi guys, I'd appreciate any help with this; we have lost sensitive data at the company.
We have one box with a 2-disk mirror for the system and a 3ware controller handling 13 disks in a raidz2 pool. Suddenly the box restarted and sat at "Reading ZFS config" for hours.
Unplugging the disks one by one, we isolated the disk that was preventing the system from booting, and we ran 'zpool clear -F' as suggested by the 'zpool status' command. Hours into the process we got a console error from the controller and the system hung, so we decided to swap that disk, which took the pool from DEGRADED to FAULTED. After a 'zpool clear' the pool went back to DEGRADED, but with no access to the data, so we tried to roll back to the previous disks. (We never ran a 'zpool replace'.)
The box kept restarting, freezing and failing to boot, so we decided to plug the original 13 disks into another box with the same hardware.
Now we are trying to import the pool there. After hours of processing and heavy disk activity, the box hangs and the import does not succeed. This is the output of the 'zpool import' command:
Code:
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        zsan08rz2      DEGRADED
          raidz2-0     DEGRADED
            c10t2d0    FAULTED  corrupted data
            c10t2d0    ONLINE
            c10t5d0    ONLINE
            c10t9d0    ONLINE
            c10t0d0    ONLINE
            c10t1d0    ONLINE
            c10t4d0    ONLINE
            c10t8d0    ONLINE
            c10t12d0   ONLINE
            c10t11d0   ONLINE
            c10t3d0    ONLINE
            c10t7d0    ONLINE
            c10t6d0    ONLINE
Any ideas? Note that c10t2d0 is listed twice, and note that during the last import attempt we again got an error from the controller on the console.
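To summarize, this is roughly the sequence of commands we ran (pool name zsan08rz2 as shown above; no 'zpool replace' was ever issued):

```shell
# On the original box, after isolating the disk that blocked booting:
zpool clear -F zsan08rz2    # recovery-mode clear, as 'zpool status' suggested
                            # -> controller error on console, system hung

# After physically swapping the suspect disk (pool went DEGRADED -> FAULTED):
zpool clear zsan08rz2       # pool back to DEGRADED, but data inaccessible

# On the second box, with the original 13 disks plugged back in:
zpool import -f zsan08rz2   # hours of heavy disk activity, then the box hangs
```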