Solaris: zfs raidz2 - insufficient replicas
Post 302584000 by skk on Thursday 22nd of December 2011, 07:36 AM
I thought I was OK, but zpool scrub hangs forever at 20% across multiple cold boots, and importing from the Oracle Solaris 11 live media hangs forever too. What do I do now? Older versions of OpenSolaris will not import the pool anymore...
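For context, the import that hangs is the completely ordinary command (tank stands in for my pool name):

    zpool import tank

It just sits there indefinitely instead of failing with an error.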

---------- Post updated at 07:36 AM ---------- Previous update was at 12:48 AM ----------

Well, it's 4 AM and now I am getting mad. I think this whole bloody mess was caused by Oracle's 'great new Solaris 11' package. My advice: don't touch it.

To recap: I wanted to try the newest Solaris release for my super critical file server, so I downloaded from Oracle what I thought was the right ISO. I booted it, and just as the boot menu came up I got a phone call. When I came back, the installer had automatically installed a new version of Solaris over one of the drives in the zpool (BAD #1).

Since I have a raidz2 I wasn't too worried (at first), and I booted into my original 2009 OpenSolaris. However, I got the errors shown in the posts above. I exported and then could not re-import my pool. Oracle had done something to that drive that broke my entire pool, even though it is a raidz2 and should tolerate two simultaneous drive failures (BAD #2).

Since I could do nothing with my pool on my old OS, I tried on the new Oracle Solaris, and could indeed import my pool, in a degraded state because of the one overwritten drive. Fine. I wanted to scrub everything first (I don't know if this was wise or not), so I ran zpool scrub, which eventually hung forever at 11%. All access to the drive was similarly hung. Rebooting the machine did not change this situation, which seemed increasingly dire (BAD #3).
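For reference, kicking off and watching a scrub is just (tank again a placeholder):

    zpool scrub tank        # start scrubbing the pool
    zpool status tank       # reports 'scrub in progress' with a percentage

zpool status is where I could watch it sit at the same percentage for hours.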

I finally got out of this problem by unplugging drives to fault the pool and rebooting in single-user mode. Eventually I was able to stop the scrub with the "zpool scrub -s" command in single-user mode. And when I rebooted, I could access my pool again. My first priority at this point was to back up all data immediately. I began copying off my most important stuff, but unfortunately, before I could copy off even a fraction, the file system hung again (BAD #4).
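For anyone who lands in the same spot, the escape route was roughly this (placeholder pool name again):

    # boot to single-user / maintenance mode, then:
    zpool scrub -s tank     # -s stops the in-progress scrub

The scrub state is stored in the pool itself, which seems to be why it resumed hanging across reboots until it was explicitly stopped.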

Googling around, I found that most causes of hanging zpool commands are related to hardware failure, so, going on a hunch, I figured Oracle had phased out or broken the drivers for my disks. I still could not import my pool in the old OpenSolaris OS because of whatever the Oracle install wrote on that drive. So I booted Oracle Solaris in single-user mode and ran "zpool offline <pool> <drive>", and it worked! Then I rebooted into good old OpenSolaris and imported my pool. It worked!!!!
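Spelled out, the sequence that worked (the device name below is just an example; use whatever zpool status shows for the overwritten disk):

    zpool offline tank c5t1d0   # take the trashed drive out of service
    zpool export tank
    # reboot into the old OpenSolaris, then:
    zpool import tank

With the bad drive offlined, the raidz2 happily ran degraded on the remaining disks.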

So at this point it is now more like 5 AM and I have backed up most of my critical data, way more than I could before, anyway. It appears that I was correct and that the drivers for either my motherboard or my hard drive controller card were broken by the Oracle release in a way that let it silently trash my zpool. I have a SIIG two-port SATA controller card and an MSI N1996 motherboard; I am not sure which one the problem is with, but whichever it is, it works fine in OpenSolaris 2009 and earlier versions.
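For the backup itself, a minimal sketch of what I am doing, assuming a dataset layout like tank/data and a second machine with a pool named backup (all names are placeholders):

    zfs snapshot -r tank@rescue
    zfs send -R tank@rescue | ssh backuphost zfs receive -d backup

Though with the pool prone to hanging, copying the most critical files off first with plain scp or rsync is probably the safer order of operations, since a full send can die partway through.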

I just want to warn people who are not real Solaris experts away from even trying this Oracle package. Personally, I am migrating to FreeBSD as soon as I can...
 

GPTZFSBOOT(8)						    BSD System Manager's Manual 					     GPTZFSBOOT(8)

NAME
     gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers

DESCRIPTION
     gptzfsboot is used on BIOS-based computers to boot from a filesystem in a ZFS pool.  gptzfsboot is installed in a freebsd-boot partition of a GPT-partitioned disk with gpart(8).

IMPLEMENTATION NOTES
     The GPT standard allows a variable number of partitions, but gptzfsboot only boots from tables with 128 partitions or less.

BOOTING
     gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible hard disks or partitions on them.  gptzfsboot looks for ZFS device labels on all visible disks and in discovered supported partitions for all supported partition scheme types.  The search starts with the disk from which gptzfsboot itself was loaded.  Other disks are probed in BIOS-defined order.  After a disk is probed and gptzfsboot determines that the whole disk is not a ZFS pool member, the individual partitions are probed in their partition table order.  Currently the GPT and MBR partition schemes are supported.  With the GPT scheme, only partitions of type freebsd-zfs are probed.

     The first pool seen during probing is used as the default boot pool.  The filesystem specified by the bootfs property of the pool is used as the default boot filesystem.  If the bootfs property is not set, then the root filesystem of the pool is used as the default.

     zfsloader(8) is loaded from the boot filesystem.  If /boot.config or /boot/config is present in the boot filesystem, boot options are read from it in the same way as boot(8).

     The ZFS GUIDs of the first successfully probed device and the first detected pool are made available to zfsloader(8) in the vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.
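     For illustration (the pool and dataset names below are placeholders), the bootfs property referenced above can be set from a running system with zpool(8):

	  zpool set bootfs=zroot/ROOT/default zroot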
USAGE
     Normally gptzfsboot will boot in fully automatic mode.  However, like boot(8), it is possible to interrupt the automatic boot process and interact with gptzfsboot through a prompt.  gptzfsboot accepts all the options that boot(8) supports.

     The filesystem specification and the path to zfsloader(8) are different from boot(8).  The format is

	  [zfs:pool/filesystem:][/path/to/loader]

     Both the filesystem and the path can be specified.  If only a path is specified, then the default filesystem is used.  If only a pool and filesystem are specified, then /boot/zfsloader is used as the path.

     Additionally, the status command can be used to query information about discovered pools.  The output format is similar to that of zpool status (see zpool(8)).

     The configured or automatically determined ZFS boot filesystem is stored in the zfsloader(8) loaddev variable, and is also set as the initial value of the currdev variable.
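     For illustration (pool and filesystem names are placeholders), booting a specific filesystem with the default loader path could be entered at the prompt as:

	  zfs:zroot/ROOT/default:/boot/zfsloader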
FILES
     /boot/gptzfsboot    boot code binary
     /boot.config        parameters for the boot block (optional)
     /boot/config        alternative parameters for the boot block (optional)

EXAMPLES
     gptzfsboot is typically installed in combination with a ``protective MBR'' (see gpart(8)).  To install gptzfsboot on the ada0 drive:

	  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

     gptzfsboot can also be installed without the PMBR:

	  gpart bootcode -p /boot/gptzfsboot -i 1 ada0

SEE ALSO
     boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)

HISTORY
     gptzfsboot appeared in FreeBSD 7.3.

AUTHORS
     This manual page was written by Andriy Gapon <avg@FreeBSD.org>.

BUGS
     gptzfsboot looks for ZFS meta-data only in MBR partitions (known on FreeBSD as slices).  It does not look into BSD disklabel(8) partitions that are traditionally called partitions.  If a disklabel partition happens to be placed so that ZFS meta-data can be found at the fixed offsets relative to a slice, then gptzfsboot will recognize the partition as a part of a ZFS pool, but this is not guaranteed to happen.

BSD                             September 15, 2014                             BSD