Exporting zpool sitting on different disk partition
Operating Systems / Solaris, posted by solaris_1977 on 03-02-2017

Hello,
I need some help in recovering a ZFS pool. Here is the scenario. There are two disks:
c0t0d0 - This is the good disk. I cloned it from another server and booted this server from it.
c0t1d0 - This is the original disk of this server, and it has errors. I am able to mount it on /mnt so that I can copy the required data from it to c0t0d0.

The pool below is not imported yet; it is the copy from the other server that I cloned the disk from.
Code:
# zpool import
  pool: zplctpool
    id: 11623878967666942759
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        zplctpool     DEGRADED
          mirror      DEGRADED
            c0t0d0s7  FAULTED  corrupted data
            c0t0d0s7  ONLINE
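
Because the cloned copy and this server's original pool both carry the name zplctpool, zpool import can only tell them apart by the numeric id shown in the listing, so the id rather than the name has to be passed to import. A minimal sketch of that mechanism; the temporary name zplct_clone is illustrative, not from the original output:

Code:
# Import a specific pool copy by its numeric id; the optional trailing
# argument gives it a new name on import. -f is needed because the pool
# was last accessed by another system. "zplct_clone" is illustrative.
zpool import -f 11623878967666942759 zplct_clone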

I do not want this zplctpool; it can be deleted. Instead, I want the zplctpool that is sitting on c0t1d0s7.
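
A hedged sketch of one way to get there, assuming the cloned copy really is disposable and that, once it is destroyed, the pool on c0t1d0s7 shows up under the zplctpool name on a rescan:

Code:
# Destroy the unwanted clone copy (imported above as zplct_clone)
# so it no longer appears in the import listing.
zpool destroy zplct_clone

# Rescan; the remaining zplctpool should be the one on c0t1d0s7.
zpool import

# Import it: -f because it was last accessed by another system,
# -R /a mounts it under an alternate root so its mountpoints
# do not clash with the running system.
zpool import -f -R /a zplctpool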

Regards
 
