Operating Systems > Solaris > Exporting zpool sitting on a different disk partition
Post 302993296 by os2mac on Wednesday 8th of March 2017 01:36:58 PM

zpool import -f

Quote:
Originally Posted by solaris_1977
Hello,
I need some help recovering a ZFS pool. Here is the scenario. There are two disks:
c0t0d0 - This is the good disk. I cloned it from another server and booted this server from it.
c0t1d0 - This is the original disk of this server, which has errors. I am able to mount it on /mnt, so that I can copy the required data from it to c0t0d0.

The pool below is not imported yet; it is the copy from the other server that I cloned from.
Code:
# zpool import
  pool: zplctpool
    id: 11623878967666942759
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        zplctpool          DEGRADED
          mirror      DEGRADED
            c0t0d0s7  FAULTED  corrupted data
            c0t0d0s7  ONLINE

I do not want this zplctpool; it can be deleted. Instead, I want the zplctpool that is sitting on c0t1d0s7.

Regards
Assuming this is NOT your root pool (rpool): simply export the existing zpool on the current c0t0d0s7, remove that disk, and then, with the new disk installed, run
Code:
zpool import -f zplctpool
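Because both the unwanted clone and the wanted pool carry the same name (zplctpool), it can be safer to select the pool by the numeric id that `zpool import` prints rather than by name. A minimal sketch of the whole sequence, assuming the clone's pool is currently imported and `<pool_id>` stands for the id of the copy on c0t1d0s7 (a placeholder, not a value from this thread):

```shell
# Detach the unwanted clone's pool first (never do this to an active root pool).
zpool export zplctpool

# List importable pools; each entry shows its unique numeric id.
zpool import

# Force-import the wanted copy by id, overriding the
# "last accessed by another system" (ZFS-8000-EY) check.
zpool import -f <pool_id>
```

`zpool import -f <pool_id> newname` would also let you import the pool under a different name, should both copies ever need to coexist on the same host.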

Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.