Trying to create ZFS slice on rpool
# 1  
Old 01-10-2018
Trying to create ZFS slice on rpool

I have a 240 GB disk holding rpool. I installed Solaris 11.3 to a 110 GB partition, which leaves about 130 GB unallocated. I want to use that additional space as a temporary area shared between Solaris and Linux. The unallocated space had no /dev/dsk/c2t4... entry, so I used GParted to create an empty, unformatted partition. GParted shows the unformatted space as c2t4d0p3.

Now I try to turn that unformatted space into a ZFS slice, but it doesn't work. Does anyone have a clue why? Is it because this slice is on the same disk as rpool, so I cannot name the pool OCZVERTEX3_240GB because the correct name is "rpool"?
Code:
# zpool create -n -o version=28 -O version=5 OCZVERTEX3_240GB c2t4d0p3
Unable to build pool from specified devices: cannot open '/dev/dsk/c2t4d0p3': I/O error
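One way to narrow down this kind of failure is to test whether the raw device can be read at all before handing it to zpool. A minimal sketch; the check_readable helper is mine, not a Solaris tool, and the device path is just the one from the error above:

```shell
# Print "readable" if the first 512-byte sector of the given
# device (or file) can be read, "unreadable" otherwise.
check_readable() {
  if dd if="$1" of=/dev/null bs=512 count=1 2>/dev/null; then
    echo readable
  else
    echo unreadable
  fi
}

# Against the partition that zpool rejected, this would print
# "unreadable" if the partition entry has no sectors behind it:
# check_readable /dev/rdsk/c2t4d0p3
```

If the raw read already fails with an I/O error, the problem is below ZFS, in the partition table itself.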

Here is the partition map:
Code:
# prtvtoc /dev/rdsk/c2t4d0
* /dev/rdsk/c2t4d0 partition map
*
* Dimensions:
*     512 bytes/sector
* 468862128 sectors
* 468862061 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*    First     Sector    Last
*    Sector     Count    Sector 
*          34       222       255
*   468840960      4608 468845567
*   468861952       142 468862093
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 230686720 230686975
       1     12    00  230686976    524288 231211263
       2     17    00  231211264 237629696 468840959
       8     11    00  468845568     16384 468861951
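The slice sizes can be read straight off that map: multiply each Sector Count by the 512 bytes/sector from the Dimensions header. A quick back-of-the-envelope check, with the counts copied from the prtvtoc output above:

```shell
# Sector counts taken from the prtvtoc partition map, 512 bytes/sector
s0=230686720   # slice 0: the Solaris root
s1=524288      # slice 1: a small slice
s2=237629696   # slice 2: the leftover space

# Convert a sector count to whole GiB
gib() { echo $(( $1 * 512 / 1024 / 1024 / 1024 )); }

echo "s0: $(gib $s0) GiB"   # prints 110
echo "s2: $(gib $s2) GiB"   # prints 113
```

So slice 0 is the 110 GB Solaris installation and slice 2 is the roughly 113 GiB of leftover space.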

---------- Post updated 01-10-18 at 05:08 AM ---------- Previous update was 01-09-18 at 06:25 AM ----------

OK, here is how I solved the problem.

I formatted the 130 GB space to NTFS from Windows, so the space was no longer unformatted. Then I booted Linux Mint 18.3, installed ZFS on Linux, and created a zpool without problems.

Then I booted Solaris 11.3, but "zpool import" said the pool could not be imported and had to be recreated. However, I noticed that "zpool import" reported "c2t4d0s2" instead of "c2t4d0p3". So I ran "zpool create tank c2t4d0s2", and now I can access the ZFS slice just fine. Problem solved. I wonder how I could have figured out the device name without first creating a ZFS slice via Linux...

So does anyone know how I could have found the name of the ZFS slice? The correct name was c2t4d0s2, but GParted reported c2t4d0p3...
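For what it's worth, the slice names can be derived from the earlier prtvtoc output: each non-comment row of the partition map is slice N, addressed as c2t4d0sN. A rough sketch that extracts them; the awk filter is illustrative and the embedded sample is the map from above:

```shell
# Rows of prtvtoc output that do not start with '*' are slice
# entries; column 1 is the slice number.
prtvtoc_sample='
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 230686720 230686975
       1     12    00  230686976    524288 231211263
       2     17    00  231211264 237629696 468840959
       8     11    00  468845568     16384 468861951'

echo "$prtvtoc_sample" | awk '!/^\*/ && NF {print "c2t4d0s" $1}'
# prints c2t4d0s0, c2t4d0s1, c2t4d0s2, c2t4d0s8 (one per line)
```

In a live session you would pipe `prtvtoc /dev/rdsk/c2t4d0` into the same filter instead of the sample text.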

Last edited by kebabbert; 01-10-2018 at 09:20 AM..
# 2  
Old 01-10-2018
It is unclear how your disk was/is divided.

You show the prtvtoc output with the slices (s0, s1, s2, ...), but not the lower-level partitions (p1, p2, p3, ...), whether fdisk- or GPT-based, and their types.
# 3  
Old 01-11-2018
Quote:
Originally Posted by jlliagre
It is unclear how your disk was/is divided.

You show the prtvtoc output with the slices (s0, s1, s2, ...), but not the lower-level partitions (p1, p2, p3, ...), whether fdisk- or GPT-based, and their types.
Oh? What command could I have used to show the correct disk information?
# 4  
Old 01-11-2018
Here is a command to display MBR partitions under Solaris on x86:

Code:
fdisk -v -W - /dev/rdsk/c2t4d0p0

# 5  
Old 01-11-2018
Quote:
Originally Posted by jlliagre
Here is a command to display MBR partitions under Solaris on x86:

Code:
fdisk -v -W - /dev/rdsk/c2t4d0p0

It shows the output below. Is this what you had in mind?

Code:
# fdisk -v -W - /dev/rdsk/c2t4d0p0

* /dev/rdsk/c2t4d0p0 default fdisk table
* Dimensions:
*    512 bytes/sector
*     56 sectors/track
*    224 tracks/cylinder
*   37377 cylinders
*
* systid:
*    1: DOSOS12
*    2: PCIXOS
*    4: DOSOS16
*    5: EXTDOS
*    6: DOSBIG
*    7: FDISK_IFS
*    8: FDISK_AIXBOOT
*    9: FDISK_AIXDATA
*   10: FDISK_OS2BOOT
*   11: FDISK_WINDOWS
*   12: FDISK_EXT_WIN
*   14: FDISK_FAT95
*   15: FDISK_EXTLBA
*   18: DIAGPART
*   65: FDISK_LINUX
*   82: FDISK_CPM
*   86: DOSDATA
*   98: OTHEROS
*   99: UNIXOS
*  100: FDISK_NOVELL2
*  101: FDISK_NOVELL3
*  119: FDISK_QNX4
*  120: FDISK_QNX42
*  121: FDISK_QNX43
*  130: SUNIXOS
*  131: FDISK_LINUXNAT
*  134: FDISK_NTFSVOL1
*  135: FDISK_NTFSVOL2
*  165: FDISK_BSD
*  167: FDISK_NEXTSTEP
*  183: FDISK_BSDIFS
*  184: FDISK_BSDISWAP
*  190: X86BOOT
*  191: SUNIXOS2
*  238: EFI_PMBR
*  239: EFI_FS
*

* Id    Act  Bhead  Bsect  Bcyl    Ehead  Esect  Ecyl    Rsect      Numsect
  238   0    0      1      0       254    63     1023    1          468862127 
  0     0    0      0      0       0      0      0       0          0         
  0     0    0      0      0       0      0      0       0          0         
  0     0    0      0      0       0      0      0       0          0
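That single Id 238 (EFI_PMBR) row is the protective-MBR entry for a GPT disk: its starting sector (Rsect) plus its sector count (Numsect) should cover the entire disk, matching the 468862128 sectors prtvtoc reported under Dimensions. A quick sanity check with the numbers from the table above:

```shell
# Protective-MBR entry from the fdisk table above
rsect=1            # relative start sector
numsect=468862127  # sectors in the partition

total=$(( rsect + numsect ))
echo "$total"      # prints 468862128, the whole disk per prtvtoc
```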

# 6  
Old 01-11-2018
Okay, so you have an EFI GPT partitioned disk.

The last command shows there is just a single primary partition (p1) encompassing the whole disk.

Inside this partition, you have four slices: s0, s1, s2, and s8.

The system refused to use p3 because that partition entry exists in the device namespace but has zero sectors allocated to it, hence the I/O error.

As you found out, s2 is the ~113 GiB slice you wanted.
# 7  
Old 01-12-2018
Quote:
Originally Posted by jlliagre
Inside this partition, you have four slices: s0, s1, s2, and s8.
How could you infer that from the output of the last command I ran? And why is it "s8" rather than "s5"?