I have a 240 GB disk as rpool. I installed Solaris 11.3 to a 110 GB partition, which leaves about 130 GB unallocated. I want to use that additional space as a temporary folder shared between Solaris and Linux. The additional space had no /dev/dsk/c2t4... entry, so I used gparted to create an empty, unformatted partition. Gparted shows the unformatted space as: c2t4d0p3
Now I try to turn that unformatted space into a ZFS slice, but it doesn't work. Anyone have a clue why? Is it because this slice is on the same disk as rpool? So I cannot name the new pool OCZVERTEX3_240GB, because the correct name is "rpool"?
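For anyone hitting the same confusion: on Solaris x86 the same disk is exposed under two naming schemes, and the mismatch between them is likely the culprit here. A minimal sketch of the convention (the disk name c2t4d0 is the one from this thread; adjust for your system):

```shell
#!/bin/sh
# On Solaris x86 the same physical disk appears twice in /dev/dsk:
#   c2t4d0p0..p4  -> fdisk (PC BIOS) partitions; p0 is the whole disk.
#                    This is the view gparted works with.
#   c2t4d0s0..s15 -> Solaris VTOC/EFI slices inside the Solaris
#                    partition; s2 conventionally spans the whole
#                    Solaris partition. zpool expects these names.
# Device name below is taken from the post, not a universal value.
disk=c2t4d0
slice=2
echo "/dev/dsk/${disk}s${slice}"
```

So a pN name from gparted generally cannot be handed to zpool directly; the corresponding sN slice name is what ZFS tooling wants.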
---------- Post updated 01-10-18 at 05:08 AM ---------- Previous update was 01-09-18 at 06:25 AM ----------
OK, I did this to solve the problem.
I formatted the 130 GB space to NTFS from Windows, so the space was no longer unformatted. Then I booted Linux Mint 18.3, installed ZFS on Linux, and created a zpool without problems.
Then I booted Solaris 11.3, but "zpool import" said the pool could not be imported and should be recreated. However, I noticed that "zpool import" reported the device as "c2t4d0s2" instead of "c2t4d0p3". So I ran "zpool create tank c2t4d0s2" and now I can access the ZFS slice just fine. Problem solved. I wonder how I could have figured out the device name for the zpool without first creating a ZFS pool via Linux...
So does anyone know how I could have found out the name of the ZFS slice? The correct name was c2t4d0s2, but gparted reported c2t4d0p3...
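For the record, there are a couple of non-destructive ways to discover such a device name directly from Solaris, without the Linux detour. A sketch (the utilities are standard Solaris commands; the device path is the one from this thread, shown here as a dry run rather than actually executed):

```shell
#!/bin/sh
# 1) 'zpool import' with no arguments scans attached devices for pools
#    and prints the device backing each vdev (e.g. c2t4d0s2).
# 2) 'zdb -l <device>' dumps the on-disk ZFS label of a candidate
#    slice; if a pool lives there, the label shows its name and GUID.
for cmd in "zpool import" "zdb -l /dev/rdsk/c2t4d0s2"; do
    echo "would run: $cmd"
done
```

Either command would have revealed that the pool sat on c2t4d0s2 before any "zpool create" was attempted.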
Last edited by kebabbert; 01-10-2018 at 09:20 AM..