04-17-2017
Hi Sir,
You mean I can have the 2 HDDs (rpool) in a RAID 1 mirror and put the remaining 6 HDDs in a separate RAID 5 pool?
Thank you
Moderator's Comments:
No need to quote when you are replying to the post directly above yours!
Last edited by DukeNuke2; 04-17-2017 at 08:11 PM.
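For reference, a minimal sketch of that layout, assuming hypothetical device names c0t0d0 through c0t7d0 (ZFS raidz1 is the rough equivalent of RAID 5, and the root mirror is made by attaching a second disk to the existing rpool):
# zpool attach rpool c0t0d0s0 c0t1d0s0
(attaches a second disk to rpool, turning it into a 2-way mirror)
# zpool create datapool raidz1 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
(creates a separate raidz1 pool from the remaining 6 disks)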
10 More Discussions You Might Find Interesting
1. IP Networking
Hi;
Can someone please explain how connections differ from threads, or point me to a good site about connection pooling and how threads are utilized by the OS?
Thanks (1 Reply)
Discussion started by: suntan
2. Solaris
Hi,
I am looking for a tool to see how many CPUs, controlled by FSS inside a pool, a project has used over some time....
I have a 20k with several zones inside some pools. The CPU sets/pools are configured with FSS and the zones with different shares. Inside the zones, I use projects with FSS... (2 Replies)
Discussion started by: pressy
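A possible starting point, sketched under the assumption of Solaris 10 resource pools (the trailing 5 is a sampling interval in seconds):
# poolstat -r pset 5
(per-pool processor-set utilization)
# prstat -J 5
(aggregate CPU usage per project)
# prstat -Z 5
(aggregate CPU usage per zone)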
3. Infrastructure Monitoring
Here are the details.
cnjr-opennms>root$ zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
openpool               20.6G  46.3G  35.5K  /openpool
openpool/ROOT          15.4G  46.3G    18K  legacy
openpool/ROOT/rds      15.4G  46.3G  15.3G  /
openpool/ROOT/rds/var   102M ... (3 Replies)
Discussion started by: pupp
4. Solaris
I created a pool the other day. I created a 10 gig file just for a test, then deleted it.
I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%? Both file systems share the same pool.
When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
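A likely explanation is that zpool list and zfs list account for space differently, and a deleted file can still be referenced by a snapshot. A quick sketch for checking, with tank as a placeholder pool name:
# zpool list tank
(raw pool space, including redundancy overhead)
# zfs list -r -o space tank
(per-dataset breakdown; USEDSNAP shows space held by snapshots)
# zfs list -t snapshot
(lists snapshots that may still reference the deleted file)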
5. Solaris
I need to migrate an existing raidz pool to a new raidz pool with larger disks, and I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6 x 36 GB disks (202 GB capacity) and I am migrating to 5 x 72 GB disks (340 GB capacity). (2 Replies)
Discussion started by: jac
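The usual approach here is a recursive snapshot plus zfs send/receive, which carries the dataset hierarchy, mount points, and properties along. A sketch, with oldpool and newpool as placeholder names:
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -Fdu newpool
(-R replicates the whole hierarchy with its properties; on the receive side,
-F forces a rollback if needed, -d preserves the source dataset names, and
-u skips mounting the received datasets until you are ready to switch over)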
6. Solaris
I have this pool1 on my sun4u SPARC machine:
bash-3.00# zpool get all pool1
NAME   PROPERTY   VALUE  SOURCE
pool1  size       292G   -
pool1  used       76.5K  -
pool1  available  292G   -
pool1  capacity   0%     -... (1 Reply)
Discussion started by: Sojourner
7. Solaris
Hi!
I would also like to know whether I need to create a pool first before I can mirror my disks inside that pool.
My first disk is c7t0d0s0 and my second disk is c7t2d0s0, as seen in the figure below.
I would create a pool named rpool1 for these 2 disks.
# zpool create rpool1 c7t0d0p0 c7t2d0p0 ... (18 Replies)
Discussion started by: CarlosP
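Worth noting for this one: without the mirror keyword, zpool create stripes the two disks instead of mirroring them. A sketch using the disk names from the post:
# zpool create rpool1 mirror c7t0d0s0 c7t2d0s0
# zpool status rpool1
(status should show a single mirror vdev containing both disks)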
8. BSD
I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error:
# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset
# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring
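On FreeBSD 8.x this error often just means the ZFS kernel module is not loaded. A sketch of the usual checks (assuming the device itself is fine):
# kldstat | grep zfs
(is the module loaded at all?)
# kldload zfs
(load it for the current boot)
# echo 'zfs_enable="YES"' >> /etc/rc.conf
(have it loaded on every boot)
# zpool create zfspool /dev/da0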
9. Solaris
I have a single zpool with 3 two-way mirrors (3 x 2-way vdevs), and it has a degraded disk in mirror-2. I know I can suffer a single drive failure, but looking at this, how many drive failures can this suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
Discussion started by: fishface
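For reference, the worked answer: the pool survives as long as every mirror vdev keeps at least one healthy disk. With mirror-2 already degraded, the layout looks like this:
mirror-0: disk disk        (can lose 1 more)
mirror-1: disk disk        (can lose 1 more)
mirror-2: disk [DEGRADED]  (can lose 0 more)
So the worst case is one further failure (the surviving mirror-2 disk) faulting the pool, and the best case is two further failures (one each in mirror-0 and mirror-1) with the pool still online.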
10. Solaris
Hi all,
I am trying out Solaris 11.3.
I realized that with the -p option of beadm I can actually create another boot environment on another pool.
root@Unicorn6:~# beadm create -p mypool solaris-1
root@Unicorn6:~# beadm list -a
BE/Dataset/Snapshot Flags... (1 Reply)
Discussion started by: javanoob
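For completeness, a sketch of the workflow being described, reusing mypool and solaris-1 from the post (activation is the usual next step so the new BE is booted on the next reboot):
# beadm create -p mypool solaris-1
# beadm activate solaris-1
# beadm list -a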
LEARN ABOUT DEBIAN
mkfs.btrfs
MKFS.BTRFS(8) System Manager's Manual MKFS.BTRFS(8)
NAME
mkfs.btrfs - create a btrfs filesystem
SYNOPSIS
mkfs.btrfs [ -A alloc-start ] [ -b byte-count ] [ -d data-profile ] [ -l leafsize ] [ -L label ] [ -m metadata-profile ] [ -n nodesize ] [ -s sectorsize ] [ -h ] [ -V ] device [ device ... ]
DESCRIPTION
mkfs.btrfs is used to create a btrfs filesystem (usually in a disk partition, or on an array of disk partitions). device is the special file corresponding to the device (e.g. /dev/sdXX). If multiple devices are specified, a btrfs filesystem is created spanning the specified devices.
OPTIONS
-A, --alloc-start offset
Specify the offset from the start of the device at which the btrfs filesystem begins. The default value is zero, i.e. the start of the device.
-b, --byte-count size
Specify the size of the resultant filesystem. If this option is not used, mkfs.btrfs uses all the available storage for the filesystem.
-d, --data type
Specify how the data must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.
-l, --leafsize size
Specify the leaf size, the least data item in which btrfs stores data. The default value is the page size.
-L, --label name
Specify a label for the filesystem.
-m, --metadata profile
Specify how metadata must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.
-n, --nodesize size
Specify the nodesize. By default the value is set to the pagesize.
-s, --sectorsize size
Specify the sectorsize, the minimum block allocation.
-V, --version
Print the mkfs.btrfs version and exit.
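EXAMPLES
An illustrative invocation (the device names /dev/sdb and /dev/sdc are placeholders): create a filesystem labelled "data" with both data and metadata mirrored across two devices:
# mkfs.btrfs -L data -d raid1 -m raid1 /dev/sdb /dev/sdc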
AVAILABILITY
mkfs.btrfs is part of btrfs-progs. Btrfs is currently under heavy development, and not suitable for any uses other than benchmarking and
review. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details.
SEE ALSO
btrfsck(8)
MKFS.BTRFS(8)