Hi flexi,
First, we need to know your hardware platform; then we can determine the best choice for your needs.
If your system is x86, you can try the LSI (or other vendor's) logic configuration utility console, which appears during the boot stages..
You mention 8 disks, but are they local disks or SAN LUNs?
If all your disks are local (SAS / SATA ..), then what is your critical data? (a database?)
Are all the disks the same size?
Yes, RAID 5 can be a good choice, or maybe not, depending on your available disk space and redundancy requirements.
It depends on your needs.
For example: say you want to create a hardware RAID 5 volume from 6 disks of 300 GB each (redundancy: survives "1 disk" failure).
Then your available space is ( n x size - 1 x size ) = 6 x 300 - 1 x 300 (one disk's worth of parity) = 1500 GB
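A quick shell sanity check of that arithmetic (the disk count and size are just the example numbers above):

```shell
# RAID 5 usable capacity: one disk's worth of space goes to parity
n=6          # number of disks in the example
size=300     # GB per disk
echo "usable: $(( (n - 1) * size )) GB"
```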
So, in reality, which system do you run Solaris on (Oracle SPARC / Oracle x86 / non-Oracle x86 ..)?
On SPARC systems, the LSI RAID controllers do not support RAID 5.
If you will be using ZFS, then you can use software RAID with ZFS (zpool) instead.
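A minimal ZFS software-RAID sketch, assuming 6 local disks; the device names c0t0d0 .. c0t5d0 are placeholders, substitute your own from the `format` output:

```shell
# raidz is the RAID5-like ZFS layout: single parity, survives one disk failure
zpool create dbpool raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool status dbpool    # verify the pool and vdev layout
```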
For example:
Here is what I did with 6 disks on my SPARC hardware.
* 4 disks for RAID 1E ( in theory it survives two disk failures, but the controller may treat it as "one disk" of redundancy !! )
-----------------------------------------------------------------------------------------
( * You can create a similar volume with your 6 disks )
( boot prompt )
Code:
{0} ok select /pci@300/pci@1/pci@0/pci@2/scsi@0
{0} ok show-children
FCode Version 1.00.61, MPT Version 2.00, Firmware Version 9.00.00.00
Target 9
Unit 0 Disk HITACHI H109030SESUN300G A31A 585937500 Blocks, 300 GB
SASDeviceName 5000cca01641b6c8 SASAddress 5000cca01641b6c9 PhyNum 0
Target a
Unit 0 Removable Read Only device TEAC DV-W28SS-W 1.0A
SATA device PhyNum 7
Target b
Unit 0 Disk HITACHI H109030SESUN300G A31A 585937500 Blocks, 300 GB
SASDeviceName 5000cca01642c8c0 SASAddress 5000cca01642c8c1 PhyNum 1
Target c
Unit 0 Disk HITACHI H109030SESUN300G A31A 585937500 Blocks, 300 GB
SASDeviceName 5000cca016422b48 SASAddress 5000cca016422b49 PhyNum 2
Target d
Unit 0 Disk HITACHI H109030SESUN300G A31A 585937500 Blocks, 300 GB
SASDeviceName 5000cca0164271b8 SASAddress 5000cca0164271b9 PhyNum 3
* Create a RAID 1E volume from the 4 disks ( vol1 ) --> for your database pool
Code:
{0} ok 9 b c d create-raid1e-volume --> create RAID 1E ( Mirroring Extended + Striping = striped and mirrored disk array )
Target 9 size is 583983104 Blocks, 298 GB
Target b size is 583983104 Blocks, 298 GB
Target c size is 583983104 Blocks, 298 GB
Target d size is 583983104 Blocks, 298 GB
The volume can be any size from 1 MB to 570296 MB
What size do you want? [570296]
Volume size will be 1167966208 Blocks, 597 GB --> 4 x 300 / 2 = 600 GB raw, ~597 GB usable
Enter a volume name: [0 to 15 characters] vol1
Volume has been created
Code:
{0} ok unselect-dev
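The usable size can be estimated in the shell (numbers from the dialogue above; RAID 1E with an even member count behaves like RAID 10, so roughly half the raw space is usable):

```shell
n=4          # member disks in vol1
size=298     # usable GB per disk, as reported by the controller
echo "raw:    $(( n * size )) GB"
echo "usable: $(( n * size / 2 )) GB"   # close to the ~597 GB the firmware reports
```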
* After applying this, check the volume :
---------------------------------------
Code:
{0} ok select /pci@300/pci@1/pci@0/pci@2/scsi@0
Code:
{0} ok show-volumes
Volume 0 Target 381 Type RAID1E (Mirroring Extended)
Name vol1 WWID 011ebf1128d9fdcd
Optimal Enabled Volume Not Consistent Background Init In Progress
4 Members 1167966208 Blocks, 597 GB
Disk 3
Member 0 Optimal
Target 9 HITACHI H109030SESUN300G A31A PhyNum 0
Disk 2
Member 1 Optimal
Target b HITACHI H109030SESUN300G A31A PhyNum 1
Disk 1
Member 2 Optimal
Target c HITACHI H109030SESUN300G A31A PhyNum 2
Disk 0
Member 3 Optimal
Target d HITACHI H109030SESUN300G A31A PhyNum 3
* If the RAID volume is in the "inactive" state, activate it :
Code:
{0} ok 0 activate-volume
Volume 0 is now activated
Code:
{0} ok unselect-dev
-> Now the first RAID volume ( vol1 ) is ready..
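Once vol1 is visible to Solaris, you can put a ZFS pool on it; the device name below is hypothetical, check `echo | format` for the actual LSI logical-volume name on your system:

```shell
# the RAID 1E volume appears to Solaris as a single LSI logical volume
zpool create datapool c6tXXXXXXXXXXXXXXXXd0   # hypothetical device name
zpool list datapool                           # confirm the pool came online
```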
2-) The other 2 disks with RAID 1 ( via the other RAID controller port )
* Select the disks and create the hardware RAID volume.
....
....
Code:
{0} ok 9 a create-raid1-volume --> RAID 1 mirror for the operating system ( survives one disk failure )
....
.... same operations..
-> Now the second RAID volume ( vol2 ) is ready..
* RAID operations completed..
* I installed the Solaris OS on this volume ( vol2 ) from the disk selection menu..
* When the system is up :
vol2 ( 2 disks of 559 GB with RAID 1 ) ---> 2 x 559 / 2 = 559 GB, shown as 556 GB in the OS
----------------------------------------------
Code:
# zpool list rpool
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 556G 108G 448G 19% 1.00x ONLINE -
# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c6t3B347FFD52B27CEFd0s0 ONLINE 0 0 0
errors: No known data errors
Code:
# echo|format|nawk '/c6t3B347FFD52B27CEFd0/{a=$0;getline;print a RS $0;exit}'
533. c6t3B347FFD52B27CEFd0 <LSI-Logical Volume-3000 cyl 35701 alt 2 hd 256 sec 128> local02 --> ( vol2 )
/pci@3c0/pci@1/pci@0/pci@2/scsi@0/iport@v0/disk@w3b347ffd52b27cef,0
Code:
# prtdiag|nawk '/SASHBA/{a=$0;getline;print a RS $0;}'
/SYS/MB/SASHBA0 PCIE scsi-pciex1000,87 LSI,2308_2 8.0GT/x4 8.0GT/x4
/pci@300/pci@1/pci@0/pci@2/scsi@0
/SYS/MB/SASHBA1 PCIE scsi-pciex1000,87 LSI,2308_2 8.0GT/x4 8.0GT/x4 --> ( raid controller port for vol2 )
/pci@3c0/pci@1/pci@0/pci@2/scsi@0
Note : my LSI RAID controller(s) do not support other RAID levels ( e.g. RAID 5 )
Code:
# raidconfig list all -v|grep Support
Supported RAID Levels: 0, 1, 1E
Supported RAID Levels: 0, 1, 1E