Full Discussion: Create Pool
Operating Systems > Solaris > Create Pool
Post 302996114 by ygemici, Wednesday 19 April 2017, 06:26:49 AM
* HP ProLiant is a good choice.

Now, I assume it's a server with 8 disks.
Best practice:
* First, a 2-disk RAID 1 set for the system will suffice, as you would expect (of course, we will keep monitoring the warning LEDs and the iLO).
* For the other 6 disks, the most suitable RAID config is RAID 5 (if your database were a production environment, then a 1-disk hot spare could be considered, but it is not needed for development).

* And the HP tools (Smart Storage Administrator / Smart Array) will guide you through creating the RAID arrays and advise you on some of the details (stripe size, RAID configuration, and some defaults...); a rough command-line sketch is below.
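
For illustration only, a minimal sketch of how this layout might be created with HPE's Smart Storage Administrator CLI (ssacli, formerly hpssacli). The controller slot and the port:box:bay drive addresses below are placeholders; check your own layout first:

# show controllers, arrays and physical drives (verify slot and bay addresses)
ssacli ctrl all show config

# 2-disk RAID 1 for the system
ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

# 6-disk RAID 5 for the database
ssacli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4,1I:1:5,1I:1:6,1I:1:7,1I:1:8 raid=5

# (if you preferred a hot spare, build the RAID 5 from five disks and dedicate the sixth, e.g.:)
# ssacli ctrl slot=0 array B add spares=1I:1:8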

good luck
regards
ygemici
 

10 More Discussions You Might Find Interesting

1. IP Networking

connection pool

Hi; Can someone please explain how connections differ from threads? Or point me to a good site about connection pooling and how threads are utilized by the OS. Thanks (1 Reply)
Discussion started by: suntan
1 Reply

2. Solaris

project vs pool vs use

Hi, I am looking for a tool to see how many CPUs, controlled by FSS inside a pool, a project used over some time.... I have a 20k with several zones inside some pools. The cpu-sets/pools are configured with FSS and the zones with different shares. Inside the zones, I use projects with FSS... (2 Replies)
Discussion started by: pressy
2 Replies

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details.
cnjr-opennms>root$ zfs list
NAME                    USED   AVAIL  REFER  MOUNTPOINT
openpool               20.6G   46.3G  35.5K  /openpool
openpool/ROOT          15.4G   46.3G    18K  legacy
openpool/ROOT/rds      15.4G   46.3G  15.3G  /
openpool/ROOT/rds/var   102M   ... (3 Replies)
Discussion started by: pupp
3 Replies

4. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies

5. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6x36GB disks (202GB capacity) and I am migrating to 5x72GB disks (340GB capacity). (2 Replies)
Discussion started by: jac
2 Replies
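
A hedged sketch for a migration like the one asked about above, assuming the new pool already exists and using oldpool/newpool as placeholder names; a recursive replication stream carries the datasets, snapshots and properties (including mountpoints):

# snapshot the entire source pool recursively
zfs snapshot -r oldpool@migrate

# send the whole hierarchy, with properties, into the new pool (received datasets stay unmounted)
zfs send -R oldpool@migrate | zfs receive -Fdu newpool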

6. Solaris

not able to use pool

I have this pool1 on my sun4u sparc machine
bash-3.00# zpool get all pool1
NAME   PROPERTY   VALUE  SOURCE
pool1  size       292G   -
pool1  used       76.5K  -
pool1  available  292G   -
pool1  capacity   0%     -
... (1 Reply)
Discussion started by: Sojourner
1 Reply

7. Solaris

Do I need a pool before I can mirror my disks?

Hi! I would also like to know if I first need to create a pool before I can mirror my disks inside that pool. My first disk is c7t0d0s0 and my second disk is c7t2d0s0, as seen in the figure below. I would create a pool named rpool1 for these 2 disks.
# zpool create rpool1 c7t0d0p0 c7t2d0p0
... (18 Replies)
Discussion started by: CarlosP
18 Replies
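
A hedged note on the question above: the zpool create line quoted there makes a two-disk striped pool rather than a mirror; the mirror keyword is what pairs the disks (device names taken from the question):

# create a pool whose single top-level vdev is a two-way mirror
zpool create rpool1 mirror c7t0d0s0 c7t2d0s0

# or, if a single-disk pool already exists, attach the second disk to form the mirror
zpool attach rpool1 c7t0d0s0 c7t2d0s0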

8. BSD

Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error:
# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset
# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring
3 Replies

9. Solaris

Zpool with 3 2-way mirrors in a pool

I have a single zpool with 3 2-way mirrors (3 x 2-way vdevs), and it has a degraded disk in mirror-2. I know I can suffer a single drive failure, but looking at this, how many drive failures can this pool suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
Discussion started by: fishface
4 Replies

10. Solaris

Beadm create -p on another pool - making sense of it

Hi all, I am trying out Solaris 11.3. I realize that with the -p option of beadm I can actually create another boot environment on another pool.
root@Unicorn6:~# beadm create -p mypool solaris-1
root@Unicorn6:~# beadm list -a
BE/Dataset/Snapshot            Flags... (1 Reply)
Discussion started by: javanoob
1 Reply
MDMON(8)						      System Manager's Manual							  MDMON(8)

NAME
       mdmon - monitor MD external metadata arrays

SYNOPSIS
       mdmon CONTAINER [NEWROOT]

OVERVIEW
       The 2.6.27 kernel brings the ability to support external metadata arrays. External metadata implies that user space handles all updates to the metadata. The kernel's responsibility is to notify user space when a "metadata event" occurs, like disk failures and clean-to-dirty transitions. The kernel, in important cases, waits for user space to take action on these notifications.

DESCRIPTION
       Metadata updates:

       To service metadata update requests a daemon, mdmon, is introduced. mdmon is tasked with polling the sysfs namespace looking for changes in array_state, sync_action, and per-disk state attributes. When a change is detected it calls a per-metadata-type handler to make modifications to the metadata. The following actions are taken:

       array_state - inactive
              Clear the dirty bit for the volume and let the array be stopped.

       array_state - write pending
              Set the dirty bit for the array and then set array_state to active. Writes are blocked until userspace writes active.

       array_state - active-idle
              The safe mode timer has expired, so set the array state to clean to block writes to the array.

       array_state - clean
              Clear the dirty bit for the volume.

       array_state - read-only
              This is the initial state that all arrays start at. mdmon takes one of three actions:

              1/ Transition the array to read-auto, keeping the dirty bit clear, if the metadata handler determines that the array does not need resyncing or other modification.

              2/ Transition the array to active if the metadata handler determines a resync or some other manipulation is necessary.

              3/ Leave the array read-only if the volume is marked to not be monitored; for example, the metadata version has been set to "external:-dev/md127" instead of "external:/dev/md127".

       sync_action - resync-to-idle
              Notify the metadata handler that a resync may have completed. If a resync process is idled before it completes, this event allows the metadata handler to checkpoint resync.

       sync_action - recover-to-idle
              A spare may have completed rebuilding, so tell the metadata handler about the state of each disk. This is the metadata handler's opportunity to clear any "out-of-sync" bits and clear the volume's degraded status. If a recovery process is idled before it completes, this event allows the metadata handler to checkpoint recovery.

       <disk>/state - faulty
              A disk failure kicks off a series of events. First, notify the metadata handler that a disk has failed, and then notify the kernel that it can unblock writes that were dependent on this disk. After unblocking the kernel, this disk is set to be removed+ from the member array. Finally, the disk is marked failed in all other member arrays in the container.

              + Note: this behavior differs slightly from native MD arrays, where removal is reserved for an mdadm --remove event. In the external metadata case the container holds the final reference on a block device, and an mdadm --remove <container> <victim> call is still required.

       Containers:

       External metadata formats, like DDF, differ from the native MD metadata formats in that they define a set of disks and a series of sub-arrays within those disks. MD metadata, in comparison, defines a 1:1 relationship between a set of block devices and a raid array. For example, to create 2 arrays at different raid levels on a single set of disks, MD metadata requires the disks to be partitioned and then each array can be created with a subset of those partitions. The supported external formats perform this disk carving internally.

       Container devices simply hold references to all member disks and allow tools like mdmon to determine which active arrays belong to which container. Some array management commands, like disk removal and disk add, are now only valid at the container level.
       Attempts to perform these actions on member arrays are blocked with error messages like:

              "mdadm: Cannot remove disks from a 'member' array, perform this operation on the parent container"

       Containers are identified in /proc/mdstat with a metadata version string "external:<metadata name>". Member devices are identified by "external:/<container device>/<member index>", or "external:-<container device>/<member index>" if the array is to remain readonly.
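
       An illustrative sketch of the container workflow with mdadm; the device names, the imsm metadata type and the md names below are placeholders, not a prescription:

       # create a container over four whole disks
       mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

       # carve a member array (here RAID 5) out of the container
       mdadm --create /dev/md/vol0 --level=5 --raid-devices=4 /dev/md/imsm0

       # after a failure, the final removal is issued against the container, not the member array
       mdadm --remove /dev/md/imsm0 /dev/sdb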
OPTIONS
       CONTAINER
              The container device to monitor. It can be a full path like /dev/md/container, a simple md device name like md127, or /proc/mdstat, which tells mdmon to scan for containers and launch an mdmon instance for each one found.

       [NEWROOT]
              In order to support an external metadata raid array as the rootfs, mdmon needs to be started in the initramfs environment. Once the initramfs environment mounts the final rootfs, mdmon needs to be restarted in the new namespace. When NEWROOT is specified, mdmon will terminate any mdmon instances that are running in the current namespace, chroot(2) to NEWROOT, and continue monitoring the container.

       Note that mdmon is automatically started by mdadm when needed and so does not need to be considered when working with RAID arrays. The only time it is run other than by mdadm is when the boot scripts need to restart it after mounting the new root filesystem.
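
       Example invocations following the synopsis above (the container and root paths are illustrative placeholders):

       # monitor one specific container
       mdmon /dev/md/imsm0

       # scan /proc/mdstat and launch one mdmon instance per container found
       mdmon /proc/mdstat

       # initramfs hand-off: chroot to the mounted final root and keep monitoring
       mdmon /dev/md/imsm0 /sysroot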
SEE ALSO
       mdadm(8), md(4).

v3.0.3                                                                MDMON(8)