Full Discussion: Create Pool
Operating Systems / Solaris / Create Pool
Post 302996114 by ygemici on Wednesday, 19 April 2017, 06:26:49 AM
* HP ProLiant is a good choice.

Now, I assume it is a server with 8 disks.
Best practice:
* First, a 2-disk RAID 1 set will suffice for the system, as you would expect (of course, we will monitor the warning LEDs and the ILOM).
* For the other 6 disks, the most suitable RAID configuration is RAID 5 (if your database were a production environment, a 1-disk hot spare could be considered, but it is not needed for development).

* Your HP tools (Smart Storage Administrator / Smart Array) will guide you through creating the RAID volumes and advise you on some details (stripe size, RAID configuration, and some defaults...); a rough CLI sketch follows below.
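
For reference, a minimal sketch of how that layout might be created from the Smart Array CLI (ssacli, formerly hpssacli). The controller slot number and drive bay addresses below are assumptions for illustration only; list your actual configuration first and adjust accordingly.

    # Show the controller, arrays and physical drives (verify slot and bay numbering)
    ssacli ctrl all show config

    # 2-disk RAID 1 logical drive for the operating system (assumed bays 1-2 on slot 0)
    ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

    # 6-disk RAID 5 logical drive for the database (assumed bays 3-8 on slot 0)
    ssacli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4,1I:1:5,1I:1:6,1I:1:7,1I:1:8 raid=5

The tool's default stripe size is usually a reasonable starting point for a development database.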

good luck
regards
ygemici
 

10 More Discussions You Might Find Interesting

1. IP Networking

connection pool

Hi; can someone please explain how connections differ from threads? Or share a link to a good site about connection pooling and how threads are utilized by the OS. Thanks (1 Reply)
Discussion started by: suntan

2. Solaris

project vs pool vs use

Hi, I am looking for a tool to see how many CPUs, controlled by FSS inside a pool, a project used over some time.... I have a 20k with several zones inside some pools. The cpu-sets/pools are configured with FSS and the zones with different shares. Inside the zones, I use projects with FSS... (2 Replies)
Discussion started by: pressy

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp

4. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, but the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance

5. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6x 36GB disks (202GB capacity) and I am migrating to 5x 72GB disks (340GB capacity). (2 Replies)
Discussion started by: jac

6. Solaris

not able to use pool

I have this pool1 on my sun4u SPARC machine: bash-3.00# zpool get all pool1 NAME PROPERTY VALUE SOURCE pool1 size 292G - pool1 used 76.5K - pool1 available 292G - pool1 capacity 0% -... (1 Reply)
Discussion started by: Sojourner

7. Solaris

Do I need a pool before I can mirror my disks?

Hi! I would also like to know if I first need to create a pool before I can mirror my disks inside that pool. My first disk is c7t0d0s0 and my second disk is c7t2d0s0, as seen in the figure below. I would create a pool named rpool1 for these 2 disks. # zpool create rpool1 c7t0d0p0 c7t2d0p0 ... (18 Replies)
Discussion started by: CarlosP

8. BSD

Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error: # zpool create zfspool /dev/da0s1a cannot create 'zfspool': no such pool or dataset # zpool create zfspool /dev/da0 cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring

9. Solaris

Zpool with 3 2-way mirrors in a pool

I have a single zpool with 3 2-way mirrors (3 x 2-way vdevs). It has a degraded disk in mirror-2. I know I can suffer a single drive failure, but looking at this, how many drive failures can this suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
Discussion started by: fishface

10. Solaris

Beadm create -p on another pool - making sense of it

Hi all, I am trying out Solaris 11.3. I realize that with the -p option of beadm I can actually create another boot environment on another pool. root@Unicorn6:~# beadm create -p mypool solaris-1 root@Unicorn6:~# beadm list -a BE/Dataset/Snapshot Flags... (1 Reply)
Discussion started by: javanoob
MKINITRD(8)                          System Manager's Manual                          MKINITRD(8)

NAME
       mkinitrd - creates initial ramdisk images for preloading modules

SYNOPSIS
       mkinitrd [--version] [-v] [-f] [--preload=module] [--omit-scsi-modules] [--omit-raid-modules]
                [--omit-lvm-modules] [--with=module] [--image-version] [--fstab=fstab] [--nocompress]
                [--builtin=module] [--nopivot] image kernel-version

DESCRIPTION
       mkinitrd creates filesystem images which are suitable for use as Linux initial ramdisk (initrd) images. Such images are often used for preloading the block device modules (such as IDE, SCSI or RAID) which are needed to access the root filesystem.

       mkinitrd automatically loads filesystem modules (such as ext3 and jbd), IDE modules, all scsi_hostadapter entries in /etc/modules.conf, and raid modules if the system's root partition is on raid, which makes it simple to build and use kernels using modular device drivers. Any module options specified in /etc/modules.conf are passed to the modules as they are loaded by the initial ramdisk.

       If the root device is on a loop device (such as /dev/loop0), mkinitrd will build an initrd which sets up the loopback file properly. To do this, the fstab must contain a comment of the form:

              # LOOP0: /dev/hda1 vfat /linux/rootfs

       LOOP0 must be the name of the loop device which needs to be configured, in all capital letters. The parameters after the colon are the device which contains the filesystem with the loopback image on it, the filesystem which is on the device, and the full path to the loopback image. If the filesystem is modular, initrd will automatically add the filesystem's modules to the initrd image.

       The root filesystem used by the kernel is specified in the boot configuration file, as always. The traditional root=/dev/hda1 style device specification is allowed. If a label is used, as in root=LABEL=rootPart, the initrd will search all available devices for an ext2 or ext3 filesystem with the appropriate label, and mount that device as the root filesystem.

OPTIONS
       --builtin=module
              Act as if module is built into the kernel being used. mkinitrd will not look for this module, and will not emit an error if it does not exist. This option may be used multiple times.

       -f     Allows mkinitrd to overwrite an existing image file.

       --fstab=fstab
              Use fstab to automatically determine what type of filesystem the root device is on. Normally, /etc/fstab is used.

       --image-version
              The kernel version number is appended to the initrd image path before the image is created.

       --nocompress
              Normally the created initrd image is compressed with gzip. If this option is specified, the compression is skipped.

       --nopivot
              Do not use the pivot_root system call as part of the initrd. This lets mkinitrd build proper images for Linux 2.2 kernels at the expense of some features. In particular, some filesystems (such as ext3) will not work properly and filesystem options will not be used to mount root. This option is not recommended, and will be removed in future versions.

       --omit-lvm-modules
              Do not load any lvm modules, even if /etc/fstab expects them.

       --omit-raid-modules
              Do not load any raid modules, even if /etc/fstab and /etc/raidtab expect them.

       --omit-scsi-modules
              Do not load any scsi modules, including 'scsi_mod' and 'sd_mod' modules, even if they are present.

       --preload=module
              Load the module module in the initial ramdisk image. The module gets loaded before any SCSI modules which are specified in /etc/modules.conf. This option may be used as many times as necessary.

       -v     Prints out verbose information while creating the image (normally mkinitrd runs silently).

       --version
              Prints the version of mkinitrd that's being used and then exits.

       --with=module
              Load the module module in the initial ramdisk image. The module gets loaded after any SCSI modules which are specified in /etc/modules.conf. This option may be used as many times as necessary.

FILES
       /dev/loop*
              A block loopback device is used to create the image, which makes this script useless on systems without block loopback support available.

       /etc/modules.conf
              Specifies SCSI modules to be loaded and module options to be used.

SEE ALSO
       fstab(5), insmod(1), kerneld(8), lilo(8)

AUTHOR
       Erik Troan <ewt@redhat.com>

4th Berkeley Distribution                  Sat Mar 27 1999                           MKINITRD(8)
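
As a quick illustration of the synopsis above, here is a hypothetical invocation (the image path and kernel version are assumptions, not taken from this page):

       # Hypothetical example: rebuild the initrd for an assumed kernel 2.4.20-8,
       # overwriting any existing image and forcing the ext3 module into it
       mkinitrd -f --with=ext3 /boot/initrd-2.4.20-8.img 2.4.20-8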