Full Discussion: Create Pool
Operating Systems / Solaris — post 302996151 by achenle, Thursday 20 April 2017, 07:36 AM
First, I'd say that a single RAID5 volume holding both the OS and everything else is a bad setup.

As others have mentioned, put the OS on a hardware RAID mirror using two drives. That mirrored volume will be your root ZFS pool (rpool). (And if this were a long-lived server under my control, I'd create another two-disk hardware RAID mirror for a second ZFS root pool (rpool2), used only for OS upgrades and patches. Always create the new boot environment on the *other* root pool: if the boot environment being updated lives on rpool, create the new one on rpool2, and vice versa. That avoids a nasty hell of accumulated ZFS rpool clones and snapshots.)
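The alternating two-rpool scheme above can be sketched roughly like this (device names are hypothetical; each cXtYd0 here stands for a two-disk hardware RAID1 LUN presented to Solaris as a single device):

```shell
# rpool already holds the running OS on the first hardware mirror.
# Create the second root pool on the other hardware mirror:
zpool create rpool2 c0t1d0

# At patch/upgrade time, create the new boot environment on the
# *other* pool. With -p, beadm copies the BE to the target pool
# instead of cloning it, so no snapshot/clone chains pile up on rpool:
beadm create -p rpool2 s11-patched
beadm activate s11-patched
init 6    # reboot into the new boot environment
```

Next cycle, you'd create the new BE back on rpool, and keep alternating.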

Then use the other six disks for the database. Exactly how depends strongly on which database it is, what it stores, and how it's going to be used.
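As one possibility (device names hypothetical, and only sensible if the database actually stores its data on ZFS): for a random-I/O OLTP workload, six disks could become three striped two-way mirrors, with per-filesystem recordsize matched to the database block size:

```shell
# Three 2-way mirrors striped together: good random-read/write
# performance, and each mirror can survive one disk failure.
zpool create dbpool \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0 \
    mirror c0t6d0 c0t7d0

# Separate filesystems per database area; 8K recordsize is the
# commonly suggested match for Oracle datafile blocks, 128K for redo:
zfs create -o recordsize=8k  dbpool/data
zfs create -o recordsize=128k dbpool/redo
```

For a streaming/warehouse workload, raidz across the six disks would trade random-I/O performance for capacity instead.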

And yes, in general you will want to limit the ZFS ARC on a DB server - severely (you don't need to read /var/adm/messages very fast...). If your database isn't using ZFS to store its data, there's no need for more than a token ZFS ARC, and for an Oracle DB not on ZFS storage in particular, an unrestricted ARC can cause severe performance problems. (Oracle DB tends to use large-page-size chunks of memory; the ZFS ARC uses 4K pages. On a server with high memory pressure, dynamic Oracle memory demands force the kernel to coalesce memory into large pages for the Oracle DB process(es). ARC pressure then breaks those pages back up into 4K pages - rinse, lather, repeat, while the server unresponsively just sits and spins...)
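Capping the ARC is a one-line kernel tunable. A minimal sketch (the 1 GiB value is just an example; pick a size appropriate to your token ZFS footprint):

```shell
# Add to /etc/system, then reboot. Value is bytes, in hex:
# 0x40000000 = 1 GiB.
set zfs:zfs_arc_max = 0x40000000
```

After the reboot, the effective cap can be checked with `kstat -p zfs:0:arcstats:c_max`.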

HP ProLiant? Meh. A few years ago, a customer I supported bought new HP servers - because they were "cheaper" than Oracle's servers. Oh? Well, the new servers weren't any faster than the old ones (so old they still had "Sun" on them...), and it took quite a bit of BIOS tuning just to get the brand-spanking-new "fast" HP servers to even match the old Sun ones performance-wise. As for "cheaper"? We had to install the HBAs ourselves (labor time is expensive...), and THEN we found out that the full iLO management software wasn't part of the basic HP server - it had to be bought and licensed separately, then installed (even more expensive labor hours). Oh, and the HP server didn't come with four built-in 10-gig Ethernet ports, so we had to add Ethernet cards - more money and more time. When all was done, the customer had paid a lot of money and wound up with new HP servers that took a lot of time and effort just to match the older Sun servers they replaced. Simply buying new servers from Oracle would have meant actually getting faster servers - for less money, less time, and a lot less effort.

Slapping a bunch of commodity parts around a good CPU and a decent amount of RAM doesn't make for a fast server. I/O bandwidth, memory bandwidth, disk-controller quality? They matter too, and the cheapest parts you can find slapped onto the cheapest motherboard don't cut it - especially when you then turn around and nickel-and-dime customers over things like management-software licenses...

I'm not impressed with HP.</RANT>
 
