oracle performance on solaris 8 -- Post 30517 by Perderabo on Wednesday 23rd of October 2002 11:11:34 AM
First, it is possible to obtain an Intel box and a Sparc box such that the Intel box has several times the power of the Sparc box. When that is the case, you cannot fiddle with /etc/system and compensate. You apparently want to increase the performance of your Solaris box to 300% of its current level. It is very rare to achieve something like that by fiddling with /etc/system; really, the only exception would be a system that is very severely mistuned to start with. You will almost certainly need to buy some hardware.

Start at the beginning. Don't change /etc/system unless you have a very specific reason. Oracle will have suggested changes to the IPC parameters. Make these exactly as Oracle wants them. If they want semmax at 64, then make it 64. The only reason to increase it further would be if you plan to run a second application that needs a lot of semaphores of its own. If you set this to, say, 640, you have just made your kernel larger for no particular reason. If you were short on memory before, you have just exacerbated your problem. If it made any sense at all to set a parameter "up...as far as possible", there would be no point to /etc/system.
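To make that concrete, here is a sketch of what the Oracle-related IPC block in /etc/system typically looks like on Solaris 8. The shmsys/semsys parameter names are the standard Solaris tunables, but the values shown are placeholders only; use exactly the numbers your Oracle installation guide calls for.

Code:
* IPC settings requested by Oracle -- illustrative values only,
* substitute the figures from the Oracle installation guide
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024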

Once Oracle is running, you want to be sure that you have as much memory as you need to ensure that Oracle's shared memory segments fit entirely into core together with everything else that you want to run. Thus page-outs should be zero. Once your page-outs are zero, any more memory will just sit there and burn electricity. You do want to have some free memory, but a ton more free memory won't do any good.
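Checking that is just a matter of watching the standard paging statistics; nothing here is Oracle-specific, and the interval and count below are arbitrary.

Code:
# vmstat 5
  (watch the "po" page-out column; it should sit at 0)
# sar -g 5 10
  (pgout/s and ppgout/s should likewise stay near 0)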

At this point, your delete process should be hitting a disk bottleneck. Since you have plenty of memory, the only other choice would be a cpu bottleneck. If we assume that your bottleneck is disk, you need to be sure that each disk is fast enough for your purposes, that each scsi chain has enough bandwidth for the disks attached to it, that each bus has the bandwidth for the scsi chains attached to it, and so on up the i/o tree.
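The usual way to see where the time is going is iostat; this is just the standard check, with an arbitrary 5-second interval.

Code:
# iostat -xn 5
  (a disk that sits near 100 in the %b column, or shows consistently
  large asvc_t service times, is the one holding up the delete)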

But even if you have enough bandwidth between the disks and the memory bus, if you have slow disks they remain slow. You can't toss a line into /etc/system and triple your disk speed.

We have never increased autoup here. If you have a very large amount of memory, so much that you will never run short, increasing autoup may buy you a tiny amount of performance. Forget about 300% though. And you need to keep autoup an integral multiple of tune_t_fsflushr if you really do this.

According to sunsolve:
Quote:
optional tuning parameters that improve performance slightly.

set slowscan=100

set autoup=300
set tune_t_fsflushr=5
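In /etc/system form, and keeping autoup an integral multiple of tune_t_fsflushr as mentioned above, that comes out as:

Code:
* fsflush tuning from the sunsolve note; autoup (300) is an
* integral multiple of tune_t_fsflushr (5)
set slowscan=100
set autoup=300
set tune_t_fsflushr=5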
