Good evening all, I have a very interesting issue. I have an Oracle database on Unix and NT. The DBA says that his delete process runs in 22 minutes on NT. When I run the same delete process on Unix, it runs in an hour. I downloaded SarCheck to see what was going on. It told me about the autoup parameter. I know this is pretty vague, to say the least, but it's making no freaking sense. I tuned the kernel parameters in /etc/system as well. I will try to raise them as far as possible. Any help would be appreciated.
First, it is possible to obtain an Intel box and a SPARC box such that the Intel box has several times the power of the SPARC box. Once this is done, you cannot compensate by fiddling with /etc/system. You apparently want to increase the performance of your Solaris box to 300% of its current level. It is very rare to achieve something like that by fiddling with /etc/system; really, the only exception would be a system that was very severely mistuned to start with. You will almost certainly need to buy some hardware.
Start at the beginning. Don't change /etc/system unless you have a very specific reason. Oracle will have suggested changes to the IPC parameters; make these exactly as Oracle wants them. If they want semmax at 64, then make it 64. The only reason to increase it further would be if you plan to run a second application that needs a lot of semaphores of its own. If you set it to, say, 640, you have just made your kernel larger for no particular reason, and if you were short on memory before, you have just exacerbated your problem. If it made any sense at all to set a parameter "up...as far as possible", there would be no point to /etc/system.
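For what it's worth, the entries look something like the fragment below. The parameter names are the standard Solaris shmsys/semsys tunables, but the values here are placeholders only; use exactly the numbers Oracle's installation guide gives for your Oracle release.

```
* Example /etc/system IPC entries -- the values below are placeholders,
* NOT recommendations. Take the real numbers from Oracle's install guide.
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024
```

A reboot is required before changes to /etc/system take effect.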
Once Oracle is running, you want to be sure that you have enough memory that Oracle's shared memory segments fit entirely into core together with everything else you want to run. Thus page-outs should be zero. Once your page-outs are zero, any more memory will just sit there and burn electricity. You do want to have some free memory, but a ton more free memory won't do any good.
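A rough sketch of the check, not the real thing: on Solaris you would watch the `po` (page-outs) column of `vmstat 5 12` over several intervals. The sample below is made-up data standing in for live output, just to show where to look.

```shell
# Made-up vmstat-style sample; in practice, pipe in `vmstat 5 12`.
# In the standard Solaris layout, `po` (page-outs) is the 9th field.
sample=' r b w   swap  free  re  mf pi po fr de sr
 0 0 0 123456 78901   1   4  0  0  0  0  0
 0 0 0 123456 77500   0   2  0  8  9  0  3
 0 0 0 123456 77900   0   1  0  0  0  0  0'
pageouts=$(echo "$sample" | awk 'NR > 1 && $9 > 0 { n++ } END { print n + 0 }')
echo "$pageouts interval(s) showed page-outs"   # anything above 0 means you are short on memory
```

If that count stays at zero while the delete job runs, more memory will not help you.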
At this point, your delete process should be disk-bound. Since you have plenty of memory, the only other possibility would be a CPU bottleneck. Assuming the bottleneck is disk, you need to be sure that each disk is fast enough for your purposes, that each SCSI chain has enough bandwidth for the disks attached to it, that each bus has the bandwidth for the SCSI chains attached to it, and so on up the I/O tree.
But even if you have enough bandwidth between the disks and the memory bus, slow disks remain slow. You can't toss a line into /etc/system and triple your disk speed.
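To see which spindle is the bottleneck, you would watch `iostat -xn 5` and look at the `%b` (percent busy) column and the service times over several intervals. Again a made-up sample stands in for live output in this sketch:

```shell
# Made-up iostat -xn style sample; in practice, run `iostat -xn 5`.
# %b is the 10th field, device name the 11th. A disk pegged near 100%
# busy during the delete job is your bottleneck.
sample='   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  10.0    5.0  120.0   60.0  0.0  0.3    0.0   18.0   0  22 c0t0d0
  80.0   40.0  900.0  500.0  0.9  4.0   12.0   35.0  30  95 c0t1d0'
busy=$(echo "$sample" | awk 'NR > 1 && $10 >= 60 { print $11 }')
echo "saturated disk(s): $busy"
```

If one disk is saturated while others sit idle, spreading the tablespaces across spindles will buy you more than any kernel tunable.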
We have never increased autoup here. If you have a very large amount of memory, so much that you will never run short, increasing autoup may buy you a tiny amount of performance. Forget about 300%, though. And if you really do this, you need to keep autoup an integral multiple of tune_t_fsflushr.
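The constraint is just divisibility, which you can sanity-check before editing /etc/system. The values below are hypothetical, purely for illustration; on a live box you would read the real settings out of /etc/system.

```shell
# Hypothetical values for illustration only -- substitute your own.
# Rule of thumb from above: autoup must divide evenly by tune_t_fsflushr.
autoup=240
tune_t_fsflushr=5
remainder=$(( autoup % tune_t_fsflushr ))
if [ "$remainder" -eq 0 ]; then
  echo "ok: autoup ($autoup) is an integral multiple of tune_t_fsflushr ($tune_t_fsflushr)"
else
  echo "bad: autoup ($autoup) is not a multiple of tune_t_fsflushr ($tune_t_fsflushr)"
fi
```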
Thanks for that excellent interpretation of what's going on. I will change the settings in /etc/system back to what Oracle recommends. I believe that disk I/O is the problem right now. Another question for you: is there a way to tune a ufs filesystem?
First, see this post. Depending on which version of Solaris you are running, you may need to patch and/or tune your kernel to enable the priority paging option. And notice that one source I quoted in that post says that a 300% improvement *is* possible with this option on some versions of Solaris.
Now on to your next question. First, you should have your oracle databases on raw disk space, not in any filesystem. Oracle will tell you the same thing. Putting a database in a filesystem is guaranteed to slow it down. A lot! This may also render it unreliable.
There is a tunefs command that can tune an existing ufs filesystem, but it is limited in what it can tune. And what tuning it does will not affect existing files unless you unload and reload them. If you are going to do that, you may as well rebuild the filesystem from scratch, which opens up more tuning options.
The ufs filesystem handles a mix of files very well. Its weakness is very large files; the absolute worst case is one file that consumes the entire filesystem. What makes a file "large" or not is whether it can reside entirely in one cylinder group. Increasing the cylinder group size makes ufs lean toward handling a few large files very well; smaller cylinder groups make it lean toward handling small files distributed across many directories. If you have a few large files, large cylinder groups may help enough that you notice. On Solaris, the default is 16 cylinders per cylinder group, and the best you can do is double that.

The other thing is the "-o space" option. If you pick "-o time", Solaris will spend less time picking a new block when a file grows, at the cost of perhaps making a poor choice; you will pay for that each time you read the file. If you pick "-o space", Solaris will spend more time allocating a new block when a file grows, but you get a file that's easier to read.

These are the two things that might be most worth pursuing. I know I got burned when I said this about kernel tuning, but massive improvements are going to be very rare. Expect only marginal results.
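Put together, that tuning amounts to the two commands below. This is a dry-run sketch that only prints them; the device name is made up, and newfs would destroy whatever is on the slice you actually name, so back up first.

```shell
# Dry-run sketch: prints the commands instead of running them.
# /dev/rdsk/c0t1d0s6 is a hypothetical device name -- substitute your own.
DEV=/dev/rdsk/c0t1d0s6
rebuild_cmd="newfs -c 32 -o space $DEV"   # rebuild: 32 cylinders/group (double the 16 default), space optimization
retune_cmd="tunefs -o space $DEV"         # or just flip an existing filesystem to space optimization
echo "$rebuild_cmd"
echo "$retune_cmd"
```

Remember that tunefs alone won't reorganize existing files; you still have to unload and reload them to see any benefit.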
Another option is the Veritas filesystem. It is extent-based rather than block-based, and it will handle very large files almost as well as it handles small files. I think it's a separate product with Sun, though.