%memused is high

 
# 8  
Old 02-22-2017
I would focus more on the swap/page rates. If you are swapping/paging because you have exhausted real memory, then you will start to feel the performance cost of it. What output do you get from vmstat? You might try it with time and count parameters, such as vmstat 10 5, giving you ten-second intervals for a count of five, although the first sample is usually an average since the last boot.

The columns you are looking for are under the swap heading, the si & so sub-headings, although the columns are usually a little skewed relative to the header.
  • Swap in (si) is memory being read back in from swap on disk because it is needed again.
  • Swap out (so) is memory being written out to swap on disk because it is the least recently used and the RAM is needed elsewhere.
Does this reveal anything?
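
If it helps, you can filter vmstat down to just those two columns with something like this (a rough sketch; the field positions assume the usual vmstat layout, so check them against your own header line first):

Code:
# Sample every 10 seconds, 5 times, and print only swap-in/swap-out.
# With the common layout, si is field 7 and so is field 8; the first two
# lines are headers, so skip them.
vmstat 10 5 | awk 'NR > 2 { print "si=" $7, "so=" $8 }'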

You don't say which services are degraded. If you have a database, it will have a configuration file where you can adjust various parameters, including memory allocations. If set too low, these can cause performance problems within the database; if set too high, they can cause problems for the OS. Most people assume that larger is better, but it has to fit within the confines of the server you have. One item in particular is often referred to as resident or pinned memory, which cannot be swapped. This helps the performance of the database, but if you set it too high there may be insufficient memory left for the OS to do its other normal work, which can leave your database degraded too, depending on what is happening.
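
If you want to check how much memory a process has pinned in that way, a rough illustration on Linux (replace <pid> with the database's process ID; VmLck and VmRSS are standard fields in /proc):

Code:
# VmLck is memory the process has locked into RAM (cannot be swapped out),
# VmRSS is its total resident set:
grep -E 'VmLck|VmRSS' /proc/<pid>/status
# System-wide view of the shared memory segments a database typically allocates:
ipcs -m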

If you are worried about the VMware host, have you over-provisioned the memory of your guests (if that is even possible)? In a way it is the same consideration as for a server with a database on it.


I hope that this gives you something to work with.
Robin
This User Gave Thanks to rbatte1 For This Post:
# 9  
Old 02-23-2017
Quote:
Originally Posted by jlliagre
No, on the contrary, using your RAM as cache is expected to improve your system performance.

No. Unused RAM is wasted RAM.
I hear this over and over again from the Linux community: always use 100%, anything under 100% is wasted RAM, and so on.

Unix vendors do not think so.
Take, for example, the HP-UX buffer cache, which is comparable. The default is a minimum of 10% of RAM and a maximum of 50%. There is a whitepaper that recommends tuning the maximum to 70% or 80%, but it warns not to go over 90%, because the system would then respond more slowly to memory requests from applications.
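
For reference, on HP-UX 11i those limits correspond, if I remember the tunables correctly, to dbc_min_pct and dbc_max_pct, managed with kctune on more recent releases (11i v3 replaced the buffer cache with the unified file cache and its filecache_min/filecache_max tunables). Something like:

Code:
# Show the current dynamic buffer cache limits (percent of RAM)
kctune dbc_min_pct
kctune dbc_max_pct
# Lower the maximum, e.g. to the 70% the whitepaper suggests
kctune dbc_max_pct=70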
This User Gave Thanks to MadeInGermany For This Post:
# 10  
Old 02-23-2017
I suppose it comes down to whether memory is cleared when the process that owned it terminates, or whether the OS keeps the data around in case it is needed again soon.

Additionally, some OSes allow you to pre-read large data files with something like dd if=/path/to/bigfile of=/dev/null to cache the file for later access. This uses memory too, but it makes subsequent reads of the file faster, particularly for random-access files such as COBOL data files or large CSVs where you are pulling out a specific record.
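
As a small illustration, with a hypothetical set of data files under /data/app, the pre-read can be scripted so everything is pulled into the page cache before the users arrive:

Code:
# Read each data file once and discard the bytes; the only purpose is to
# populate the OS page cache so later random reads are served from RAM.
for f in /data/app/*.dat; do
    dd if="$f" of=/dev/null bs=1M 2>/dev/null
done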

I'm aware that some OSes allow tuning to keep memory empty, but I always leave it to fill up and concentrate on swap activity instead, as that is such a performance overhead.
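
On Linux, for instance, you can see how much of the "used" memory is really just cache, and, purely for testing, tell the kernel to drop it; a sketch, run as root:

Code:
# Newer versions of free show an "available" column that already
# discounts cache the kernel can reclaim on demand:
free -m
# For testing only: flush page cache, dentries and inodes.
# This just makes the next reads slower; it is not a tuning knob.
sync && echo 3 > /proc/sys/vm/drop_caches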



Just my thoughts. Have I got it all wrong?
Robin
This User Gave Thanks to rbatte1 For This Post:
# 11  
Old 02-23-2017
Thank you, Robin.

Actually, no swapping is happening:

Code:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0  81776 182316      8 12247504    0    0    58    34    2   16  4  1 95  0  0
 0  0  81776 181000      8 12248884    0    0     0   144 1481 3326  7  1 93  0  0


I checked my database config file; it seems the buffers were set to the minimum range.

Code:
dynamic_shared_memory_type = posix
shared_buffers = 128MB


Moderator's Comments:
Mod Comment Please use CODE tags as required by forum rules!

Last edited by RudiC; 02-23-2017 at 12:18 PM.. Reason: Added CODE tags.
# 12  
Old 02-23-2017
Quote:
Originally Posted by anil529
I checked my database config file; it seems the buffers were set to the minimum range.
Hold on! If your application is a database then it is usually better to give most memory directly to the database (how this is done depends on the database used; in Oracle, for instance, it is called the "SGA").

The reason DB software makes better use of the memory than the OS does is that it can load bigger parts of the DB into memory, so they can be accessed far faster than from disk.

DBs commonly use very specialised ways of accessing their files which circumvent the OS's caching completely anyway (so-called "direct I/O", "concurrent I/O", etc.), so a reduction of system cache memory won't hurt the DB at all. When you tune a system whose main application is a DB, the rule of thumb is to give the DB as much memory as you can without the system beginning to swap, regardless of how small the file cache becomes.
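
Since the OP's snippet (shared_buffers, dynamic_shared_memory_type) looks like PostgreSQL, here is a rough sketch of what "give most of the memory to the DB" could look like there. The figures simply assume about 16 GB of RAM, which is what the vmstat output above suggests, and are starting points, not recommendations:

Code:
# postgresql.conf -- illustrative values only
shared_buffers = 4GB          # often started around 25% of RAM
effective_cache_size = 10GB   # planner hint: RAM likely to be used as OS file cache
work_mem = 16MB               # per sort/hash operation, per connection -- keep modest
# Note: changing shared_buffers needs a full restart; a reload is not enough.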

I hope this helps.

bakunin
This User Gave Thanks to bakunin For This Post:
# 13  
Old 02-24-2017
Quote:
Originally Posted by MadeInGermany
I hear this over and over again from the Linux community: always use 100%, anything under 100% is wasted RAM, and so on.
This question was asked in the Red Hat forum, so there is no doubt the OP is running Linux. The Linux kernel is designed to use all otherwise free RAM as cache, with no penalty.

Note that under Unix and Linux you can't really use 100%: the OS tries hard to make sure minfree is left free (min_free_kbytes on Linux), although minfree/min_free_kbytes is normally very small compared to the RAM size.
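
For example, on Linux you can check how large that reserve actually is and compare it with the total RAM:

Code:
# The reserve the kernel tries to keep free, in kB:
sysctl vm.min_free_kbytes
# Total RAM for comparison:
grep MemTotal /proc/meminfo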

HP-UX might still have a problem freeing buffer cache memory, but that is a design issue that should be fixed if it hasn't been already. Another System V implementation, Solaris, did it 17 years ago: there, the cache memory is reported as free memory and is freed almost instantly. See Understanding Memory Allocation and File System Caching in OpenSolaris (Richard McDougall's weblog).

On the other hand, RAM allocated to kernel buffers, regardless of the OS, is much harder to reclaim for applications, so tuning can be useful there, for example when ZFS is used.

Back to the OP's issue: they are running in a virtualized environment and have no access to the hypervisor statistics. The hypervisor might well lie to the kernel about the resources actually available, so anything is possible.
These 2 Users Gave Thanks to jlliagre For This Post: