Linux Mem Usage
Post 302140112 by new2ss, Wednesday 10 October 2007, 09:50 PM

How much free RAM do I have right now?

Code:
              total       used       free     shared    buffers     cached
Mem:          1010        963         46          0        215        256
-/+ buffers/cache:        491        518
Swap:         1983          0       1983

Above is the output of 'free -m' from my Linux machine. I did some searching on the internet, and some articles point out that Linux is memory hungry (not that it uses memory up very quickly, but that it will use whatever is available). In my case, how much free RAM do I have: 46 MB or 518 MB? I highly doubt that it is 46 MB + 518 MB.

The used and free values on the Mem line add up to 1010 (963 + 46).

I am quite baffled by the -/+ buffers/cache line; its used and free values also add up to roughly 1010 (491 + 518).

So on one hand Linux is telling me it has used up 963 MB of RAM and only 46 MB is still available, but on the other hand the buffers/cache line says 491 MB is used and 518 MB is free... *confused and curious*
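
Here is my rough attempt at reconciling the two lines (just a guess on my part, assuming free(1) simply moves the buffers and cached amounts from 'used' to 'free' on the second line; please correct me if this is wrong):

Code:
# values in MB, taken from the output above
echo $((963 - 215 - 256))   # used minus buffers minus cached = 492, close to the reported 491
echo $((46 + 215 + 256))    # free plus buffers plus cached   = 517, close to the reported 518

If that guess is right, the off-by-one differences would just be rounding from 'free -m' reporting whole megabytes.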

My total physical RAM is 1 GB. I would appreciate it if anyone could explain this and 'de-mystify' it for me.
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Linux diskspace usage

When the /var directory of my machine got filled up (85%), I removed some old logs, but after the cleanup the df -k command still shows that /var is 85% full. It only reports the actual disk space after I restart the machine. Is there a way to force df to reflect the actual free space without... (1 Reply)
Discussion started by: necro
1 Reply

2. UNIX for Dummies Questions & Answers

Difference in Mem usage ?

Hi all, I have a pair of Sun Ultra 5_10 machines with SunOS 5.5.1. Both are almost equally patched and set up with similar applications. host# uname -a SunOS host 5.5.1 Generic_103640-24 sun4u sparc SUNW,Ultra-5_10 Even though both have the same amount of RAM (512 MB), ... (1 Reply)
Discussion started by: shibz
1 Reply

3. Red Hat

Linux memory usage

What's the best way to find out how much memory is being used/available? I tried using free, but I didn't quite understand the output. Can someone explain it? $ free total used free shared buffers cached Mem: 16304536 16256376 48160 0 ... (6 Replies)
Discussion started by: junkmail426
6 Replies

4. Gentoo

cpu%/mem% usage, scripting, dzen2: howto learn bash the hard way

I am trying to write a small (and rather simple) script to gather some info about the system and pipe it to dzen2. First, I want to explain some things. I know I could have used conky, but my intention was to expand my knowledge of bash, pipes and redirections inside a script, and to have fun... (14 Replies)
Discussion started by: broli
14 Replies

5. HP-UX

how can I find cpu usage, memory usage, swap usage and logical volume usage

How can I find CPU usage, memory usage and swap usage? I want to know when CPU usage is above X% and continues that way Y times, and when memory usage is above X% and continues Y times. My final goal is to monitor processes and logical volume usage above X%, and the number of logical volumes above that; can I not to... (3 Replies)
Discussion started by: alert0919
3 Replies

6. UNIX for Advanced & Expert Users

Checking mem usage at specific times in a program

Hi all, I'm running a simulator and I'm noticing a slow increase in memory use for long simulations, such that the simulation has to end because of a lack of memory. A colleague of mine ran Valgrind memcheck and reported that nothing of interest was found other than known memory leaks. My advisor... (2 Replies)
Discussion started by: pl4u
2 Replies

7. AIX

How to monitor the IBM AIX server for I/O usage, memory usage, CPU usage, network..?

How to monitor the IBM AIX server for I/O usage, memory usage, CPU usage, network usage, storage usage? (3 Replies)
Discussion started by: laknar
3 Replies

8. Linux

Linux Device Driver: avoid mem copy from/to user/kernel space

I recently started working with Linux and wrote my first device driver for a hardware chip controlled by a host CPU running a Linux 2.6.x kernel. 1. The user space process makes an IOCTL call with a pointer to a user memory buffer. 2. The kernel device driver, in the big switch-case of the IOCTL,... (1 Reply)
Discussion started by: agaurav
1 Reply

9. Programming

getgroups usage on linux

Hi, I have a problem with getgroups usage on Linux. getgroups can get the supplementary groups of a process, but if I run a process under the root account and want to get the supplementary groups of nobody, what should I do to achieve that? (4 Replies)
Discussion started by: fatshaw
4 Replies

10. Shell Programming and Scripting

Help creating a timestamp script to record mem usage

Hi, I'm looking into doing a few performance tweaks by adjusting the max memory on a few LPARs. I would like to create a timestamp script so I could review it for a week and determine how much space I can lower my max memory to, so I could reclaim and allocate that memory to where it is needed the... (2 Replies) (a rough logging sketch appears after this list)
Discussion started by: vpundit
2 Replies
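
For the last discussion above, a minimal timestamp-logging sketch, assuming a plain shell loop is acceptable (the log file /tmp/mem_usage.log and the 5-minute interval are arbitrary choices made up here, and the exact vmstat columns differ between AIX and Linux):

Code:
# append one timestamped vmstat sample to a log every 5 minutes
while true; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $(vmstat 1 1 | tail -1)" >> /tmp/mem_usage.log
    sleep 300
done

Left running under nohup (or driven from cron) for a week, this would give a simple record of memory usage to review.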