In the above code I am trying to determine the number of cycles it takes to fetch the first element of an array from memory, and then the next, cached element. When I execute this snippet, I get an almost identical cycle count for both accesses, ~81 cycles. Can anybody explain why this is happening? By all means, the first access should be very costly, but the access to the next sequential element, which has been brought into the cache by the first, should be much cheaper.
I'm looking to get the file cache portion of physical (real) memory on a Solaris workstation (similar to the Cached: line in /proc/meminfo on Linux systems):
# swap -s; swap -l; vmstat 2 2; echo "::memstat" | mdb -k
total: 309376k bytes allocated + 41428k reserved = 350804k used,... (5 Replies)
Hi,
I'm running a Debian Lenny server with 1 GB of RAM but high I/O: it sustains 400 IOPS and 3 MB/s. I noted that cached memory uses 800 MB, buffered memory uses 50 MB, and no free memory is available. Questions:
What does such a high amount of cached memory mean?
Who is using this cached memory?
Is... (3 Replies)
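Questions like the one above come down to the Linux page cache: "cached" memory is file data kept in otherwise-idle RAM and reclaimed automatically under memory pressure, so it is effectively available. The figures free reports can be read straight from the kernel (a minimal sketch; the field names are those used in /proc/meminfo):

```shell
# Page-cache figures straight from the kernel. "Cached" is file data
# kept in RAM and reclaimed automatically when applications need it.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```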
18:45:47 # free -m
             total       used       free     shared    buffers     cached
Mem:         96679      95909        770          0       1530      19550
-/+ buffers/cache:      74828      21851
Swap:        12287        652      11635
Hi all. The below output is from a RHEL 4.5... (0 Replies)
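The "-/+ buffers/cache:" row in this output is just arithmetic on the Mem: row: used minus buffers minus cached, i.e. memory used by applications rather than by the kernel's caches. A quick check with the numbers above (the one-off difference comes from free rounding each column to whole megabytes):

```shell
# "-/+ buffers/cache" used = used - buffers - cached from the Mem: row
echo $((95909 - 1530 - 19550))   # 74829; free shows 74828 due to MB rounding
```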
Right now I am using Red Hat Enterprise Linux AS release 4, and cache memory is occupying around 1.5 GB, as shown below:
             total       used       free     shared    buffers     cached
Mem:          2026       2021          5          0        161       1477
-/+ buffers/cache:        382       1644 ... (4 Replies)
When I run the 'top' command, I see the following:
Memory: 32G real, 12G free, 96G swap free
Though it shows 12G free, I am not able to account for the processes that consume the remaining 20G.
In my understanding some processes should be consuming at least 15-16 GB, but I am not able to find them.
Is... (1 Reply)
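One way to start accounting for the missing gigabytes is to rank processes by resident set size (a sketch using the portable ps -o syntax; on Solaris, prstat -s rss gives a similar view). Two caveats: RSS double-counts pages shared between processes, and on Solaris much of "real" memory can sit outside any process entirely, e.g. in the ZFS ARC or the file cache:

```shell
# Largest processes by resident set size (RSS, in KB), biggest first
ps -eo rss,pid,comm | sort -rn | head -10
```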
The environment is Java/Windows. The program keeps near-real-time state in a memory cache, which is updated by multiple sources; the size of the cache is roughly 500 MB, and the frequency of updates is ~20 per second. I am looking into different ways to keep a current snapshot of the memory on disk for a)... (9 Replies)
Hi all,
I have noticed that my server has 64 GB RAM, and I have an application on this server, but the server has only 15% free memory and 85% utilized; however, it didn't eat into swap.
Can any kernel parameter be configured to make the system clear memory from cache, like Linux does?
I found... (4 Replies)
I wish to clear the memory cache on a production box, and I was wondering: what is the worst that can happen if I do?
I already tested this on a backup server and everything seemed fine,
but I need to know from you experts what the worst things are that can happen when I run it on a real server:
... (5 Replies)
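On Linux the worst case is well bounded: dropping caches only discards *clean* page-cache pages, so no data is lost, but every subsequent read that would have been a cache hit goes to disk instead, so I/O latency and disk load spike until the cache warms back up. A minimal sketch (assumes Linux 2.6.16+ with the drop_caches sysctl; root is required for the write):

```shell
# Flush dirty pages first so nothing unwritten is discarded
sync
# 3 = drop page cache plus dentries and inodes; root required
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "not root: would run: echo 3 > /proc/sys/vm/drop_caches"
fi
```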
Hi,
I am new to AIX. Can someone please tell me how to find the swap space, total physical memory, and system cache?
We are using AIX 5.3.
Thanks! (3 Replies)
Discussion started by: Phaneendra G
3 Replies
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)				General Commands Manual				bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
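The summary lines in the first example are simple arithmetic over the two reported inputs, 40 matching prefix bits and 1,612,581 objects. A quick reproduction of that math (a sketch, not bup's actual code):

```shell
# Reproduce "bits per doubling" and "bits remaining" from the example above
awk 'BEGIN {
  objects = 1612581; matched = 40; sha1 = 160
  doublings = log(objects) / log(2)   # ~20.62 doublings to reach this many objects
  per = matched / doublings           # prefix bits consumed per doubling
  printf "%.2f bits per doubling\n", per
  printf "%d bits (%.2f doublings) remaining\n", sha1 - matched, (sha1 - matched) / per
}'
# → 1.94 bits per doubling
# → 120 bits (61.86 doublings) remaining
```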
SEE ALSO
bup-midx(1), bup-save(1)
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-bup-margin(1)