Operating Systems / Linux / Red Hat: GFS file system performance is very slow
Post 302616719 by susindram, Sunday 1st of April 2012, 10:14:11 AM
Hi Friends,

Do you have any idea about the above issue?

Thanks,
Susi.S
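
Not much to go on yet, but for a slow GFS mount a couple of quick checks usually narrow things down. A minimal sketch of the usual first steps, assuming a GFS(1) file system mounted at /gfsdata (the mount point is a placeholder, not from this thread):

  # How the file system is mounted; noatime/nodiratime cut down cross-node lock traffic
  $ mount | grep gfs

  # Current tunables for a GFS(1) mount (use gfs2_tool instead for GFS2)
  $ gfs_tool gettune /gfsdata

If the disks themselves look idle while applications stall, the cluster lock manager and the access pattern (many nodes hammering the same directories) are the usual suspects rather than raw disk speed.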
 

8 More Discussions You Might Find Interesting

1. Post Here to Contact Site Administrators and Moderators

Help! Slow Performance

Is the performance now very, very slow (pages take a very long time to load)? Or is it just me? Neo (6 Replies)
Discussion started by: Neo

2. Shell Programming and Scripting

egrep is very slow : How to improve performance

We have an egrep search in a while loop: egrep -w "$key" ${PICKUP_DIR}/new_update >> ${PICKUP_DIR}/update_record_new. ${PICKUP_DIR}/new_update is a 210 MB file. In each iteration, the egrep takes around 50-60 seconds on average to search. There's nothing significant in the loop other... (7 Replies)
Discussion started by: hidnana
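
The usual fix for this pattern is to stop rescanning the 210 MB file once per key and do a single pass with all the keys at once. A rough sketch, assuming the keys can first be collected into a file, one per line (keys.txt is a made-up name):

  # -F = fixed strings, -w = whole words, -f = read the patterns from a file
  $ grep -Fw -f "${PICKUP_DIR}/keys.txt" "${PICKUP_DIR}/new_update" >> "${PICKUP_DIR}/update_record_new"

One scan of the file instead of one scan per key usually turns hours of looping into a single short pass.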

3. Filesystems, Disks and Memory

Slow Copy(CP) performance

Hi all, we have got issues with copying a 2.6 GB file from one folder to another. Well, this is not the first issue we are having on the box currently; I will try to explain everything we have done over the past 2 days. We got a message 2 days back saying that our production is 98%... (3 Replies)
Discussion started by: b_sri
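
When a single 2.6 GB copy crawls and the file system is nearly full, it helps to separate "the disk is slow" from "cp is slow". A rough sketch (paths are placeholders):

  # How full is the target file system?
  $ df -h /target/folder

  # Raw sequential write throughput to the same file system, bypassing cp
  $ dd if=/dev/zero of=/target/folder/ddtest bs=1M count=1024 conv=fdatasync
  $ rm /target/folder/ddtest

If dd is just as slow, the problem is below cp (disk, controller, or a nearly full file system), not the copy command itself.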

4. Filesystems, Disks and Memory

Is GFS 6.2 the same as the GFS 2 file system?

Hi, I have a bit of knowledge of GFS. As per my understanding, it is the file system used in cluster environments. I have a question: is GFS 6.2 the same as the GFS 2 file system? Please, someone clear up my confusion. Thanks in advance. (1 Reply)
Discussion started by: praveen_b744
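
One quick way to see which on-disk format a volume actually uses is to ask blkid, which reports TYPE="gfs" for the original GFS and TYPE="gfs2" for GFS2. A minimal sketch (the device path is a placeholder):

  $ blkid /dev/mapper/vg_cluster-lv_data

  # Which userland tools are installed (package names vary a little between RHEL releases)
  $ rpm -q gfs-utils gfs2-utils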

5. UNIX for Dummies Questions & Answers

Slow copy/performance... between volumes

Hi guys, we are seeing weird issues on my Linux SUSE 10 box; it has Lotus 8.5 and one filesystem for the OS and another for the Lotus database. The issue is that when the Lotus service starts, the wait (wa) figure in top is very high, about 25%, and in general CPU usage is very high. We found that when this happens, if we... (0 Replies)
Discussion started by: kopper
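
High wa in top means processes are blocked waiting on disk, and iostat can show which of the two file systems is actually the bottleneck. A sketch, assuming the sysstat package is installed:

  # Per-device view every 5 seconds; high %util and large await on the Lotus
  # database volume would point at that file system rather than the OS disk
  $ iostat -x 5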

6. Shell Programming and Scripting

Slow performance filtering file

Please, I need help tuning my script. It works, but it's too slow. The code reads an activity log file with 50,000-100,000 lines and filters error messages from it. The data in the actlog file look similar to this: 02/08/2011 00:25:01,ANR2034E QUERY MOUNT: No match found using this criteria.... (5 Replies)
Discussion started by: Miila
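
In that log format the severity is part of the message code (codes ending in E are the errors), so a single awk pass can replace a line-by-line shell loop. A sketch, assuming the layout shown above and a file called actlog (the output name is made up):

  # Field 2 (after the comma) starts with the message code, e.g. ANR2034E
  $ awk -F, '$2 ~ /^ANR[0-9]+E /' actlog > errors.txt

One awk pass over 100,000 lines should finish in well under a second on any modern box.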

7. Infrastructure Monitoring

99% wa, slow server performance.

There is a big problem with the server (a VPS based on OpenVZ, CentOS 5, 3 GB RAM). The problem is the following: for the first 15-20 minutes after starting, the server operates normally and the load average is less than or about 1.0, but then %wa begins to increase sharply and hovers around 95-99%.... (2 Replies)
Discussion started by: draiphod
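
Once %wa starts climbing, the next step is usually to find which process is doing the I/O. A sketch, assuming the sysstat package is available inside the container (visibility can be limited on an OpenVZ VPS):

  # Blocked processes (b column) and wa over time
  $ vmstat 5

  # Per-process disk I/O every 5 seconds; the heaviest writer is the likely culprit
  $ pidstat -d 5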

8. Solaris

Solaris 11.1 Slow Network Performance

I have identical M5000 machines that need to transfer very large amounts of data between them. These are fully loaded machines, and I've already checked I/O, memory usage, etc... I get poor network performance even when the machines are idle or copying via loopback. The 10 Gb NICs are... (7 Replies)
Discussion started by: christr
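
Since loopback copies are slow too, the NICs themselves are probably not the first suspect; on Solaris 11 the negotiated link speed and the TCP buffer limits are worth confirming. A rough sketch (property and link names as on a stock Solaris 11 box; adjust to taste):

  # Negotiated speed/duplex per physical link
  $ dladm show-phys

  # Current TCP buffer limits; a small max_buf caps throughput on 10 Gb links
  $ ipadm show-prop -p max_buf,send_buf,recv_buf tcp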
bup-margin(1)						      General Commands Manual						     bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest
       number of prefix bits shared between any two entries. This number, n, identifies the
       longest subset of SHA-1 you could use and still encounter a collision between your object
       ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB),
       and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all
       collisions among that set of objects; each object in that repository could be uniquely
       identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the
       number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin.

       Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use
       many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your
       repository by running bup margin occasionally to see if you're getting dangerously close
       to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and
              report the maximum deviation of the correct answer from the guess. This is
              potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used
              with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we
       would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.
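
The arithmetic in the DESCRIPTION above can be reproduced with bc: by the birthday bound, collisions among N random hashes become likely once the hash is only about 2*log2(N) bits wide, which is roughly why 11 million objects already "use up" 45-47 bits. A quick sketch:

  # log2 of the object count; l() is natural log in bc's math library (about 23.4)
  $ echo 'l(11000000)/l(2)' | bc -l

  # Birthday bound: roughly twice that many bits, i.e. about 47 bits for 11 million objects
  $ echo '2*l(11000000)/l(2)' | bc -l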