Full Discussion: Slow Copy(CP) performance
Filesystems, Disks and Memory: Post 302360369 by jim mcnamara, Thursday 8th of October 2009, 05:29:58 PM
Rule of thumb:
Seriously consider never letting a given filesystem get above 80-85% full.

Filesystems under heavy I/O load suffer from various kinds of latency problems when free space becomes tight; file allocation times increase as well.
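For a quick check, a small script along these lines flags filesystems that have crossed that limit. This is only a sketch: the 85% threshold is the rule of thumb above, and the df -P / awk column parsing may need adapting for your platform.

#!/bin/sh
# Report mounted filesystems above a given percent-used threshold.
# df -P prints the POSIX columns: Filesystem, 1024-blocks, Used, Available, Capacity, Mounted on.
THRESHOLD=85

df -P | awk -v limit="$THRESHOLD" 'NR > 1 {
    used = $5
    sub(/%/, "", used)                  # strip the trailing % from the Capacity column
    if (used + 0 > limit)
        printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, used, $6
}'

Running something like this from cron, and acting on it before a filesystem climbs into the high 90s, avoids the allocation-time penalty described above.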

The other caveat:
Assuming loads of available free inodes, huge directory files (the directory file itself, not what is in the directory) are the result of adding lots of files to a single directory. As the directory file itself grows, system performance against it - ls, find, stat, etc. - becomes very poor.
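You can see how big the directory file itself has become with ls -ld on the directory; the size column is the size of the directory file, not of anything in it. The path and output below are purely illustrative:

% ls -ld /data/incoming
drwxr-xr-x  2 appuser appgrp  52428800 Oct  8  2009 /data/incoming

A directory file in the tens of megabytes (roughly 50 MB in this made-up example) is a strong hint that far too many entries have been packed into one directory.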

This is because any operation that does a readdir, which is sequential, is painfully slow when it has to read through 2 million entries to find one filename.
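A cheap way to gauge the scale of the problem, without paying for a stat of every entry, is to count the entries with sorting disabled (the path is illustrative; -f, which skips sorting, is in POSIX ls):

% ls -f /data/incoming | wc -l

Even this minimal readdir pass grows linearly with the entry count, and anything that also sorts or stats each entry (plain ls, ls -l, find with tests) only gets worse.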

When files are deleted from the bloated directory, it does not shrink. You have to park the remaining files somewhere, delete the directory, recreate the directory, then move the files back into it. And you have a new, smaller size directory file. Having broader directory trees solves this problem in the first place.
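A sketch of that rebuild, assuming the bloated directory is /data/incoming, that it holds only regular files, and that nothing writes to it while you work (all names here are illustrative):

# 1. Park the remaining files somewhere on the same filesystem.
mkdir /data/parking
find /data/incoming -type f -exec mv {} /data/parking/ \;   # one mv per file avoids the ARG_MAX limit of "mv *"

# 2. Delete and recreate the directory; only the now-empty, bloated directory file goes away.
rmdir /data/incoming
mkdir /data/incoming

# 3. Move the files back into the new, compact directory.
find /data/parking -type f -exec mv {} /data/incoming/ \;
rmdir /data/parking

Keeping the parking area on the same filesystem makes each mv a cheap rename rather than a copy. Going forward, spreading files across a broader tree (for example, one subdirectory per date or per name prefix) keeps any single directory file small.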
 

9 More Discussions You Might Find Interesting

1. Post Here to Contact Site Administrators and Moderators

Help! Slow Performance

Is the performance now very, very slow (pages take a very long time to load)? Or is it just me? Neo (6 Replies)
Discussion started by: Neo
6 Replies

2. Shell Programming and Scripting

egrep is very slow : How to improve performance

We have an egrep search in a while loop: egrep -w "$key" ${PICKUP_DIR}/new_update >> ${PICKUP_DIR}/update_record_new. ${PICKUP_DIR}/new_update is a 210 MB file. In each iteration, the egrep on average takes around 50-60 seconds to search. There's nothing significant in the loop other... (7 Replies)
Discussion started by: hidnana
7 Replies

3. UNIX for Dummies Questions & Answers

Slow copy/performance... between volumes

Hi guys, we are seeing weird issues on my Linux SUSE 10; it has Lotus 8.5 and one filesystem for the OS and another for the Lotus database. The issue is that when the Lotus service starts, wait in top is very high, about 25%, and in general CPU usage is very high. We found that when this happens, if we... (0 Replies)
Discussion started by: kopper
0 Replies

4. Solaris

Hard disk write performance very slow

Dear All, I have a hard disk in Solaris on which the write performance is too slow. The CPU and RAM are absolutely fine. What might be the reason? Kindly explain. Rj (9 Replies)
Discussion started by: jegaraman
9 Replies

5. Shell Programming and Scripting

Slow performance filtering file

Please, I need help tuning my script. It works, but it's too slow. The code reads an activity log file with 50,000-100,000 lines and filters error messages from it. The data in the actlog file look similar to this: 02/08/2011 00:25:01,ANR2034E QUERY MOUNT: No match found using this criteria.... (5 Replies)
Discussion started by: Miila
5 Replies

6. Infrastructure Monitoring

99% performance wa, slow server.

There is a big problem with the server (VPS based on OpenVZ, CentOS 5, 3GB RAM). The problem is the following: for the first 15-20 minutes after starting, the server operates normally and the load average is less than or about 1.0, but then %wa begins to increase sharply and hovers around 95-99%.... (2 Replies)
Discussion started by: draiphod
2 Replies

7. Red Hat

GFS file system performance is very slow

Hi All, I am running Red Hat Linux 5.3 (Tikanga) with a GFS file system, and it is very, very slow even for executing the ls -ls command. Please see below; it takes 2 minutes 12 seconds. Please help me to fix the issue. $ sudo time ls -la BadFiles |wc -l 0.01user 0.26system... (3 Replies)
Discussion started by: susindram
3 Replies

8. Solaris

Solaris 11.1 Slow Network Performance

I have identical M5000 machines that need to transfer very large amounts of data between them. These are fully loaded machines, and I've already checked IO, memory usage, etc... I get poor network performance even when the machines are idle or copying via loopback. The 10 GB NICs are... (7 Replies)
Discussion started by: christr
7 Replies

9. Filesystems, Disks and Memory

Slow copy (cp) performance when overwriting files

I have a lot of binary files I need to copy to a folder. The folder is already filled with files of the same name. Copying on top of the old files takes MUCH longer than if I were to delete the old files then copy the new files to the now-empty folder. This result is specific to one system -... (3 Replies)
Discussion started by: ces55
3 Replies
mktrashcan(1)						      General Commands Manual						     mktrashcan(1)

NAME
mktrashcan, rmtrashcan, shtrashcan - Attaches, detaches, or shows a trashcan directory

SYNOPSIS
/usr/sbin/mktrashcan trashcan directory...
/usr/sbin/rmtrashcan directory...
/usr/sbin/shtrashcan directory...

OPERANDS
trashcan
       Specifies the directory that contains files that were deleted from attached directories. Whenever you delete a file in the specified directory, the file system automatically moves the file to the trashcan directory.
directory
       Specifies the directory that you attach to a trashcan directory.

DESCRIPTION
The trashcan utilities (mktrashcan and rmtrashcan) enable you to attach or detach an existing directory, which you specify as a trashcan directory, to any number of directories within the same fileset. A trashcan directory stores the files that are deleted with the unlink system call. For instance, you can use the mktrashcan utility to attach a trashcan directory called /usr/trashcan to one or more directories; thereafter, when you delete a file from one of the attached directories, the file system moves the file to the /usr/trashcan directory. Note that when more than one directory shares attachment to a trashcan directory, files with the same file name can overwrite each other in the trashcan directory. If you mistakenly delete a file, use the mv command to return the file from the /usr/trashcan directory to its original directory.

When you enter shtrashcan at the system prompt, the system shows the trashcan directory, if one exists, for the directory you specified.

It is important that trashcan directories have correct access permissions. If the permissions are too restrictive, then it may be impossible to remove files from the directories that are attached to the trashcan directory. In general, all users and groups that expect to use the trashcan directory need write permission to the directory. If unexpected "permission denied" errors occur when deleting files that are in a directory attached to a trashcan directory, use the chmod command to change the permissions on the trashcan directory.

RESTRICTIONS
The directory and trashcan directories must be in the same fileset; however, you can attach the trashcan directory to any directory within the fileset.

EXAMPLES
The following example creates and attaches a trashcan directory, /usr/trashcan, to two directories, /usr/ray and /usr/projects/sql/test, which are in the same fileset. The chmod command adds write permission for all users and groups on the new trashcan directory.

% mkdir /usr/trashcan
% chmod a+w /usr/trashcan
% mktrashcan /usr/trashcan /usr/ray /usr/projects/sql/test

To attach the trashcan directory, /usr/trashcan, to all subdirectories in the /usr directory, enter:

% mktrashcan /usr/trashcan /usr/*

New subdirectories that you add beneath the /usr directory are not attached to the trashcan directory until you attach them. Also, the mktrashcan utility distinguishes between directories and files, attaching only directories to the trashcan directory. Note that an attached directory produces an EDUPLICATE_DIRS (-1165) error when /usr/trashcan is itself in the directory path you attach to (as in the previous example). You can ignore this error message.

SEE ALSO
advfs(4), mkfset(8), showfsets(8)