Slow Copy (cp) performance


 
# 1  
Old 10-06-2009
Slow Copy (cp) performance

Hi all

We are having trouble copying a 2.6 GB file from one folder to another.
This is not the first issue we have had on this box recently, so I will try to explain everything we have done over the past two days.

Two days ago we got a message saying that our production disk was 98% full. So we started working through the old files: we moved half of them to a backup server and compressed the remaining ones on our server. That effectively brought the disk space usage down to 79%.

This morning, when I tried to run my process, copying a 2.6 GB file from folder A to folder B took ages. The SA checked the CPU utilization and told me it was high. I went ahead and killed some orphaned processes, bringing CPU utilization down to 20%; during high activity it now varies between 20% and 35%.

Even after improving the CPU utilization, copying the file is still painfully slow.

Any guesses on what might have gone wrong?

Let me know if I need to elaborate more.

Thanks
Sri
# 2  
Old 10-07-2009
Since you don't tell us anything about your OS, your disk layout or anything else, we obviously have to guess, but in any case a copy from A to B that is slow is an I/O issue rather than a CPU problem.
My best guess is that both filesystems are on the same disk, possibly even with different block sizes. Since your filesystem was almost full, fragmentation is very likely high: the OS had to put new data wherever space was left, so the data got spread across the remaining disk space instead of being nicely lined up, as it would have been with lots of free space in the volume group. And I assume you haven't run a defragfs after cleaning up your disk space.
When you copy data from A to B and both locations are on the same disk, the system takes a lot more time to 1. find the data in the 'correct' order in filesystem A and read it, because it is spread across the physical volume, and 2. put the data back on disk in filesystem B in a suitable order, since it has to find free blocks big enough for your data chunks, and those free blocks are likely just as spread across the entire disk.
Try to defrag your disk space a few times; maybe that improves performance. If not, back up your data, drop the filesystems, defrag, recreate them and restore the content from the backups.
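On AIX, where defragfs comes from, that looks roughly like the sketch below (the /data mount point is just an example, substitute your own):

Code:
# query the current fragmentation state first (read-only, safe anytime)
defragfs -q /data

# defragment the filesystem in place; running it more than once
# can help, as each pass frees more contiguous space for the next
defragfs /data

# query again to see whether fragmentation actually improved
defragfs -q /data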

Kind regards
zxmaus
# 3  
Old 10-07-2009

zxmaus

Thanks for the response. We are on an HP-UX server.

You are right, we had a fragmentation problem on the box. My SA was saying we had buffer cache fragmentation as well, which kept adding to our problems.

The admin ran an overnight defrag process and increased the kernel parameter bcvmap_size_factor.
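For anyone who runs into the same thing, this is roughly what was run, as far as I can tell (HP-UX 11i with OnlineJFS/VxFS; the mount point and the tunable value below are examples, not our exact ones):

Code:
# report extent and directory fragmentation on a VxFS filesystem
fsadm -F vxfs -D -E /data

# defragment directories and extents (needs the OnlineJFS license)
fsadm -F vxfs -d -e /data

# show the buffer-cache map tunable, then raise it
# (16 is an example value; the change took effect after our reboot)
kctune bcvmap_size_factor
kctune bcvmap_size_factor=16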

We rebooted the box and things look much better now.

We are planning to keep an eye on the system and see when it starts to choke, so that we can schedule a defrag periodically.

Thanks
# 4  
Old 10-08-2009
Rule of thumb:
Seriously consider never letting a given filesystem get above 80-85% full.

Filesystems under heavy I/O load suffer from various kinds of latency issues
when free space becomes tight, and file allocation times increase as well.
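A minimal way to watch for that, assuming a POSIX df (on HP-UX, bdf prints the same columns) and an arbitrary 85% threshold:

Code:
#!/bin/sh
# flag any filesystem above 85% full - suitable for a cron job
df -P | awk 'NR > 1 && $5 + 0 > 85 { print $6 " is at " $5 }'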

The other caveat:
Assuming loads of available free inodes, huge directory files (the directory file itself, not what is in the directory) are the result of adding lots of files to a single directory. As the directory file itself grows, system performance against it - ls, find, stat, etc. - becomes very poor.

This is because any operation that does a readdir, which is sequential, is really slow if it has to read through 2 million entries to find one filename.

When files are deleted from the bloated directory, the directory file does not shrink. You have to park the remaining files somewhere, delete the directory, recreate it, then move the files back into it; that gives you a new, smaller directory file (see the sketch below). Keeping directory trees broader avoids the problem in the first place.
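A sketch of that rebuild, assuming a flat directory named /data/spool (a placeholder) with nothing writing to it while you work:

Code:
# park the bloated directory under a temporary name
mv /data/spool /data/spool.old
# recreate it - the new directory file starts out minimal
mkdir /data/spool
# a shell glob could overflow ARG_MAX with millions of names,
# so move the entries back one at a time with find
find /data/spool.old -type f -exec mv {} /data/spool \;
# remove the old, now-empty directory
rmdir /data/spool.old

Better yet, bucket files into subdirectories as they arrive, for example by the first character of the filename, so that no single directory file ever grows that large.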