Full Discussion: Slow Copy(CP) performance
Filesystems, Disks and Memory: Post 302359552 by zxmaus, Wednesday 7th of October 2009, 12:56:18 AM
Since you don't tell us anything about your OS, your disk layout, or anything else, we obviously have to guess, but in any case a copy from A to B that is slow is an I/O issue rather than a CPU problem.
My best guess is that both filesystems are on the same disk and maybe even have different block sizes. Since your filesystem was almost full, fragmentation is very likely high: the OS had to put additional data wherever space was left, so the data was spread across the remaining disk space instead of being nicely lined up, as it would have been with lots of free space in the volume group. And I assume you haven't run defragfs after cleaning up your disk space.
When you now copy data from A to B and both locations are on the same disk, your system will take a lot more time to 1. find the data in the 'correct' order in filesystem A and read it, because it is spread across the physical volume, and 2. put the data back to disk in filesystem 'B' in the correct and suitable order, since the system again has to find free blocks big enough for your data chunks, and these blocks are likely spread across the entire disk as well.
Try defragmenting your disk space a few times; maybe that improves performance. If not, back up your data, drop the filesystems, defragment, recreate them, and restore the content from the backups.
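A quick way to confirm that a slow copy is waiting on the disk rather than on the CPU is to time it and compare wall-clock time against user+sys time. This is only a generic sketch using a scratch file, not the poster's actual filesystems:

```shell
#!/bin/bash
# Rough IO-vs-CPU check for a slow copy. If 'real' is much larger than
# 'user' + 'sys', the copy is spending most of its time waiting on disk.
src=$(mktemp)                               # example scratch file
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null
time cp "$src" "$src.copy"                  # compare real vs user+sys
cmp -s "$src" "$src.copy" && echo "copy verified"
rm -f "$src" "$src.copy"
```

On a badly fragmented, same-disk copy the 'real' figure dominates; after a defragfs pass (or a recreate-and-restore) the gap should shrink.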

Kind regards
zxmaus
 

xfs_fsr(8)                    System Manager's Manual                    xfs_fsr(8)

NAME
       xfs_fsr - filesystem reorganizer for XFS

SYNOPSIS
       xfs_fsr [-v] [-t seconds] [-f leftoff] [-m mtab]
       xfs_fsr [-v] [xfsdev | file] ...

DESCRIPTION
       xfs_fsr is applicable only to XFS filesystems. xfs_fsr improves the
       organization of mounted filesystems. The reorganization algorithm
       operates on one file at a time, compacting or otherwise improving the
       layout of the file extents (contiguous blocks of file data).

       The following options are accepted by xfs_fsr. The -m, -t, and -f
       options have no meaning if any filesystems or files are specified on
       the command line.

       -m mtab
              Use this file for the list of filesystems to reorganize. The
              default is to use /etc/mtab.

       -t seconds
              How long to reorganize. The default is 7200 (2 hours).

       -f leftoff
              Use this file instead of /var/tmp/.fsrlast to read the state of
              where to start and as the file to store the state of where
              reorganization left off.

       -v     Verbose. Print cryptic information about each file being
              reorganized.

       When invoked with no arguments xfs_fsr reorganizes all regular files
       in all mounted filesystems. xfs_fsr makes many cycles over /etc/mtab,
       each time making a single pass over each XFS filesystem. Each pass
       goes through and selects files that have the largest number of
       extents. It attempts to defragment the top 10% of these files on each
       pass. It runs for up to two hours, after which it records the
       filesystem where it left off, so it can start there the next time.
       This information is stored in the file /var/tmp/.fsrlast_xfs. If the
       information found there is somehow inconsistent or out of date, it is
       ignored and reorganization starts at the beginning of the first
       filesystem found in /etc/mtab.

       xfs_fsr can be called with one or more arguments naming filesystems
       (block device name) and files to reorganize. In this mode xfs_fsr
       does not read or write /var/tmp/.fsrlast_xfs, nor does it run for a
       fixed time interval. It makes one pass through each specified regular
       file and all regular files in each specified filesystem.

       A command line name referring to a symbolic link (except to a
       filesystem device), FIFO, or UNIX domain socket generates a warning
       message, but is otherwise ignored. While traversing the filesystem,
       these types of files are silently skipped.

FILES
       /etc/mtab
              contains the default list of filesystems to reorganize.
       /var/tmp/.fsrlast_xfs
              records the state where reorganization left off.

SEE ALSO
       xfs_fsr(8), mkfs.xfs(8), xfs_ncheck(8), xfs(5).

NOTES
       xfs_fsr improves the layout of extents for each file by copying the
       entire file to a temporary location and then interchanging the data
       extents of the target and temporary files in an atomic manner. This
       method requires that enough free disk space be available to copy any
       given file and that the space be less fragmented than the original
       file. It also requires the owner of the file to have enough remaining
       filespace quota to do the copy on systems running quotas. xfs_fsr
       generates a warning message if space is not sufficient to improve the
       target file.

       A temporary file used in improving a file given on the command line
       is created in the same parent directory as the target file and is
       prefixed by the string '.fsr'. The temporary files used in improving
       an entire XFS device are stored in a directory at the root of the
       target device and use the same naming scheme. The temporary files are
       unlinked upon creation, so the data will not be readable by any other
       process.

       xfs_fsr does not operate on files that are currently mapped in
       memory. A 'file busy' error can be seen for these files if the
       verbose flag (-v) is set.

       Files marked as no-defrag will be skipped. The xfs_io(8) chattr
       command with the f attribute can be used to set or clear this flag.
       Files and directories created in a directory with the no-defrag flag
       will inherit the attribute.

       An entry in /etc/mtab or the file specified using the -m option must
       have the rw option specified for read and write access. If this
       option is not present, then xfs_fsr skips the filesystem described by
       that line. See the fstab(5) reference page for more details.

       In general we do not foresee the need to run xfs_fsr on system
       partitions such as /, /boot and /usr, as in general these will not
       suffer from fragmentation. There are also issues with defragmenting
       files lilo(8) uses to boot your system. It is recommended that these
       files be flagged as no-defrag with the xfs_io(8) chattr command.
       Should these files be moved by xfs_fsr, then you must rerun lilo
       before you reboot, or you may have an unbootable system.
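The NOTES above mention that xfs_fsr's temporary files are "unlinked upon creation" so no other process can read their data. That idiom, open a file, delete its name, and keep using the still-open descriptor, can be sketched in plain shell (the mktemp path is just an example):

```shell
#!/bin/bash
# Sketch of the unlink-on-creation idiom: once the name is removed,
# no other process can open the file, but our descriptor still works.
tmp=$(mktemp)            # example temp file; any writable path works
exec 3<>"$tmp"           # fd 3 holds the file open for read/write
rm -f "$tmp"             # unlink: the name is gone from the directory
echo "scratch data" >&3  # the open descriptor remains fully usable
[ ! -e "$tmp" ] && echo "name gone, data still writable"
exec 3>&-                # closing the fd frees the disk blocks
```

The kernel only reclaims the blocks when the last open descriptor is closed, which is also why the temporary space stays accounted against the filesystem while the tool runs.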
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.