Operating Systems > Solaris > Rsync quite slow (using very little CPU): how to improve its speed?
Post 303018316 by priyadarshan on Sunday 3 June 2018 at 01:47:32 PM
Thank you very much for your thoughts on this issue. The rsync is between two local disks: the source is an 8-disk vdev, the target an SSD. No network is involved.


I am travelling and will only be able to connect to the server on Tuesday. I will then test with a single file, with source and target on the same local disk, to eliminate as many factors as possible.
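Since no network is involved, a quick way to see whether rsync itself (rather than the pool or the SSD) is the bottleneck is to time it against a plain cp of the same single file. A minimal sketch, with hypothetical paths standing in for the real pool and SSD mount points:

# Hypothetical paths; substitute the real pool and SSD mount points.
TESTFILE=/tank/data/bigfile.bin
DEST=/ssd/rsync_test
mkdir -p "$DEST"

# Plain copy as a baseline for the disks themselves
time cp "$TESTFILE" "$DEST/cp_copy"

# Same file via rsync; --stats shows how much work rsync actually did
time rsync -a --stats "$TESTFILE" "$DEST/rsync_copy"

For local-to-local copies rsync already defaults to --whole-file, so a large gap between the two timings points at rsync's own overhead or extra options (for example -c/--checksum, which reads both sides in full) rather than at the disks.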
 

9 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

network speed is slow

Hello, everyone: I have run into a problem these days, please help me, thanks in advance. My environment: machines: ES40 A, ES40 B; OS: Tru64 UNIX 4.0F; note: src.tar is 8 MB, network card speed 100 Mbit. My problem: ... (3 Replies)
Discussion started by: q30
3 Replies
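For a raw throughput check that takes the disks out of the picture, one hedged sketch (assuming rsh access between the two ES40s; hostB is a placeholder):

# Push 100 MB across the wire with no disk involved on either end.
time dd if=/dev/zero bs=64k count=1600 | rsh hostB "cat > /dev/null"

Anything far below the roughly 10 MB/s a healthy 100 Mbit link sustains usually points at a speed or duplex mismatch on the NIC or the switch port.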

2. AIX

cpu speed

How do I determine the speed of a CPU on AIX 4.3.3 or 5.1? (5 Replies)
Discussion started by: csaunders
5 Replies
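A few commands that report the clock speed on AIX, hedged because availability differs between 4.3.3 and 5.1:

prtconf | grep -i "processor clock speed"   # prtconf reports the clock speed on 5.x
pmcycles -m                                 # from the bos.pmapi fileset; prints each CPU's speed in MHz
lsattr -El proc0                            # newer levels expose a 'frequency' attribute here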

3. HP-UX

How to find CPU Speed of HP UX

Need to find the CPU speed of HP-UX from a non-root login. echo "itick_per_usec/D" | adb /stand/vmunix /dev/mem | tail -1 gives the following for non-root users: ERROR: cannot open `/dev/mem', errno = 13, Permission denied (2 Replies)
Discussion started by: surajb
2 Replies
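On Itanium-based HP-UX (11.23 and later), machinfo reports the CPU frequency and needs no root access; a hedged sketch:

/usr/contrib/bin/machinfo | grep -i -e mhz -e ghz

On PA-RISC the adb approach quoted above remains the usual answer, and it does require read access to /dev/mem.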

4. Shell Programming and Scripting

egrep is very slow: how to improve performance

We have an egrep search in a while loop: egrep -w "$key" ${PICKUP_DIR}/new_update >> ${PICKUP_DIR}/update_record_new. ${PICKUP_DIR}/new_update is a 210 MB file. In each iteration the egrep takes around 50-60 seconds on average. There's nothing significant in the loop other... (7 Replies)
Discussion started by: hidnana
7 Replies
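The usual fix is to stop re-reading the 210 MB file once per key and make a single pass with all keys loaded at once. A sketch, assuming the keys have been collected one per line into a file (keys.txt is a hypothetical name):

# One fixed-string, whole-word pass over the big file for all keys at once.
grep -Fwf keys.txt "${PICKUP_DIR}/new_update" >> "${PICKUP_DIR}/update_record_new"

Drop -F if the keys are regular expressions rather than literal strings.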

5. UNIX for Advanced & Expert Users

speed test: +20,000 file existence checks too slow

Need to make a very fast file-existence checker, passing in 20-50K files. In the code below, ${file} is a file listing 20,000+ files and test_speed is the script. I am commenting out the results of <time test_speed try>. The normal "test -f" is much too slow when a system... (2 Replies)
Discussion started by: nullwhat
2 Replies
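If the original loop spawns an external command per file, most of the cost is the fork, not the test itself; keeping the whole check inside one shell process is already much faster. A sketch, assuming ${file} holds one path per line:

# One shell process for the whole list; [ -f ] is a builtin, so there is no fork per file.
while IFS= read -r f; do
    [ -f "$f" ] && printf '%s\n' "$f"
done < "${file}" > existing_files.txt

A one-liner such as perl -lne 'print if -f' "${file}" does the same work in a single external process.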

6. Shell Programming and Scripting

Help to improve speed of text processing script

Hi all, you should know that I'm relatively new to shell scripting, so my solution is probably a little awkward. Here is the script: #!/bin/bash live_dir=/var/lib/pokerhands/live for limit in `find $live_dir/ -type d | sed -e s#$live_dir/##`; do cat $live_dir/$limit/*... (19 Replies)
Discussion started by: lorus
19 Replies
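Without seeing the rest of the pipeline, one common speed-up is to drop the backtick word-splitting and read the directory list in a single loop. A hedged sketch assuming GNU find; process_limit is a placeholder for whatever the original script does with each limit's hands:

#!/bin/bash
# Same traversal with safer quoting; process_limit stands in for the real per-limit work.
live_dir=/var/lib/pokerhands/live

find "$live_dir" -mindepth 1 -type d | while IFS= read -r dir; do
    limit=${dir#"$live_dir"/}
    cat "$dir"/* | process_limit "$limit"
done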

7. Red Hat

RHEL 5.6 Slow rsync to NFS array

Hi all, I have RHEL 5.6 with a 70 GB local directory of web content: images, PHP scripts, etc. I need to copy all this content to an NFS array that's mounted on the RHEL server. I did a baseline cp of the content one week ago; since then the local directory has grown by 8 GB.... (2 Replies)
Discussion started by: general_lee
2 Replies
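After the baseline cp, an incremental rsync only transfers what has changed since then. A minimal sketch with hypothetical paths:

# Trailing slashes copy the contents of the source directory into the target.
rsync -av /var/www/content/ /mnt/nfs/content/

Add --delete only if files removed locally should also disappear from the NFS copy. Because both ends look like local paths to rsync, it defaults to --whole-file, so changed files are copied outright rather than delta-encoded against the NFS copy.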

8. Shell Programming and Scripting

Slow Perl script: how to speed up?

I had written a Perl script to compare two files, new and master, and output the words of the first file that are not in the master file. STRUCTURE OF THE TWO FILES: the first file is a series of names (ramesh, sushil, jonga, sudesh, lugdi), whereas the second file (could be... (4 Replies)
Discussion started by: gimley
4 Replies
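The same result is available without Perl by treating the master file as a set of fixed, whole-line patterns. A sketch with hypothetical file names new.txt and master.txt:

# Lines of new.txt that do not match any whole line of master.txt
grep -Fvxf master.txt new.txt

# Equivalent when both files can be sorted (bash process substitution):
comm -23 <(sort new.txt) <(sort master.txt)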

9. Shell Programming and Scripting

Improve script - slow process with big files

Gents, please can you help me make this script faster? It works perfectly, but for big files it takes a long time to finish the job. I see the problem is in the while step; that is where the script spends most of its time. If you can find a better way to do it, that would be great. ... (13 Replies)
Discussion started by: jiam912
13 Replies
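Since the excerpt does not show the loop body, only a generic sketch is possible: moving per-line work from a shell while loop into a single awk pass removes the external commands spawned for every line, which is usually where the time goes on big files.

# One awk process reads the whole file; the per-line logic goes inside the braces.
awk '{
    # per-line work here instead of a shell loop body
    print
}' bigfile > result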
BACKUP(8)                     System Manager's Manual                    BACKUP(8)

NAME
     backup - backup files

SYNOPSIS
     backup [-djmnorstvz] dir1 dir2

OPTIONS
     -d   At top level, only directories are backed up
     -j   Do not copy junk: *.Z, *.bak, a.out, core, etc.
     -m   If device full, prompt for new diskette
     -n   Do not backup top-level directories
     -o   Do not copy *.o files
     -r   Restore files
     -s   Do not copy *.s files
     -t   Preserve creation times
     -v   Verbose; list files being backed up
     -z   Compress the files on the backup medium

EXAMPLES
     backup -mz . /f0       # Backup current directory compressed
     backup /bin /usr/bin   # Backup bin from RAM disk to hard disk

DESCRIPTION
     Backup (recursively) backs up the contents of a given directory and its
     subdirectories to another part of the file system. It has two typical
     uses. First, some portion of the file system can be backed up onto one or
     more diskettes. When a diskette fills up, the user is prompted for a new
     one. The backups are in the form of mountable file systems. Second, a
     directory on RAM disk can be backed up onto hard disk.

     If the target directory is empty, the entire source directory is copied
     there, optionally compressed to save space. If the target directory is an
     old backup, only those files in the target directory that are older than
     similar names in the source directory are replaced. Backup uses times for
     this purpose, like make. Calling backup as restore is equivalent to using
     the -r option; this replaces newer files in the target directory with
     older files from the source directory, uncompressing them if necessary.
     The target directory contents are thus returned to some previous state.

SEE ALSO
     tar(1).

BACKUP(8)
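From the -r description above, the restore direction would be invoked along these lines (a hedged example mirroring the EXAMPLES section, not taken from the manual itself):

backup -r /f0 .    # Restore the backup on /f0 into the current directory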