Faster command for file copy than cp?


 
# 1  
Old 01-06-2012

We have 30 GB files on our filesystem which we need to copy daily to 25 locations on the same machine (but different filesystems).

cp takes 20 minutes to do the copy, and we have 5 different threads doing the copies.

So in all it takes around 2 hours, and we need to reduce that.

Is there any other command which will copy the files faster?

Or any other way in which we can copy the files?
# 2  
Old 01-06-2012
Is there a valid reason why you cannot use 25 symbolic links instead of 25 copies of the same file? Think of the disk space you can save in a week.

The only reason I can think of not to use ln: if each file gets overwritten or edited by different users?

Otherwise you will have to create 25 processes to speed up the copying, one for each destination. That is a LOT of possibly needless I/O.
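For example, a one-time setup like this would replace the daily copies entirely (the paths here are made-up examples):

Code:
# point each of the 25 locations at the single master copy
for dest in /fs01/app /fs02/app /fs03/app    # ...and so on for all 25
do
    ln -s /master/bigfile "$dest/bigfile"
done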
# 3  
Old 01-06-2012
Yeah, all the files get overwritten during the day, and then we have to come back to the previous state.

Starting 25 processes didn't help, as it overloaded the I/O.

We found 5 processes to be the optimum.
# 4  
Old 01-06-2012
Here are the results of the time consumed to copy a 1 GB file using different methods. It looks like dd with a block size of 1024k beats the others, not only in time consumed but also in CPU usage. Things may turn out differently in your environment depending on file size, filesystem type, amount of RAM, and the number and speed of processors. Still, this may help you choose your tool wisely.

Code:
unixuser@solaris:~$ time dd if=file of=file2 bs=1024k
1024+0 records in
1024+0 records out

real    0m16.75s
user    0m0.00s
sys     0m1.20s
unixuser@solaris:~$ time cp file file3

real    0m24.00s
user    0m0.00s
sys     0m3.06s
unixuser@solaris:~$ rm -f file2; time dd if=file of=file2 bs=2048k
512+0 records in
512+0 records out

real    0m21.67s
user    0m0.00s
sys     0m2.11s
unixuser@solaris:~$ rm -f file2; time dd if=file of=file2 bs=512
2097152+0 records in
2097152+0 records out

real    0m42.79s
user    0m0.89s
sys     0m18.46s
unixuser@solaris:~$ time tar cf - file | ( cd tmp/; tar xf - )

real    0m20.04s
user    0m0.42s
sys     0m5.24s

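If you must keep real copies, a sketch like this would apply the bs=1024k result to the original problem, running five dd processes at a time as the OP found optimal (all paths are made-up examples; adjust the list to your 25 mount points):

Code:
#!/bin/sh
# copy one source file to many destinations, at most 5 dd copies at once
SRC=/master/bigfile
n=0
for dest in /fs01 /fs02 /fs03 /fs04 /fs05    # ...list all 25 destinations
do
    dd if="$SRC" of="$dest/bigfile" bs=1024k &
    n=$((n + 1))
    if [ "$n" -eq 5 ]
    then
        wait    # let this batch of 5 finish before starting the next
        n=0
    fi
done
wait    # wait for any remaining copies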
# 5  
Old 01-07-2012
I'm as surprised as Jim: why 25 copies?
To come back to the previous state, have you ever considered LVM mirroring plus filesystem snapshots?
(P.S. cpio could also be a cp alternative; its performance is far better, though not as good as dd...)
Are your filesystems tuned?
On 30 GB that can have a serious impact (a very big file cache that, once full, needs to flush...).
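A rough sketch of the mirror-split idea on HP-UX, in case it helps (the volume and mount point names are made up, and it assumes the logical volume is already mirrored with MirrorDisk/UX):

Code:
lvsplit -s snap /dev/vg01/lvol1          # split one mirror copy off: /dev/vg01/lvol1snap
fsck -F vxfs /dev/vg01/rlvol1snap        # check the split copy before mounting
mount /dev/vg01/lvol1snap /prevstate     # the previous state, available at once
# ...use /prevstate, then resynchronise for the next point-in-time copy:
umount /prevstate
lvmerge /dev/vg01/lvol1snap /dev/vg01/lvol1

And the cpio pass-through form, if you want to benchmark it (again, example paths):

Code:
cd /source/dir && find . -depth -print | cpio -pdm /dest/dir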
# 7  
Old 02-06-2012
Quote:
We have 30 GB files on our filesystem which we need to copy daily to 25 locations on the same machine (but different filesystems).
This is ambiguous.

What version of HP-UX do you have?
How many files are you copying?
How often are you copying these files?
How big is the largest file? Is every file always smaller than 2 Gigabytes?
What is the filesystem type? Is NFS involved or are they all local discs?
Are you actually copying all of the files 25 times, or just copying each file once to one of 25 different directories?
What does "location" mean in unix terms?
Are these files from a recognised package (e.g. Oracle Archive logs)?
