Operating Systems > HP-UX > Faster command for file copy than cp? — Post 302588156 by vbe, Saturday 7 January 2012, 04:54:12 AM
I'm as surprised as Jim: why 25 copies?
If the goal is to be able to get back to a previous state, have you ever considered LVM mirroring plus file system snapshots?
(P.S. cpio could also be an alternative to cp; its performance is far better, though not as good as dd's...)
Are your filesystems tuned?
On 30 GB it can have a serious impact (a very large file cache that, once full, needs to flush...).
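For what it's worth, a rough sketch of both alternatives mentioned above (the paths are hypothetical, and the dd block size should be tuned to your hardware):

    # Copy a directory tree with cpio in pass-through mode; often faster than cp
    cd /source/dir && find . -depth -print | cpio -pdm /dest/dir

    # Copy one large file with dd; a big block size cuts per-call overhead
    dd if=/source/bigfile of=/dest/bigfile bs=1024k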

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

command faster in crontab..

Hi all you enlightened unix people, I've been trying to execute a perl script that contains the following line within backticks: `grep -f patternfile.txt otherfile.txt`; it normally takes 2 minutes to execute this command from the bash shell by hand. I noticed that when I run this command... (2 Replies)
Discussion started by: silverlocket
2 Replies
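If the patterns in patternfile.txt are fixed strings rather than regular expressions, forcing literal matching is the usual first fix for a slow grep -f (a hedged sketch; the thread's actual files are not shown):

    # -F treats every pattern as a fixed string; typically much faster than regex matching
    grep -Ff patternfile.txt otherfile.txt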

2. Shell Programming and Scripting

How to make copy work faster

I am trying to copy a folder which contains a list of C executables. It takes 2 minutes to complete, whereas the rest of the script takes only 3 minutes for all its other processing. Is there a way to copy the folder faster so that the performance of the script will improve? (2 Replies)
Discussion started by: prasperl
2 Replies
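A tar pipe is a common alternative to cp -r for whole-tree copies, since it streams the data in larger chunks (a sketch with hypothetical paths):

    # Pack the tree on one side of the pipe and unpack it on the other
    (cd /path/to/src && tar cf - .) | (cd /path/to/dest && tar xf -)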

3. UNIX for Dummies Questions & Answers

Which command will be faster? Why?

i) wc -c /etc/passwd | awk '{print $1}'  ii) ls -al /etc/passwd | awk '{print $5}' (4 Replies)
Discussion started by: karthi_g
4 Replies
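Neither pipeline actually needs awk: redirecting the file into wc makes it print the byte count alone, with no filename to strip (a minimal sketch):

    # With stdin redirected, wc -c prints only the count
    wc -c < /etc/passwd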

4. Shell Programming and Scripting

**HELP** need to split this line faster than cut-command

Hi, A datafile containing lines such as below needs to be split: 500000000000932491683600000000000000000000000000016800000GS0000000000932491683600*HOME I need to get characters 2-5, 11-20, and 35-40, and I can do it via the cut command. cut -c 2-5 file > temp1.txt cut -c 11-20 file >... (9 Replies)
Discussion started by: daytripper1021
9 Replies
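One awk pass with substr() reads the file once instead of three times (a sketch; output handling is left to the reader):

    # substr(s, start, length) is 1-indexed: chars 2-5, 11-20, 35-40
    awk '{ print substr($0,2,4), substr($0,11,10), substr($0,35,6) }' file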

5. Shell Programming and Scripting

faster command than find for sorting?

I'm sorting files from a source directory by size into 4 categories, then copying them into 4 corresponding folders; just wondering if there's a faster/better/more elegant way to do this: find /home/user/sourcefiles -type f -size -400000k -exec /bin/cp -uv {} /home/user/medfiles/ \; find... (0 Replies)
Discussion started by: unclecameron
0 Replies
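Each find invocation rescans the whole tree; a single traversal that dispatches on file size avoids that (a two-bucket sketch with illustrative thresholds and paths; it assumes no newlines in filenames):

    # One scan; route each file to a bucket by its size in bytes
    find /home/user/sourcefiles -type f | while read -r f; do
        sz=$(wc -c < "$f")
        if [ "$sz" -lt 409600000 ]; then
            cp -uv "$f" /home/user/medfiles/
        else
            cp -uv "$f" /home/user/bigfiles/
        fi
    done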

6. Shell Programming and Scripting

Multi thread awk command for faster performance

Hi, I have a script below for extracting XML from a file. for i in *.txt; do echo $i; awk '/<.*/ , /.*<\/.*>/' "$i" | tr -d '\n'; echo -ne '\n'; done. I read about using multithreading to speed up the script. I do not know much about it but read it on this forum. Is it a... (21 Replies)
Discussion started by: chetan.c
21 Replies
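awk itself is single-threaded, but since each file is independent the loop can be parallelized with xargs -P (a sketch; -P is a common extension rather than POSIX, and it assumes no whitespace in filenames):

    # Up to 4 awk instances at once; $1 is the filename passed in by xargs
    printf '%s\n' *.txt | xargs -n 1 -P 4 sh -c \
        'echo "$1"; awk "/<.*/ , /.*<\/.*>/" "$1" | tr -d "\n"; echo' _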

7. Shell Programming and Scripting

Faster way to use this awk command

awk "/May 23, 2012 /,0" /var/tmp/datafile the above command pulls out information in the datafile. the information it pulls is from the date specified to the end of the file. now, how can i make this faster if the datafile is huge? even if it wasn't huge, i feel there's a better/faster way to... (8 Replies)
Discussion started by: SkySmart
8 Replies
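Since everything from the first match to end-of-file is wanted, one trick is to find the first matching line number and let tail stream the rest, so the regex is not tested against every remaining line (a sketch; grep -m is a GNU extension):

    # Locate the first match, then tail does a plain sequential copy
    n=$(grep -m 1 -n 'May 23, 2012 ' /var/tmp/datafile | cut -d: -f1)
    [ -n "$n" ] && tail -n +"$n" /var/tmp/datafile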

8. UNIX for Dummies Questions & Answers

A faster equivalent for this sed command

Hello guys, I'm cleaning out big XML files (we're talking about 1GB at least), most of them containing words written in a non-Latin alphabet. The command I'm using is so slow it's not even funny: cat $1 | sed -e :a -e 's/<*>//g;/</N;//ba;s/</ /g;s/>/... (4 Replies)
Discussion started by: bobylapointe
4 Replies
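If the tags never span lines, a single straightforward substitution is usually far faster than an N/branch loop (a sketch; it replaces each tag with a space, as the original seems to intend):

    # Strip anything between < and > in one pass per line
    sed 's/<[^>]*>/ /g' "$1"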

9. Shell Programming and Scripting

Making a faster alternative to a slow awk command

Hi, I have a large number of input files with two columns of numbers. For example: 83 1453 99 3255 99 8482 99 7372 83 175 I only wish to retain lines where the numbers fulfil two requirements, e.g.: $1 == 83 and 1000 <= $2 <= 2000. To do this I use the following... (10 Replies)
Discussion started by: s052866
10 Replies
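The whole test fits in one awk condition, which is about as fast as a line filter gets (a sketch of the reconstructed requirement):

    # Keep lines where column 1 is 83 and column 2 lies in [1000, 2000]
    awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' inputfile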

10. Shell Programming and Scripting

How to make awk command faster?

I have the below command, which reads a large file and takes 3 hours to run. Can something be done to make this command faster? awk -F ',' '{OFS=","}{ if ($13 == "9999") print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' ${NLAP_TEMP}/hist1.out|sort -T ${NLAP_TEMP} |uniq>... (13 Replies)
Discussion started by: Peu Mukherjee
13 Replies
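Two easy wins for that pipeline: set OFS once in a BEGIN block instead of on every line, and fold uniq into sort -u so the data gets one less pass (a sketch; the output name is hypothetical since the original is truncated):

    # BEGIN sets OFS once; sort -u subsumes the separate uniq
    awk -F ',' 'BEGIN { OFS = "," } $13 == "9999" { print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' \
        "${NLAP_TEMP}/hist1.out" | sort -u -T "${NLAP_TEMP}" > output.out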
SHRED(1)                              FSF                              SHRED(1)

NAME
       shred - delete a file securely, first overwriting it to hide its contents

SYNOPSIS
       shred [OPTIONS] FILE [...]

DESCRIPTION
       Overwrite the specified FILE(s) repeatedly, in order to make it harder
       for even very expensive hardware probing to recover the data.

       Mandatory arguments to long options are mandatory for short options too.

       -f, --force
              change permissions to allow writing if necessary

       -n, --iterations=N
              overwrite N times instead of the default (25)

       -s, --size=N
              shred this many bytes (suffixes like K, M, G accepted)

       -u, --remove
              truncate and remove file after overwriting

       -v, --verbose
              show progress

       -x, --exact
              do not round file sizes up to the next full block

       -z, --zero
              add a final overwrite with zeros to hide shredding

       -      shred standard output

       --help display this help and exit

       --version
              output version information and exit

       Delete FILE(s) if --remove (-u) is specified. The default is not to
       remove the files because it is common to operate on device files like
       /dev/hda, and those files usually should not be removed. When operating
       on regular files, most people use the --remove option.

       CAUTION: Note that shred relies on a very important assumption: that
       the filesystem overwrites data in place. This is the traditional way to
       do things, but many modern filesystem designs do not satisfy this
       assumption. The following are examples of filesystems on which shred is
       not effective:

       * log-structured or journaled filesystems, such as those supplied with
         AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

       * filesystems that write redundant data and carry on even if some
         writes fail, such as RAID-based filesystems

       * filesystems that make snapshots, such as Network Appliance's NFS
         server

       * filesystems that cache in temporary locations, such as NFS version 3
         clients

       * compressed filesystems

       In addition, file system backups and remote mirrors may contain copies
       of the file that cannot be removed, and that will allow a shredded file
       to be recovered later.

AUTHOR
       Written by Colin Plumb.

REPORTING BUGS
       Report bugs to <bug-coreutils@gnu.org>.

COPYRIGHT
       Copyright (C) 2002 Free Software Foundation, Inc. This is free
       software; see the source for copying conditions. There is NO warranty;
       not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

SEE ALSO
       The full documentation for shred is maintained as a Texinfo manual. If
       the info and shred programs are properly installed at your site, the
       command info shred should give you access to the complete manual.

shred (coreutils) 4.5.3               February 2003                    SHRED(1)
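That default of 25 overwrite passes is what the "25 copies" remark at the top of the thread refers to; if shred is the bottleneck, the pass count can be reduced (an illustrative invocation, not taken from the thread):

    # 3 passes plus a final zero pass, then truncate and remove the file
    shred -n 3 -z -u -v /path/to/file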