Here are the results of timing how long it takes to copy a 1 GB file using different methods. It looks like dd with a block size of 1024k beats the others, not only in time consumed but also in CPU usage. Things may turn out differently in your environment depending on file size, file system, amount of RAM, and the number and speed of processors. Still, this may help you choose your tool wisely.
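For what it's worth, a minimal sketch of how such a comparison can be run; the /tmp paths are placeholders, and the optimal dd block size varies by system:

# Create a 1 GB test file (paths are placeholders).
dd if=/dev/zero of=/tmp/src.img bs=1M count=1024
# time reports wall-clock time and CPU usage for each method.
time dd if=/tmp/src.img of=/tmp/dst1.img bs=1024k
time cp /tmp/src.img /tmp/dst2.img
time cat /tmp/src.img > /tmp/dst3.img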
Hi all you enlightened unix people,
I've been trying to execute a perl script that contains the following line within backticks:
`grep -f patternfile.txt otherfile.txt`;
It normally takes 2 minutes to execute this command by hand from the bash shell.
I noticed that when I run this command... (2 Replies)
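One hedged note: if patternfile.txt holds fixed strings rather than regular expressions (an assumption worth checking), grep's -F flag usually speeds this up dramatically, and the same switch works unchanged inside the perl backticks:

# -F treats each pattern as a fixed string, skipping regex matching.
time grep -F -f patternfile.txt otherfile.txt > /dev/null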
I am trying to copy a folder that contains a set of C executables.
The copy takes 2 minutes to complete, whereas the rest of the script takes only 3 more minutes for all its other processing.
Is there a way to copy the folder faster so that the performance of the script will improve? (2 Replies)
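A sketch of two common approaches, with srcdir and destdir as placeholder names; a tar pipe sometimes beats cp -r on directories with many small files because it streams the tree as a single archive:

# Plain recursive copy:
cp -r srcdir destdir
# tar pipe: pack on one side, unpack on the other, in one stream.
(cd srcdir && tar cf - .) | (cd destdir && tar xf -)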
Hi,
A datafile containing lines such as the one below needs to be split:
500000000000932491683600000000000000000000000000016800000GS0000000000932491683600*HOME
I need to get characters 2-5, 11-20, and 35-40, and I can do it via the cut command.
cut -c 2-5 file > temp1.txt
cut -c 11-20 file >... (9 Replies)
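One way to avoid reading the file once per range is a single awk pass that writes all three slices at once; the temp1.txt..temp3.txt names follow the naming above and are otherwise an assumption:

# substr(s, start, len): 2-5 -> (2,4), 11-20 -> (11,10), 35-40 -> (35,6)
awk '{ print substr($0,2,4)   > "temp1.txt"
       print substr($0,11,10) > "temp2.txt"
       print substr($0,35,6)  > "temp3.txt" }' file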
I'm sorting files from a source directory by size into 4 categories, then copying them into 4 corresponding folders. Just wondering if there's a faster/better/more elegant way to do this:
find /home/user/sourcefiles -type f -size -400000k -exec /bin/cp -uv {} /home/user/medfiles/ \;
find... (0 Replies)
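One incremental improvement, assuming GNU find and cp: terminating -exec with + batches many files into each cp invocation instead of forking one cp per file, and -t names the destination directory:

find /home/user/sourcefiles -type f -size -400000k \
  -exec cp -uv -t /home/user/medfiles/ {} +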
Hi,
I have a script below for extracting xml from a file.
for i in *.txt
do
    echo "$i"
    awk '/<.*/ , /.*<\/.*>/' "$i" | tr -d '\n'
    echo -ne '\n'
done
I read on this forum about using multithreading to speed up the script, but I do not know much about it.
Is it a... (21 Replies)
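A rough sketch of the background-jobs idea; the batch size of 4 and the per-file .out names are illustrative assumptions, not part of the original script:

n=0
for i in *.txt
do
    (
        echo "$i"
        awk '/<.*/ , /.*<\/.*>/' "$i" | tr -d '\n'
        echo -ne '\n'
    ) > "$i.out" &        # run each file's extraction in the background
    n=$((n + 1))
    [ $((n % 4)) -eq 0 ] && wait    # let each batch of 4 finish first
done
wait    # wait for the final batch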
awk "/May 23, 2012 /,0" /var/tmp/datafile
The above command pulls information out of the datafile, from the specified date to the end of the file.
Now, how can I make this faster if the datafile is huge? Even if it weren't huge, I feel there's a better/faster way to... (8 Replies)
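One hedged alternative, assuming GNU grep: find the first matching line number with grep -m1 (which stops scanning at the first hit), then let tail stream the rest of the file without testing a pattern against every remaining line:

# grep -n -m1 prints "LINENO:match" for the first hit only.
start=$(grep -n -m1 'May 23, 2012 ' /var/tmp/datafile | cut -d: -f1)
# tail -n +N prints from line N (the matching line) to end of file.
[ -n "$start" ] && tail -n "+$start" /var/tmp/datafile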
Hello guys,
I'm cleaning out big XML files (we're talking 1 GB at least), most of which contain words written in a non-Latin alphabet.
The command I'm using is so slow it's not even funny:
cat $1 | sed -e :a -e 's/<*>//g;/</N;//ba;s/</ /g;s/>/... (4 Replies)
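A hedged single-pass sketch: dropping the needless cat and stripping tags with one gsub per line is often much faster, under the assumption that no tag spans a line break (the N in the original sed exists to handle that case):

# Replace anything between < and > with a space, as the sed above does.
awk '{ gsub(/<[^>]*>/, " "); print }' "$1"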
Hi,
I have a large number of input files with two columns of numbers.
For example:
83 1453
99 3255
99 8482
99 7372
83 175
I only wish to retain lines where the numbers fulfil two requirements. E.g.:
first column = 83
1000 <= second column <= 2000
To do this I use the following... (10 Replies)
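Reading the two requirements as "first column equals 83 and second column lies in [1000, 2000]" (a reconstruction, since the post is garbled), the whole filter fits in one awk pass:

# Print only lines meeting both numeric conditions.
awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' file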
I have the command below, which reads a large file and takes 3 hours to run. Can anything be done to make this command faster?
awk -F ',' '{OFS=","}{ if ($13 == "9999") print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' ${NLAP_TEMP}/hist1.out|sort -T ${NLAP_TEMP} |uniq>... (13 Replies)
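If the sorted order of the output is not actually needed, one common speedup is to deduplicate inside awk with an associative array and drop the external sort|uniq entirely; the trade-off (an assumption worth checking) is that all unique lines must fit in memory:

awk -F ',' '$13 == "9999" {
    OFS = ","
    line = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6 OFS $7 OFS $8 OFS $9 OFS $10 OFS $11 OFS $12
    if (!seen[line]++) print line    # print each distinct line once
}' "${NLAP_TEMP}/hist1.out"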