data.file is about 7MB in size and can grow quite a bit bigger than that. When I run the above command on it, it takes about 6 seconds to complete. Any way to bring that number down?
I have to admit I can't follow the logic of your pipe. But I'm almost sure all that (time-consuming) piping can be reduced to, and done by, one single awk command.
You start listing the lines at the epoch value 1454687485 and list down to the end-of-file. Later you grep for Thu Feb 04. Why don't you operate on the lines with $3 between 1454626800 and 1454713199 instead? That would save the first awk, the egrep, and, as the output of A is no longer needed, the last awk as well.
The (boolean) Q variable is redundant as well; it is set to 1 and never reset, so what purpose does it serve?
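A minimal sketch of that single-awk approach. The file name and the position of the timestamp in field 3 follow the discussion above; the exact epoch bounds for Thu Feb 04 depend on your timezone, so treat the numbers as illustrative:

```shell
# Hypothetical one-pass replacement for the awk | egrep | awk pipeline:
# print only the lines whose 3rd field (an epoch timestamp) falls
# within Thu Feb 04, i.e. 1454626800..1454713199 in this timezone.
awk '$3 >= 1454626800 && $3 <= 1454713199' data.file
```

This reads the file once and spawns a single process, instead of one process per pipeline stage.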
Thanks RudiC. I took your suggestions into consideration and combined all those commands into one awk command. Thanks so much.
In doing the above, I discovered that the code I originally pasted in this thread is not the reason the script was slow. I found out that it is the for loop below that takes at least 4 seconds to complete.
Hi, you did not specify the shell. Since you are using GNU utilities, I presumed it to be bash. This would be functionally equivalent, but should be a bit more efficient:
It is unsorted, since $(echo ${VALUESA} | sort -r | xargs) produces the same output as ${VALUESA}.
So, as is, it could be further reduced to:
Which leaves one external call to perl per iteration. To eliminate that one as well, the whole loop would need to be replaced by, for example, one awk or perl program...
I don't know where AVERAGE and STDEVIATE are determined. Is that in a similar loop? If so, I suspect similar gains could be made there.
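As an illustration of folding such per-value arithmetic into one program: if AVERAGE and STDEVIATE are currently computed in a similar loop, a single awk pass can produce both. The field position ($1) and output format here are assumptions, not the original script:

```shell
# One awk pass computes both statistics, removing the per-line
# external call. Reads one value per line from field 1 of data.file.
awk '{ sum += $1; sumsq += $1 * $1; n++ }
     END { avg = sum / n
           sd  = sqrt(sumsq / n - avg * avg)   # population std deviation
           printf "AVERAGE=%.4f STDEVIATE=%.4f\n", avg, sd }' data.file
```

This uses the running-sums identity sd = sqrt(E[x^2] - E[x]^2), so no second pass over the data is needed.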
---edit---
This would be a gawk equivalent:
Last edited by Scrutinizer; 02-06-2016 at 04:50 AM..
Thanks so much. Sorry for not specifying the shell. I intend to run this on a number of Unix systems, some of which have old OSes, e.g. HP-UX, AIX, Ubuntu, CentOS.
I'm afraid some of the bash commands won't work on the older systems.
The shell I'm using is "/bin/sh" on the older systems and "/bin/dash" on the newer ones, so I suppose your modifications would most likely only work for the newer systems.
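For portability across those systems, it may help that POSIX arithmetic expansion $(( )) is available in all of them (HP-UX /bin/sh, AIX ksh, dash, bash), so per-iteration external calls can often be avoided without any bashisms. A hedged sketch with made-up variable names:

```shell
#!/bin/sh
# POSIX sh: accumulate a sum with arithmetic expansion instead of
# spawning expr/perl once per iteration. Runs unchanged under
# HP-UX /bin/sh, AIX ksh, dash, and bash.
TOTAL=0
for VALUE in 10 20 30; do
    TOTAL=$((TOTAL + VALUE))
done
echo "$TOTAL"    # prints 60
```

Note that $(( )) is integer-only; anything needing floating point still has to go to awk or a similar tool, but one awk call for the whole loop is far cheaper than one per iteration.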