With this one, the one I already mentioned had worked. Is there a reason why it would take such a long time for the script to complete, when creating just one file for one search string takes 2 seconds?
Is there a more efficient way of writing the code to make it faster? And is there another way to write this code so that errors are easier to find, where I can print something when there's an error?
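A minimal sketch of the kind of per-step error reporting being asked about (the directory names and the search string are placeholders, not from the original script): check each step and print to stderr when something fails.

```shell
#!/bin/sh
# Placeholder paths and search string -- adapt to the real script.
outdir=./results
mkdir -p "$outdir" || { echo "ERROR: cannot create $outdir" >&2; exit 1; }
# If grep fails or finds nothing, say so instead of failing silently.
if ! grep -h 'SEARCHSTRING' ./logs/*.log > "$outdir/SEARCHSTRING.txt" 2>/dev/null; then
    echo "ERROR: no matches (or grep failed) for SEARCHSTRING" >&2
fi
```

Redirecting the messages to stderr (`>&2`) keeps them out of the output files, so a cron job can mail the errors separately.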
---------- Post updated at 04:57 PM ---------- Previous update was at 01:49 PM ----------
Quote:
Originally Posted by Scrutinizer
Try:
Good Afternoon Scrutinizer,
I'm testing this code and there's something very funny about it. First off, the output files for the search strings are created IMMEDIATELY, but the searches don't finish for another 30 minutes. Manually grepping these strings out of the whole directory takes less than 2 seconds.
The results from the manual search and from the finished script are different. Also, the script returns different results when run once, deleted, and then run again. None of the data changed; only the output files have discrepancies. I'm not sure why this would be...
Hi David, I hadn't really looked at your problem, just provided a fix for the ambiguous redirection and then suggested some improvements on that.
OK, I see the awk is supposed to be part of a for loop, is that correct? How many files are typically in that directory?
There most likely should be a for loop. While I ask these questions and try to come up with solutions for easier processing of these logs, I'm working through small tutorials on the different commands (awk, sed, uniq, cut, sort). Obviously these tools will give me statistics that would take extremely long to produce manually, considering the amount of data that's present. So, in short, I believe it's a loop I'm looking for. =}
The directory contains roughly 50 files, and the number of files will continue to grow by one per day. So the idea is to incorporate into the script something that will search only the NEW files and append to the current, existing output file.
Then, once that is running smoothly as a cron job, I will look into making a script that will give me stats on particular fields (I'm assuming, using the awk, cut, uniq, and sed commands).
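One common way to do the "search only the NEW files" part is a marker file whose timestamp records the previous run. This is a sketch under assumptions: the log directory, output file, and search string are placeholders.

```shell
#!/bin/sh
# Placeholders: adjust logdir, outfile, and the search string.
logdir=./logs
marker=./.last_run
outfile=./matches.txt
mkdir -p "$logdir"
# First run: backdate the marker so every existing file counts as "new".
[ -e "$marker" ] || touch -t 197001010000 "$marker"
# Append matches only from files modified since the marker's timestamp.
find "$logdir" -type f -newer "$marker" -exec grep -h 'SEARCHSTRING' {} + >> "$outfile"
# Next cron run only looks at files modified after this point.
touch "$marker"
```

Run from cron, each invocation then only greps the files that appeared (or changed) since the last invocation.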
Scrutinizer, feel free to point me in a general direction to get started, instead of spelling everything out. The examples you've listed in this thread do seem to be at an expert level of scripting, whereas many tutorials I've been through seem to put each command on a separate line and include some debugging in case something about the files changes or the script simply stops working someday.
I'm trying to learn about regular expressions. Let's say I want to list all the files in /usr/bin beginning with "p", ending with "x", and containing an "a".
I know this works: ls | grep ^p | grep x$ | grep a, but I'm thinking there must be a way to do it without typing grep three times. Some of my... (9 Replies)
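The three greps can indeed collapse into one anchored pattern: since "p" must be first and "x" last, the "a" necessarily falls between them, so `^p.*a.*x$` covers all three conditions. A small demonstration on a fixed list of names:

```shell
# One anchored regex replaces the three pipes: starts with "p",
# has an "a" somewhere after it, and ends with "x".
printf '%s\n' pax proxy-fix plex vmax | grep '^p.*a.*x$'
# prints: pax
```

Against the real directory that would be `ls /usr/bin | grep '^p.*a.*x$'`.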
Hello,
I am trying to log in to multiple servers, and I have to run multiple loops to gather some details. Could you please help me out?
I am specifically facing issues while running for loops.
I have to run multiple for loops in the else condition, but the code below is giving errors in for... (2 Replies)
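The nested-loop shape usually looks like this: an outer loop over the hosts and an inner loop over the commands to run on each. This is only a sketch; the host names and remote commands below are placeholders.

```shell
#!/bin/sh
# Placeholder remote commands; pass the server names as arguments.
gather() {
    host=$1
    for item in uptime "df -h" "who -q"; do
        ssh "$host" "$item"     # one remote command per inner iteration
    done
}
for host in "$@"; do            # outer loop: one pass per server
    gather "$host"
done
```

For unattended use you would typically add SSH key authentication so the inner loop doesn't prompt for a password on every command.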
Hi Guys,
I've been having a look around to try to understand how I can do the below, however I haven't come across anything that will work.
Basically, I have a parser script that I need to run across all files in a certain directory. I can do this one by one on the command line, however I... (1 Reply)
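A sketch of the usual shape for "run one script over every file in a directory"; `./parser.sh` is a hypothetical stand-in for the actual parser, and the directory is taken as the first argument.

```shell
#!/bin/sh
# Apply a parser (hypothetical name ./parser.sh) to every regular
# file in the directory given as the first argument.
parse_all() {
    dir=${1:-./data}
    for f in "$dir"/*; do
        [ -f "$f" ] || continue   # skip if the glob matched nothing
        ./parser.sh "$f"
    done
}
parse_all "$@"
```

Quoting `"$f"` matters here so that filenames containing spaces reach the parser as a single argument.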
I have a script that I need to run on one file at a time. Unfortunately using for i in F* or cat F* is not possible. When I run the script using that, it jumbles the files and they are out of order. Here is the script:
gawk '{count[$1]++}
END {
for (k in count)
{if (count[k] == 2)... (18 Replies)
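One likely cause of the "jumbled" order: a plain glob like `for i in F*` expands lexically, so F10 sorts before F2. A sketch of feeding the files to the awk script one at a time, in numeric order (count.awk is a placeholder name for the gawk script above; filenames are assumed to contain no spaces):

```shell
#!/bin/sh
# Sort on the numeric part after "F" so F2 comes before F10,
# then run the awk script on one file per iteration.
for f in $(ls F* 2>/dev/null | sort -t F -k 2 -n); do
    gawk -f count.awk "$f"
done
```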
How can I run one script on multiple files and print out multiple files?
FOR EXAMPLE
I want to run script.pl on 100 files named 1.txt ... 100.txt under the same directory and print out the corresponding files 1.gff ... 100.gff. THANKS (4 Replies)
How can I run the following command on multiple files and print out the corresponding multiple files.
perl script.pl genome.gff 1.txt > 1.gff
However, there are multiple files of that form, 1.txt through 100.txt.
Thank you so much.
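A sketch of the loop being asked for, using the names from the post (script.pl and genome.gff); the counter drives both the input and output filenames.

```shell
#!/bin/sh
# Run script.pl on 1.txt ... 100.txt, writing 1.gff ... 100.gff.
i=1
while [ "$i" -le 100 ]; do
    # Only run (and only create the .gff) when the input exists.
    [ -f "$i.txt" ] && perl script.pl genome.gff "$i.txt" > "$i.gff"
    i=$((i + 1))
done
```

A brace expansion like `for i in {1..100}` does the same in bash, but the `while` counter works in any POSIX shell.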
No duplicate posting! Continue here. (0 Replies)
Hi,
I want to run a Perl script on multiple files, with same name ("Data.txt") but in different directories (eg : 2010_06_09_A/Data.txt, 2010_06_09_B/Data.txt).
I know how to run this perl script on files in the same directory like:
for i in *.txt
do
perl myscript.pl $i > $i.new... (8 Replies)
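For the same-name-in-different-directories case, the glob itself can do the directory walking. A sketch using the names from the post (myscript.pl, Data.txt, per-date directories):

```shell
#!/bin/sh
# Match Data.txt one directory level down; the loop body is unchanged.
for f in */Data.txt; do
    [ -f "$f" ] || continue       # skip if nothing matched
    perl myscript.pl "$f" > "$f.new"
done
```

Because `$f` carries the directory prefix (e.g. `2010_06_09_A/Data.txt`), each output `.new` file lands next to its input.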
I'm trying something like this, but it's not working.
It worked for bash files.
Now I want something like that, but with multiple input files, redirecting their outputs as inputs to the next command, like below.
Could you guys please help me with this?
#!/usr/bin/awk -f
BEGIN {
}
script1a.awk... (2 Replies)
I am trying to write a script that will ssh into a remote machine and recurse through a specified directory, find mp3 files which may be two or three directories deep (think iTunes: music/artist/album/song.mp3), and scp them back to the machine running the script. The script should also maintain... (3 Replies)
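A sketch under assumptions: list the mp3 paths in a single ssh call, then copy each one back with scp, recreating the artist/album layout locally. The hostname and directories are placeholders, and the quoting of remote paths containing spaces is the fragile part to adapt (scp's remote side does its own word splitting, hence the escaped inner quotes).

```shell
#!/bin/sh
# Placeholders: host, remote music root, local destination.
fetch_mp3s() {
    host=$1 remote_dir=$2 local_dir=$3
    # One remote find lists every mp3, however deep it is nested.
    ssh "$host" "find '$remote_dir' -type f -name '*.mp3'" |
    while IFS= read -r path; do
        mkdir -p "$local_dir$(dirname "$path")"   # keep artist/album layout
        scp "$host:\"$path\"" "$local_dir$path"
    done
}
# Usage (placeholder host): fetch_mp3s user@musicbox /music ./music
```

With key-based authentication set up, the whole transfer runs unattended; otherwise every scp call prompts for a password.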
Hello
when I try to run rm on multiple files, I have a problem deleting files with spaces in their names.
I have this command :
find . -name "*.cmd" | xargs \rm -f
it does the work fine, but when it comes across files with spaces, like "my foo file.cmd",
it refuses to delete them.
why? (1 Reply)
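The reason is that xargs splits its input on whitespace, so "my foo file.cmd" arrives at rm as three separate arguments, none of which exists. Two common fixes, sketched below (the `-print0`/`-0` pair is a GNU/BSD extension; the `-exec` form is pure POSIX):

```shell
#!/bin/sh
# NUL-delimit the filenames so spaces survive the trip through xargs.
find . -name "*.cmd" -print0 | xargs -0 rm -f
# Portable alternative with no xargs at all:
find . -name "*.cmd" -exec rm -f {} +
```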