My question is: how can I make that account-number position a variable so that I can pass it at the same time I'm specifying the file names?
Now you're starting to get tricky.
How about a command line that looks like this:
Would that work? The exclude file comes first, followed by the numeric field number (starting at 1 for the first field), followed by a list of one or more files that use that particular field number. If you use a negative field number, it will count from the end of the line instead of the front, so a field of -2 would mean the second to the last field on every line, even if each line had a different number of fields.
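The negative-field rule described above is easy to demonstrate. As a hedged illustration (not the poster's actual script), awk can emulate the same convention by translating a negative field number into a position relative to `NF`:

```shell
# A field number of -2 means "second to last", whatever the field count.
# awk can compute that as $(NF + f + 1): for f=-2 that is $(NF-1).
printf 'a b c d\nw x y z q\n' | awk -v f=-2 '{ print $(NF + f + 1) }'
# prints:
# c
# z
```

Note the second line has five fields and the first has four, yet `-2` picks the second-to-last field of each.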
In the code below I have told Perl to rename the original files so that they end in .bak and then write the changes to the original name. For the command above, you'd end up with file1 and file1.bak for example.
If that works for you, try the following. Note the extra -i.bak option on the first line and the extra $field variable.
If there are other options you want to add (such as using a different delimiter between fields), then it's time to start using Getopt::Std and specifying options using the same techniques other commands use: a dash followed by a letter.
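The dash-followed-by-a-letter convention Getopt::Std implements is the same one the shell's own `getopts` builtin follows. As a sketch of the idea in shell terms (the `-d` delimiter and `-f` field options are invented for illustration):

```shell
# Minimal option parsing sketch: -d sets a field delimiter, -f a field
# number. Options not given keep their defaults.
parse_opts() {
  delim=' '
  field=1
  OPTIND=1
  while getopts 'd:f:' opt "$@"; do
    case $opt in
      d) delim=$OPTARG ;;
      f) field=$OPTARG ;;
    esac
  done
}

parse_opts -d : -f 4
echo "delim=$delim field=$field"
# prints: delim=: field=4
```

Getopt::Std's `getopts("d:f:", \%opts)` takes the same option-string syntax, so a script can grow new switches without disturbing the positional file arguments.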
@ripat: That's a cute trick with the NR==FNR for awk. I'm going to have to remember that one. Only useful for a single file, but still... (The file handling in awk is terrible!)
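For anyone following along, the NR==FNR trick works because NR counts lines across all input files while FNR resets at each new file, so the condition holds only while the first file is being read. A hedged reconstruction of the kind of two-file filter being discussed (file names and contents are my own examples):

```shell
printf 'ACCT1\nACCT3\n' > exclude.txt
printf 'ACCT1 foo\nACCT2 bar\nACCT3 baz\n' > data.txt

# While reading the first file (NR==FNR), remember each account number;
# for every later file, print only lines whose first field is unknown.
awk 'NR==FNR { skip[$1]; next } !($1 in skip)' exclude.txt data.txt
# prints: ACCT2 bar

rm -f exclude.txt data.txt
```

As noted, the idiom only distinguishes "first file" from "everything after it", which is why it stops being useful once you need per-file handling across three or more inputs.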
Quote:
Originally Posted by ripat
One possibility:
filter.sh
It's a really bad idea to use variables without putting double quotes around them! I can break that awk command pretty badly by passing the script a filename with a space or wildcard character in it, especially as the third parameter.
Please put double quotes around ALL variable substitutions. Out of a thousand uses it will only be wrong 3-4 times, so you've got a 99.6% chance of getting it right. Those are pretty good odds.
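A quick illustration of the failure mode being warned about (the filename is a contrived example):

```shell
# An unquoted variable is field-split and glob-expanded after expansion,
# so a filename containing a space arrives as two separate arguments.
file='my report.txt'
printf 'data\n' > "$file"

wc -l "$file"      # correct: one argument, one file
# wc -l $file      # wrong: wc is handed 'my' and 'report.txt'

rm -f "$file"
```

The same splitting happens to every unquoted `$1`, `$2`, and `$var` inside a script, which is why the advice is to quote all of them rather than guess which ones are safe.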
No way would I use a shell for that job! The following Perl script is probably a hundred times faster and more efficient!
While I agree that Perl is usually well suited for this type of application, I do not think this generalization is accurate. The shell scripts above are fine, but there is room for some significant speed optimizations. If we use ksh (ksh93s+) instead of bash and a method that resembles the one in your Perl script, I do not think there would be a big difference in speed.
filter.ksh93
Last edited by Scrutinizer; 08-29-2009 at 05:32 PM..
Hmm. Let's take a look at your script and its efficiency/performance and compare that to the Perl script, shall we?
First, the perl script loses big time in terms of startup cost; initializing the interpreter and compiling the script are overhead that can never be reclaimed (although it can be amortized if the data files are large enough). The perl script also loses (slightly) in that it's less readable to people unfamiliar with the language (although the OP was able to correctly determine how to change the field used for his particular case). The final lossage comes from the wordiness of my perl example -- it could've been done more concisely but I was at least partially concerned about the OP being able to understand its overall operation.
(I'm modifying your Korn shell script to add some performance and usage benefits, but it remains essentially the same.) Your Korn shell script does not have the startup cost, but as a true interpreter it will have to constantly be reparsing the loop body every time through the loop, so if there are a significant number of iterations it will be a performance problem. There's also the problem of single and double quotes occurring in the input; the Korn shell's read will handle paired quotes correctly (as it interprets the quotes) while perl will need help from a regular expression to do the work (or the Text::Balanced module). The reason I mention this as a problem is that a single apostrophe will screw up the Korn script but have no impact on the perl script (as the perl script ignores the issue entirely!).
Quote:
filter.ksh93
In any case, there is no comparison between the two languages when processing more than a few hundred lines of data. I wrote a Korn script to do some text processing for a client (similar to this task) that took 28+ minutes to process 300k records. The same task in Perl took a little over 2 minutes. That's 10k records per minute for the shell script and 150k records per minute for the perl script. I attribute the difference to the efficiencies of pseudo-compiling and the nature of the I/O between the two scripts (the perl script was in "paragraph" mode, reading 10-20 lines at a time while the shell script had to do one line at a time and maintain a FSM).
Check your ksh snippet as it throws an error with my ksh93 when evaluating your conditional expression:
error:
Which is normal, as it tries to evaluate a string (an empty string) in an arithmetic expression. Try with:
which is working well.
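The corrected snippet isn't quoted in this excerpt, but the error class is easy to reproduce: expanding an empty or unset variable inside an arithmetic expression leaves a hole in the expression, and supplying a default value closes it. A minimal sketch (the variable name is mine):

```shell
count=''

# With count empty, an expansion like $(( $count > 3 )) becomes
# $(( > 3 )) and raises "arithmetic syntax error" in ksh93 and others.
# Defaulting the expansion avoids the hole:
if [ "$(( ${count:-0} > 3 ))" -eq 1 ]; then
  echo over
else
  echo 'not over'
fi
# prints: not over
```

Using the bare variable name (`count` instead of `$count`) inside `(( ))` also sidesteps the problem in ksh93 and bash, since an empty variable then evaluates as 0 rather than disappearing from the expression.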
Talking about performance I did a test on large sample files:
excluded (cardinality: 50000 lines)
infile (cardinality: 29000 lines)
Results:
As is often the case in data file crunching, awk is fast and terse.