Removing Lines if value exist in first file


 
# 8  
Old 08-29-2009
One possibility:

filter.sh
Code:
#!/bin/bash
awk -F',' 'NR==FNR{_[$0]=1}NR!=FNR&&!_[$4]{print}' $1 $2  > $3

Code:
$ filter.sh exclude infile outfile

# 9  
Old 08-29-2009
or, again with awk...

Code:
awk -F "," 'NR==FNR{a[$1]=$1;next} !($4 in a) {print $0}' file1 file2



---------- Post updated at 09:59 AM ---------- Previous update was at 09:43 AM ----------

awk alternative...

Code:
awk -F "," 'NR==FNR{a[$1];next} !($4 in a) {print $0}' file1 file2

# 10  
Old 08-29-2009
Indeed. Corrected version:

Code:
awk -F',' 'NR==FNR{_[$0]=1;next}!_[$4]{print}' exclude infile

Thanks.
# 11  
Old 08-29-2009
Quote:
Originally Posted by svn
My question is how can i make that account number position a variable so that i can pass it at the same time I'm specifying the file names?
Now you're starting to get tricky.

How about a command line that looks like this:
Code:
filter excludeFile 4 file1 file2 file3...

Would that work? The exclude file comes first, followed by the numeric field number (starting at 1 for the first field), followed by a list of one or more files that use that particular field number. If you use a negative field number, it will count from the end of the line instead of the front, so a field of -2 would mean the second to the last field on every line, even if each line had a different number of fields.

In the code below I have told Perl to rename the original files so that they end in .bak and then write the changes to the original name. For the command above, you'd end up with file1 and file1.bak for example.

If that works for you, try the following. Note the extra -i.bak option on the first line and the extra $field variable.

Code:
#!/usr/bin/perl -i.bak

# Read the exclude list into a hash for fast lookups.
my (@a, %exclude);
my $file = shift;
open(EXCLUDE_LIST, "< $file") or die "Cannot open $file: $!\n";
chomp( @a = <EXCLUDE_LIST> );
close(EXCLUDE_LIST);
@exclude{@a} = @a;

# Second argument: the field number (may be negative to count from
# the end of the line); default to 4 if it isn't a number.
my $field = shift;
$field = 4 unless defined $field && $field =~ /^-?\d+$/;
die "Field specifier may not be zero.\n" unless $field;
$field-- if $field > 0;    # convert 1-based to Perl's 0-based index
while (<>) {
    print unless exists $exclude{ (split(/,/))[$field] };
}

If there are other options you want to add (such as using a different delimiter between fields), then it's time to start using Getopt::Std and specifying options using the same techniques other commands use: a dash followed by a letter.
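In Perl that means Getopt::Std; the shell's own analogue is getopts. A minimal sketch, assuming a hypothetical -d option for the field delimiter (the option letter and default are made up for illustration):

```shell
# Parse a -d option the way most commands do: a dash followed by a letter.
delim=","
while getopts 'd:' opt; do
  case $opt in
    d) delim=$OPTARG ;;
    *) echo "usage: $0 [-d delim] file..." >&2; exit 2 ;;
  esac
done
shift $((OPTIND - 1))          # drop the parsed options
echo "delimiter='$delim', files: $*"
```

Run as `./script -d ';' file1 file2` and getopts leaves only the filenames in the positional parameters.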

@ripat: That's a cute trick with the NR==FNR for awk. I'm going to have to remember that one. It only lets you single out the first file, but still... (The file handling in awk is terrible!)
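For readers who haven't seen the idiom, a minimal sketch (the file names and data are made up): while awk reads the first file, its per-file line counter FNR equals the global counter NR, so the first block runs only for the exclude file.

```shell
# Hypothetical sample data for illustration only.
printf '12345\n67890\n' > exclude.txt
printf 'a,b,c,12345,x\na,b,c,11111,x\n' > data.csv

# First block: store each exclude line as an array key, then skip to
# the next line. Second block: for later files NR != FNR, so only
# lines whose 4th field is NOT a stored key are printed.
awk -F',' 'NR==FNR{seen[$0]; next} !($4 in seen)' exclude.txt data.csv
# prints: a,b,c,11111,x
```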

---------- Post updated at 04:00 PM ---------- Previous update was at 03:52 PM ----------

Quote:
Originally Posted by ripat
One possibility:

filter.sh
Code:
#!/bin/bash
awk -F',' 'NR==FNR{_[$0]=1}NR!=FNR&&!_[$4]{print}' $1 $2  > $3

It's a really bad idea to use variables without putting double quotes around them! I can break that awk command pretty badly by passing the script a filename with a space or a wildcard character in it, especially as the third parameter.

Please put double quotes around ALL variable substitutions. Out of a thousand uses it will only be wrong 3-4 times, so you've got a 99.6% chance of getting it right. Those are pretty good odds.
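A quick demonstration of what goes wrong, using a hypothetical filename with a space in it:

```shell
f="my file.csv"
# Unquoted, $f is word-split into two arguments, so printf repeats
# its format string for each one:
printf 'unquoted: %s\n' $f      # prints "unquoted: my" then "unquoted: file.csv"
# Quoted, the name is passed through as a single argument:
printf 'quoted: %s\n' "$f"      # prints "quoted: my file.csv"
```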
# 12  
Old 08-29-2009
Quote:
Originally Posted by Azhrei
No way would I use a shell for that job! The following Perl script is probably a hundred times faster and more efficient!
While I agree that Perl is usually well suited for this type of application, I do not think this generalization is accurate. The shell scripts above are fine, but there is room for some significant speed optimizations. If we use ksh (ksh93s+) instead of bash and a method that resembles the one in your Perl script, I think there would not be a really big difference in speed.

filter.ksh93
Code:
#!/usr/bin/ksh
typeset -A EXCLUDED
EXCLUDE_LIST=$(< "$1")
INFILE=$2
for excl in $EXCLUDE_LIST; do
  EXCLUDED[$excl]=1
done
IFS=","
while read a b c id d; do
  if [[ ${EXCLUDED[$id]} -ne 1 ]]; then
    echo "${a},${b},${c},${id},${d}"
  fi
done < "$INFILE"

Code:
./filter.ksh93 excludes infile > outfile


Last edited by Scrutinizer; 08-29-2009 at 05:32 PM..
# 13  
Old 08-29-2009
Quote:
Originally Posted by Scrutinizer
While I agree that Perl is usually well suited for this type of application, I do not think this generalization is accurate. The shell scripts above are fine, but there is room for some significant speed optimizations. If we use ksh (ksh93s+) instead of bash and a method that resembles the one in your Perl script, I think there would not be a really big difference in speed.
Hmm. Let's take a look at your script and its efficiency/performance and compare that to the Perl script, shall we?

First, the perl script loses big time in terms of startup cost; initializing the interpreter and compiling the script are overhead that can never be reclaimed (although it can be amortized if the data files are large enough). The perl script also loses (slightly) in that it's less readable to people unfamiliar with the language (although the OP was able to correctly determine how to change the field used for his particular case). The final lossage comes from the wordiness of my perl example -- it could've been done more concisely but I was at least partially concerned about the OP being able to understand its overall operation.

(I'm modifying your Korn shell script to add some performance and usage benefits, but it remains essentially the same.) Your Korn shell script does not have the startup cost, but as a true interpreter it will have to constantly be reparsing the loop body every time through the loop, so if there are a significant number of iterations it will be a performance problem. There's also the problem of single and double quotes occurring in the input; the Korn shell's read will handle paired quotes correctly (as it interprets the quotes) while perl will need help from a regular expression to do the work (or the Text::Balanced module). The reason I mention this as a problem is that a single apostrophe will screw up the Korn script but have no impact on the perl script (as the perl script ignores the issue entirely!).
Quote:
filter.ksh93
Code:
#!/usr/bin/ksh
typeset -A EXCLUDED
while read excl; do
  EXCLUDED[$excl]=1
done < "$1"
IFS=","
while read -A fields; do
  if (( ${EXCLUDED[${fields[3]}]} != 1 )); then
    echo "${fields[*]}"
  fi
done < "$2"

In any case, there is no comparison between the two languages when processing more than a few hundred lines of data. I wrote a Korn script to do some text processing for a client (similar to this task) that took 28+ minutes to process 300k records. The same task in Perl took a little over 2 minutes. That's about 10k records per minute for the shell script and 150k records per minute for the perl script. I attribute the difference to the efficiencies of pseudo-compiling and the nature of the I/O between the two scripts (the perl script was in "paragraph" mode, reading 10-20 lines at a time, while the shell script had to do one line at a time and maintain an FSM).
# 14  
Old 08-30-2009
@ Azhrei

Check your ksh snippet: it throws an error with my ksh93 when evaluating the conditional expression:
Code:
if (( ${EXCLUDED[${fields[3]}]} != 1 )); then

error:
Code:
./ex.korn: line 8:   != 1 : arithmetic syntax error

That is expected, as it tries to evaluate a string (an empty string) in an arithmetic expression. Try:
Code:
if [[ ${EXCLUDED[${fields[3]}]} == "" ]]; then

which works well.
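An alternative that keeps the arithmetic test is to supply a default with parameter expansion, so an unset array element is seen as 0 instead of an empty string. A sketch in ksh93/bash syntax, with made-up sample values:

```shell
# ${var:-0} substitutes 0 when the array element is unset or empty,
# so the (( )) comparison never sees an empty string.
typeset -A EXCLUDED        # associative array (ksh93 and bash 4+)
EXCLUDED[12345]=1
id=99999
if (( ${EXCLUDED[$id]:-0} != 1 )); then
  echo "keep $id"          # prints: keep 99999
fi
```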

Talking about performance, I ran a test on large sample files:
excluded (cardinality: 50000 lines)
infile (cardinality: 29000 lines)

Results:
Code:
jeanluc@ibm:~/scripts/test$ time ./ex.pl excluded infile > /tmp/out.pl
real	0m0.214s
user	0m0.176s
sys	0m0.032s

jeanluc@ibm:~/scripts/test$ time ./ex.korn excluded infile > /tmp/out.korn
real	0m1.154s
user	0m1.060s
sys	0m0.088s

jeanluc@ibm:~/scripts/test$ time ./ex.awk excluded infile > /tmp/out.awk
real	0m0.093s
user	0m0.072s
sys	0m0.016s

As is often the case in data-file crunching, awk is fast and terse.
