Read a file and search a value in another file, create third file using AWK
Posted by King Kalyan on 17 June 2009

Hi,

I have two files in the formats shown below. I need to read the first field (the value before the comma) from file 1, find the record in file 2 whose "KEY=" field has the same value, and write that complete file 2 record to a result file with the corresponding second field from file 1 appended.

File 1:

000000000160191837,00140000637006925269
000000000160191837,00140000637006925270
000000000160191838,00140000637006925271
000000000160191840,00140000637006925272

File 2:

<DATA1><#KEY=000000000160191837><DATA2>
<DATA3><#KEY=000000000160191837><DATA4>
<DATA5><#KEY=000000000160191838><DATA6>
<DATA6><#KEY=000000000160191840><DATA8>

Result File:

<DATA1><#KEY=000000000160191837><DATA2><RESULT>00140000637006925269
<DATA3><#KEY=000000000160191837><DATA4><RESULT>00140000637006925270
<DATA5><#KEY=000000000160191838><DATA6><RESULT>00140000637006925271
<DATA6><#KEY=000000000160191840><DATA8><RESULT>00140000637006925272

I wrote an awk command for this, but my code doesn't handle duplicate records. Please look at the first two records of File 1 in the example above: field 1 is the same but field 2 is different. In the same way, File 2 has two entries with the same KEY value, and I need to assign them different values.

My code:

Code:
awk '
  FNR == NR {                              # file1: remember the value for each key
    split($0, f, ",")
    sample_array[f[1]] = f[2]
    next
  }
  {                                        # file2: pull out the value after "KEY="
    sample_num = substr($0, index($0, "KEY=") + 4)
    sample_num = substr(sample_num, 1, index(sample_num, ">") - 1)
    if (sample_num in sample_array)
      print $0 "<RESULT>" sample_array[sample_num]
  }
' file1 file2 > result_file
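
For the duplicate keys, one idea I'm considering (a rough, untested sketch, not a confirmed solution) is to queue every file 1 value under its key and let each matching line of file 2 consume the next queued value in order:

Code:
awk '
  FNR == NR {                                  # file1: queue every value under its key
    split($0, f, ",")
    vals[f[1], ++count[f[1]]] = f[2]
    next
  }
  {                                            # file2: extract the value after "KEY="
    key = substr($0, index($0, "KEY=") + 4)
    key = substr(key, 1, index(key, ">") - 1)
    if ((key, used[key] + 1) in vals)          # take the next unused value for this key
      print $0 "<RESULT>" vals[key, ++used[key]]
  }
' file1 file2 > result_file

With the sample files above this should give the Result File shown: the two lines with KEY=000000000160191837 would pick up 00140000637006925269 and then 00140000637006925270, since each repeated key consumes the next queued value.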

Thanks in advance!
 
