Modifying text file records, find data in one place in the record and print it elsewhere


 
# 1  
Old 12-17-2016

Hello,

I have some text data that is in the form of multi-line records. Each record ends with the string $$$$ and the next record starts on the next line.
Code:

     RDKit          2D

 15 14  0  0  0  0  0  0  0  0999 V2000
    5.4596    2.1267    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    5.5214    0.6279    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    4.2543   -0.1749    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    4.3161   -1.6737    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    3.0491   -2.4765    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    2.9255    0.5209    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    1.6585   -0.2819    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
    0.3296    0.4139    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -0.9374   -0.3889    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.2662    0.3069    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.3280    1.8057    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
   -3.5333   -0.4959    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -4.8621    0.1999    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -6.1291   -0.6029    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -7.4580    0.0929    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
  1  2  1  0
  2  3  1  0
  3  4  1  0
  4  5  1  0
  3  6  1  0
  6  7  1  0
  7  8  1  0
  8  9  1  0
  9 10  1  0
 10 11  1  0
 10 12  1  0
 12 13  1  0
 13 14  1  0
 14 15  1  0
M  END
> <id>
1

>  <name>
N1-(2-ethylbutyl)hexane-1,3,6-triamine

>  <ID>
118903148

$$$$

What I need to do is find the value of the name field and copy it to the first line of the record. In the case above, I would pass "name" to the script, and the script would find the value on the line after > <name> and write it to the first line of the record.

Code:
N1-(2-ethylbutyl)hexane-1,3,6-triamine
     RDKit          2D

 15 14  0  0  0  0  0  0  0  0999 V2000
    5.4596    2.1267    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    5.5214    0.6279    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    4.2543   -0.1749    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    4.3161   -1.6737    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    3.0491   -2.4765    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    2.9255    0.5209    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
    1.6585   -0.2819    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
    0.3296    0.4139    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -0.9374   -0.3889    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.2662    0.3069    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.3280    1.8057    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
   -3.5333   -0.4959    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -4.8621    0.1999    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -6.1291   -0.6029    0.0000 C   0  0  0  0  0  0  0  0  0  0  0  0
   -7.4580    0.0929    0.0000 N   0  0  0  0  0  0  0  0  0  0  0  0
  1  2  1  0
  2  3  1  0
  3  4  1  0
  4  5  1  0
  3  6  1  0
  6  7  1  0
  7  8  1  0
  8  9  1  0
  9 10  1  0
 10 11  1  0
 10 12  1  0
 12 13  1  0
 13 14  1  0
 14 15  1  0
M  END
> <id>
1

>  <name>
N1-(2-ethylbutyl)hexane-1,3,6-triamine

>  <ID>
118903148

$$$$

I wrote the script below to do that, and it does work, but it takes about 30 seconds to process a file of 500 records, which is a bit slow.

This script reads through the input file, adding each line to an array until $$$$ is found. Along the way, it checks each line to see whether it is > <name>; if it is, the next line is saved as the name.

When the end of the record is reached, the name is printed to the output file, followed by the lines that were stored in the array. The first line in the array is skipped so that the blank line at the start of the record is not written. The $$$$ terminator is also added. The array and name are then cleared and the next record is processed.
Code:
#!/bin/bash

# input file name
input_file=$1
# attribute field tag to use for name line
name_field=$2
# output file name
output_file=$3

# create empty output file
touch $output_file

# declare array for individual sdf record
declare -a sdf_record

# create both possible versions of attribute tag value 
name_string_1='> <'$name_field'>'
name_string_2='>  <'$name_field'>'

# initialize
temp_name='';  save_next_line='0'

# set IFS to the empty string so leading and trailing spaces on each line are preserved
IFS=''

# loop through input file
while read line; do 

   # test if line is last line of record, if not add line to temp record array
   if [ "$line" != "\$\$\$\$" ]; then

      # add each line to sdf record
      sdf_record=("${sdf_record[@]}" "$line")

      # check if this line has been marked to save for the name string
      if [ "$save_next_line" == "1" ]; then
         # save name and reset indicator
         temp_name=$line;  save_next_line='0'
      # check if this is the name tag line, check both versions of tagging
      elif [[ "$line" == "$name_string_1" ]] || [[ "$line" == "$name_string_2" ]]; then
         # set marked to collect the next line for the name string
         save_next_line='1'
      fi

   # when the $$$$ record terminator is reached, print the record adding the name line
   else
      # add the record termination string $$$$ as the last line of the temp record
      sdf_record=("${sdf_record[@]}" "\$\$\$\$")

      # add the name field to the start of the record
      echo -e $temp_name >> $output_file

      # append the rest of the record lines stored in the array to the output file
      # this skips the first line, which is replaced by the name above
      for record_line in "${sdf_record[@]:1}"
      do
         echo -e $record_line >> $output_file
      done

      # clear the current sdf record and name
      unset sdf_record;  temp_name=''

   fi

done < $input_file
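
For reference, I run it with the input file, the field tag to use for the name, and the output file as arguments (the script and file names below are just examples):
Code:
./add_name_field.sh compounds.sdf name compounds_named.sdf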

It is possible that there is already a name on the first line, and the solution above handles that. It also allows any available field to be used as the "name". At this point, it does not trap the case where the name field is not found.

As with most of the things I write on my own, it works but is very slow.

Any suggestions that could speed this up, make it more robust, etc., would be very much appreciated.

LMHmedchem

# 2  
Old 12-17-2016
Try
Code:
awk '
                {OUT[++CNT] = $0
                }

F               {NAME = $0
                 F = 0
                }

/<name>/        {F = 1
                }

$0 == "$$$$"    {print NAME
                 for (i=1; i<=CNT; i++) print OUT[i]
                 delete OUT
                 CNT = 0
                }
' file
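
If you want the result in a new file rather than on the screen, you can also save the program between the quotes to a file (say, add_name.awk, just an example name) and run it like
Code:
awk -f add_name.awk file > newfile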

# 3  
Old 12-17-2016
In case your awk allows multi-char regex record separators, try
Code:
awk 'match ($0, /<name>\n[^\n]*\n/) {printf "%s\n", substr($0, RSTART+7, RLENGTH-7)} 1' RS='\$\$\$\$\n' ORS='$$$$\n' file

# 4  
Old 12-17-2016
GNU awk (gawk):
Code:
awk '{$1=$(NF-5)}1' RS='[$]{4}\n' ORS='$$$$\n' FS='\n' OFS='\n' file



--
mawk and gawk:
Code:
awk '{$1=$(NF-5)}1' RS='[$][$][$][$]\n' ORS='$$$$\n' FS='\n' OFS='\n' file
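
In case it helps to see why this works: with FS='\n' every line of a record becomes a field, and in your sample the <name> value is always the sixth line from the end of a record, i.e. field NF-5 (so this assumes that fixed layout). The same idea with comments:
Code:
awk '
  { $1 = $(NF-5) }       # copy the <name> value (6th field from the end) onto the first line
  1                      # print the (modified) record
' RS='[$][$][$][$]\n' ORS='$$$$\n' FS='\n' OFS='\n' file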

# 5  
Old 12-18-2016
Thanks very much for the suggestions. We just got 8 inches of snow and it has turned to rain. It's supposed to freeze again this afternoon so I have to go out and get rid of the snow. I will be back later this afternoon.

---------- Post updated at 10:37 PM ---------- Previous update was at 12:40 PM ----------

Quote:
Originally Posted by RudiC
Try
Code:
awk '
                {OUT[++CNT] = $0
                }

F               {NAME = $0
                 F = 0
                }

/<name>/        {F = 1
                }

$0 == "$$$$"    {print NAME
                 for (i=1; i<=CNT; i++) print OUT[i]
                 delete OUT
                 CNT = 0
                }
' file

This works very well and is very fast.

The only modification I had to make was to change the range of the for loop to for (i=2; i<=CNT; i++) print OUT[i] to skip the original first line. I assume that this data structure uses an index that starts at 1 and not 0? At any rate, if I leave i=1 I get an extra line after the name.
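
As a quick check of my assumption that the index starts at 1: ++CNT pre-increments before it is used, so the first element lands in OUT[1]:
Code:
awk 'BEGIN { OUT[++CNT] = "first line"; print CNT }'    # prints 1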

I have been trying to pass a shell variable in as the "name" that I am looking for.

Code:
#!/bin/bash

# input file
input_file=$1
# attribute field to use for name line
name_field=$2
# output file name
output_file=$3

# string to look for, includes tag brackets
name_string='<'$name_field'>'

awk -v name_to_find="$name_string" '
                {OUT[++CNT] = $0
                }

F               {NAME = $0
                 F = 0
                }

/name_to_find/        {F = 1
                }

$0 == "$$$$"    {print NAME
                 for (i=2; i<=CNT; i++) print OUT[i]
                 delete OUT
                 CNT = 0
                }
' $input_file > $output_file

This doesn't work; none of the names are located and printed to the new location. I'm not sure what the problem is here, since this follows the syntax I have used to pass bash variables to awk in previous scripts.

I have also tried some other variants, such as:
Code:
name_string='<'$name_field'>'
...
/"'$name_string'"/        {F = 1

Any idea what I am missing here? Is this an issue with the <> characters?

LMHmedchem

---------- Post updated 12-18-16 at 12:33 AM ---------- Previous update was 12-17-16 at 10:37 PM ----------

I seem to have found the issue. There seems to be a problem with using a variable name inside the /var/ regular expression slashes.

I replaced that line with the match operator $0 ~ name_to_find {F = 1} and now it works fine.
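
A minimal test that shows the difference (the sample line and variable name are just for illustration):
Code:
echo '>  <name>' | awk -v find_name='<name>' '/find_name/    { print "literal regex matched" }'
echo '>  <name>' | awk -v find_name='<name>' '$0 ~ find_name { print "variable matched" }'

The first command prints nothing because /find_name/ looks for the literal text find_name, while the second prints "variable matched" because $0 ~ find_name uses the contents of the variable as the pattern.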

This is the script now.

Code:
#!/bin/bash

# input file
input_file=$1
# attribute field to use for name line
name_field=$2
# output file name
output_file=$3

#add tag braces to name field
name_field='<'$name_field'>'

# store each line of the record in the array OUT[i]
# if the find_name string is matched (anywhere on a line) set an indicator F=1 to save the next line
# if F==1 save the line in the variable NAME, reset indicator to F=0
# when the termination string $$$$ is reached, print NAME and the rest OUT[i]
# start the print at position 2 (skip the first line)
awk -v find_name="$name_field" ' { OUT[++CNT] = $0 }
                             F { NAME = $0; F = 0 }
                $0 ~ find_name { F = 1 }
                  $0 == "$$$$" { print NAME
                                 for (i=2; i<=CNT; i++) print OUT[i]
                                 delete OUT
                                 CNT = 0
                               } ' $input_file > $output_file

If I am reading this correctly, each line is stored in the array OUT[]. When the match operator is satisfied for the line, the variable F is set to 1. When F==1, the line is stored in the variable "NAME" and F is set back to 0. When $$$$ is found, NAME is printed followed by the rest of the record excepting the first line of the record. Am I right that F { NAME = $0; F = 0 } is the equivalent of F == 1 { NAME = $0; F = 0 } ?

This seems to be exactly what I did in my script using read and while. What is the explanation for why my script takes 30 seconds to process a file and the script above takes less than 0.1 second to do the same thing in more or less the same way?

I have always appreciated how fast awk can be but sometimes it is hard to see where the optimization is coming from.

LMHmedchem

# 6  
Old 12-18-2016
Let me applaud your efforts to work around the problems you encountered, and your (may I say) persistence in finding a solution on your own, as opposed to coming straight back to complain. Other members could certainly take a leaf out of your book!


Trying to answer some of your questions:
- I'd be surprised if extra lines were added when the loop starts with i=1; at least it didn't when I tested it. And why should it?
- /var/ in awk is a regex constant, so it would try to match the literal sequence of characters 'v', 'a', 'r'. Your last approach, $0 ~ var, is the right way to match against a variable.
- awk works on pattern {action} pairs; the action is executed if the pattern is TRUE. So F is equivalent to F != 0, as awk treats 0 as FALSE and anything else as TRUE.
- Shell scripts are interpreted line by line, even inside a loop, and output files are opened and closed for every redirection encountered (e.g. for every echo or printf). awk reads its script once, compiles it, and executes it, and files are kept open unless explicitly closed. That accounts for most of the difference in execution time. Although you are not using external commands (each of which would cost a process creation and make it even slower), some of your statements could still be improved.
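
For example (a sketch only, using the variable names from your script): attaching the redirection to the loop instead of to every echo means the output file is opened just once:
Code:
# opens the output file once per line (slow)
while IFS= read -r line; do
   printf '%s\n' "$line" >> "$output_file"
done < "$input_file"

# opens the output file once for the whole loop (much faster)
while IFS= read -r line; do
   printf '%s\n' "$line"
done < "$input_file" > "$output_file"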