Modifying text file records, find data in one place in the record and print it elsewhere
Hello,
I have some text data that is in the form of multi-line records. Each record ends with the string $$$$ and the next record starts on the next line.
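The sample record referenced below did not survive the copy; a hypothetical record in this layout (field and value names are made up for illustration) would look like:

```
OLDNAME
some data
> <name>
ethanol
$$$$
```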
What I need to do is find the value from the name field and copy it to the first line of the record. In the case above, I would pass "name" to the script, and the script would find the value on the line after > <name> and write it to the first line of the record.
I wrote the script below to do that and it does work. It takes about 30 seconds to process a file of 500 records and that is a bit slow.
This script reads through the input file, adding each row to an array until $$$$ is found. Along the way, it checks each line to see if it is > <name>; if it is, the next line is saved.
When the end of the record is reached, the name is printed to the output file, followed by the lines of data stored in the array. The first line in the array is skipped to avoid writing the blank line at the start of the record. The $$$$ is also added. The array and name are then cleared and the next record is processed.
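The original script was not reproduced here; a minimal bash sketch of the read/while approach described above (file names and variable names are guesses, and the input is hypothetical) might look like:

```shell
#!/bin/bash
# hypothetical input: record's first line is old/blank, record ends with $$$$
cat > input.txt <<'EOF'
OLDNAME
some data
> <name>
ethanol
$$$$
EOF

name_to_find="name"
found=0
i=0
: > output.txt
while IFS= read -r line; do
    rec[i]="$line"; i=$((i+1))                   # store every line of the record
    if [ "$found" -eq 1 ]; then value="$line"; found=0; fi  # line after the tag
    [ "$line" = "> <$name_to_find>" ] && found=1 # tag line: flag the next line
    if [ "$line" = '$$$$' ]; then                # end of record
        printf '%s\n' "$value" >> output.txt     # reopens output.txt...
        j=1
        while [ "$j" -lt "$i" ]; do              # skip rec[0], the old first line
            printf '%s\n' "${rec[j]}" >> output.txt  # ...once per line written
            j=$((j+1))
        done
        i=0; value=""
    fi
done < input.txt
```

Note that every `>> output.txt` redirection opens and closes the file again, which is one reason this style is slow.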
It is possible that there could already be a name on the first line, and the solution above takes care of that. This also allows any available field to be used as the "name". At this point, it doesn't trap the case where the name field is not found.
As with most of the things I write on my own, it works but is very slow.
Any suggestions that could speed this up, make it more sound, etc, would be very much appreciated.
LMHmedchem
Last edited by LMHmedchem; 12-17-2016 at 02:38 AM..
Thanks very much for the suggestions. We just got 8 inches of snow and it has turned to rain. It's supposed to freeze again this afternoon so I have to go out and get rid of the snow. I will be back later this afternoon.
---------- Post updated at 10:37 PM ---------- Previous update was at 12:40 PM ----------
Quote:
Originally Posted by RudiC
Try
This works very well and is very fast.
The only modification I had to make was to change the range of the for loop to for (i=2; i<=CNT; i++) print OUT[i] to skip the original first line. I assume that this data structure uses an index that starts at 1 and not 0? At any rate, if I leave i=1 I get an extra line after the name.
I have been trying to pass a shell variable in as the "name" that I am looking for.
This doesn't work. None of the names are located and printed to the new location. I'm not sure what the problem is here; this follows the syntax I have used to pass bash variables in previous scripts.
I have also tried some other variants such as,
Any idea what I am missing here? Is this an issue with the <> characters?
LMHmedchem
---------- Post updated 12-18-16 at 12:33 AM ---------- Previous update was 12-17-16 at 10:37 PM ----------
I seem to have found the issue: there is a problem with using a variable inside the /var/ regular expression slashes.
I replaced that line with the match operator $0 ~ name_to_find {F = 1} and now it works fine.
This is the script now.
If I am reading this correctly, each line is stored in the array OUT[]. When the match operator is satisfied for a line, the variable F is set to 1. When F==1, the line is stored in the variable "NAME" and F is set back to 0. When $$$$ is found, NAME is printed, followed by the rest of the record except its first line. Am I right that F { NAME = $0; F = 0 } is equivalent to F == 1 { NAME = $0; F = 0 }?
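A sketch consistent with that description (the actual posted script is not shown here, and the input below is hypothetical; note the rule order matters — the F rule must come before the match rule, or the tag line itself would be captured):

```shell
# hypothetical input in the record layout discussed above
cat > input.txt <<'EOF'
OLDNAME
some data
> <name>
ethanol
$$$$
EOF

awk -v name_to_find='> <name>' '
  F                 { NAME = $0; F = 0 }  # line after the tag: capture value
  $0 ~ name_to_find { F = 1 }             # tag line: flag the next line
                    { OUT[++CNT] = $0 }   # store every line of the record
  $0 == "$$$$" {                          # end of record
      print NAME                          # value becomes the new first line
      for (i = 2; i <= CNT; i++)          # skip the original first line
          print OUT[i]
      CNT = 0; NAME = ""
  }
' input.txt > output.txt
```

Run against the sample input, output.txt holds the record with "ethanol" promoted to the first line.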
This seems to be exactly what I did in my script using read and while. What is the explanation for why my script takes 30 seconds to process a file and the script above takes less than 0.1 second to do the same thing in more or less the same way?
I have always appreciated how fast awk can be but sometimes it is hard to see where the optimization is coming from.
LMHmedchem
Last edited by LMHmedchem; 12-18-2016 at 01:47 AM..
Let me applaud you for your efforts to work around the problems you encountered, and for your (can I say?) endurance in finding a solution on your own, as opposed to coming back whining immediately. Other members could certainly take a leaf out of your book!
Trying to answer some of your questions:
- I'd be surprised if extra lines were added if the loop starts with i=1, at least it didn't when I tested it. And, why should it?
- /var/ in awk is a regex constant, so it would try to match the sequence of chars 'v', 'a', 'r'. Your last approach ($0 ~ name_to_find) is the right one to match against a variable.
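A quick illustration of the difference, with hypothetical one-liners:

```shell
# /name/ is a regex CONSTANT: it matches lines containing the literal
# characters n-a-m-e, regardless of what the variable holds
printf 'foo\nname\n' | awk -v name='foo' '/name/'     # prints: name

# $0 ~ name matches against the CONTENTS of the variable
printf 'foo\nname\n' | awk -v name='foo' '$0 ~ name'  # prints: foo
```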
- awk works on pattern {action} pairs. action is executed if pattern is TRUE. So, F is equivalent to F != 0 as awk treats 0 as FALSE and anything else as TRUE.
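For example, a bare variable used as a pattern:

```shell
# the bare pattern F fires whenever F is non-zero; here /a/ sets F,
# so the F rule fires on the line FOLLOWING the match (and resets F)
printf 'a\nb\nc\n' | awk 'F { print "after:", $0; F = 0 } /a/ { F = 1 }'
# prints: after: b
```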
- shell scripts are interpreted line by line, even in a loop, and files used e.g. for output are opened and closed for every redirection encountered (e.g. on an echo or printf command). awk reads a script, compiles it, and executes it; files are kept open unless explicitly closed. This accounts for most of the execution-time difference. Although you are not using external commands (which would consume resources for a process creation each time and make it even slower), some of your statements could be improved.
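The per-redirection open/close is visible in the shell pattern itself. These three variants (hypothetical file names) produce identical output, but only the first reopens the file on every iteration:

```shell
printf '1\n2\n3\n' > in.txt

# slow: out_slow.txt is opened, appended to, and closed once per line
: > out_slow.txt
while IFS= read -r line; do
    printf '%s\n' "$line" >> out_slow.txt
done < in.txt

# faster: the whole loop shares one open file descriptor
while IFS= read -r line; do
    printf '%s\n' "$line"
done < in.txt > out_fast.txt

# awk compiles its program once and keeps the output file open throughout
awk '{ print }' in.txt > out_awk.txt
```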