Thanks very much for the suggestions. We just got 8 inches of snow and it has turned to rain. It's supposed to freeze again this afternoon so I have to go out and get rid of the snow. I will be back later this afternoon.
---------- Post updated at 10:37 PM ---------- Previous update was at 12:40 PM ----------
Quote:
Originally Posted by RudiC
Try
This works very well and is very fast.
The only modification I had to make was to change the range of the for loop to for (i=2; i<=CNT; i++) print OUT[i] to skip the original first line. Am I right in assuming that this array is indexed starting at 1 and not 0? At any rate, if I leave i=1, I get an extra line after the name.
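For reference, awk arrays are associative and have no fixed starting index; with OUT[++CNT] the first stored element lands at index 1, which is why starting the loop at i=2 skips the record's original first line. A minimal sketch (the input lines are invented for illustration; OUT and CNT follow the names used above):

```shell
# Sketch of the buffering plus END loop discussed above.
awk '
    { OUT[++CNT] = $0 }     # ++CNT yields 1 on the first line, so OUT is 1-indexed
    END {
        for (i = 2; i <= CNT; i++)   # start at 2 to skip the original first line
            print OUT[i]
    }
' <<'EOF'
original_first_line
line_two
line_three
EOF
```

With i = 1 the loop would also emit original_first_line, which matches the extra line you saw.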
I have been trying to pass a shell variable in as the "name" that I am looking for.
This doesn't work; none of the names are located and printed to the new location. I'm not sure what the problem is here. This follows the syntax I have used to pass bash variables to awk in previous scripts.
I have also tried some other variants such as,
Any idea what I am missing here? Is this an issue with the <> characters?
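It is not the <> characters. Inside an awk program, /name/ is a regular-expression literal that matches the literal text "name"; the shell never expands a variable there (the program is usually single-quoted), and even a variable passed with -v is not interpolated between the slashes. A tiny demonstration (the input line and variable value are made up):

```shell
# The /.../ form matches the literal text, not the -v variable's value.
echo 'this line contains name somewhere' |
awk -v name='<abc>' '/name/ { print "matched the literal string \"name\"" }'
```

The value of the -v variable never enters the match at all, which is why none of the real names were found.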
LMHmedchem
---------- Post updated 12-18-16 at 12:33 AM ---------- Previous update was 12-17-16 at 10:37 PM ----------
I seem to have found the issue. There is a problem with using a variable between the /.../ regular expression slashes; awk treats whatever sits between the slashes as a literal regex, so the variable is never expanded.
I replaced that line with the match operator $0 ~ name_to_find {F = 1} and now it works fine.
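For anyone hitting the same wall, the working combination is a -v assignment on the command line plus the ~ match operator, which does accept a variable (its value is treated as an extended regex, so regex metacharacters in a name would need escaping; < and > are safe). A minimal sketch with a made-up tag:

```shell
# -v passes the shell value into awk; $0 ~ name_to_find matches it
# dynamically. '<abc>' is an invented tag for illustration.
printf '%s\n' 'header' '<abc>' 'CompoundX' '$$$$' |
awk -v name_to_find='<abc>' '
    $0 ~ name_to_find { F = 1; next }   # flag the tag line
    F                 { print; F = 0 }  # emit the line that follows it
'
```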
This is the script now.
If I am reading this correctly, each line is stored in the array OUT[]. When the match operator is satisfied for the line, the variable F is set to 1. When F==1, the line is stored in the variable "NAME" and F is set back to 0. When $$$$ is found, NAME is printed followed by the rest of the record excepting the first line of the record. Am I right that F { NAME = $0; F = 0 } is the equivalent of F == 1 { NAME = $0; F = 0 } ?
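Yes: a bare F pattern fires whenever F is nonzero and non-empty, so for a 0/1 flag, F { ... } is exactly F != 0 { ... }, which is the same as F == 1 here. Putting the pieces described above together, a hedged reconstruction of the script (the record layout, the tag '<ID>', and the $$$$ terminator are assumed from context, not copied from the original):

```shell
# Hedged reconstruction, not the original script: buffer each record in
# OUT[], capture the line after the tag match into NAME, and at the $$$$
# terminator print NAME followed by the record minus its first line.
printf '%s\n' 'mol1' '> <ID>' 'abc' '$$$$' |
awk -v name_to_find='<ID>' '
    F { NAME = $0; F = 0 }          # the line after the match holds the name
    $0 ~ name_to_find { F = 1 }     # flag: capture the next line
    { OUT[++CNT] = $0 }             # buffer every line of the record
    $0 == "$$$$" {                  # end-of-record terminator
        if (NAME != "") {           # emit only records that matched
            print NAME
            for (i = 2; i <= CNT; i++)
                print OUT[i]        # skip the record original first line
        }
        CNT = 0; NAME = ""
    }
'
```

Note the rule order: the F rule comes before the match rule, so on the matching line itself F is still 0, and NAME is taken from the following line.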
This seems to be exactly what I did in my script using while and read. What is the explanation for why my script takes 30 seconds to process a file while the script above takes less than 0.1 seconds to do the same thing in more or less the same way?
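Without seeing the bash version it is hard to be precise, but the usual explanation is that a while/read loop re-enters the shell interpreter for every line, and anything in the loop body multiplies that cost: a >> redirection re-opens the output file once per line, and any external command forks a new process per line. awk parses the program once and then runs the whole file through a single compiled pattern-action loop with buffered I/O. A rough illustration of the per-line interpreter overhead alone (file and line count are arbitrary; timings will vary by machine):

```shell
# Compare per-line shell iteration against a single awk pass.
tmp=$(mktemp)
seq 50000 > "$tmp"

count_bash() {
    local n=0
    while IFS= read -r line; do
        n=$((n + 1))            # one interpreter iteration per line
    done < "$tmp"
    echo "$n"
}

time count_bash                       # typically much slower
time awk 'END { print NR }' "$tmp"    # one process, buffered reads
rm -f "$tmp"
```

If the bash loop also appends to a file or calls an external command inside the body, each line additionally pays a file-open or fork/exec, which is where 30 seconds versus 0.1 seconds typically comes from.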
I have always appreciated how fast awk can be but sometimes it is hard to see where the optimization is coming from.
LMHmedchem
Last edited by LMHmedchem; 12-18-2016 at 01:47 AM..