What I need to do is find the value of the name field and copy it to the first line of the record. In the case above, I would pass "name" to the script, and the script would find the value on the line after > <name> and write it to the first line of the record.
I wrote the script below to do that, and it does work, but it takes about 30 seconds to process a file of 500 records, which is a bit slow.
This script reads through the input file, adding each row to an array until $$$$ is found. Along the way, it checks each line to see if it is > <name>. If it is, the next line is saved.
When the end of the record is reached, the name is printed to the output file, followed by the lines of data stored in the array. The first line in the array is skipped to avoid writing the blank line at the start of the record. The $$$$ is also added. The array and name are then cleared and the next record is processed.
Code:
#!/bin/bash

# input file name
input_file=$1
# attribute field tag to use for name line
name_field=$2
# output file name
output_file=$3
# create/empty the output file (touch alone would not clear an existing file)
> "$output_file"
# declare array for individual sdf record
declare -a sdf_record
# create both possible versions of the attribute tag line
# (some writers emit one space between ">" and "<", others two)
name_string_1='> <'$name_field'>'
name_string_2='>  <'$name_field'>'
# initialize
temp_name=''; save_next_line='0'
# set input field separator to empty to preserve leading/trailing spaces
IFS=''
# loop through input file
while read -r line; do
# test if line is last line of record, if not add line to temp record array
if [ "$line" != "\$\$\$\$" ]; then
# add each line to sdf record
sdf_record=("${sdf_record[@]}" "$line")
# check if this line has been marked to save for the name string
if [ "$save_next_line" == "1" ]; then
# save name and reset indicator
temp_name=$line; save_next_line='0'
# check if this is the name tag line, matching either version of tagging
elif [[ "$line" == "$name_string_1" ]] || [[ "$line" == "$name_string_2" ]]; then
# set marked to collect the next line for the name string
save_next_line='1'
fi
# when the $$$$ record terminator is reached, print the record adding the name line
else
# add the record termination string $$$$ as the last line of the temp record
sdf_record=("${sdf_record[@]}" "\$\$\$\$")
# add the name field to the start of the record
printf '%s\n' "$temp_name" >> "$output_file"
# append the rest of the record lines stored in the array to the output file
# this skips the first line, which is replaced by the name above
for record_line in "${sdf_record[@]:1}"
do
printf '%s\n' "$record_line" >> "$output_file"
done
# clear the current sdf record and name
unset sdf_record; temp_name=''
fi
done < "$input_file"
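One structural change that should help a lot: the script above reopens the output file for every single echo. Redirecting the whole loop's stdout once means the file is opened one time total. A minimal sketch of the pattern (file names here are just examples, not the script's variables):

```shell
# Minimal sketch of the single-redirection pattern. The loop's stdout
# is redirected once, so the output file is opened one time instead of
# once per write.
printf '%s\n' alpha beta gamma > in.txt   # sample input

while IFS= read -r line; do
    printf '%s\n' "$line"    # replaces: echo -e $line >> $output_file
done < in.txt > out.txt
```

Applied to the script, the `>> $output_file` on each print would go away and a single `> "$output_file"` would be added after `done < "$input_file"`.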
It is possible that there is already a name on the first line, and the solution above takes care of that. It also allows any available field to be used as the "name". At this point, it doesn't trap the case where the name field is not found.
As with most of the things I write on my own, it works but is very slow.
Any suggestions that could speed this up, make it more sound, etc., would be very much appreciated.
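For comparison, the same record loop can be done in awk, which avoids bash's per-line overhead entirely. This is only a sketch under the same assumptions as the script (records end with $$$$, tag line is exactly "> <field>"; the tag "name" and the file names are example values, and the two-space tag variant is not handled here):

```shell
# Sketch: same logic in awk. Pass the real field tag with -v tag=...
# Sample input record (example data only):
printf '%s\n' 'oldname' '  comment line' '> <name>' 'benzene' '' '$$$$' > input.sdf

awk -v tag="name" '
{
    rec[++n] = $0                      # buffer every line of the record
    if (grab) { name = $0; grab = 0 }  # line after the tag holds the value
    if ($0 == "> <" tag ">") grab = 1
    if ($0 == "$$$$") {                # end of record: emit it
        print name                     # new first line
        for (i = 2; i <= n; i++)       # skip the original first line
            print rec[i]
        n = 0; name = ""
    }
}' input.sdf > output.sdf
```

The whole file is processed in one pass with one output-file open, so a 500-record file should take well under a second.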
LMHmedchem
Last edited by LMHmedchem; 12-17-2016 at 02:38 AM..