The trick is that I want to check the "Name" column value of each row every time a new column is added. It is very important that this data stays in registration (the rows from each file must line up). It would also help to take something from each file name to use as a header in place of E0, because I think having all of the columns named the same is asking for trouble. It would be very easy to have the script change this in each file beforehand if that would make more sense.
My current thought was to use cut or paste to merge all of the columns I want, including the name columns, into one file.
Then I could use IFS=$'\t' read -a to read each line into an array and test the name fields to make sure they are all the same for each row. If they are, I could output the data columns to a new file. I think that would work but would be pretty awkward.
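Something like this rough sketch is what I have in mind (the file names here are just placeholders):
Code:
# merge Id, Name, E0 from the first file with Name, E0 from a second file,
# then check that the Name columns agree on every row
paste <(cut -f1,3,4 A_f0_r179_pred.txt) <(cut -f3,4 B_f0_r180_pred.txt) > merged.txt
# after the paste the columns are: 1=Id 2=Name 3=E0 4=Name 5=E0
while IFS=$'\t' read -r -a FIELD; do
    if [ "${FIELD[1]}" != "${FIELD[3]}" ]; then
        echo "names do not match: ${FIELD[1]} vs ${FIELD[3]}" >&2
        exit 1
    fi
done < merged.txt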
At some point, I also need to create a new column with the average of all of the data columns for each row.
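For the averaging step, an awk sketch along these lines might do, assuming the merged file is tab-delimited with Id and Name in the first two columns and data in everything after that:
Code:
# append a per-row average of the data columns (assumed to be columns 3..NF)
awk 'BEGIN {FS = OFS = "\t"}
NR == 1 {print $0, "Average"; next}
        {sum = 0
         for (i = 3; i <= NF; i++) sum += $i
         print $0, sum / (NF - 2)}
' merged_output.txt > merged_with_avg.txt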
The final script didn't actually work but I thought I would post it anyway in case it would be helpful. This script was supposed to allow the header value of the index key to be passed in the call to the script along with the header names of the columns to be output.
There are a great many ways to do this, so suggestions are greatly appreciated.
LMHmedchem
The script below was kindly suggested by Chubler_XL. I believe it would work for what I need but the output has the id column out of order and includes many blank rows interspersed with data.
Code:
#!/bin/bash
# script data_merge_awk.sh
INDEX=$1
INDEX_FILE=$2
MERGE_FILE=$3
INCLUDE=${4:-.*}
EXCLUDE=${5:-}
awk -v IDX="$INDEX" -v O="$INCLUDE" -v N="$EXCLUDE" '
FNR==1 {
    # header line of each file: decide which columns to keep
    split(O, hm)        # include patterns
    split(N, skip)      # exclude patterns
    split("", p)        # reset the column-to-output-position map
    for (i=1; i<=NF; i++) {
        if ($i == IDX) keypos = i
        if ($i in have) continue
        for (t in hm) {
            x = ""
            if (!($i in p) && match($i, hm[t])) {
                for (x in skip) if (match($i, skip[x])) break
                if (x && match($i, skip[x])) continue
                o[++headers] = $i   # remember the header name for output
                p[i] = headers      # map this input column to an output column
                have[$i]            # mark this header name as seen
                break
            }
        }
    }
    next
}
# data rows: store the selected fields keyed on the index value
keypos { for (c in p) { K[$keypos]; OUT[$keypos,p[c]] = $(c) } }
END {
    # print the collected headers, then one row per key value
    # (note: the order of "for (key in K)" is not guaranteed)
    $0 = ""
    for (i=1; i<=headers; i++) $i = o[i]
    print
    $0 = ""
    for (key in K) {
        for (i=1; i<=headers; i++) $i = OUT[key,i]
        print
    }
}' FS='\t' OFS='\t' "$INDEX_FILE" "$MERGE_FILE"
# call with,
# data_merge_awk.sh index_key index_file merge_file [fields] [exclude]
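# for example (a hypothetical, untested call - the file names are placeholders,
# assuming "Id" is the index key header, the Id/Name/E0 columns should be kept,
# and any "Group" column should be excluded):
# data_merge_awk.sh Id A_f0_r179_pred.txt B_f0_r180_pred.txt 'Id|Name|E0' 'Group'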
What if you pipe the output through a sort operation?
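For example, something along these lines (untested; assuming the id is numeric and ends up as the first output column, and using placeholder file names):
Code:
# keep the header line, drop blank rows, and sort the data rows numerically on the id
./data_merge_awk.sh Id index_file.txt merge_file.txt |
awk 'NR == 1 || NF' |
{ IFS= read -r header; printf '%s\n' "$header"; sort -t $'\t' -k1,1n; }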
Hi.
It looks like you have a number of requests for help / requirements:
1) aggregate the E0 fields into a single file along with the Id and Name columns -- for 40 files -- a join operation
2) create a new column with the average of all of the data columns for each row
3) take something from each file name to use for a header in place E0
You seem to like to use awk, but given your heavy use of what are essentially csv files (with TABs in place of commas), I think that acquiring and learning a csv-specific tool would be useful. That's up to you, of course.
I found that I could use csvtool to at least start on this. Its join is far better than the system join (the latter of which deals only with 2 files). So here is, without supporting scaffolding listed, what csvtool could easily do with your 3 sample files.
Code:
csvtool -t TAB -u TAB join 1,2,3 4 data[1-3]
producing:
Code:
1 V N(,)'1 0.2904 0.2916 0.2581
2 V N(,)'2 0.3180 0.3123 0.2903
3 V N(,)'3 0.3277 0.3234 0.2988
4 V N(,)'4 0.3675 0.3475 0.3496
5 V N(,)'5 0.3456 0.3294 0.3390
Id Group Name E0 E0 E0
However, csvtool does not do arithmetic directly. Incorporating the filename or some other distinguishing feature to replace the E0 also does not seem to be doable. I may look at csvfix, ffe, CRUSH, etc. to see how they might apply.
Best wishes ... cheers, drl
The runtime for this was ~40 seconds for 40 input files, each with 2500 rows. That's not too awful but I think this code is a bit ghastly. It would be faster if I collected all of the data in memory instead of writing it to a file and then reading it back in.
This solution also used sed in the pipe to replace the E0 values with a value read from the file name as the data is passed to the new file. That is almost the only thing about this script that I like. The code is not generalized but could be made a bit more so in a few places.
RudiC, I will check out your latest post in a few minutes.
LMHmedchem
Code:
#!/bin/bash
# name of output file
output_file=$1

# collect names of all pred output files in array, files are in pwd with script
pred_file_list=( *_pred.txt )

# the first file forms the base of the output, so capture the name here
first_file=${pred_file_list[0]}

# get set, fold, rnd from file name
unset FIELD; IFS='_' read -r -a FIELD <<< "$first_file"
set_fold_rnd=${FIELD[0]}'_'${FIELD[1]}'_'${FIELD[2]}

# use the first output file as the base file for the rest
# collect columns 1, 3, and 4 and pipe to aggregate file
# change E0 to set fold and rnd ini from file name
cut -f1,3,4 "${pred_file_list[0]}" | sed "s/E0/$set_fold_rnd/1" > tmp_output1.txt

# loop through file list
for pred_file in "${pred_file_list[@]}"
do
    # don't enter the first file twice
    if [ "$pred_file" != "$first_file" ]; then
        # get set, fold, rnd ini from filename
        unset FIELD; set_fold_rnd='';
        # create substitute column header value from filename
        IFS='_' read -r -a FIELD <<< "$pred_file"
        set_fold_rnd=${FIELD[0]}'_'${FIELD[1]}'_'${FIELD[2]}
        # collect columns 3 and 4 and pipe to temp file
        # change E0 to set fold and rnd ini from file name
        cut -f3,4 "./$pred_file" | sed "s/E0/$set_fold_rnd/1" > tmp_output2.txt
        # merge temp file with aggregate file to create second temp
        paste tmp_output1.txt tmp_output2.txt > tmp_output3.txt
        # rename second temp back to aggregate file name
        mv tmp_output3.txt tmp_output1.txt
        # cleanup
        rm -f tmp_output2.txt tmp_output3.txt
    fi
done

# tmp_output1.txt now contains all of the renamed data columns and all of the name columns

# name columns to check
# this could be dynamic by reading header line and recording the positions where "name" is found
declare -a field_check_array=(3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79)

# data columns to output
# this could be dynamic by reading header line and recording the positions where "E0" is found
declare -a output_cols_array=(0 1 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80)

# process the resulting aggregate file
while IFS= read -r line; do
    # reinitialize array and output line string
    unset FIELD; output_line='';
    # read tab separated line into array
    IFS=$'\t' read -r -a FIELD <<< "$line"
    # for each line check the value of each field in field_check_array against the first name field
    # check name fields to make sure they are all the same, exit if they are not
    for field_check in "${field_check_array[@]}"
    do
        if [ "${FIELD[1]}" != "${FIELD[$field_check]}" ]; then
            echo "names do not match"
            echo "FIELD[1]: ${FIELD[1]}"
            echo "FIELD[$field_check]: ${FIELD[$field_check]}"
            exit 1
        fi
    done
    # if all name fields check for this row
    # add fields in output_cols_array to output_line string
    for output_col in "${output_cols_array[@]}"
    do
        # get value for next field
        cell="${FIELD[$output_col]}"
        # if this is the first column, the size of the output string will be 0, no tab
        if [ -z "$output_line" ]; then
            output_line="$cell"
        else
            # concatenate with row string
            output_line="$output_line"$'\t'"$cell"
        fi
    done
    # if file does not exist, this is the first row of output
    if [ ! -f "$output_file" ]; then
        # create file, touch and then append prevents empty column from newline???
        touch "$output_file"
        # write first row
        echo "${output_line}" >> "$output_file"
    # if file exists, append
    else
        echo "${output_line}" >> "$output_file"
    fi
done < tmp_output1.txt

# cleanup
rm -f tmp_output1.txt
I made a few modifications to the script posted by RudiC.
This just changes the code that creates the substitute header from
HD = HD OFS $3 OFS $4 "_" T[2]
to
HD = HD OFS $3 OFS T[1] "_" T[2] "_" T[3]
For the filename "A_f0_r179_pred.txt", this results in the header "A_f0_r179" instead of the header "E0_f0".
It also changes the input file glob from
A_*_pred.txt
to
*_*_pred.txt
because there are file names that start with letters other than A.
Code:
#!/bin/bash
# name of output file
output_file=$1
awk '
NR == 1         {HD = $1
                }
FNR == 1        {split (FILENAME, T, "_")
                 HD = HD OFS $3 OFS T[1] "_" T[2] "_" T[3]
                }
                {IX = FNR - 1
                 MAX = IX > MAX ? IX : MAX
                }
FNR == NR       {ID[IX] = $1
                 NAME[IX] = $3
                }
$1 == ID[IX] &&
$3 == NAME[IX]  {OUT[IX] = OUT[IX] $3 OFS $4 OFS
                 next
                }
                {OUT[IX] = OUT[IX] OFS OFS
                }
END             {print HD
                 for (i=1; i<=MAX; i++) print ID[i], OUT[i]
                }
' OFS="\t" *_*_pred.txt > "$output_file"
This runs in 0.2 seconds (compared to 40 seconds for my script). The only issue is that the Name columns are still appearing in the final output and I only need the Name once.
I could add more code to process the output and remove all of the "Name" columns except the first one, but instead I changed the script below so that the Name column is collected from the first file only and printed once per row. I also changed the header line from
HD = HD OFS $4 "_" T[1] "_" T[2] "_" T[3]
to
HD = HD OFS T[1] "_" T[2] "_" T[3]
to skip the original "E0" in the new header name.
Run time was 0.2 seconds to process 40 files with 2500 rows and 43 columns.
Code:
#!/bin/bash
# name of output file
output_file=$1
awk '
NR == 1         {HD = $1 OFS $3
                }
FNR == 1        {split (FILENAME, T, "_")
                 HD = HD OFS T[1] "_" T[2] "_" T[3]
                }
                {IX = FNR - 1
                 MAX = IX > MAX ? IX : MAX
                }
FNR == NR       {ID[IX] = $1
                 NAME[IX] = $3
                }
$1 == ID[IX] &&
$3 == NAME[IX]  {OUT[IX] = OUT[IX] $4 OFS
                 next
                }
                {OUT[IX] = OUT[IX] OFS
                }
END             {print HD
                 for (i=1; i<=MAX; i++) print ID[i], NAME[i], OUT[i]
                }
' OFS="\t" *_*_pred.txt > "$output_file"
I can more or less follow what this script is doing. I guess you could make it more general by using variables for the columns you are checking and outputting?
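Something like this might do it (untested, just a sketch based on the script above, with hypothetical id_col/name_col/data_col arguments for the column positions):
Code:
#!/bin/bash
# name of output file, followed by the column positions to use
output_file=$1
id_col=${2:-1}      # column holding the Id
name_col=${3:-3}    # column holding the Name
data_col=${4:-4}    # column holding the E0 data
awk -v ID_C="$id_col" -v NM_C="$name_col" -v DT_C="$data_col" '
NR == 1           {HD = $ID_C OFS $NM_C
                  }
FNR == 1          {split (FILENAME, T, "_")
                   HD = HD OFS T[1] "_" T[2] "_" T[3]
                  }
                  {IX = FNR - 1
                   MAX = IX > MAX ? IX : MAX
                  }
FNR == NR         {ID[IX] = $ID_C
                   NAME[IX] = $NM_C
                  }
$ID_C == ID[IX] &&
$NM_C == NAME[IX] {OUT[IX] = OUT[IX] $DT_C OFS
                   next
                  }
                  {OUT[IX] = OUT[IX] OFS
                  }
END               {print HD
                   for (i=1; i<=MAX; i++) print ID[i], NAME[i], OUT[i]
                  }
' OFS="\t" *_*_pred.txt > "$output_file"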