Join multiple files by column with awk


 
# 1  
Old 06-03-2010
Join multiple files by column with awk

Hi all,
I searched through the forum but I can't find a solution. I need to join a set of files in a directory (~1600 of them) by column: the output should keep the first and second columns, which are common to every file, followed by one column per file, taken from each file's fourth column. I'll show the input and desired output for clarity:

File 1:
Code:
name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    -0.4241    0.0097
cnvi0000002    5    165771245    0.4448    1
cnvi0000003    5    165772271    0.4321    0
cnvi0000004    5    166325838    0.0403    0.9971
cnvi0000005    5    166710354    0.2355    0

File 2:
Code:
name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    0.0736    0
cnvi0000002    5    165771245    0.1811    1
cnvi0000003    5    165772271    0.2955    0.0042
cnvi0000004    5    166325838    -0.118    0.9883

File 3:
Code:
name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    0.2449    0
cnvi0000002    5    165771245    -0.0163    1
cnvi0000003    5    165772271    0.3361    0
cnvi0000004    5    166325838    0.0307    0.9867
cnvi0000005    5    166710354    0.1529    0

(note that File 2 has a missing line)

Output:
Code:
chr    Position    File1   File2   File3
5    164388439    -0.4241    0.0736    0.2449
5    165771245    0.4448    0.1811    -0.0163
5    165772271    0.4321    0.2955    0.3361
5    166325838    0.0403    -0.118    0.0307
5    166710354    0.2355    <empty field>    0.1529

Now, I managed to join all the files by column using:

Code:
awk '{
   if (x[FNR])
      # later files: append this line's 4th field to the stored row
      x[FNR] = sprintf("%s\t%s", x[FNR], $4)
   else
      # first time this line number is seen: keep the whole line as the base of the row
      x[FNR] = $0
}  END {
   for (i=1;i<=FNR;++i)
       print x[i]
}'

but this keeps all the columns from the first file and then appends the columns of the other files; since rows are indexed by line number (FNR), no tab separator or empty field is inserted when a file has missing lines, and the values shift left. I obtain this (after manually removing the useless columns):

Output:
Code:
chr    Position    File1   File2   File3
5    164388439    -0.4241    0.0736    0.2449
5    165771245    0.4448    0.1811    -0.0163
5    165772271    0.4321    0.2955    0.3361
5    166325838    0.0403    -0.118    0.0307
5    166710354   0.2355     0.1529

Since I need this huge file as input to another program, that is not right. So I tried this solution:

Code:
awk 'NR==FNR{ llr[$1]=$4; p[$1]=$2"\t"$3; next } {
    if (llr[$1]) {
        p[$1] = p[$1] "\t" llr[$1]; llr[$1] = $4
    } else {
        llr[$1] = "\t"
        p[$1] = p[$1] "\t" llr[$1]
    }
}
END{ for (i in p) print p[i] }'

after reading this https://www.unix.com/shell-programmin...ple-files.html

But it doesn't work in the desired way: I get the same output as with the first script (just restricted to the useful columns). Looking at it again, the values of the last file read stay in llr[] and are never appended, records missing from an intermediate file still make the columns shift, and for (i in p) prints the rows in no particular order.
I hope I have been clear enough.
If anyone has some ideas, any help will be welcome!
Bye, Macsx

PS: the file header doesn't actually matter; I can create it by hand.

# 2  
Old 06-03-2010
Hi macsx82,

Using Franklin52's script from the thread you mentioned, I get the approach below. It only partially does what you need; maybe the awk experts can correct and enhance this script, or give us a better solution.
Code:
WHINY_USERS=1 awk 'BEGIN{ print "chr","Position"} NR==FNR{ a[$1]=$4; s[$1]=$2 " " $3 " " $4; next } {
  s[$1] = s[$1] " " $4;
}
END{for(i in s) {print s[i]}}' file*

Output:

chr Position
5 164388439 -0.4241 0.0736 0.2449
5 165771245 0.4448 0.1811 -0.0163
5 165772271 0.4321 0.2955 0.3361
5 166325838 0.0403 -0.118 0.0307
5 166710354 0.2355 0.1529

This script uses file1, file2 and file3 as input files. (WHINY_USERS=1 is an undocumented gawk switch that makes for (i in s) traverse the array in sorted key order; that's what keeps the rows sorted here.)

Note:
Two things are still needed:
1) Add the headers for the respective files; and
2) Handle the missing line in file2 better, so that the blank values land in the correct positions (see the sketch after the expected output below).

Code:
chr Position file1 file2 file3
5 164388439 -0.4241 0.0736 0.2449
5 165771245 0.4448 0.1811 -0.0163
5 165772271 0.4321 0.2955 0.3361
5 166325838 0.0403 -0.118 0.0307
5 166710354 0.2355    <empty field>    0.1529
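
One way to handle point 2 is to key the values by record name and file number instead of appending blindly. This is only a minimal sketch, not from the thread: it assumes tab-separated files that each start with one header line, with file1 file2 file3 standing in for the real ~1600 file names.

Code:
awk -F'\t' '
FNR == 1 { nfiles++; next }              # skip each header line, count files
{
    if (!($1 in pos)) {                  # first time this record name is seen
        order[++n] = $1                  # remember input order (files are sorted by name)
        pos[$1] = $2 "\t" $3             # Chr and Position
    }
    val[$1, nfiles] = $4                 # Log R Ratio, keyed by name and file number
}
END {
    for (i = 1; i <= n; i++) {
        line = pos[order[i]]
        for (f = 1; f <= nfiles; f++)
            line = line "\t" val[order[i], f]   # missing record -> empty field
        print line
    }
}' file1 file2 file3

Since val[name, f] is simply empty when file f has no line for that name, missing entries come out as consecutive tabs, i.e. exactly the empty fields the desired output asks for.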


# 3  
Old 06-04-2010
Hi Cgkmal!
Thanks for the code improvement! It's certainly more "awk-ish"! At the moment I'm trying to solve the problem with an R script, but it's really slow, so I'll keep trying the awk way! I've just found another post that could be useful:

https://www.unix.com/shell-programmin...ping-file.html

I'll try and post! Thanks again!
# 4  
Old 06-04-2010
awk is not necessary for this task

If you don't have to use awk, the result you ask for can perhaps be achieved with basic shell tools and sed.

E.g., if the files are called file1, file2 and file3 and have their headers stripped off, you could achieve the desired result this way:

Code:
$ cat file1
cnvi0000001 5 164388439 -0.4241 0.0097
cnvi0000002 5 165771245 0.4448 1
cnvi0000003 5 165772271 0.4321 0
cnvi0000004 5 166325838 0.0403 0.9971
cnvi0000005 5 166710354 0.2355 0
$ cat file2
cnvi0000001 5 164388439 0.0736 0
cnvi0000002 5 165771245 0.1811 1
cnvi0000003 5 165772271 0.2955 0.0042
cnvi0000004 5 166325838 -0.118 0.9883
$ cat file3
cnvi0000001 5 164388439 0.2449 0
cnvi0000002 5 165771245 -0.0163 1
cnvi0000003 5 165772271 0.3361 0
cnvi0000004 5 166325838 0.0307 0.9867
cnvi0000005 5 166710354 0.1529 0
$ paste file* | sed -e 's/\t\t/\t     /g;s/\t/ /g;s/ /\t/g' | cut  -f 2,3,4,9,14
5       164388439       -0.4241 0.0736  0.2449
5       165771245       0.4448  0.1811  -0.0163
5       165772271       0.4321  0.2955  0.3361
5       166325838       0.0403  -0.118  0.0307
5       166710354       0.2355          0.1529

This assumes that the fields in file1, file2 and file3 are separated by spaces. The sed turns the double tab that paste produces for a missing line into a tab plus a run of spaces, then rewrites every separator as a tab, so a missing line becomes five empty fields and the column numbering for cut stays fixed. If the fields are separated by tabs instead, the sed command has to be modified accordingly, but I didn't test that.
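
One simple way to handle tab-separated input, sketched here and untested against real data: normalize the separators to spaces first, then run the pipeline above unchanged (the .sp suffix is a made-up name).

Code:
for f in file1 file2 file3; do
    tr '\t' ' ' < "$f" > "$f.sp"    # turn tabs into single spaces
done
paste file*.sp | sed -e 's/\t\t/\t     /g;s/\t/ /g;s/ /\t/g' | cut -f 2,3,4,9,14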

EDIT: You should post more details of the data you need to manipulate. For example, if more than one file is shorter than the others, the above will not work reliably. Also, is it possible that some records are skipped, e.g. a line starting with cnvi0000004 immediately followed by a line starting with cnvi0000006? The best solution in my opinion would be to preprocess the files so they all have the same number of lines, inserting "empty" data, e.g. "-", for the missing fields.
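
Such preprocessing can stay within basic shell tools. A sketch, not from the thread, assuming headerless, space-separated files sorted on the first column as above (keys and file2.padded are made-up names):

Code:
# collect every record name that occurs in any file
cut -d' ' -f1 file* | sort -u > keys

# left-join one file against the full key list; -e '-' fills the fields
# of missing records with '-', and -o selects the output fields
join -a1 -e '-' -o '0,2.2,2.3,2.4,2.5' keys file2 > file2.padded

After padding each file this way, all files have the same number of lines and the paste | sed | cut pipeline above works without surprises.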

EDIT2: A more robust sed command handling possible consecutive empty records:
Code:
$ paste file* file2 file2 file3 | sed -e 's/\([^\t]\)\t/\1 /g;s/\t/     /g;s/\t/ /g;s/ /\t/g' | cut  -f 2,3,4,9,14,19,24,29
5       164388439       -0.4241 0.0736  0.2449  0.0736  0.0736  0.2449
5       165771245       0.4448  0.1811  -0.0163 0.1811  0.1811  -0.0163
5       165772271       0.4321  0.2955  0.3361  0.2955  0.2955  0.3361
5       166325838       0.0403  -0.118  0.0307  -0.118  -0.118  0.0307
5       166710354       0.2355          0.1529                  0.1529


# 5  
Old 06-04-2010
Hi Ikki!
Thanks for your reply! I use awk because I'm more familiar with its syntax; I've only used sed a couple of times!
My files contain data from genetic chips, and each file belongs to one person.
Each file has ~360000 lines.
You're right, I have more than one file shorter than the others: at the moment there are 95 shorter files, but that number could grow. And yes, in the shorter files it's possible that a line starting with cnvi0000004 is followed by a line starting with cnvi0000006 or cnvi0000008; it depends on how many records are missing for that person. All input files are sorted by the first column.
As I said, I've written an R script that works, but it is extremely slow. In it I compare each file's list of names against a "complete" list and look for differences; every element missing from the short list is added back, so all files end up with the same length. That way I can merge the columns, with NaN standing in for the missing data. Here is the R code:

Code:
#define file path
files_path="/home/###/###/people/"

#read all file names in the directory and save in a vector
only_files <- dir(path=files_path, pattern = "*.in") 
files = paste(files_path,only_files, sep="")

#load files to create the "complete list" I need the first column that contain the name of the record
tot_file <- read.table(files[1], sep="\t", header=TRUE)[c(1,2,3)]
tot_file_noname <- cbind(Chr=tot_file$Chr, Position=tot_file$Position)


for (i in 1:length(files)) {
        xx_file <- read.table(files[i], sep="\t", header=TRUE)[c(1,3,4)]
        xx_file_noname <- cbind(xx_file$Position, xx_file$Log.R.Ratio)

# now, for each file, if some names from the complete list are missing,
# add them to the current xx_file object with the value "NaN"

    if (length(xx_file$name) != length(tot_file$name)){
                print('different!')
                mismatch=NULL

                match <- tot_file$name %in% xx_file$name
                                    
                for (j in 1:length(match)) { if (match[j] == FALSE) { mismatch = c(mismatch, j) } }

                missing_snp = NULL
# add missing values
                for (j in mismatch){
                    missing <- data.frame(Position = tot_file[j,]$Position, Log.R.Ratio="NaN")
                    missing_snp <- rbind(missing_snp, missing)
                }

                    # NB: the missing rows are appended at the end, not inserted in
                    # place, so rows after a gap may not line up with tot_file's order
                    xx_file_noname <- rbind(xx_file[,c(2,3)], missing_snp)
    }else{
        print('equals!')        
    }    

    tot_file_noname = cbind(tot_file_noname, xx_file_noname[,2])
}

# write the "big" file
write.table(tot_file_noname, file = "gigante.dat", append = FALSE, quote = FALSE, sep = "\t", eol = "\n", na = "NaN", dec =".", row.names = FALSE, col.names =TRUE)

Now I'm trying to port this to a shell script to get a faster response. My aim was to avoid preprocessing the files if I can, because of the large amount of data stored in each one.
I also tried your command, and it is running; the only problem is adding the 1664 column numbers for the cut command by hand, but I think I can work on it!
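
That field list doesn't have to be typed by hand, though. A small sketch, not from the thread, assuming 5 columns per pasted file as in the examples, so the Log R Ratio of file k sits in pasted column 5*(k-1)+4:

Code:
n=1664                                        # number of files
list=$(seq 4 5 $((5*n - 1)) | paste -sd, -)   # 4,9,14,... one entry per file
paste file* | cut -f "2,3,$list"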
Hope I have been clear enough; I greatly appreciate your help!
# 6  
Old 06-07-2010
Hi!
This weekend I wasn't able to work on my script, but it turns out the R script I wrote took only six hours to run on our powerful server, so I now have my 4.5G file! Today I'll work on the port to a bash script; it could be useful. Thanks all for the great help!
# 7  
Old 06-07-2010
So, I pondered your problem a bit. Your task isn't one that requires much processing power; the most likely bottleneck is file I/O. If you need to generate this kind of report rarely (say, once a month), then six hours doesn't seem too long.

If it's a daily task, or more importantly if you need to generate several types of reports often, I'd consider importing the data into a real database. This assumes the data is somewhat static (and even if it isn't, it could be written directly into the db, depending on the source of your data).

If a database is a no-go and the performance has to come from optimizing the code, one obvious place to optimize is the reading: perhaps you could read in "bursts", filling a file-specific buffer in one read. However, I don't know anything about R scripts, so you're on your own there.

I did a mock-up of the data (4 files with 360000 lines each) and wrote a perl script to do the heavy lifting. On my 500 MHz Pentium it performed this way: processing one input file took 70 seconds and processing 4 input files took 236 seconds. Extrapolating to 1700 files (not really reliable), that would be about 33 hours (70 s × 1700) and 27.9 hours (236 s ÷ 4 × 1700), respectively.

I'll paste the code here if you want to play with it. It reads the filenames from standard input. It ignores a line if there are no values in it, and it doesn't get confused if some records are missing.

Code:
#!/usr/bin/perl

use strict;
use warnings;

my @if = ();    # array of input files
my $ignore_first_line = 1;      # skip the header line of each file

# open all files
while ( <STDIN> ) {
        chomp;
        if ( -r $_ ) {
                my $index = @if;
                open( $if[ $index ]->{ handle }, "<", $_) or die "Couldn't open file $_: $!";
                $if[ $index ]->{ name } = $_; # save the filename
                $if[ $index ]->{ F }[0] = -1; # set default pos value for this file to "unread"
                if ( $ignore_first_line ) {
                        my $dummy_fh = $if[ $index ]->{ handle };
                        my $dummy = <$dummy_fh>; # read and discard the header
                }
        }
}

# print the header
print "chr\tPosition";
for ( 0 .. $#if ) {
        print "\t$if[$_]->{name}";
}
print "\n";

my $pos = 0;    # pos indicates which record we're dealing with

# let's loop the files until all are read thru
while ( 1 ) {
        my $ofc = 0;    # open filehandle count
        my $str = "";   # build the infoline here
        my $ref = undef;
        ++$pos;                 # increase the line position

        # loop thru all files
        for my $index ( 0 .. $#if ) {
                if ( defined ( $if[$index]->{handle} ) ) { # check if the file is open and we can read from it
                        ++$ofc;
                        if ( $if[$index]->{F}[0] < $pos ) {
                                my $handle = $if[$index]->{handle}; # save filehandle to a temp variable
                                if ( defined ( $if[$index]->{line} = <$handle> ) ) {
                                        @{$if[$index]->{F}} = split(/\s/, $if[$index]->{line});
                                        $if[$index]->{F}[0] =~ s/.*?(\d+)/$1/; # save only the number, eg. from cnvi0000003
                                }
                                else {
                                        $if[$index]->{handle} = undef; # close filehandle
                                }
                        }

                        if ( defined ( $if[$index]->{handle} ) and $if[$index]->{F}[0] == $pos ) {
                                # according to position we'll print this data now
                                # also save a reference to the data so we can print
                                # character and position later
                                $ref = $if[$index]->{F};
                                $str .= "\t" . $if[$index]->{F}[3];
                        }
                        else {
                                $str .= "\t"; # empty record
                        }

                }
                else {
                        $str .= "\t"; # empty record
                }
        }

        if ( defined ( $ref ) ) {
                print "$$ref[1]\t$$ref[2]$str\n";
        }

        last unless $ofc;
}
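
Since the script reads the list of filenames from standard input, a run over all the files might look like this (joinfiles.pl is a made-up name for wherever you save it):

Code:
ls file* | perl joinfiles.pl > combined.tsv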
