Hi all,
I searched through the forum but couldn't find a solution. I need to join a set of files (~1600) located in a directory, column by column: the output should have the first and second columns, which are common to every file, and each following column should be taken from one file in the list (specifically, that file's fourth column). I'll show the input and desired output for clarity:
File 1:
Code:
name Chr Position Log R Ratio B Allele Freq
cnvi0000001 5 164388439 -0.4241 0.0097
cnvi0000002 5 165771245 0.4448 1
cnvi0000003 5 165772271 0.4321 0
cnvi0000004 5 166325838 0.0403 0.9971
cnvi0000005 5 166710354 0.2355 0
File 2:
Code:
name Chr Position Log R Ratio B Allele Freq
cnvi0000001 5 164388439 0.0736 0
cnvi0000002 5 165771245 0.1811 1
cnvi0000003 5 165772271 0.2955 0.0042
cnvi0000004 5 166325838 -0.118 0.9883
File 3:
Code:
name Chr Position Log R Ratio B Allele Freq
cnvi0000001 5 164388439 0.2449 0
cnvi0000002 5 165771245 -0.0163 1
cnvi0000003 5 165772271 0.3361 0
cnvi0000004 5 166325838 0.0307 0.9867
cnvi0000005 5 166710354 0.1529 0
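Desired output (as the scripts later in the thread produce it: the common Chr and Position columns, then one Log R Ratio column per file; the file 2 field in the last row stays empty because file 2 has no cnvi0000005 line):
Code:
Chr Position file1 file2 file3
5 164388439 -0.4241 0.0736 0.2449
5 165771245 0.4448 0.1811 -0.0163
5 165772271 0.4321 0.2955 0.3361
5 166325838 0.0403 -0.118 0.0307
5 166710354 0.2355  0.1529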
I tried this awk script:
Code:
awk '{
  if (x[FNR])                                  # later files: append this file's 4th field
    x[FNR] = sprintf("%s\t%s", x[FNR], $4)
  else                                         # first file: keep the whole line
    x[FNR] = $0
} END {
  # NB: FNR here is the line count of the *last* file read,
  # so a short final file silently truncates the output
  for (i = 1; i <= FNR; ++i)
    print x[i]
}'
but this inserts all the columns from the first file and then joins the columns from the other files without inserting a tab separator or an empty field when a file has missing lines, so after manually removing the useless columns I still don't get the desired result: the output is the same as the first script's (just restricted to the useful columns).
I hope I have been clear enough.
If anyone has some ideas, any help will be welcome!
Bye, Macsx
PS: it actually doesn't matter what the file header looks like; I can create it by hand.
Well, using Franklin52's script from the thread you suggested, I get this approach, which partially does what you need. Maybe the AWK experts can correct and enhance this script, or give us a new, better solution.
This script uses file1, file2 and file3 as input files.
Note:
It's still needed to:
1-) add headers to the respective files; and
2-) manage the missing line in file2 better, so that the "blank values" land in the correct positions (see the sketch below).
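Since the script itself didn't make it into the post, here is a minimal awk sketch of that kind of approach: keyed on column 1 so missing records become empty fields, with a generated header. The filenames and the tab separator are assumptions, ARGIND is gawk-specific, and the output row order is whatever the array iteration gives, so sort the data rows afterwards if it matters:
Code:
awk 'BEGIN { OFS = "\t" }
FNR == 1 { next }                     # skip each file's header line
{
    pos[$1] = $2 OFS $3               # Chr and Position, keyed by record name
    val[$1, ARGIND] = $4              # this file's Log R Ratio for that name
}
END {
    hdr = "name" OFS "Chr" OFS "Position"
    for (f = 1; f <= ARGIND; f++)
        hdr = hdr OFS ARGV[f]
    print hdr
    for (k in pos) {                  # NB: unspecified order
        line = k OFS pos[k]
        for (f = 1; f <= ARGIND; f++)
            line = line OFS (((k, f) in val) ? val[k, f] : "")
        print line
    }
}' file1 file2 file3 > joined.tsv
One caveat: val keeps one entry per record per file, so with ~1600 files of ~360000 lines that is over 500 million array entries; for the full data set a streaming approach (like the Perl script later in the thread) is much kinder to memory.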
Hi Cgkmal!
Thanks for your code improvement! This is surely more "awk-ish"! At the moment I'm trying to solve the problem with an R script, but as far as I can see it's really slow, so I'll keep trying the awk way! I've just found another post that could be useful. This:
This assumes that the fields in file1, file2 and file3 are separated with spaces. If they are separated by tabs, the sed command has to be modified so it prints tabs instead of spaces, but I didn't test it.
EDIT: You should post more details of the data you need to manipulate. For example, if more than one file is shorter than the others, the above will not work reliably. Also, is it possible that some records are skipped? For example, a line starting with cnvi0000004 is immediately followed by a line starting with cnvi0000006. The best solution in my opinion would be to preprocess the files so they are all of equal length, inserting "empty" data for missing fields, e.g. "-".
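As a rough awk illustration of that padding idea (names.txt, a file holding the complete list of record names, is an assumption; "-" marks a padded field):
Code:
awk 'BEGIN { OFS = "\t" }
NR == FNR { order[++n] = $1; next }   # first pass: the master name list
{ row[$1] = $0 }                      # second pass: index the data file by name
END {
    for (i = 1; i <= n; i++) {
        k = order[i]
        if (k in row)
            print row[k]
        else
            print k, "-", "-", "-", "-"   # pad a missing record
    }
}' names.txt file2 > file2.padded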
EDIT2: I also have a more robust sed command that handles possible consecutive empty records.
Hi Ikki!
Thanks for your reply! I use awk because I'm more familiar with its syntax; I've only used sed a couple of times!
My files contain data from genetic chips, and each file belongs to one person.
Each file has ~360000 lines.
You're right, I have more than one file shorter than the others: at the moment there are 95 shorter files, but they could increase. And yes, in the shorter files it's possible that a line starting with cnvi0000004 is followed by a line starting with cnvi0000006 or cnvi0000008... it depends on how many records are missing for that person, but all input files are sorted on the first column.
As I said, I've written an R script that works, but it is extremely slow. In it I compare the "complete" list of names with each file's list and look for differences. Once I've found the elements missing from the shorter list, I add them to it so that all lists have the same length. This way I can merge all the columns, with placeholders standing in for the missing data. I post the R code:
Code:
# define the file path
files_path = "/home/###/###/people/"
# read all file names in the directory and save them in a vector
only_files <- dir(path = files_path, pattern = "*.in")
files <- paste(files_path, only_files, sep = "")
# load the first file to create the "complete list";
# its first column contains the name of each record
tot_file <- read.table(files[1], sep = "\t", header = TRUE)[c(1, 2, 3)]
tot_file_noname <- cbind(Chr = tot_file$Chr, Position = tot_file$Position)
for (i in 1:length(files)) {
  xx_file <- read.table(files[i], sep = "\t", header = TRUE)[c(1, 3, 4)]
  xx_file_noname <- cbind(xx_file$Position, xx_file$Log.R.Ratio)
  # if this file mismatches the complete list, add the missing
  # records to the current xx_file object with value "NaN"
  if (length(xx_file$name) != length(tot_file$name)) {
    print('different!')
    mismatch <- NULL
    match <- tot_file$name %in% xx_file$name
    for (j in 1:length(match)) {
      if (match[j] == FALSE) { mismatch <- c(mismatch, j) }
    }
    missing_snp <- NULL
    # build the rows for the missing values
    for (j in mismatch) {
      missing <- data.frame(Position = tot_file[j, ]$Position, Log.R.Ratio = "NaN")
      missing_snp <- rbind(missing_snp, missing)
    }
    xx_file_noname <- rbind(xx_file[, c(2, 3)], missing_snp)
  } else {
    print('equals!')
  }
  tot_file_noname <- cbind(tot_file_noname, xx_file_noname[, 2])
}
# write the "big" file
write.table(tot_file_noname, file = "gigante.dat", append = FALSE, quote = FALSE,
            sep = "\t", eol = "\n", na = "NaN", dec = ".", row.names = FALSE,
            col.names = TRUE)
Now I'm trying to port this to a shell script to get a faster response. My aim was to avoid preprocessing the files if I can, because of the large amount of data stored in each one.
I tried your command as well, and it is running; the only problem is adding 1664 columns by hand for the cut command, but I think I can work on it!
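For what it's worth, the column list for cut doesn't have to be typed by hand; it can be generated, for instance with seq (the start, step and end values here are placeholders, since I don't know the merged file's exact layout):
Code:
# hypothetical: keep fields 1 and 2, then every 5th field after that;
# adjust 4 (start), 5 (step) and 8322 (end) to the real column layout
cols="1,2,$(seq -s, 4 5 8322)"
cut -f "$cols" merged.txt > trimmed.txt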
I hope I have been clear enough; your help is greatly appreciated!
Hi!
This weekend I wasn't able to work on my script, but I found out that the R script I made took only six hours to run on our powerful server, so now I have my 4.5 GB file!! Today I'll work on the port to a bash script... it could be useful! Thanks all for the great help!
So, I pondered your problem a bit. Your task isn't one that requires much processing power; instead, the most likely bottleneck is file I/O. If you need to generate this kind of report rarely (say, once a month), then six hours doesn't seem too long.
If it's a daily task, or more importantly if you need to generate multiple types of reports often, I'd consider importing the data into a real database. This assumes the data is somewhat static (and even if it isn't, it could be written directly into the db, depending on the source of your data).
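For example, with sqlite3 (a sketch only: the table layout and data.tsv, a long-format dump with one name/chr/pos/person/ratio row per record per file, are assumptions):
Code:
sqlite3 cnv.db <<'EOF'
CREATE TABLE probe (name TEXT, chr TEXT, pos INTEGER, person TEXT, ratio REAL);
.mode tabs
.import data.tsv probe
EOF
A long, narrow table like this avoids the 1600-column monster and lets each report become a query.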
If a database is a no-go and the performance has to come from optimizing the code, I think one obvious place to optimize is the reading: perhaps you could read in "bursts", filling a file-specific buffer in one read. However, I don't know anything about R scripts, so you're on your own there.
I did a mock-up of the data (4 files with 360000 lines each) and wrote a perl script to do the heavy lifting. On my 500 MHz Pentium it performed this way: processing a single input file took 70 seconds, and processing 4 input files took 236 seconds. Extrapolating from these runs to 1700 files (which we really can't do reliably) gives about 33 hours (70 s × 1700) and 27.9 hours (236 s / 4 × 1700), respectively.
I'll paste the code here if you want to play with it. It takes the filenames on standard input. It ignores a line if there are no values in it, and it doesn't get confused if some records are missing.
Code:
#!/usr/bin/perl
use strict;
use warnings;

my @if = ();               # array of input files
my $ignore_first_line = 1; # skip each file's header line

# open all files; the filenames are read from stdin, one per line
while ( <STDIN> ) {
    chomp;
    if ( -r $_ ) {
        my $index = @if;
        open( $if[$index]->{handle}, "<", $_ ) or die "Couldn't open file $_: $!";
        $if[$index]->{name} = $_;  # save the filename
        $if[$index]->{F}[0] = -1;  # default pos value for this file: "unread"
        if ( $ignore_first_line ) {
            my $dummy_fh = $if[$index]->{handle};
            my $dummy = <$dummy_fh>;  # no spaces inside <>, or Perl treats it as a glob
        }
    }
}

# print the header
print "chr\tPosition";
for ( 0 .. $#if ) {
    print "\t$if[$_]->{name}";
}
print "\n";

my $pos = 0; # pos indicates which record we're dealing with

# loop over the files until all are read through
while ( 1 ) {
    my $ofc = 0;    # open filehandle count
    my $str = "";   # build the info line here
    my $ref = undef;
    ++$pos;         # increase the line position

    # loop through all files
    for my $index ( 0 .. $#if ) {
        if ( defined $if[$index]->{handle} ) { # the file is open and we can read from it
            ++$ofc;
            if ( $if[$index]->{F}[0] < $pos ) {
                my $handle = $if[$index]->{handle}; # save filehandle to a temp variable
                if ( defined ( $if[$index]->{line} = <$handle> ) ) {
                    @{ $if[$index]->{F} } = split( /\s/, $if[$index]->{line} );
                    $if[$index]->{F}[0] =~ s/.*?(\d+)/$1/; # keep only the number, e.g. from cnvi0000003
                }
                else {
                    $if[$index]->{handle} = undef; # close filehandle
                }
            }
            if ( defined $if[$index]->{handle} and $if[$index]->{F}[0] == $pos ) {
                # according to position we'll print this data now;
                # also save a reference to the data so we can print
                # chromosome and position later
                $ref = $if[$index]->{F};
                $str .= "\t" . $if[$index]->{F}[3];
            }
            else {
                $str .= "\t"; # empty record
            }
        }
        else {
            $str .= "\t"; # empty record
        }
    }
    if ( defined $ref ) {
        print "$$ref[1]\t$$ref[2]$str\n";
    }
    last unless $ofc; # stop when every file has been exhausted
}
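Usage would be something like this, assuming the script is saved as merge.pl (the name and the glob pattern are mine, since it reads filenames from standard input):
Code:
ls /path/to/people/*.in | perl merge.pl > merged.tsv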