Hi all,
I searched through the forum but couldn't find a solution. I need to join a set of files placed in a directory (~1600) by column, obtaining an output whose first and second columns are the ones common to every file, while each following column is taken from one file in the list (precisely the fourth column of that file). I'll show the input and desired output for clarity:
File 1:
File 2:
File 3:
(note that File 2 has a missing line)
Output:
Now, I managed to join all the files by column using:
but this inserts all the columns from the first file and then joins the columns from the other files, without inserting a tab separator or an empty field when a file has missing lines, so I get this (after manually removing the useless columns):
Output:
but as I need this huge file as input for another program, this is not right. Now I've tried this solution:
But it doesn't work the way I want: I get the same output as with the first script (just without the useless columns).
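Roughly, the transformation I'm after looks like the sketch below (made-up file names; it assumes tab-separated columns, the key in column 1, its companion in column 2 and the wanted value in column 4, so adjust to the real layout):

Code:
awk -F'\t' -v OFS='\t' '
{
    keys[$1]++                      # remember every key seen in any file
    second[$1] = $2                 # second column that belongs to the key
    val[FILENAME SUBSEP $1] = $4    # 4th column of the current file
    if (!(FILENAME in seen)) { seen[FILENAME] = 1; files[++nf] = FILENAME }
}
END {
    for (k in keys) {
        line = k OFS second[k]
        for (i = 1; i <= nf; i++) {
            idx = files[i] SUBSEP k
            line = line OFS (idx in val ? val[idx] : "")
        }
        print line                  # keys come out unsorted; pipe to sort
    }
}' file1 file2 file3 > merged

If the real separator isn't a tab, -F and OFS have to change accordingly.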
I hope I have been clear enough.
If anyone has some ideas, any help will be welcome!
Bye, Macsx
PS: it actually doesn't matter to me what the file header looks like; I can create it by hand.
Well, using Franklin52's script from the thread you suggested, I got this approach; it only partially does what you need, though.
Maybe the awk experts can correct and enhance this script or give us a new, better solution.
This script uses file1, file2 and file3 as input files.
Note:
It still needs the following (the missing parts are shown in red):
1-) Add headers to the respective files (see the small loop after this list); and
2-) Manage the missing line in file2 better, so that the "blank values" end up in the correct positions.
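For item 1, a small loop like this would do (the header text and file names are only examples):

Code:
for f in file1 file2 file3
do
    # prepend a hypothetical header line to each file
    { printf 'Name\tChr\tPosition\tValue\n'; cat "$f"; } > "$f.hdr"
done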
Hi Cgkmal!
Thanks for your code improvement! This is surely more "awk-ish"! At the moment I'm trying to solve the problem with an R script, but as far as I can see it is really slow, so I'll keep trying the awk way! I've just found another post that could be useful! This:
If you don't have to use awk, the result you ask for can perhaps be achieved using basic shell tools and sed.
E.g. if the files are called file1, file2 and file3 and have the headers stripped off, you could achieve the desired result this way:
This assumes that the fields in file1, file2 and file3 are separated with spaces. If they are separated by tabs, the sed command has to be modified so it prints tabs instead of spaces, but I didn't test that.
EDIT: You should post more details of the data you need to manipulate. For example, if more than one file is shorter than the others, the above will not work reliably. Also, is it possible that some records are skipped? For example, a line starting with cnvi0000004 immediately followed by a line starting with cnvi0000006. The best solution in my opinion would be to preprocess the files so they all have the same number of lines, inserting "empty" data such as "-" for the missing fields.
EDIT2: A more robust sed command handling possible consecutive empty records:
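Independently of the sed route, and only as a sketch: GNU join can do the padding by itself, two files at a time, as long as the inputs are sorted on the key and have no headers:

Code:
# -a1 -a2 keep unpairable lines from both sides, -e '-' fills the holes,
# -o auto (a GNU extension) pads every output line to the full field count
join -t "$(printf '\t')" -a1 -a2 -e '-' -o auto file1 file2 > merged12
# chain the result against the next file, and so on
join -t "$(printf '\t')" -a1 -a2 -e '-' -o auto merged12 file3 > merged123

The unwanted columns can then be thrown away with cut at the end.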
Hi Ikki!
Thanks for your reply! I use awk because I'm more familiar with its syntax; I've only used sed a couple of times!
My files contain data from genetic chips, and each file belongs to a person.
Each file has ~360000 lines.
You're right, I have more than one file shorter than the others: at the moment there are 95 shorter files, but they could increase. And yes, in the shorter files it's possible that a line starting with cnvi0000004 is followed by a line starting with cnvi0000006 or cnvi0000008... it depends on how many records are missing for that person, but all input files are sorted by the first column.
As I said, I've written an R script that works, but it is extremely slow. In this script I compare a list of "complete" names with the others and check for differences. Once I have found the elements that aren't in a short list, I add them to that list so that all lists have the same length. In this way I can merge all the columns and insert tabs in place of the missing data. I post the R code:
Now I'm trying to port this to a shell script to get a faster response. My aim was to avoid preprocessing the files if I can, because of the large amount of data stored in each of them.
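The rough shape of that port, as I picture it, is below (bash, made-up file names, four tab-separated columns per file assumed; note that it does write padded intermediate files, which is exactly the preprocessing I'd like to avoid):

Code:
# build the union of all keys, then pad each file so every key appears once
cut -f1 file*.txt | sort -u > all_keys

for f in file*.txt
do
    sort -k1,1 "$f" > "$f.srt"
    # keys that exist somewhere but are missing from this file
    comm -23 all_keys <(cut -f1 "$f.srt") |
        awk -v OFS='\t' '{ print $1, "", "", "" }' |   # empty placeholder rows
        sort -m -k1,1 - "$f.srt" > "$f.padded"
done
# every *.padded file now has the same keys in the same order, so the 4th
# columns can simply be pasted side by side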
I also tried your command, and it is running; the only problem is having to add 1664 columns by hand for the cut command, but I think I can work on it!
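(I suppose the field list for cut could be generated instead of typed by hand, e.g. like this, assuming the wanted value ends up in every 4th field starting at field 4 of the big pasted file; the numbers are only an example.)

Code:
# 1664 files x 4 columns = 6656 fields; keep fields 1 and 2 plus every
# 4th field from 4 onwards
fields="1,2,$(seq -s, 4 4 6656)"
cut -f "$fields" merged_all > reduced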
Hope I have been clear enough, and greatly appreciate your help!
Hi!
This weekend I wasn't able to work on my script, but I found out that the R script I wrote takes only six hours to run on our powerful server, so now I have my 4.5 GB file!! Today I'll still work on the port to a bash script... it could be useful! Thanks all for the great help!
So, I pondered your problem a bit. Your task isn't one that requires much processing power; instead, the most likely bottleneck is file I/O. If you need to generate this kind of report only rarely (say, once a month), then six hours doesn't seem too long.
If it's a daily task, or more importantly if you need to generate multiple types of reports often, I'd consider importing the data into a real database. This assumes the data is somewhat static (and even if it isn't, it could be written directly into the db, depending on the source of your data).
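Just to illustrate the idea, with made-up table and column names, assuming tab-separated files with the headers already stripped (and a reasonably recent sqlite3, so that .import honours the tabs mode):

Code:
sqlite3 chips.db <<'EOF'
CREATE TABLE person1(name TEXT PRIMARY KEY, chr TEXT, pos INTEGER, value REAL);
CREATE TABLE person2(name TEXT PRIMARY KEY, chr TEXT, pos INTEGER, value REAL);
.mode tabs
.import file1.noheader person1
.import file2.noheader person2
-- LEFT JOIN keeps the rows that have no match in person2
SELECT p1.name, p1.chr, p1.value, p2.value
FROM person1 AS p1 LEFT JOIN person2 AS p2 USING (name);
EOF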
If a database is a no-go and performance has to come from optimizing the code, I think one obvious place to optimize is the reading. Perhaps you could read in "bursts", filling a file-specific buffer in one read. However, I don't know anything about R scripts, so you're on your own there.
I did a mock-up of the data (4 files with 360000 lines each) and wrote a Perl script to do the heavy lifting. On my 500 MHz Pentium it performed this way: processing a single input file took 70 seconds and processing 4 input files took 236 seconds. If we extrapolate from these runs to 1700 files (which we really can't do reliably), that gives roughly 33 hours and 27.9 hours, respectively.
I'll paste the code here if you want to play with it. It takes the filenames as input, ignores a line if there are no values in it, and doesn't get confused if some records are missing.