So, I pondered your problem a bit. Your task isn't one that requires much processing power; the most likely bottleneck is file I/O. If you only need to generate this kind of report rarely (say, once a month), then six hours doesn't seem too long.
If it's a daily task, or more importantly if you need to generate multiple types of reports often, I'd consider importing the data into a real database. This assumes the data is somewhat static (and even if it isn't, it could be written directly into the db, depending on the source of your data).
If a database is a no-go and the performance has to come from optimizing the code, one obvious place to optimize is the reading. Perhaps you could read in "bursts", filling a file-specific buffer in one read. However, I don't know anything about R scripts, so you're on your own there.
I did a mock-up of the data (4 files with 360,000 lines each) and wrote a Perl script to do the heavy lifting. On my 500 MHz Pentium it performed this way: processing one input file took 70 seconds, and processing all 4 input files took 236 seconds. Extrapolating those rates to 1,700 files (which we really can't do reliably) gives 1700 * 70 s ≈ 33 hours and 1700 * (236/4) s ≈ 27.9 hours, respectively.
I'll paste the code here if you want to play with it. It takes the filenames as input, it ignores a line if there are no values in it, and it doesn't get confused if some records are missing.
Hello,
My apologies if this has been posted elsewhere; I have had a look at several threads but I am still confused about how to use these functions. I have two files, each with 5 columns:
File A: (tab-delimited)
PDB CHAIN Start End Fragment
1avq A 171 176 awyfan
1avq A 172 177 wyfany
1c7k A 2 7...
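The question is cut off above, so the exact operation wanted isn't clear. As a hedged sketch, assuming the goal is to print the lines of the second file whose PDB and CHAIN values also appear in File A (the key choice and the file names fileA/fileB are my assumptions), an awk lookup would do it:

awk -F'\t' 'NR==FNR { seen[$1 FS $2]; next }  # pass 1: remember File A keys
            ($1 FS $2) in seen' fileA fileB   # pass 2: print matching fileB lines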
I have n files (for example, 64 files) that share one common column. Is it possible to combine them all based on that column?
file1
ax100 20 30 40
ax200 22 33 44
file2
ax100 10 20 40
ax200 12 13 44
file3
ax100 0 0 4
ax200 2 3 4
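A hedged sketch of one way to do this in awk, assuming every file carries the same keys in column 1 (pass all 64 file names on the command line; file1 file2 file3 here stand in for them):

awk '{
    rest = $2                                   # collect columns 2..NF of this line
    for (i = 3; i <= NF; i++) rest = rest OFS $i
    row[$1] = ($1 in row) ? row[$1] OFS rest : rest
}
END { for (k in row) print k, row[k] }' file1 file2 file3 | sort

For the sample above this prints "ax100 20 30 40 10 20 40 0 0 4" and "ax200 22 33 44 12 13 44 2 3 4"; the END loop emits keys in no particular order, hence the sort.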
Hi Friends,
I have a file1 with 3,400 tab-separated records and a file2 with 6,220 records. I want to merge both these files. I tried using join on file1 and file2 after sorting. But the output should have 3400 * 6220 = 21,148,000 records; instead, I get only around 11,133,567. Is there anything...
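A hedged note on why the counts differ: join(1) only pairs lines whose join fields match, so you would get a full 3400 * 6220 cross product only if every line in both files shared a single key. If an unconditional cartesian product is really what's wanted, a sketch in awk (file names as in the post):

awk 'NR==FNR { a[NR] = $0; n = NR; next }             # slurp file1 into memory
     { for (i = 1; i <= n; i++) print a[i] "\t" $0 }' file1 file2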
Hi,
I have 20 tab-delimited text files that share a common column (column 1). The files are named GSM1.txt through GSM20.txt. Each file has 3 columns (2 other columns in addition to the first common column).
I want to write a script to join the files by the first common column so that in the...
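The request is truncated above; as a hedged sketch, assuming each file is sorted on column 1 and every key appears in every file, you could fold the files together two at a time with join(1) (merged.txt and tmp are scratch names I made up):

cp GSM1.txt merged.txt
for i in $(seq 2 20); do
    join -t "$(printf '\t')" merged.txt GSM$i.txt > tmp && mv tmp merged.txt
done

See join's -a and -e options if some keys are missing from some of the files.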
Hi guys,
I need help.
I need to join file2 to file1 where column 3 of file1 and column 1 of file2 hold the same string.
file1
AA|RR|ESKIM
RE|DD|RED
WE|WW|SUPSS
file2
ESKIM|ES
SUPSS|SS
Output
AA|RR|ESKIM|ES
RE|DD|RED|
WE|WW|SUPSS|SS
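A hedged sketch that reproduces the sample output: load file2 into a lookup keyed on its first field, then append the match (or nothing, which yields the trailing | for RED) to each file1 line:

awk -F'|' 'NR==FNR { map[$1] = $2; next }   # pass 1: build lookup from file2
           { print $0 "|" map[$3] }' file2 file1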
Hello,
I have a file with 2 columns (tableName, ColumnName) delimited by a pipe, like below. The file is sorted by ColumnName.
Table1|Column1
Table2|Column1
Table5|Column1
Table3|Column2
Table2|Column2
Table4|Column3
Table2|Column3
Table2|Column4
Table5|Column4
Table2|Column5
From...
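The actual request is cut off above; one plausible reading of data shaped like this is "for each ColumnName, list the tables that contain it", which as a hedged sketch could be:

awk -F'|' '{ t[$2] = ($2 in t) ? t[$2] "," $1 : $1 }   # group table names per column
           END { for (c in t) print c "|" t[c] }' file

which would print, e.g., Column1|Table1,Table2,Table5.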
Hello,
This has been posted here before, but I want to do it another way:
merge multiple files that have multiple duplicate keys, filling the empty columns with "NULL" for the other joined files.
file1.csv:
1|abc
1|def
2|ghi
2|jkl
3|mno
3|pqr
file2.csv:
1|123|jojo
1|NULL|bibi...
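The sample is truncated, so this is only a hedged sketch: with both files sorted on the key, join(1) pairs every duplicate of a key in file1.csv with every duplicate of the same key in file2.csv, and -a/-e/-o keep unmatched rows with their missing columns filled as NULL (the -o field list assumes file2 has exactly three columns):

join -t '|' -a 1 -a 2 -e NULL -o 0,1.2,2.2,2.3 file1.csv file2.csv

For key 1 above this emits 2 x 2 = 4 combined lines, one per pairing of the duplicates.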