08-01-2014
There is no magic here. You have a fairly large number of people who are intimately familiar with various aspects of various parts of Linux and UNIX systems. If you give these volunteers details about what you're trying to do, you can frequently get excellent tips on how to get an efficient solution to your problem. If you omit details about constraints on how the output is to be produced, we can all waste a lot of time wandering around paths that won't meet your needs.
Various systems provide various extensions to the standard utilities. If you always tell us what version of what OS you're using, we can avoid suggesting that you use extensions that are not available on your system.
Simply put, help us help you by giving us the information we need to get the job done right on the system you will be using. If you give us incomplete data, get one or more responses that meet all of your stated requirements, and then complain that they don't do something you never said was important in the first place, the volunteers who wasted time trying to help you may not be interested in responding to your next request for help. That is simple human nature.
Enough generalities...
If each of your input files is sorted in reverse numeric order on the key field, as in your example (even if the key field is a different field in different files and has different field separators in different files), it would probably still be a lot faster to: run an awk preprocessing step that creates a single file containing the key field, an added file number, and an added line number for every line from all of your input files; sort that file by key field, file number, and line number; and then use awk or sed to strip the file number, line number, and (if it had to be duplicated) the key field added by the preprocessing step, leaving just the desired sorted file as the result.
This could still be a lot faster than firing up a shell and awk for every line in all but the first of your 12 to 14 files, multiplied by the number of lines in your first file (fork() and exec() are expensive operations).
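A minimal sketch of that decorate-sort-undecorate approach, assuming the key is field 1 in every file and using made-up file names and sample data (adjust the awk decoration to match your real key fields and separators):

```shell
# Illustrative sample inputs, each already sorted in reverse numeric
# order on field 1 (stand-ins for your real files).
printf '9 a\n5 b\n1 c\n' > file1.txt
printf '8 x\n5 y\n2 z\n' > file2.txt

# Decorate: prepend "key TAB file# TAB line#" to every input line.
awk -v OFS='\t' '
    FNR == 1 { f++ }            # bump the file counter at the start of each file
    { print $1, f, FNR, $0 }    # field 1 assumed to be the key in every file
' file1.txt file2.txt |
# Sort by key (reverse numeric, matching the inputs), then file #, then line #.
sort -t "$(printf '\t')" -k1,1rn -k2,2n -k3,3n |
# Undecorate: strip the three added fields, leaving the original lines.
cut -f4- > merged.txt

cat merged.txt
```

The whole job runs one awk, one sort, and one cut, no matter how many input files or lines there are, which is where the speedup over a per-line shell loop comes from.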
JOIN(1) BSD General Commands Manual JOIN(1)
NAME
join -- relational database operator
SYNOPSIS
join [-a file_number | -v file_number] [-e string] [-o list] [-t char] [-1 field] [-2 field] file1 file2
DESCRIPTION
The join utility performs an ``equality join'' on the specified files and writes the result to the standard output. The ``join field'' is
the field in each file by which the files are compared. The first field in each line is used by default. There is one line in the output
for each pair of lines in file1 and file2 which have identical join fields. Each output line consists of the join field, the remaining
fields from file1 and then the remaining fields from file2.
The default field separators are tab and space characters. In this case, multiple tabs and spaces count as a single field separator, and
leading tabs and spaces are ignored. The default output field separator is a single space character.
Many of the options use file and field numbers. Both file numbers and field numbers are 1 based, i.e., the first file on the command line is
file number 1 and the first field is field number 1. The following options are available:
-a file_number
In addition to the default output, produce a line for each unpairable line in file file_number.
-e string
Replace empty output fields with string.
-o list
The -o option specifies the fields that will be output from each file for each line with matching join fields. Each element of list
has either the form 'file_number.field', where file_number is a file number and field is a field number, or the form '0' (zero),
representing the join field. The elements of list must be either comma (',') or whitespace separated. (The latter requires quoting
to protect it from the shell, or, a simpler approach is to use multiple -o options.)
-t char
Use character char as a field delimiter for both input and output. Every occurrence of char in a line is significant.
-v file_number
Do not display the default output, but display a line for each unpairable line in file file_number. The options -v 1 and -v 2 may be
specified at the same time.
-1 field
Join on the field'th field of file 1.
-2 field
Join on the field'th field of file 2.
When the default field delimiter characters are used, the files to be joined should be ordered in the collating sequence of sort(1), using
the -b option, on the fields on which they are to be joined; otherwise join may not report all field matches. When the field delimiter
characters are specified by the -t option, the collating sequence should be the same as sort(1) without the -b option.
If one of the arguments file1 or file2 is ``-'', the standard input is used.
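A short usage sketch with hypothetical inventory data, showing the default equality join plus the -a, -e, and -o options described above (the file names and contents are invented for illustration):

```shell
# Hypothetical files joined on their first field.
printf 'apple 10\nbanana 5\ncherry 7\n' > stock.txt
printf 'apple 0.50\nbanana 0.25\n' > price.txt

# With the default separators, inputs must be ordered as by sort -b
# on the join field.
sort -b -o stock.txt stock.txt
sort -b -o price.txt price.txt

# Default output: one line per pair of lines with identical join fields.
join stock.txt price.txt

# -a 1 also emits unpairable lines from file 1; -o picks the output
# fields and -e fills the missing price with a placeholder.
join -a 1 -e 'N/A' -o 0,1.2,2.2 stock.txt price.txt
```

Here 'cherry' appears only in stock.txt, so the first join omits it while the second prints it with 'N/A' in the price column.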
EXIT STATUS
The join utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
For compatibility with historic versions of join, the following options are available:
-a In addition to the default output, produce a line for each unpairable line in both file 1 and file 2.
-j1 field
Join on the field'th field of file 1.
-j2 field
Join on the field'th field of file 2.
-j field
Join on the field'th field of both file 1 and file 2.
-o list ...
Historical implementations of join permitted multiple arguments to the -o option. These arguments were of the form
'file_number.field_number' as described for the current -o option. This has obvious difficulties in the presence of files named
'1.2'.
These options are available only so historic shell scripts do not require modification. They should not be used in new code.
LEGACY DESCRIPTION
The -e option causes a specified string to be substituted into empty fields, even if they are in the middle of a line. In legacy mode, the
substitution only takes place at the end of a line.
Only documented options are allowed. In legacy mode, some obsolete options are re-written into current options.
For more information about legacy mode, see compat(5).
SEE ALSO
awk(1), comm(1), paste(1), sort(1), uniq(1), compat(5)
STANDARDS
The join command conforms to IEEE Std 1003.1-2001 (``POSIX.1'').
BSD
July 5, 2004 BSD