Shell Programming and Scripting: How to merge two files with unique values matching?
Posted by RavinderSingh13 on 11-06-2014
Hello Sharma331,

The following may help you with this.

Code:
awk 'FNR==NR{A[$1]=$2 OFS $3;next} ($5 in A){print $0 OFS A[$5]}' outputfile2.xls OFS="\t" outputfile1.xls
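
For clarity, here is the same logic expanded into a commented multi-line script (field meanings are inferred from the sample output below):

Code:
# Expanded version of the one-liner above.
awk '
FNR==NR {               # True only while reading the first file (outputfile2.xls)
    A[$1] = $2 OFS $3   # Index First_name and last_name by rollNumber ($1)
    next                # Done with this record; skip the second block
}
($5 in A) {             # Second file: keep lines whose 5th field has an entry in A
    print $0 OFS A[$5]  # Print the whole line plus the stored name fields
}' outputfile2.xls OFS="\t" outputfile1.xls

Note that OFS="\t" takes effect only after outputfile2.xls has been read, so the fields appended to each line are tab-separated while the values stored in A keep the default space separator.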

The output will be as follows:
Code:
hobby  class  subject  sports   rollNumber  First_name  last_name
dance  11     science  cricket  1           sonali      kumari
dance  12     science  cricket  3           Rini        kumari
dance  10     science  cricket  1           sonali      kumari
dance  11     science  cricket  2           anni        Das
dance  11     science  cricket  1           sonali      kumari
dance  11     science  cricket  2           anni        Das


Thanks,
R. Singh
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Need to find only unique values for a given tag across the files

Need to find only the unique values for a given tag across the files. For example: Test1: <Tag1>aaa</Tag1> <Tag2>bbb</Tag2> <Tag3>ccc</Tag3> Test2: <Tag1>aaa</Tag1> <Tag2>ddd</Tag2> <Tag3>eee</Tag3> Test3: <Tag1>aaa</Tag1> <Tag2>ddd</Tag2> <Tag3>eee</Tag3> Test4: (8 Replies)
Discussion started by: sudheshnaiyer
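A minimal sketch for that thread's task, assuming the files are named Test1, Test2, ... and Tag2 is the tag of interest:

Code:
# Collect every <Tag2> value across the files, then de-duplicate.
# -h suppresses file names, -o prints only the matched text.
grep -ho '<Tag2>[^<]*</Tag2>' Test* | sort -u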

2. Shell Programming and Scripting

Comparing 2 text files to get unique values?

Hi all, I have a problem while comparing 2 text files: the result should contain only the unique (non-repeated) values. For eg: file1.txt 1 2 3 4 file2.txt 2 3 So after comparing the above 2 files I should get only 1 and 4 as the output. Please help me out. (7 Replies)
Discussion started by: smarty86
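For the sample data in that thread (file2.txt's values are a subset of file1.txt's), one simple sketch:

Code:
# Print lines that appear exactly once across both files (here: 1 and 4).
sort file1.txt file2.txt | uniq -u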

3. Shell Programming and Scripting

Merge files with same row values

Hi everyone, I'm just wondering how I could use awk to merge two files by comparing one of their rows. I mean, I have one file like this: file#1: 21/07/2009 11:45:00 100.0000000 27.2727280 21/07/2009 11:50:00 75.9856644 25.2492676 21/07/2009 11:55:00 51.9713287 23.2258072... (4 Replies)
Discussion started by: tonet
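A hedged sketch of the usual awk approach, assuming the first two fields (date and time) form the join key in both files and that file#2 (here called file2, a placeholder name) has two data fields after the key:

Code:
# Index file#2 by "date time", then append its data fields
# to the matching lines of file#1.
awk 'FNR==NR { a[$1 FS $2] = $3 FS $4; next }
     ($1 FS $2) in a { print $0, a[$1 FS $2] }' file2 file1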

4. Shell Programming and Scripting

Compare values in two files. For matching rows print corresponding values from File 1 in File2.

- I have two files (File 1 and File 2) and the contents of the files are mentioned below. - I am trying to compare the values of Column1 of File1 with Column1 of File2. If a match is found, print the corresponding value from Column2 of File1 in Column5 of File2. - I tried to modify and use... (10 Replies)
Discussion started by: Santoshbn
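A sketch of one common awk solution, assuming whitespace-separated fields and that File2 already has at least five columns:

Code:
# Map Column1 -> Column2 from File1; where File2's Column1 matches,
# overwrite its 5th field and print every line (the trailing 1).
# Note: assigning to $5 rebuilds the line with the default OFS (a space).
awk 'FNR==NR { m[$1] = $2; next }
     $1 in m { $5 = m[$1] } 1' File1 File2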

5. Shell Programming and Scripting

Merge two files matching columns

Hi! I need to merge two files when col1 (x:x:x) matches, adding the second column from file1.txt. # cat 1.txt aaa;a12 bbb;b13 ccc;c33 ddd;d55 eee;e11 # cat 2.txt bbb;b55;34444;d55 aaa;a15;35666;a44 I tried this awk and successfully get the first column from 1.txt: # awk -F";"... (2 Replies)
Discussion started by: fhluque
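For the data shown there, a sketch that appends 1.txt's second column whenever the first columns match:

Code:
# Keep ';' as both input and output separator.
awk 'BEGIN { FS = OFS = ";" }
     FNR==NR { a[$1] = $2; next }   # 1.txt: index col2 by col1
     $1 in a { print $0, a[$1] }    # 2.txt: append the matching value
' 1.txt 2.txt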

6. UNIX for Dummies Questions & Answers

Grep to find matching pattern and return unique values

Request: grep to find a given matching pattern and return unique values, eliminating the duplicates. I have to retrieve the unique folders from file contents like: /app/oracle/build_lib/pkg320.0_20120927 /app/oracle/build_lib/pkg320.0_20121004_prof... (5 Replies)
Discussion started by: Siva SQL
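The requirement there is truncated, but if "unique folder" means the distinct final path components, a hedged sketch (paths.txt is a hypothetical input file):

Code:
# Print the distinct last path components.
awk -F'/' '{ print $NF }' paths.txt | sort -u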

7. Shell Programming and Scripting

How to merge two or more fields from two different files where there is no matching column?

Hi, Please excuse me for often posting queries while doing R&D. I am trying to work out a possibility where I have two files with fields separated by pipe, and another file containing only one field, with no matching columns. Could you please advise how to merge the two files? $more... (3 Replies)
Discussion started by: karthikram
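That request is also truncated; if the files simply need to be stitched together line by line with a pipe delimiter, a minimal sketch (hypothetical file names):

Code:
# Join file1 and file2 line-by-line, pipe-delimited.
paste -d'|' file1 file2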

8. Shell Programming and Scripting

Count Unique values from multiple lists of files

Looking for a little help here. I have 1000's of text files within multiple folders, laid out as YYYY/MM/<files>. E.g. 2014/01: 1000 files; 2014/02: 1237 files; 2014/03: 1400 files. There are folders for each year and each month, and within each monthly folder there are... (4 Replies)
Discussion started by: whegra
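A hedged sketch for that layout, assuming one value per line and that the year directories (2014 here is a placeholder) sit in the current directory:

Code:
# Count distinct lines across every file under the year directory.
find 2014 -type f -exec cat {} + | sort -u | wc -l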

9. Linux

Merge two files based on matching criteria

Hi, I am trying to merge two csv files based on matching criteria. The file description is as below: Key_File : 000|ÇÞ|Key_HF|ÇÞ|Key_FName 001|ÇÞ|Key_11|ÇÞ|Sort_Key22|ÇÞ|Key_31 002|ÇÞ|Key_12|ÇÞ|Sort_Key23|ÇÞ|Key_32 003|ÇÞ|Key_13|ÇÞ|Sort_Key24|ÇÞ|Key_33 050|ÇÞ|Key_15|ÇÞ|Sort_Key25|ÇÞ|Key_34... (3 Replies)
Discussion started by: PK29

10. UNIX for Beginners Questions & Answers

Merge 4 bim files by keeping only the overlapping variants (unique rs values)

Dear community, I am facing a problem and I kindly ask for your help: I have 4 different data sets produced by 3 different types of arrays. In each file, column 1 is chromosome position, column 2 is SNP id, etc... Let's say I have the following (bim) datasets: x2014: 1 rs3094315... (4 Replies)
Discussion started by: fondan
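A hedged sketch for the overlap step, assuming each rs id appears at most once per file (the file names are hypothetical):

Code:
# Count in how many of the four files each rs id (column 2) occurs;
# ids seen in all four are the overlap.
awk '{ seen[$2]++ } END { for (id in seen) if (seen[id] == 4) print id }' \
    x2014.bim x2015.bim x2016.bim x2017.bim > common_ids.txt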