Shell Programming and Scripting: Merging two files without any common pattern. Post 302836407 by krishmaths on Wednesday, 24 July 2013, 05:34 AM
Try the paste command:

Code:
paste file1 file2
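
As a quick illustration (the file names and contents here are made up), paste glues the two files together line by line, separating each pair with a tab by default; -d lets you choose a different delimiter:

Code:
$ cat file1
alpha
beta
gamma
$ cat file2
1
2
3
$ paste file1 file2
alpha	1
beta	2
gamma	3
$ paste -d ',' file1 file2
alpha,1
beta,2
gamma,3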

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Merging two files with a common column

Hi, I have two files file1 and file2. I have to merge the columns of those two files into file3 based on common column of two files. To be simple. file1: Row-id name1 13456 Rahul 16789 Vishal 18901 Karan file2 : Row-id place 18901 Mumbai ... (2 Replies)
Discussion started by: manneni prakash
2 Replies
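
A minimal sketch of the usual approach for this kind of merge, assuming both files are whitespace-delimited, the shared Row-id is the first column, and any header line has been removed; join needs its inputs sorted on the key:

Code:
sort -k1,1 file1 > file1.sorted
sort -k1,1 file2 > file2.sorted
# join on field 1 of both files and write the merged rows to file3
join -1 1 -2 1 file1.sorted file2.sorted > file3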

2. Shell Programming and Scripting

Help merging two files if search pattern true.

Hello everyone, I've been reading this forum whenever I had a problem with AWK but I can't seem to find how to solve my problem. What I would like to do is the following: I have a first file with two columns, on the first one is a certain name and in the second one, another corresponding... (4 Replies)
Discussion started by: Teroc
4 Replies

3. Shell Programming and Scripting

Merging 2 files based on a common column

Hi All, I do have 2 files file 1 has 4 tab delimited columns 234 a c dfgyu 294 b g fih 302 c h jzh 328 z c san 597 f g son File 2 has 2 tab delimted columns 234 23 302 24 597 24 I want to merge file 2 with file 1 based on the data common in both files which is the first column so... (6 Replies)
Discussion started by: Lucky Ali
6 Replies
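
One possible sketch for that layout, assuming the files really are tab-delimited with the key in the first column; -a 1 keeps the file 1 rows (such as 294 and 328 above) that have no counterpart in file 2:

Code:
sort -k1,1 file1 > f1.sorted
sort -k1,1 file2 > f2.sorted
# -t sets a literal tab as the field separator; -a 1 also emits unmatched file1 lines
join -t "$(printf '\t')" -a 1 f1.sorted f2.sorted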

4. Shell Programming and Scripting

merging of 2 files in a particular pattern

How can I merge two files containing some random sort of numbers into a separate file? file1 11111 10111 11011 file2 00000 01010 10101 file3 11111_00000 10111_01010 11011_10101 Please let me know how to do this? (1 Reply)
Discussion started by: dll_fpga
1 Replies
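
For the sample data in that thread, paste with an underscore as the delimiter produces exactly the requested file3:

Code:
paste -d '_' file1 file2 > file3
# file3 now contains:
# 11111_00000
# 10111_01010
# 11011_10101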

5. Shell Programming and Scripting

Merging 2 text files when there is a common time stamp column in them

Dear Unix experts and users, I have 2 kinds of files like the ones below, which I need to merge in time order. File1: Date_Time Context D1 D2 04/19/2013_23:48:54.819 ABCD x x 04/19/2013_23:48:55.307 ABCD x x 04/19/2013_23:48:55.823 ABCD x ... (7 Replies)
Discussion started by: ks_reddy
7 Replies

6. Shell Programming and Scripting

Merging two special character separated files based on pattern matching

Hi. I have 2 files of below format. File1 AA~1~STEVE~3.1~4.1~5.1 AA~2~DANIEL~3.2~4.2~5.2 BB~3~STEVE~3.3~4.3~5.3 BB~4~TIM~3.4~4.4~5.4 File 2 AA~STEVE~AA STEVE WORKS at AUTO COMPANY AA~DANIEL~AA DANIEL IS A ELECTRICIAN BB~STEVE~BB STEVE IS A COOK I want to match 1st and 3rd... (2 Replies)
Discussion started by: crypto87
2 Replies
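
A rough sketch with awk, assuming the goal is to append File 2's description to the File 1 lines whose columns 1 and 3 match File 2's columns 1 and 2:

Code:
# first pass: remember file2's text, keyed on its first two fields
# second pass: look up file1's fields 1 and 3 against those keys
awk -F'~' 'NR==FNR { desc[$1 SUBSEP $2] = $3; next }
           ($1 SUBSEP $3) in desc { print $0 "~" desc[$1 SUBSEP $3] }' file2 file1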

7. UNIX for Dummies Questions & Answers

Merging tables: identifiying common and unique elements

Hi all, I know how to merge two tables and to remove the duplicated lines based on a field (Column 2) . My next challenge is to be able to identify in a new column those common elements between table A & B, those elements in table A not present in table B and vice versa. A simple count would be... (6 Replies)
Discussion started by: lsantome
6 Replies
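
A sketch of one common way to do this, assuming two tab-delimited tables named tableA and tableB (hypothetical names) with the comparison key in column 2; comm splits the sorted key lists into "only in A", "only in B", and "in both":

Code:
cut -f2 tableA | sort -u > A.keys
cut -f2 tableB | sort -u > B.keys
comm -23 A.keys B.keys > only_in_A   # keys present in A but not in B
comm -13 A.keys B.keys > only_in_B   # keys present in B but not in A
comm -12 A.keys B.keys > in_both     # keys present in both tables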

8. Shell Programming and Scripting

Merging files with common IDs without JOIN

Hi, I am trying to merge information across 2 files. The first file is a "master" file, with all IDS. File 2 contains a subset of IDs of those in File 1. I would like to match up individuals in File 1 and File 2, and add information in File 2 to that of File 1 if they appear. However, if an... (3 Replies)
Discussion started by: hubleo
3 Replies
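
A sketch of how this is often done with awk instead of join, assuming the ID is the first field of both files; every File 1 line is printed, and File 2's remaining fields are appended whenever its ID appears there:

Code:
# read file2 first and remember everything after the ID,
# then print each file1 line, appending the matching extra fields if any
awk 'NR==FNR { id = $1; $1 = ""; extra[id] = $0; next }
     { print $0 (($1 in extra) ? extra[$1] : "") }' file2 file1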

9. Shell Programming and Scripting

Checking for pattern and merging files

I am trying to perform the following action. 1. A script runs the 'last' command for some users and prints the output to a file. $ cat last_users.log oracle pts/17 10.120.xxx.xxx Jun 28 14:42 - 18:01 (03:19) oracle pts/11 10.120.xxx.xxx Jun 28 14:28 - 20:17... (2 Replies)
Discussion started by: Nagesh_1985
2 Replies

10. Shell Programming and Scripting

How to check 2 log files for a common pattern?

Hi! I'm new here and new to Unix. I want to do something with our log files: compare two log files for a certain pattern. Sample: file1.log contains all the "successful" runs of a procedure. file2.log contains all the "current" running procedures. sample line from file1.log... (5 Replies)
Discussion started by: cabs_14
5 Replies
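
A minimal sketch, assuming the interesting token (the procedure name) has already been extracted to one entry per line in each log, or that whole lines are directly comparable; grep then prints the file1.log lines that also appear in file2.log:

Code:
# -F: fixed strings, -f: take the patterns from a file, -x: whole-line matches only
grep -F -x -f file2.log file1.log
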
bup-margin(1)                      General Commands Manual                      bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if you're getting dangerously close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer from the guess. This is potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.