Shell Programming and Scripting
Command/script to match a field and print the next field of each line in a file
Post 302951799 by Don Cragun on Monday 10th of August 2015 10:53:49 PM
Assuming there are never any spaces in the Status field, try:
Code:
awk -F'[[:space:]]+|:' '$(NF-3) + 0 >= 48' OP.txt >> output.txt

Note the >> redirection operator instead of >. (You said you wanted to append to the destination file, not replace its current contents.)
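As a rough illustration of how that filter behaves (the real layout of OP.txt isn't shown in this post, so the sample lines below are purely hypothetical): with runs of whitespace and ':' both acting as field separators, $(NF-3) is the fourth field from the end of each line, and adding 0 forces a numeric comparison, so only lines whose fourth-from-last field is 48 or greater are appended to output.txt.
Code:
# Hypothetical OP.txt; the real data may differ.  Once ':' also splits fields,
# the number before the HH:MM time is the fourth field from the end.
$ cat OP.txt
svc_a  52  10:35  Running
svc_b  17  10:36  Stopped
svc_c  48  10:37  Running

$ awk -F'[[:space:]]+|:' '$(NF-3) + 0 >= 48' OP.txt >> output.txt
$ cat output.txt
svc_a  52  10:35  Running
svc_c  48  10:37  Running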
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Print last occurrence if first field match

Hi All, I have an input below. If the term in the 1st column is repeated, print the last row in which that 1st-column value appears. In the example below, those rows are " 0001 k= 27 " and " 0004 k= 6 " (depicted in bold). Terms in the 1st column which are not repeated are to be printed as well. Can anybody help me... (9 Replies)
Discussion started by: Raynon
9 Replies
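A minimal, untested sketch of one way to handle the task described in that thread: keep only the last line seen for each 1st-column value, preserving the order in which the keys first appear (the file name and whitespace-separated layout are assumptions, since the full thread isn't shown here).
Code:
awk '!($1 in last) { order[++n] = $1 }            # remember the first-seen order of each key
     { last[$1] = $0 }                            # always keep the most recent line for that key
     END { for (i = 1; i <= n; i++) print last[order[i]] }' input.txt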

2. Shell Programming and Scripting

how to print field n of line m

Hi everyone, I have a basic csh/awk question. How do I print a given field from a given line in a given file? Thanks in advance! (11 Replies)
Discussion started by: Deanne
11 Replies
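For that question, a small sketch (the line and field numbers are passed in as awk variables; the file name is made up):
Code:
# Print field number fld of line number line, then stop reading the file.
awk -v line=5 -v fld=3 'NR == line { print $fld; exit }' file.txt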

3. Shell Programming and Scripting

Script to Match first field of two Files and print

Hi, I want to get a script or command in Sun Unix which matches the first fields of both files and prints the fields of one file; an example may make it more clear. InputFile1 ================== Alex,1,fgh Menthos,45454,toto Gothica,855,ee Zenie4,77,gg Salvatore,66,oo Dhin,1234,papapa... (3 Replies)
Discussion started by: indian.ace
3 Replies
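One common idiom for that kind of task, hedged because only part of the thread is visible here: load the first field of one comma-separated file into an array, then print every line of the other file whose first field appears in that array (which file gets printed is an assumption).
Code:
# Print lines of InputFile1 whose first field also occurs as the first field of InputFile2.
awk -F, 'NR == FNR { seen[$1]; next } $1 in seen' InputFile2 InputFile1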

4. Shell Programming and Scripting

need Shell script for Sort BASED ON FIRST FIELD and PRINT THE WHOLE FILE WITHOUT DUPLICATES

Can someone provide me a shell script? I have a file with many columns and many rows. I need to sort on the first column and then remove duplicate records if they exist, and finally print the full data with the first column as unique. Sort BASED ON FIRST FIELD and remove the duplicates if they exist... (2 Replies)
Discussion started by: tuffEnuff
2 Replies
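A short sketch of the usual approaches to that request, assuming whitespace-separated columns: either let sort keep one line per key, or sort first and then drop repeated first fields with awk.
Code:
# Keep only the first line for each distinct value of column 1, sorted on that column.
sort -k1,1 -u data.txt

# Same idea in two stages: sort on column 1, then filter out repeated first fields.
sort -k1,1 data.txt | awk '!seen[$1]++'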

5. Shell Programming and Scripting

simplify the script, check field match to value in a file

Hi Everyone, below is the script. I feel there should be a simpler way to produce the same output; mine works, but it doesn't feel nice. For example, using index feels slow (imagine my file is very large); maybe awk can do it in one line of code? Please advise. # cat 1.txt 1 a 2 b 3 cc 4 d # cat 1.pl... (6 Replies)
Discussion started by: jimmy_y
6 Replies

6. Shell Programming and Scripting

AWK: Pattern match between 2 files, then compare a field in file1 as > or < field in file2

First, thanks for the help in previous posts... couldn't have gotten where I am now without it! So here is what I have: I use AWK to match $1 and $2 as 1 string in file1 to $1 and $2 as 1 string in file2. Now I'm wondering if I can extend this AWK command to incorporate the following: If $1... (4 Replies)
Discussion started by: right_coaster
4 Replies
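A hedged sketch of how such a comparison is often written, using $1 and $2 together as the lookup key; which field is compared, and in which direction, is an assumption since the excerpt is truncated.
Code:
# Remember $3 from file2, keyed on "$1 $2"; then print file1 lines whose $3 is larger.
awk 'NR == FNR { val[$1 FS $2] = $3; next }
     ($1 FS $2) in val && $3 + 0 > val[$1 FS $2] + 0' file2 file1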

7. UNIX for Dummies Questions & Answers

Match pattern in a field, print pattern only instead of the entire field

Hi ! I have a tab-delimited file, file.tab: Column1 Column2 Column3 aaaaaaaaaa bbtomatoesbbbbbb cccccccccc ddddddddd eeeeappleseeeeeeeee ffffffffffffff ggggggggg hhhhhhtomatoeshhh iiiiiiiiiiiiiiii ... (18 Replies)
Discussion started by: lucasvs
18 Replies
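A small sketch for that question, using awk's match() with RSTART and RLENGTH to print just the matched text from the second tab-separated column (the pattern itself is assumed, since only part of the thread appears here).
Code:
# Print only the matched substring (e.g. "tomatoes" or "apples") from column 2.
awk -F'\t' 'match($2, /tomatoes|apples/) { print substr($2, RSTART, RLENGTH) }' file.tab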

8. Shell Programming and Scripting

Match columns and print specific field

Hello, I have data in the following format. ... (6 Replies)
Discussion started by: Pushpraj
6 Replies

9. UNIX for Beginners Questions & Answers

Use strings from nth field from one file to match strings in entire line in another file, awk

I cannot seem to get what should be a simple awk one-liner to work correctly and cannot figure out why. I would like to use patterns from a specific field in one file as regex to search for matching strings in the entire line ($0) of another file. I would like to output the lines of File2 which... (1 Reply)
Discussion started by: jvoot
1 Reply
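One way that kind of cross-file matching is usually sketched: collect the patterns from the chosen field of File1, then print each line of File2 that matches any of them (the field number and file names are assumptions).
Code:
# Use field 2 of File1 as a set of regexes; print File2 lines matching any of them.
awk 'NR == FNR { pat[$2]; next }
     { for (p in pat) if ($0 ~ p) { print; next } }' File1 File2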

10. Shell Programming and Scripting

awk to print text in field if match and range is met

In the awk below I am trying to match the value in $4 of file1 with the split value from $4 in file2. I store the value of $4 in file1 in A and the split value (using the _ for the split) in an array. I then store the value in $2 as min, the value in $3 as max, and the value in $1 as chr. If A is... (6 Replies)
Discussion started by: cmccabe
6 Replies