10-16-2012
Quote:
Originally Posted by
sdosanjh
It is not always "0"; we get non-zero values too. $4 was from the previous awk, which contained the 6th column's value. Now more columns have been added, so that became the 6th column in f1 and f2.
That doesn't alter the fact that f1, f2, and f3 in your example are identical and that f3 doesn't match the description you supply of what you want to appear in f3.
PLEASE give us sample f1, f2, and f3 where the contents of f1 and f2 are not identical and where the content of f3 is the actual data that you want to get when you process f1 and f2!
Posting an awk script that is not intended to work on the problem you're asking us to solve doesn't really help unless you show us the input that script got and the output it produced, and explain how that is related to what you want now.
10 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
Hi,
I want to search for a particular pattern and split the file into multiple files (there may be more than 150 of them). It split the input into 20 files; after that, I got this error:
nawk: filename.21 makes too many open files.
input record number 654, file xxxxxxx
Can you guide me to... (1 Reply)
Discussion started by: sharif
1 Reply
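For anyone hitting the same limit: classic awk and Solaris nawk only allow a handful of simultaneously open output files, so a splitter has to close() each part before opening the next. A minimal sketch (the part.N names and the ^PATTERN delimiter are assumptions, shown with awk; nawk works the same):

```shell
# Sample input: a new part starts at each PATTERN line (names assumed)
printf 'PATTERN a\nline1\nPATTERN b\nline2\n' > input.txt

# Close each finished part so awk never holds more than one file open
awk '
  /^PATTERN/ { if (out != "") close(out)        # release the previous part
               out = sprintf("part.%d", ++n) }  # start the next one
  out != ""  { print > out }                    # skip lines before the first match
' input.txt
# part.1 and part.2 now hold the two sections
```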
2. Shell Programming and Scripting
I'm new to shell scripting and have a problem; please help me.
In the script I have a nawk block which has a variable count:
nawk{
.
.
.
count=count+1
print count
}
Now I want to access the value of the count variable outside the awk block, like:
s=`expr count / m`
(m is... (5 Replies)
Discussion started by: saniya
5 Replies
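For the archives: awk runs as a separate child process, so the shell can never read its variables directly. The usual pattern is to print the value (typically from the END block) and capture awk's stdout with command substitution; expr then needs the expanded shell variables, not the bare names. A sketch with made-up data and m:

```shell
printf 'a\nb\nc\nd\ne\nf\ng\nh\n' > data.txt   # sample data, 8 lines

# Print the value computed inside awk and capture it with $(...)
count=$(awk '{ n++ } END { print n }' data.txt)  # n plays the role of count
m=4
s=$(expr "$count" / "$m")   # note "$count", not the literal word count
echo "$s"                   # prints 2
```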
3. Shell Programming and Scripting
Hi all, I'm new to the forum and couldn't find an answer to my problem.
I have a file that uses a pipe as the field separator, and I need to add a column to it as the third column. I wrote this code, but it's not enough:
cat allproblems | nawk'\
BEGIN { FS:"|" } {print $3 $4 $5, ????} '
... (7 Replies)
Discussion started by: circuitman06
7 Replies
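Two small fixes get a script like this going: FS is assigned with =, not :, and nawk needs a space before its quoted program (there is also no need for cat; awk reads the file itself). Setting OFS keeps the pipes in the output. A sketch that inserts a new third column (the NEW value is a placeholder for whatever should really go there):

```shell
printf 'a|b|c|d\n' > allproblems      # sample pipe-delimited record

# FS is assigned with =, and OFS keeps the pipes on output
awk 'BEGIN { FS = OFS = "|" }
     { for (i = NF; i >= 3; i--) $(i+1) = $i   # shift fields 3..NF right
       $3 = "NEW"                              # placeholder for the real value
       print }' allproblems                    # prints a|b|NEW|c|d
```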
4. Shell Programming and Scripting
While I am trying to execute nawk in the Korn shell, I am getting this error:
nawk: can't open file $directory../../../filename.
When the file is in the home directory it executes, but nawk is not able to find the file in any other directory.
Thanks (2 Replies)
Discussion started by: Diddy
2 Replies
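The literal $directory in the error message is the giveaway: the variable was never expanded, which happens when the path sits inside single quotes (or the $ is escaped). Double-quote the expansion instead. A sketch with an assumed path, shown with awk (nawk behaves the same):

```shell
directory=$(mktemp -d)                 # stand-in for the real location
printf 'hello\n' > "$directory/filename"

# Single quotes would hand awk the literal string $directory/filename;
# double quotes make the shell expand the variable first.
awk '{ print $0 }' "$directory/filename"   # prints hello
```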
5. Shell Programming and Scripting
Hello,
Hope you are doing fine. I have a file in the following format. I only want to process the data in the section that comes after #DATE,CODE,VALUE:
#ITEMS WITH CORRECTIONS
.......
#DATE,CODE,VALUE
2011-08-02, ID1, 0.30
2011-08-02, ID2, 0.40
2011-08-02, ID3, 0.50
......
Means... (3 Replies)
Discussion started by: srattani
3 Replies
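A flag variable handles this kind of sectioned file: turn it on at the #DATE,CODE,VALUE header, off at the next comment line, and print only while it is set. A minimal sketch against the sample above (the file name is assumed):

```shell
# Sample file in the format shown above
printf '#ITEMS WITH CORRECTIONS\n#DATE,CODE,VALUE\n2011-08-02, ID1, 0.30\n2011-08-02, ID2, 0.40\n' > data.txt

# Flag technique: grab lines only after the header, stop at the next comment
awk '
  /^#DATE,CODE,VALUE/ { grab = 1; next }  # the header starts the section
  /^#/                { grab = 0 }        # any other comment line ends it
  grab                { print }
' data.txt
```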
6. Shell Programming and Scripting
Hi all, I have a file with records that look something like this,
"Transaction ID",Date,Email,"Card Type",Amount,"NETBANX Ref","Root Ref","Transaction Type","Merchant Ref",Status,"Interface ID","Interface Name","User ID"
nnnnnnnnn,"21 Nov 2011 00:10:47",someone@hotmail.co.uk,"Visa... (2 Replies)
Discussion started by: dazedandconfuse
2 Replies
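Quoted fields with embedded commas defeat a plain -F, split, but plain awk can reassemble them by counting quotes: keep appending comma-separated pieces until the quote count is even. A sketch that pulls out the Email column (the file name and column positions are assumptions read off the header record above):

```shell
# One sample record with a quoted, comma-containing field
printf 'nnnn,"21 Nov 2011 00:10:47",someone@hotmail.co.uk,"Visa, Debit",9.99\n' > transactions.csv

# Rebuild CSV fields whose quoted values contain commas, then print field 3
awk -F, '{
  nf = 0; buf = ""
  for (i = 1; i <= NF; i++) {
    buf = (buf == "") ? $i : buf "," $i
    tmp = buf
    if (gsub(/"/, "", tmp) % 2 == 0) {     # even quote count: field complete
      gsub(/^"|"$/, "", buf)               # drop the surrounding quotes
      f[++nf] = buf; buf = ""
    }
  }
  print f[3]                               # Email, per the header record
}' transactions.csv                        # prints someone@hotmail.co.uk
```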
7. UNIX for Dummies Questions & Answers
Hi,
I am looking for an awk script to meet the following requirement:
File1 has records in the following format:
INF: FAILEd RECORD AB1234
INF: FAILEd RECORD PQ1145
INF: FAILEd RECORD AB3215
INF: FAILEd RECORD AB6114
............................ (2 Replies)
Discussion started by: mintu41
2 Replies
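With that fixed layout, the record ID is always the fourth whitespace-separated field, so extracting the IDs is a one-liner (File1 is the name from the post):

```shell
printf 'INF: FAILEd RECORD AB1234\nINF: FAILEd RECORD PQ1145\n' > File1

# The ID is always the fourth whitespace-separated field
awk '/FAILEd RECORD/ { print $4 }' File1    # prints AB1234 then PQ1145
```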
8. Shell Programming and Scripting
Hi ,
I have a simple text file with contents as below:
12345678900 971,76 4234560890
22345678900 5971,72 5234560990
32345678900 71,12 6234560190
The new CSV file should look like this:
Column1;Column2;Column3;Column4;Column5
123456;78900;971,76;423456;0890... (9 Replies)
Discussion started by: FreddyDaKing
9 Replies
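Judging from the sample, the first and third input fields are each split after six characters and everything is re-joined with semicolons, which substr() handles directly. A sketch under that assumption (the split position is inferred from the example, not stated in the post):

```shell
printf '12345678900 971,76 4234560890\n' > textfile

# Fields 1 and 3 are each split after six characters (an assumption read
# off the sample), then everything is re-joined with semicolons
awk 'BEGIN { print "Column1;Column2;Column3;Column4;Column5" }
     { print substr($1, 1, 6) ";" substr($1, 7) ";" $2 ";" substr($3, 1, 6) ";" substr($3, 7) }' textfile
# prints 123456;78900;971,76;423456;0890 under the header line
```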
9. Shell Programming and Scripting
Hi,
I am confused about how to proceed further; please find the problem below:
Input Files:
DCIA_GEOG_DATA_OCEAN.TXT
DCIA_GEOG_DATA_MCRO.TXT
DCIA_GEOG_DATA_CVAS.TXT
DCIA_GEOG_DATA_MCR.TXT
Output File Name: MMA_RFC_GEOG_NAM_DIM_LOD.txt
Sample Record(DCIA_GEOG_DATA_OCEAN.TXT):(Layout same for... (4 Replies)
Discussion started by: Arun Mishra
4 Replies
10. Shell Programming and Scripting
Hi.. I am running nawk scripts on a Solaris system to get the records of file1 that are not in file2, and to find duplicate records in a file, with the following scripts:
compare - nawk 'NR==FNR{a[$0]++;next;} !a[$0] {print "line" FNR, $0}' file1 file2
duplicate - nawk '{a[$0]++}END{for(i in a){if(a[i]-1)print i,a[i]}}' file1
in the middle... (12 Replies)
Discussion started by: Abhiraj Singh
12 Replies
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)