08-18-2010
Both commands gave me the same output on a small file, but I think awk is more accurate than grep.
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
I want to get the line count of a file starting from the 2nd line. The first line is a header, so I want to skip it.
Thanks. (8 Replies)
Discussion started by: smc3
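A common approach (sketched here with a hypothetical file name) is to let `tail` skip the header before counting:

```shell
# Create a small sample file: one header line plus three data rows.
printf 'header\nrow1\nrow2\nrow3\n' > data.txt

# tail -n +2 starts output at line 2, so wc -l counts only data rows
tail -n +2 data.txt | wc -l

# Equivalent awk one-liner: count records after the first
awk 'NR > 1 { n++ } END { print n }' data.txt    # prints 3
```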
2. UNIX for Dummies Questions & Answers
I'd like to create a loop that will display something like:
1
2
29
2
57
2
...
25173
2
I figure I'd want some code that counts to 1798 and, for the odd numbers, displays 1+28((n-1)/2), and for the even numbers displays 2. This is what I wrote:
#! /bin/csh
#include <stdio.h>
int... (4 Replies)
Discussion started by: red baron
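The csh/C mix above won't run as posted; a POSIX sh sketch of the sequence it describes (shown for the first 10 terms rather than all 1798) could look like:

```shell
# Odd positions n print 1 + 28*((n-1)/2); even positions print 2.
n=1
while [ "$n" -le 10 ]; do
    if [ $((n % 2)) -eq 1 ]; then
        echo $((1 + 28 * (n - 1) / 2))
    else
        echo 2
    fi
    n=$((n + 1))
done
```

The first terms come out as 1, 2, 29, 2, 57, 2, ..., matching the desired output.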
3. Shell Programming and Scripting
Hello forum,
I need to append the total line count to the end of each line in a file.
The file where this line count needs to be appended is generated by this script:
The script does a word frequency count by the first column of a file.
If I add wc -l at the end then the line count... (4 Replies)
Discussion started by: jaysean
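One way to avoid the truncated-output problem is to read the file twice in awk: the first pass only counts the lines, the second prints each line with that total appended (the file name and data here are placeholders):

```shell
printf 'apple 4\nbanana 2\ncherry 1\n' > counts.txt   # placeholder data

# First pass (NR == FNR): remember the total line count.
# Second pass: print every line followed by that total.
awk 'NR == FNR { total = NR; next } { print $0, total }' counts.txt counts.txt
```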
4. Shell Programming and Scripting
Hello friends,
Could you please help on this,
n=`wc -l test.txt`
echo $n
The above two lines of code give me
"67 text.txt"
whereas I just need n=67 as the line count. (3 Replies)
Discussion started by: Danish Shakil
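wc -l prints the file name only when the name is passed as an argument; feeding the file on stdin yields just the number:

```shell
printf 'line1\nline2\n' > test.txt

# Reading from stdin suppresses the file name (some wc
# implementations still pad the number with spaces).
n=$(wc -l < test.txt)
echo "$n"

# A portable alternative that always prints a bare number:
n=$(awk 'END { print NR }' test.txt)
echo "$n"    # prints 2
```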
5. Shell Programming and Scripting
Hi,
I have created a shell script that counts the number of "~" (tilde) characters on each line of a file. But the problem is that I need to count each line individually; that is, if line one contains 14 "~"s and line two contains 15 "~"s then it should give an error message. Each... (3 Replies)
Discussion started by: Ganesh Khandare
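With awk's field separator set to '~', a line containing k tildes splits into k+1 fields, so NF-1 gives the per-line count; comparing each line against the first flags mismatches (the sample data is illustrative):

```shell
printf 'a~b~c\nd~e~f\ng~h\n' > recs.txt    # lines with 2, 2, and 1 tildes

# Per-line tilde count: NF - 1 when FS is '~'
awk -F'~' '{ print "line " NR ": " NF - 1 }' recs.txt

# Report any line whose count differs from line 1's
awk -F'~' 'NR == 1 { want = NF } NF != want { print "error: line " NR }' recs.txt
```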
6. Shell Programming and Scripting
Hi All,
I have 10 source files in total. I want to count the delimiters in my source files line by line and store the result in another file. I got the output for the total delimiter count for one file, but I am struggling to get the delimiter count line by line for each of my files. Plz... (6 Replies)
Discussion started by: suresh01_apk
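The same NF-1 idea extends to several files at once: FILENAME and FNR label each count, so one output file can hold results for all inputs (a comma delimiter and these file names are assumptions for illustration):

```shell
printf 'a,b,c\nd,e\n' > src1.txt
printf 'x,y\n' > src2.txt

# One output line per input line: file name, line number, delimiter count
awk -F',' '{ print FILENAME, FNR, NF - 1 }' src1.txt src2.txt > delim_counts.txt
cat delim_counts.txt
```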
7. Shell Programming and Scripting
Hi
In my directory I have files from many days, and I want to count the number of lines in all of today's files. Every file has a date component in its name, e.g. V5_T_RIO_TAFM_20120905070015847.LOG.
From the file name, 20120905 shows that the file is from today's date.
I have written... (8 Replies)
Discussion started by: guddu_12
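One sketch (assuming the file names embed the date in YYYYMMDD form, as in the example above) globs on today's date and counts the concatenated lines:

```shell
# Build today's date in the same YYYYMMDD form used in the file names
today=$(date +%Y%m%d)

# Concatenate every matching .LOG file and count the lines;
# 2>/dev/null hides the error when no file matches today.
cat *"$today"*.LOG 2>/dev/null | wc -l
```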
8. Shell Programming and Scripting
What I'm trying to accomplish: I receive a Header and a Detail file for daily processing. The detail file, which holds the data, comes first; the header is a receipt of the detail file and has the detail file's record count. Before processing the detail file I would like to put a wrapper around another... (4 Replies)
Discussion started by: pone2332
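A minimal wrapper sketch, assuming the header's first field holds the expected record count (the file names and header layout are hypothetical):

```shell
printf '3\n' > header.txt            # header: expected record count
printf 'r1\nr2\nr3\n' > detail.txt   # detail: the actual records

expected=$(awk 'NR == 1 { print $1 }' header.txt)
actual=$(awk 'END { print NR }' detail.txt)

# Only proceed when the detail file's line count matches the header
if [ "$actual" -eq "$expected" ]; then
    echo "counts match ($actual); safe to process detail file"
else
    echo "mismatch: header says $expected, detail has $actual" >&2
fi
```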
9. Shell Programming and Scripting
Hello,
I have been working on an awk/sed one-liner which counts the number of occurrences of '|' in the pipe-separated lines of a file and deletes the line from the file if the count exceeds 17.
That is, I need to get records having exactly 17 pipe-separated fields (no more, no fewer).
Currently I have the below:
awk... (1 Reply)
Discussion started by: ketanraut
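Since awk splits a line into one more field than it has separators, filtering on NF does this directly (demonstrated on toy 3-field data; substitute NF == 17 for the real files, or NF == 18 if 17 pipes rather than 17 fields was meant):

```shell
printf 'a|b|c\nd|e\nf|g|h\n' > pipes.txt   # 3-, 2-, and 3-field lines

# Keep only lines with exactly 3 fields; use NF == 17 for the real data
awk -F'|' 'NF == 3' pipes.txt
```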
10. UNIX for Beginners Questions & Answers
I have a file like below with more than 30,000 lines:
Someword "mypattern blah blah mypattern blah mypattern blah "
Someotherword "mypattern blah blah mypattern blah mypattern blah"
Someword "mypattern blah blah blah mypattern blah "
Someword "mypattern blah blah mypattern blah ... (3 Replies)
Discussion started by: ctrld
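awk's gsub() returns the number of substitutions it made, so replacing the pattern with itself counts its occurrences on each line (sample data is illustrative):

```shell
printf 'one mypattern two mypattern\nmypattern only\n' > pat.txt

# gsub returns how many times /mypattern/ matched on the line;
# "&" substitutes each match back in unchanged
awk '{ print gsub(/mypattern/, "&"), $0 }' pat.txt
```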
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)