07-19-2013
Yoda,
I just tried your awk script, but I get an error at line 3, i.e. v=$0. As I'm pretty new to awk, I could not make out what the problem could be. And one more doubt I have is:
Ravinder wanted to delete the line before the ***no hit*** as well, but from the logic as I understood it, the script below will print the line A3 first and then check for the ***no hit*** condition, right?
bash-3.00# awk '
> NF
> {
> v=$0
> getline
> if ($0!="***no hit***")
> print v RS $0 RS " "
> '
awk: syntax error near line 3
awk: bailing out near line 3
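Looking at the transcript again, the closing } for the action block was never typed before the final quote, which by itself would explain the syntax error. Below is an untested sketch of what Yoda's script was presumably meant to be (inputfile is a placeholder). Note that v is saved first and only printed after the check, so the line before a ***no hit*** is dropped along with it. Also, on Solaris /usr/bin/awk is the old awk; nawk or /usr/xpg4/bin/awk may be needed:

nawk 'NF {
        v = $0                            # remember the current line
        getline                           # read the next line
        if ($0 != "***no hit***")         # keep the pair only if it is not a marker
                print v RS $0 RS " "
}' inputfile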
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
Please suggest how to write a shell script which deletes all the lines containing the word unix in the files supplied as arguments to the script. (4 Replies)
Discussion started by: sireesha9
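One possible answer, sketched under the assumption that GNU sed's -i (in-place editing) is available; the script name is illustrative:

#!/bin/sh
# delete_unix.sh - delete every line containing "unix"
# from each file named as an argument (GNU sed assumed for -i)
for f in "$@"
do
        sed -i '/unix/d' "$f"
done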
2. Shell Programming and Scripting
Hi all coders,
I need a help to process some data.
I have this file,
3 09/21/08 03:32:07 started undef mino Oracle nmx004.wwdc.numonyx.com
Message Text : The Oracle session with the PID 1103 has a CPU time consuming of 999.00... (3 Replies)
Discussion started by: vikas027
3. Shell Programming and Scripting
Input:
a
b
b
c
d
d
I need:
a
c
I know how to get this (the lines that have duplicates):
b
d
sort file | uniq -d
But I need the opposite of this. I have searched the forum and other places as well, but have found solutions for everything except this variant of the problem. (3 Replies)
Discussion started by: necroman08
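This one has a standard answer: uniq -d prints one copy of each repeated line, while uniq -u prints only the lines that occur exactly once, which is exactly the opposite asked for:

# print only the lines that have no duplicates
sort file | uniq -u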
4. UNIX for Dummies Questions & Answers
Hiiii
I have a file which contains huge data as
a.dat:
PDE 1990 1 9 18 51 28.90 24.7500 95.2800 118.0 6.1 0.0 BURMA
event name: 010990D
time shift: 7.3000
half duration: 5.0000
latitude: 24.4200
longitude: 94.9500
depth: 129.6000
Mrr: ... (7 Replies)
Discussion started by: reva
5. Shell Programming and Scripting
Let's say we have a file containing:
alllllsadfsdasdf
qwdDDDaassss
ccxxcxc#2222
dssSSSSddDDDD
D1Sqn2NYOHgTI
Hello
Alex
ssS@3
OK, and let's say we want to delete all words from D1Sqn2NYOHgTI and back; this means deleting the words (and the lines containing them):
alllllsadfsdasdf... (2 Replies)
Discussion started by: hakermania
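Assuming the intent is to delete everything from the first line through the D1Sqn2NYOHgTI line inclusive, a sed address range does it in one step:

# delete from line 1 through the first line matching the marker, inclusive
sed '1,/D1Sqn2NYOHgTI/d' file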
6. Shell Programming and Scripting
Hello,
I have a group of text files with many lines in each file.
I need to delete all the lines in each and leave only 2 lines in each file. (3 Replies)
Discussion started by: script_op2a
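The thread does not say which two lines to keep; assuming it is the first two, here is a sketch over a directory of text files (the *.txt glob is illustrative):

# keep only the first two lines of each file, editing in place via a temp file
for f in *.txt
do
        head -n 2 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done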
7. Shell Programming and Scripting
Hi,
I have to search for a word in a text file and then delete the lines above the searched word. For example, suppose the file is like this:
Records
P1
10,23423432
,77:1
,234:2
P2
10,9089004
,77:1
,234:2
,87:123
,9898:2
P3
456456
P1
:123,456456546
P2
abc:324234 (2 Replies)
Discussion started by: vsachan
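Assuming the matched line itself should survive, a one-line awk flag is enough (P3 stands in for whatever word is searched):

# suppress output until the search word appears, then print from there on
awk '/P3/ {found = 1} found' file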
8. UNIX for Advanced & Expert Users
Hi All,
I have a very huge file (4GB) which has duplicate lines. I want to delete the duplicate lines, leaving only the unique lines. Sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
Discussion started by: krishnix
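One angle that may help here: sort(1) performs an external merge sort through temporary files, so unlike an in-memory awk hash it is not limited by RAM; with GNU sort, -T points the temp files at a filesystem with enough free space (/var/tmp is illustrative). The catch is that the surviving lines come out sorted, not in their original order:

# external merge sort; -u keeps one copy of each line
sort -u -T /var/tmp -o bigfile.uniq bigfile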
9. Shell Programming and Scripting
hey guys,
I tried searching, but most 'search and replace' questions are related to one-liners.
Say I have a file to be replaced that has the following:
$ cat testing.txt
TESTING
AAA
BBB
CCC
DDD
EEE
FFF
GGG
HHH
ENDTESTING
This is the input file: (3 Replies)
Discussion started by: DeuceLee
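The post is cut off before the input file appears, but assuming the goal is to swap the body between the TESTING and ENDTESTING markers for the contents of another file (newblock.txt is a placeholder name), an awk sketch:

# keep both markers, replace the lines between them with newblock.txt
awk '/^TESTING$/ {
        print                                         # keep the opening marker
        while ((getline line < "newblock.txt") > 0)   # splice in the replacement
                print line
        skip = 1; next
}
/^ENDTESTING$/ {skip = 0}                             # resume at the closing marker
!skip' testing.txt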
10. Shell Programming and Scripting
Hello,
I'm trying to figure out how to use sed or awk to delete single lines in a file. By single, I mean lines that are not touching any other lines (just one line with whitespace above and below).
Example:
one
two
three
four
five
six
seven
eight
I want it to look like: (6 Replies)
Discussion started by: slimjbe
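The example seems to have lost its blank lines when posted, but taking the description at face value (blocks separated by blank lines, isolated one-line blocks to be deleted), awk's paragraph mode gives a short sketch: with RS set to the empty string each record is one whole block, and a record containing a newline has more than one line, so keep only those:

# RS="" enables paragraph mode; keep only multi-line blocks
awk 'BEGIN {RS = ""; ORS = "\n\n"} index($0, "\n")' file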
LEARN ABOUT DEBIAN
bup-margin
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.