Hi,
HP-UX gxxxxxxxc B.11.23 U ia64 3717505098 unlimited-user license
I have a file with the pipe-separated field values below:
xxx|xxx|abcd|xxx|xxx|xx
xxx|xxx|abcd#123|xxx|xxx|xx
xxx|xxx|abcd#345|xxx|xxx|xx
xxx|xxx|pqrs|xxx|xxx|xx
xxx|xxx|pqrs#123|xxx|xxx|xx
The third field has values like... (6 Replies)
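The question is cut off in the excerpt, but given the sample values (abcd, abcd#123, abcd#345), one common goal is stripping the "#suffix" from the third field. A sketch under that assumption (the goal itself is a guess; the `|` separator is from the sample):

```sh
# Assumption: normalize field 3 by dropping everything from "#" on.
printf 'xxx|xxx|abcd#123|xxx|xxx|xx\n' |
awk -F'|' -v OFS='|' '{ sub(/#.*/, "", $3) } 1'
# xxx|xxx|abcd|xxx|xxx|xx
```

Modifying `$3` makes awk rebuild the record with OFS, so the output stays pipe-delimited.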
Hello,
A basic question: how can I remove a string from a specific column?
For example, remove "abcd" just from column 2 in this example file:
abcd abcd1
abcd abcd2
abcd abcd3
to get output:
abcd 1
abcd 2
abcd 3
Thank you!:) (4 Replies)
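One way to scope the removal to column 2 is awk's sub(), which edits only the named field; a sketch assuming whitespace-separated columns (note awk re-joins the fields with single spaces once a field is modified):

```sh
# Remove the literal string "abcd" from field 2 only; field 1 is untouched.
printf 'abcd abcd1\nabcd abcd2\nabcd abcd3\n' |
awk '{ sub(/abcd/, "", $2) } 1'
# abcd 1
# abcd 2
# abcd 3
```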
Hi,
I have a file like below:
.
.
.
.
Jack is going home
Jack is going to school
Jack is sleeping
Jack is eating dinner
John is going home
John is eating breakfast
.
.
.
The specific line is:
Jack is going home (2 Replies)
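The thread is truncated, but if the goal is to act on exactly that one line, sed can address it by a full-line anchored match; a sketch assuming the goal is deletion:

```sh
# ^...$ anchors the match so "Jack is going home" is deleted
# but "Jack is going to school" is kept.
printf 'Jack is going home\nJack is going to school\nJack is sleeping\n' |
sed '/^Jack is going home$/d'
# Jack is going to school
# Jack is sleeping
```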
I have 5 columns in a sample txt file,
wherein I have to create a report based upon the 1st, 3rd and 5th columns.
I have a ':' in the first and third columns, but I want to retain the colon in the fifth column and remove the colon from the first column.
The 5th column contains a string message (for example,... (7 Replies)
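Since the sample file was lost from the excerpt, here is a sketch over made-up rows: gsub() scoped to $1 removes its colons while every other column, including the fifth, is left untouched (assumes whitespace-separated columns):

```sh
# Columns: 1 and 3 contain ':'; column 5 is a message whose ':' must survive.
printf 'a:1 b c:3 d msg:hello\n' |
awk '{ gsub(/:/, "", $1) } 1'
# a1 b c:3 d msg:hello
```

Adding `gsub(/:/, "", $3)` in the same block would clean column 3 too, if that is wanted.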
Hello,
I have a text file like this:
A123 c12AB c32DD aaaa
B123 23DS 12QW bbbb
C123 2GR 3RG cccccc
I want to remove the numbers from the second and third columns only.
I tried this:
perl -pe 's///g' file.txt > newfile.txt
but it will remove the numbers from... (7 Replies)
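The perl attempt (whose substitution was lost in the excerpt) operates on the whole line, so it strips digits everywhere. awk makes it easy to limit the deletion to fields 2 and 3; a sketch assuming whitespace-separated columns:

```sh
# Delete digits in fields 2 and 3 only; fields 1 and 4 keep their numbers.
printf 'A123 c12AB c32DD aaaa\nB123 23DS 12QW bbbb\n' |
awk '{ gsub(/[0-9]/, "", $2); gsub(/[0-9]/, "", $3) } 1'
# A123 cAB cDD aaaa
# B123 DS QW bbbb
```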
I have a file1 that looks like this:
File 1
a b
b c
c e
d e
and a file 2 that looks like this:
File 2
b
c
e
e
Note that file 2 is the right-hand column from file1. I want to remove any lines from file1 that begin with a value listed in file2. In this case the desired output... (6 Replies)
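A standard two-file awk pass fits here: the first pass (NR==FNR) loads file2's values into an array, and the second prints only the file1 lines whose first field is not in it. A sketch using the sample data:

```sh
# Build the two sample files from the post.
printf 'b\nc\ne\ne\n' > file2
printf 'a b\nb c\nc e\nd e\n' > file1
# NR==FNR is true only while reading the first file (file2).
awk 'NR == FNR { seen[$1]; next } !($1 in seen)' file2 file1
# a b
# d e
rm -f file1 file2
```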
I have a file like:
I would like to find lines with duplicate values in column 1 and retain only one of each, based on two conditions: 1) keep the line with the highest value in column 3; 2) if the column 3 values are equal, retain the line with the highest value in column 4.
Desired output:
I was able to... (3 Replies)
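The sample data was lost from the excerpt, so this sketch uses made-up rows; it keeps, per column 1 value, the line with the highest column 3, breaks ties on column 4, and preserves first-seen order:

```sh
printf 'a 1 5 2\na 1 5 3\na 1 4 9\nb 1 2 2\n' |
awk '{
  k = $1
  if (!(k in best3)) order[++n] = k     # remember first-seen order of keys
  # Take this line if the key is new, col 3 is higher, or col 3 ties and col 4 is higher.
  if (!(k in best3) || $3+0 > best3[k]+0 || ($3+0 == best3[k]+0 && $4+0 > best4[k]+0)) {
    best3[k] = $3; best4[k] = $4; line[k] = $0
  }
}
END { for (i = 1; i <= n; i++) print line[order[i]] }'
# a 1 5 3
# b 1 2 2
```

The `+0` forces numeric comparison, so "10" beats "9".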
Greetings All,
I would like to find all occurrences of a pattern and delete a substring from all matching lines EXCEPT the first. For example:
1234::group:user1,user2,user3,blah1,blah2,blah3
2222::othergroup:user9,user8
4444::othergroup2:user3,blah,blah,user1
1234::group3:user5,user1
... (11 Replies)
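One awk sketch for the concrete case of removing ",user1" from every matching line after the first (a counter tracks whether a match has been seen; beware that a bare /user1/ also matches names like user10, so anchor the pattern if that matters):

```sh
# First line containing user1 is printed as-is; later ones lose ",user1" (or "user1,").
printf '1234::group:user1,user2,user3,blah1,blah2,blah3\n2222::othergroup:user9,user8\n4444::othergroup2:user3,blah,blah,user1\n1234::group3:user5,user1\n' |
awk '/user1/ { if (seen++) gsub(/,user1|user1,/, "") } { print }'
# 1234::group:user1,user2,user3,blah1,blah2,blah3
# 2222::othergroup:user9,user8
# 4444::othergroup2:user3,blah,blah
# 1234::group3:user5
```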
Hi,
I would like to ask your expertise on removing the specific column no. 8 in the file below, but I don't have an idea of how to simply do this using the awk command. Appreciate your help in advance.
Input file:
ABC 1 1XC
CDA 1 2YC
CCC 1 3XC
AVD 1 3XA
Expected output file:
ABC 1 1C
CDA... (9 Replies)
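Judging by the sample (1XC becomes 1C), "column 8" here means the 8th character of each line, not an awk field; sed can delete a character by position. A sketch under that assumption:

```sh
# Keep the first 7 characters, drop the 8th, keep the rest.
printf 'ABC 1 1XC\nCDA 1 2YC\n' |
sed 's/^\(.\{7\}\)./\1/'
# ABC 1 1C
# CDA 1 2C
```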
Hi,
I need to remove the lines that match the pattern
TABLEEXCLUDE *.AQ$_*_F ;
where * is a wildcard; it can be any word.
For example, I have following file:
TABLEEXCLUDE THOT.AQ$_PT_ADDR_CLEANUP_QTAB2_F ;
TABLEEXCLUDE THOT.AQ$_MICRO_SERVICE_QT_F ;
TEST
TABLEEXCLUDE... (1 Reply)
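The shell-style wildcard translates to `.*` in a regular expression, and the literal `$` and `.` must be escaped; a sed sketch that deletes the matching lines:

```sh
# Anchored pattern: only full "TABLEEXCLUDE ... .AQ$_..._F ;" lines are deleted.
printf 'TABLEEXCLUDE THOT.AQ$_PT_ADDR_CLEANUP_QTAB2_F ;\nTABLEEXCLUDE THOT.AQ$_MICRO_SERVICE_QT_F ;\nTEST\n' |
sed '/^TABLEEXCLUDE .*\.AQ\$_.*_F ;$/d'
# TEST
```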
Discussion started by: rcc50886
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                    General Commands Manual                    bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.