Hi,
I am having a problem linking .o files. I am working on HP-UX with gcc version 3.4.2. I have compiled a few cpp files and got the .o's, but at link time I run into an issue.
It returns:
ld: Can't find library or mismatched ABI for -lstdc++
Fatal error.
I have the lib at location ... (0 Replies)
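A first check worth trying (a sketch with illustrative file names; on HP-UX the usual advice is to link C++ objects through g++ rather than bare ld, so the driver supplies the right -L and -lstdc++ itself):

```shell
# -print-file-name asks the g++ driver where it would find its own runtime
# library; if the name comes back unchanged, the driver cannot see the
# library at all. The fallback echo keeps the variable set either way.
LIBSTDCXX=$(g++ -print-file-name=libstdc++.a 2>/dev/null || echo libstdc++.a)
echo "libstdc++ resolved to: $LIBSTDCXX"
# A 32-/64-bit mix between your .o files and the library also triggers the
# "mismatched ABI" message; compare word sizes with, e.g.:
#   file "$LIBSTDCXX" myfile.o
# Once the directory is known, link through g++ (hypothetical object names):
#   g++ myfile1.o myfile2.o -L"$(dirname "$LIBSTDCXX")" -o myprog
```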
I started building a program which connects to MySQL and got:
ld: Can't find library or mismatched ABI for -lmysqlclient_r
I tried building my own MySQL and using its libraries; that worked, but then it gave me:
ld: Can't find library or mismatched ABI for -lrt
But librt is a system library (and... (1 Reply)
Hi All,
while compiling on HP-UX, I am getting the following error:
ld: Can't find library or mismatched ABI for -lstlport_aCC
I am new to Unix and HP-UX; please suggest a solution ASAP.
thanks in advance
vindhyalesh (3 Replies)
Dear All,
I need your help to rectify this error.
Recently I upgraded my Linux server from 32-bit to a 64-bit server.
OS details are
Red Hat Enterprise Linux Server release 5.3
Kernel 2.6.18-120.el5 on an x86_64
After the upgrade, when I try to compile or catalog any program, it is... (2 Replies)
Help needed...
Can you tell me how to compare the last two pairs of entries in a file and print the result in a new file? :confused:
I have one file
Check1.txt
\abc1 12345
\abc2 12327
\abc1 12345
\abc2 12330
I want to compare the entries in Check1 and write to... (1 Reply)
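One reading of the request, assuming each "pair" is the two values recorded for a key (\abc1, \abc2) and the goal is the difference between a key's two occurrences, is a small awk sketch (Check1.txt is recreated inline so the example runs standalone):

```shell
# Recreate the sample input.
printf '%s\n' '\abc1 12345' '\abc2 12327' '\abc1 12345' '\abc2 12330' > Check1.txt
awk '{
    if ($1 in prev)                  # second occurrence of this key:
        print $1, $2 - prev[$1]     # print key and the value difference
    prev[$1] = $2
}' Check1.txt > result.txt
cat result.txt
```

For the sample data this prints a difference of 0 for \abc1 and 3 for \abc2.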
Hi,
I have a .csv file with the data below:
file1.h, 2.0
file2.c, 3.1
file1.h, 2.5
file3.c, 3.3.3
file1.h, 1.2.3
I want to remove the duplicate file names, keeping only the one with the highest version number.
The output should be:
file1.h, 2.5
file2.c, 3.1
file3.c,... (3 Replies)
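Assuming GNU sort is available (its -V flag does version-aware comparison, so 3.3.3 sorts correctly against 3.1; stock HP-UX sort lacks it), one sketch is to sort ascending by name and version and keep the last line seen per name:

```shell
# Recreate the sample input.
printf '%s\n' 'file1.h, 2.0' 'file2.c, 3.1' 'file1.h, 2.5' \
              'file3.c, 3.3.3' 'file1.h, 1.2.3' > file.csv
sort -t, -k1,1 -k2,2V file.csv |       # group by name, ascending version
awk -F', *' '{ last[$1] = $0 }          # last line per name = highest version
     END { for (f in last) print last[f] }' |
sort > deduped.csv                       # stable final order for display
cat deduped.csv
```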
I have a table with one column
File1.txt
1
2
3
4
5
6
7
8
9
10
Another table with two columns; this has a subset of the entries from File1, but they are not unique because they have differing values in column 2.
File2.txt
1 a
2 d
2 f
6 r
6 e (3 Replies)
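The question is cut off, but assuming the goal is to list every File1 value together with all of its File2 column-2 values (or a "-" placeholder when there is none), the classic two-file awk idiom is a reasonable sketch:

```shell
# Recreate the sample inputs.
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 > File1.txt
printf '%s\n' '1 a' '2 d' '2 f' '6 r' '6 e' > File2.txt
# Pass 1 (NR==FNR, i.e. File2): collect all col-2 values per key.
# Pass 2 (File1): print each key plus its collected values, or "-".
awk 'NR==FNR { vals[$1] = vals[$1] " " $2; next }
     { s = ($1 in vals) ? vals[$1] : " -"; print $1 s }' File2.txt File1.txt > joined.txt
cat joined.txt
```

Keys 2 and 6 come out with both of their values on one line (e.g. "2 d f"), and keys absent from File2 get "3 -", "4 -", and so on.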
I have a dataset with 120 columns. I would like to write a script that takes the average of every two columns, starting from columns 2 and 3, and moving consecutively in frames of 3 columns, all the way to the last column.
The first column in the output file would be the averages of columns... (1 Reply)
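One interpretation, assuming column 1 is a row identifier and each 3-column frame contributes the average of its first two columns (so frames start at columns 2, 5, 8, ...), is sketched below on a small stand-in for the 120-column file:

```shell
# A small stand-in dataset: col 1 is an ID, then two frames of 3 columns.
printf '%s\n' 'id1 2 4 9 10 20 9' 'id2 1 3 9 5 7 9' > data.txt
awk '{
    out = $1
    for (i = 2; i + 1 <= NF; i += 3)        # frames start at 2, 5, 8, ...
        out = out " " ($i + $(i+1)) / 2     # average the first two of each frame
    print out
}' data.txt > averages.txt
cat averages.txt
```

For the first row this averages columns 2-3 (2,4) and 5-6 (10,20), ignoring the third column of each frame.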
Hi all,
I have a file which has a lot of records in it.
I have written the below to find the number of fields:
`awk -F '|' '{print NF-1}' file.txt| head -1`
How do I proceed if any particular record has more delimiters than expected? What... (7 Replies)
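A sketch for the second part of the question: rather than counting fields only on the first record, count them on every record and report the ones whose delimiter count deviates from the first line's:

```shell
# Sample pipe-delimited file; the second record has one extra delimiter.
printf '%s\n' 'a|b|c' 'a|b|c|d' 'x|y|z' > file.txt
awk -F'|' '
    NR == 1   { want = NF }        # take the first record as the reference
    NF != want {                   # flag anything that deviates from it
        printf "record %d has %d fields (expected %d)\n", NR, NF, want
    }' file.txt > bad.txt
cat bad.txt
```

Note that NF counts fields, so a record with NF fields has NF-1 delimiters, which is why the original one-liner printed NF-1.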
Could you tell me the possible reasons for getting Mismatched free() / delete / delete [].
I am unable to see the line number in the valgrind report; it displays only the function name, and with just the function name I am not able to find where exactly the issue is. I am getting the Mismatched free()... (3 Replies)
Discussion started by: SA_Palani
bup-margin
bup-margin(1) General Commands Manual
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.