Finding the most positive and most negative value and identifying its position
Hi,
I have a file that looks like this:
I want to find the most positive and the most negative value in each row and also identify its position (based on column number).
So the output would look something like:
Here the first column is the name, the 2nd column is the most positive value, the 3rd is its column position, the 4th column is the most negative value, and the 5th column is the column position of the most negative value.
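Assuming the name sits in column 1 and the remaining whitespace-separated columns are numeric (the sample data didn't survive, so the layout is an assumption), a minimal awk sketch:

awk '{
    max = min = $2 + 0; maxpos = minpos = 2      # first data column
    for (i = 3; i <= NF; i++) {
        if ($i + 0 > max) { max = $i + 0; maxpos = i }
        if ($i + 0 < min) { min = $i + 0; minpos = i }
    }
    # name, most positive value, its column, most negative value, its column
    print $1, max, maxpos, min, minpos
}' file

The positions printed are absolute field numbers, with the name counted as column 1; subtract 1 if positions should be relative to the data columns only.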
Hi Gurus,
In my file I have an amount field from position 74 to 87, which contains values starting with '+' as well as '-'. I want to add all positive values into a variable called "CREDIT" and all negative values into a variable "DEBIT". I know we can use grep to identify values with positive and... (4 Replies)
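A sketch using awk's substr: positions 74 to 87 span 14 characters, and relying on awk's numeric coercion to absorb the leading '+' or '-' is an assumption about the format:

awk '{
    amt = substr($0, 74, 14) + 0    # characters 74-87, coerced to a number
    if (amt >= 0) CREDIT += amt; else DEBIT += amt
}
END { print "CREDIT=" CREDIT, "DEBIT=" DEBIT }' file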
Hello,
For my weather station I have made a little perl script to put the data into cacti. Now I have the next problem:
I can only get positive numbers or negative numbers.
What I do:
Through a shell script I call the perl script.
Shell script:
#!/bin/sh
cat data.txt | ./stats.pl
Perl... (4 Replies)
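If the cause is a number-matching pattern that ignores the sign (an assumption; the perl script itself isn't shown), the fix is to allow an optional leading '-' or '+'. The idea, sketched in awk:

awk '{ if (match($0, /[-+]?[0-9]+(\.[0-9]+)?/))
           print substr($0, RSTART, RLENGTH) }' data.txt   # prints the first signed number per line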
Hello,
I have a list like this :
1
2
-4
0
-3
-7
5
6 etc.
Is there a way to remove all the positive values and print only the negative values, without using grep, sed or awk?
Thanks,
Prasanna (4 Replies)
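Since grep, sed, and awk are all ruled out, a pure-shell sketch using only built-ins (one value per line and the file name numbers.txt are assumptions):

#!/bin/sh
while read -r n; do
    case $n in
        -*) printf '%s\n' "$n" ;;   # keep only values with a leading minus
    esac
done < numbers.txt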
Say I have three numbers:
+00123.25
-00256.54
+00489.23
I need to sum all three numbers after storing them in three variables (say var1, var2, var3).
I used both expr and bc, but they didn't work for me.
I am not able to sum them, as I don't have any idea how to... (13 Replies)
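One likely reason both fail: expr does integer arithmetic only, and bc does not accept a leading unary '+'. A sketch that strips the '+' before handing the sum to bc:

#!/bin/sh
var1='+00123.25'
var2='-00256.54'
var3='+00489.23'
# ${var#+} removes a leading '+' if present; bc handles the rest
sum=$(echo "${var1#+} + ${var2#+} + ${var3#+}" | bc)
echo "$sum"    # 355.94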
Hi all,
I have a file that looks like the one shown below. I want to find the places where the value in column 2 changes from negative to positive (or vice versa) and return the value in column 1 at that point. I wonder whether this is possible in a shell script or awk... please help!
Here is the original data
... (6 Replies)
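A sketch that reports a flip when consecutive column-2 values have a negative product (whitespace-separated columns are assumed, and a value of exactly zero is not counted as a change here):

awk 'NR > 1 && prev * $2 < 0 { print $1 }   # product < 0 means the sign flipped
     { prev = $2 }' file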
Hello all,
I'm new to the forums and hope to be able to contribute something useful in the future; however, I must admit that what prompted me to join is that I currently need help with something that has me at the end of my tether.
I have a PDB (Protein Data Bank) file which I... (13 Replies)
Dear All,
I have to split a tab-delimited file into two files based on whether column 9 is positive or negative. For example:
file:
A 1 5 erg + 6766 0.9889 0.9817 9.01882 erg inside upstream
B 1 8 erg2 + 6766 0.9889 0.9817 -9.22 erg2 inside... (3 Replies)
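A minimal awk sketch for the tab-delimited case, routing each line by the sign of column 9 (the output file names are assumptions):

awk -F'\t' '$9 < 0 { print > "negative.txt"; next }
            { print > "positive.txt" }' file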
Hi ALL,
I have a semicolon-separated file, as below. The records starting with 11095 have negative values. How can I convert those to positive numbers?
I tried the sed below, but it doesn't seem to work:
sed 's/ \(*\)$/ -\1/;t;s/\(.*\)-/\1/' myfile
myfile... (6 Replies)
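If the goal is to strip the minus signs on the 11095 records, an awk sketch (semicolon-separated fields and an exact '11095' in field 1 are assumptions):

awk -F';' -v OFS=';' '
    $1 == "11095" { for (i = 1; i <= NF; i++) sub(/^-/, "", $i) }   # drop each leading minus
    { print }' myfile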
I have a file that is pipe-delimited, and column F has numeric values, both positive and negative. I need to take the file I am starting with and split it into two separate files based on negative and positive numbers. What is the command to do so? And then I need to also transfer... (4 Replies)
Discussion started by: cckaiser15
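Assuming column F maps to the 6th pipe-delimited field, an awk sketch that routes each line by sign (the output names are assumptions):

awk -F'|' '$6 < 0 { print > "negatives.txt"; next }
           { print > "positives.txt" }' file.txt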
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.