05-21-2011
The smallest number among the highest 90% of all numbers in a file
Hello All,
I am having a problem finding the smallest number among the highest 90% of all the numbers in a file. The file has thousands of lines and hundreds of columns.
I am familiar mainly with bash, but I am open to any suggestion which leads to a solution.
To put it differently: I have, for example, 1000 numbers between 0 and 10000. The results could be:
90% of numbers are bigger than 1000
80% of numbers are bigger than 2342
70% of numbers are bigger than 5674
etc.
In this example I am looking for numbers like 1000, 2342 and 5674.
I am sure that there is some statistical method for this, but I cannot remember what it is called and cannot find it. If I knew the name of the method, I could probably find a way to calculate it as well.
Thank you for your help.
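What is being described here is a percentile (a quantile): the cut-off below which a given percentage of the data falls. The number that 90% of the values are bigger than is the 10th percentile, the number that 80% are bigger than is the 20th percentile, and so on. A minimal sketch of one way to compute these cut-offs with standard tools, assuming the numbers are first flattened to one per line (the file names data.txt and numbers.txt are placeholders):

tr -s ' \t' '\n' < data.txt | grep . > numbers.txt   # flatten columns to one value per line
sort -n numbers.txt | awk '
  { v[NR] = $0 }                        # keep the values in ascending order
  END {
    for (p = 10; p <= 90; p += 10) {    # 10th, 20th, ..., 90th percentile
      i = int(NR * p / 100); if (i < 1) i = 1
      printf "%d%% of numbers are bigger than %s\n", 100 - p, v[i]
    }
  }'

This uses the nearest-rank definition of a percentile; statistical packages often interpolate between neighbouring values instead, so their results can differ slightly.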
10 More Discussions You Might Find Interesting
1. AIX
How to replace many numbers with one number in a file.
Many numbers like 444565, 454678, 443298, etc. I want to replace these with one number (300). Please help me out. (2 Replies)
Discussion started by: vpandey
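A hedged sketch for the replacement above, assuming GNU sed and that the numbers stand alone as whole words (data.txt is a placeholder file name):

sed -E 's/\b(444565|454678|443298)\b/300/g' data.txt

The \b word boundaries keep 444565 from also matching inside a longer number such as 14445650.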
2. Shell Programming and Scripting
. . . . . . (3 Replies)
Discussion started by: some124one
3. UNIX for Dummies Questions & Answers
I have two files: one (the numbers file) contains the numbers (approximately 30000), and the other (the record file) contains the records (approximately 40000), which may or may not contain the numbers from that file.
I want to separate the records whose field 1 is any of the numbers... (15 Replies)
Discussion started by: Shiv@jad
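The usual awk pattern for this two-file match is to load the numbers into an array while reading the first file and filter the second file against it. A sketch with placeholder file names, assuming the number is in field 1 of both files:

awk 'NR == FNR { keep[$1]; next }    # first file: remember every number
     $1 in keep' numbers.txt records.txt > matched.txt

NR == FNR is only true while the first file is being read, so keep[] holds all 30000 numbers before any record is tested.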
4. Shell Programming and Scripting
Howdy experts,
We have some ranges of numbers which belong to a particular group, as below.
GroupNo StartRange EndRange
Group0125 935300 935399
Group2006 935400 935476
937430 937459
Group0324 935477 935549
... (6 Replies)
Discussion started by: thepurple
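One way to look a number up in a table like this is to carry the group name forward over continuation rows that list only a range. A sketch, assuming the table is in groups.txt with whitespace-separated columns and that n is the number to classify:

awk -v n=937445 '
  NR == 1 { next }                        # skip the header row
  NF == 3 { g = $1; lo = $2; hi = $3 }    # row that names a group
  NF == 2 { lo = $1; hi = $2 }            # continuation row: keep the previous group
  n >= lo && n <= hi { print g }
' groups.txt

With the sample rows above, n=937445 falls inside 937430 937459 and prints Group2006.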
5. Shell Programming and Scripting
Hi all,
I have a large column of numbers like
5.6789
2.4578
9.4678
13.5673
1.6589
.....
I am trying to write awk code so that awk can go through the column and arrange the numbers from lowest to highest, like
1.6589
2.4578
5.6789
.......
Can anybody suggest how I can do... (5 Replies)
Discussion started by: ananyob
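For a single column, the sort utility already does this; if it has to stay inside awk, gawk's asort does the same. Both as a sketch, with column.txt as a placeholder file name:

sort -n column.txt

gawk '{ a[NR] = $1 } END { n = asort(a); for (i = 1; i <= n; i++) print a[i] }' column.txt

sort -n compares decimal values like 13.5673 numerically; note that asort is a gawk extension, not part of POSIX awk.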
6. Programming
Input file:
#data_1
AGDG
#data_2
ADG
#data_3
ASDDG
DG
#data_4
A
Desired result:
Highest 7
Slowest 1
Code that I tried but that failed to achieve my goal :(
#include <stdio.h> (2 Replies)
Discussion started by: cpp_beginner
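The desired numbers read like the longest and shortest total sequence length per #data_ record (ASDDG plus DG gives 7, A gives 1). The thread's own attempt is in C, but under that reading a sketch in awk, keeping the poster's labels, would be:

awk '
  function flush() { if (len > max) max = len
                     if (min == "" || len < min) min = len }
  /^#/ { if (seen) flush(); len = 0; seen = 1; next }   # a new record starts
       { len += length($0) }                            # add up sequence line lengths
  END  { if (seen) flush()
         print "Highest", max
         print "Slowest", min }
' input.txt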
7. UNIX for Dummies Questions & Answers
##### (0 Replies)
Discussion started by: lucasvs
8. Shell Programming and Scripting
Hi, I have a list.txt file with number ranges and want to print/save a new all.txt file with all the numbers, including those in between.
== list.txt ==
65936
65938
65942 && 65943
65945 ... (7 Replies)
Discussion started by: AK47
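Assuming a line containing && marks an inclusive range and every other non-empty line is a single number, a sketch:

awk '/&&/ { for (i = $1; i <= $3; i++) print i; next }  # expand the range
     NF   { print $1 }' list.txt > all.txt

On the sample, 65942 && 65943 expands to two lines, while 65936 passes through unchanged.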
9. Shell Programming and Scripting
Hi again. Sorry for all the questions — I've tried to do all this myself but I'm just not good enough yet, and the help I've received so far from bartus11 has been absolutely invaluable. Hopefully this will be the last bit of file manipulation I need to do.
I have a file which is formatted as... (4 Replies)
Discussion started by: crunchgargoyle
10. UNIX for Beginners Questions & Answers
Hi!
I found and then adapted the code for my pipeline...
awk -F"," -vOFS="," '{printf "%0.2f %0.f\n",$2,$4}' xxx > yyy
I added -F"," -vOFS="," (for input and output as a CSV file) and I changed the columns and the number of decimals...
It works, but I also have some problems... here are my columns
... (7 Replies)
Discussion started by: echo manolis
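One likely problem with the one-liner above: printf ignores OFS completely, and the format string "%0.2f %0.f\n" joins the two fields with a space, so the output is no longer CSV. A hedged correction that keeps two decimals for column 2 and none for column 4:

awk -F, '{ printf "%.2f,%.0f\n", $2, $4 }' xxx > yyy

If the other columns should survive as well, reassigning the fields and printing the whole record keeps OFS in play:

awk -F, -v OFS=, '{ $2 = sprintf("%.2f", $2); $4 = sprintf("%.0f", $4); print }' xxx > yyy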