06-07-2011
Can't it be sorted without using -t'_'? Because sometimes I may get files like:
Abc_Def_ghi_091220101458.csv
jkl_pqr_trt_081220101458.csv
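For names like the two above, where the timestamp is always the fourth underscore-separated field, `-t'_'` remains the simplest tool; but the same ordering can be had without `sort -t` by decorating each name with its timestamp first. A sketch of both approaches, using the file names from the post:

```shell
# With -t: sort on the 4th underscore-separated field (the timestamp)
printf '%s\n' Abc_Def_ghi_091220101458.csv jkl_pqr_trt_081220101458.csv |
    sort -t'_' -k4,4

# Without -t in sort: prepend the timestamp with awk, sort, strip it again
printf '%s\n' Abc_Def_ghi_091220101458.csv jkl_pqr_trt_081220101458.csv |
    awk -F'_' '{ print $4, $0 }' | sort | cut -d' ' -f2-
```

Both print the 08... file before the 09... file.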
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
Is there any way to sort a file in csh with the sort command by multiple fields, e.g. first by the second column and then by the first column?
Thanks (1 Reply)
Discussion started by: Takeeshe
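Since sort(1) is an external command, the shell in use makes no difference. A two-key sort (second column as the primary key, first column as the tie-breaker) can be sketched on sample data like this:

```shell
# Primary key: column 2; secondary key: column 1 (whitespace-delimited)
printf 'b 2\na 1\nb 1\na 2\n' | sort -k2,2 -k1,1
```

Rows with equal second columns come out ordered by their first column: `a 1`, `b 1`, `a 2`, `b 2`.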
2. Programming
Hi All,
I have a list of files in a directory, which look like:
42420031.1000
42420031.1001
42420031.396
42420031.402
42420031.403
42420031.404
42420031.405
42420031.406
42420031.407
42420031.408
42420031.409
Here when I do ls 42420031* | sort it gives the output as
... (3 Replies)
Discussion started by: sanj_eck
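Plain `sort` compares the suffixes lexically, so .1000 sorts before .396. Sorting numerically on the part after the dot gives the intended order; a sketch on a few of the names above:

```shell
# Numeric sort on the field after the dot
printf '%s\n' 42420031.1000 42420031.1001 42420031.396 42420031.402 |
    sort -t. -k2,2n
```

This prints .396 and .402 before .1000 and .1001.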
3. Shell Programming and Scripting
Hi Everyone,
# cat 1.pl
#!/usr/bin/perl
use strict;
use warnings;
my @test=("a","b","c","d");
print @test;
# ./1.pl
abcd
If I want to sort @test based on my own pattern so that the output is "cbda", how do I do it? As I know, with Perl sort I can use cmp and <=>, but how to do it with my own... (7 Replies)
Discussion started by: jimmy_y
4. Shell Programming and Scripting
Hi,
I am having trouble sorting one file based on another file. I tried the grep -f option and it failed. Basically what I have is two files that look like this:
File 1 (the list)
gh
aba
for
hmm
File 2 ( the file that needs to be sorted)
aba 2 4 6 7
for 2 4 7 4... (4 Replies)
Discussion started by: phil_heath
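grep -f can only filter, not reorder. One way to impose the list's order is to number the keys in File 1 and use those numbers as a sort key on File 2. A sketch with the sample data (list.txt and data.txt stand in for File 1 and File 2):

```shell
printf 'gh\naba\nfor\nhmm\n'                     > list.txt
printf 'aba 2 4 6 7\nfor 2 4 7 4\ngh 1 1 1 1\n'  > data.txt

# Tag each data row with its key's position in the list, sort, untag
awk 'NR==FNR { ord[$1] = NR; next } { print ord[$1], $0 }' list.txt data.txt |
    sort -k1,1n | cut -d' ' -f2-
```

The gh row comes out first because gh is first in the list, even though it is last in the data file.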
5. Shell Programming and Scripting
I have a file which has the following data :-
How can I sort the data in descending order?
My files may have 1 to 10000 numbers in the first column; I need to arrange them in descending order.
Thanks (2 Replies)
Discussion started by: lazydev
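For a numeric first column, a numeric reversed sort is usually all that is needed; a sketch:

```shell
# Descending numeric sort on the first column; plain -r without -n
# would put 3 before 10
printf '3 c\n10 a\n1 b\n' | sort -k1,1nr
```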
6. Shell Programming and Scripting
Hi,
I need to sort the content of files based on a specific value. An example as below.
Input1.txt
Col_1
SW_MH2_ST
ST_F72_9S
SW_MH3_S6
Col_2
SW_MH3_AS7
ST_S15_9CH
SW_MH3_AS8
SW_MH3_ST
Col_3
ST_M93_SZ
ST_C16_TC (12 Replies)
Discussion started by: redse171
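If the goal is to sort the entries under each Col_ header while keeping the headers in place (an assumption, since the excerpt is truncated), one sketch pipes each block through sort from awk. It relies on fflush(), which GNU awk, BSD awk and mawk all provide:

```shell
cat > Input1.txt <<'EOF'
Col_1
SW_MH2_ST
ST_F72_9S
SW_MH3_S6
Col_2
SW_MH3_AS7
ST_S15_9CH
EOF

awk '
    # On a header: flush the previous block, print the header, start anew
    /^Col_/ { if (cmd) close(cmd); print; fflush(); cmd = "sort"; next }
    { print | cmd }                 # block lines go through sort
    END { if (cmd) close(cmd) }     # flush the final block
' Input1.txt
```

close() waits for each sort to finish, so every block's sorted lines land between its own header and the next one.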
7. Shell Programming and Scripting
Hello, I have a series of files in sub-directories that I want to loop through, process and name according to the input filename and the various parameters I'm using to process the files. I have a number of each, for example file names like AG005574, AG004788, AG003854 and parameter values like... (2 Replies)
Discussion started by: bdeads
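A minimal sketch of that kind of loop, combining each input name with each parameter value to build the output name (the file names come from the post; the parameter values and the echo standing in for the real processing step are assumptions):

```shell
for f in AG005574 AG004788 AG003854; do
    for p in 1 2; do
        out="${f}_p${p}.txt"            # output name built from both
        echo "would process $f with p=$p into $out"
    done
done
```

Replacing the echo with the actual processing command, redirected to "$out", gives one output file per input/parameter combination.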
8. UNIX for Beginners Questions & Answers
Hi all, (5 Replies)
Discussion started by: KMusunuru
9. UNIX for Beginners Questions & Answers
Hello Unix experts:
I have a dir with a few files; I want to sort these files and write the output to some other file, but I need the filenames with the file path too
eg:
i have filenames like
010020001_S-FOR-Sort-SYEXC_20171218_094256_0004.txt
So I want to sort my files on the first 5 fields of... (2 Replies)
Discussion started by: gnnsprapa
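With full paths attached, the path itself becomes part of sort's first field. One sketch decorates each line with its basename, sorts on that, and strips it again (the paths below are made up; for names like the one above, the basename is effectively the first five underscore-separated fields):

```shell
# Prepend the basename as a sort key, sort on it, then strip it again
printf '%s\n' \
    /some/dir/010020001_S-FOR-Sort-SYEXC_20171218_094256_0004.txt \
    /some/dir/010020001_S-FOR-Sort-SYEXC_20171217_094255_0003.txt |
    awk -F/ '{ print $NF, $0 }' | sort -k1,1 | cut -d' ' -f2-
```

The full paths come out ordered by their basenames, with the earlier timestamp first.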
10. UNIX for Beginners Questions & Answers
Hi All,
I have an input file like this
Input file
7 sks/jsjssj/ddjd/hjdjd/hdhd/Q 10 0.5 13
dkdkd/djdjd/djdjd/djd/QB 01 0.5
ldld/dkd/jdf/fjfjf/fjf/Q 0.5
10 sjs/jsdd/djdkd/dhd/Q 01 0.5 21
kdkd/djdd/djdd/jdd/djd/QB 01 0.5
dkdld/djdjd/djd/Q 01 0.5
... (9 Replies)
Discussion started by: kshitij
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
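The arithmetic can be checked against the figures in the EXAMPLE section below: 40 matching prefix bits leave 160 - 40 = 120 bits of margin, and at about 1.94 bits per doubling that is roughly 61.86 doublings of the data set before collisions become expected. A quick sketch of the calculation:

```shell
awk 'BEGIN {
    margin = 160 - 40               # SHA-1 bits minus matching prefix bits
    printf "%d bits (%.2f doublings) remaining\n", margin, margin / 1.94
}'
```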
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)