It's a shame not to be able to do what I need myself, but I'm sure you can help.
Here is what I have in my log file:
Code:
New File: 95106 Jun 6 48 TAG__KSO__2012092_0.TAB
New File: 95106 Mar 26 48 TAG__KSM__2012020_0.TAB
New File: 95106 Mar 26 48 TAG__KSO__2012020_0.TAB
New File: 95106 May 10 48 TAG__SC___2012065_0.TAB
New File: 95106 Oct 20 48 TAG__SC___2012228_0.TAB
I would like to sort the file by the day of year in the filename, like this:
Code:
New File: 95106 Mar 26 48 TAG__KSM__2012020_0.TAB
New File: 95106 Mar 26 48 TAG__KSO__2012020_0.TAB
New File: 95106 May 10 48 TAG__SC___2012065_0.TAB
New File: 95106 Jun 6 48 TAG__KSO__2012092_0.TAB
New File: 95106 Oct 20 48 TAG__SC___2012228_0.TAB
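One way to sketch this is decorate-sort-undecorate: pull the YYYYDDD stamp out of each filename, sort numerically on it, then strip it again. This assumes every filename ends in `__YYYYDDD_N.TAB` as in the sample (the station codes have different numbers of underscores, so a plain `sort -t_ -k5` would not work); the log file name `files.log` is a placeholder.

```shell
# Recreate the sample log from the post (replace with your real log file)
cat > files.log <<'EOF'
New File: 95106 Jun 6 48 TAG__KSO__2012092_0.TAB
New File: 95106 Mar 26 48 TAG__KSM__2012020_0.TAB
New File: 95106 Mar 26 48 TAG__KSO__2012020_0.TAB
New File: 95106 May 10 48 TAG__SC___2012065_0.TAB
New File: 95106 Oct 20 48 TAG__SC___2012228_0.TAB
EOF
# Prepend the YYYYDDD stamp as a tab-separated key, sort on it, strip it
awk '{ key = $NF; sub(/.*__/, "", key); sub(/_.*/, "", key); print key "\t" $0 }' files.log |
    sort -k1,1n | cut -f2-
```

The greedy `sub(/.*__/, "", key)` deletes everything up to the last double underscore, which is what makes the uneven `KSO__` / `SC___` padding irrelevant.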
I have a date that looks like this:
2008/100:18:40:47.040
I need it to look like this:
2008 04 09 18 40 47 040
I have looked at datecalc, and it doesn't seem to take the day of year for a given input year and convert it into month and day. It also has to account... (2 Replies)
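A minimal sketch for this conversion, assuming GNU date (the `-d` option) and a POSIX shell: split the string with parameter expansion, then let `date` do the day-of-year and leap-year arithmetic (day N of the year is Jan 1 plus N-1 days).

```shell
d='2008/100:18:40:47.040'
year=${d%%/*}        # 2008
rest=${d#*/}         # 100:18:40:47.040
doy=${rest%%:*}      # 100
hms=${rest#*:}       # 18:40:47.040
ms=${hms##*.}        # 040
hms=${hms%.*}        # 18:40:47
# GNU date handles the leap-year math for "Jan 1 plus N-1 days"
printf '%s %s %s\n' \
    "$(date -d "$year-01-01 +$((doy - 1)) days" '+%Y %m %d')" \
    "$(printf '%s' "$hms" | tr ':' ' ')" \
    "$ms"
```

For 2008 (a leap year), day 100 lands on April 9, so this prints `2008 04 09 18 40 47 040`.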
Hi, I'm trying to concatenate a specific file from each day in a year/month/day folder structure using Bash or equivalent. The file structure ends up like this:
2009/01/01/products
2009/01/02/products
....
2009/12/31/products
The file I need is in products everyday and I need the script to... (3 Replies)
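A sketch for this layout, assuming the `YYYY/MM/DD/products` structure shown above: because the months and days are zero-padded, the glob's lexical expansion order is already date order, so a simple loop suffices. The output name `all_products_2009` is a placeholder.

```shell
# Concatenate each day's "products" file in date order
for f in 2009/*/*/products; do
    [ -f "$f" ] && cat "$f"
done > all_products_2009
```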
Hi,
I would like to get the day of year from a date given as input.
I know how to get it for the system date with date +%j,
but how do I get it from a date given as input? :confused:
Thanks (2 Replies)
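If GNU date is available (this is a GNU extension, not POSIX), `-d` parses an arbitrary input date, so the same `%j` format works:

```shell
# Day of year for an input date, not just the system date (GNU date)
date -d '2009-04-09' +%j    # prints 099 (zero-padded to three digits)
```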
Hello,
Thank you in advance for helping a newbie who is having great trouble with this simple task.
I'm allowed to copy one file remotely each night due to bandwidth restrictions.
A new file gets generated once a day, and I need to copy the previous day's file.
Here is what I'd like to do:... (1 Reply)
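A sketch of the nightly copy, assuming GNU date for the "yesterday" arithmetic; the host, user, paths, and the `report_YYYYMMDD.csv` naming scheme are all placeholders for whatever the real files look like. The `scp` is echoed rather than executed so the sketch is safe to run as-is.

```shell
# Compute yesterday's date stamp and copy that day's file
yesterday=$(date -d 'yesterday' +%Y%m%d)
echo scp "user@remotehost:/data/report_${yesterday}.csv" /local/archive/
```

Dropped into a nightly cron job (with the `echo` removed and key-based ssh auth set up), this copies exactly one file per night.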
Hi,
How can I convert day-of-year values in (yy,doy) format to normally formatted (dd.mm.yyyy) strings, all of them with awk or awk's system() function?
in_file.txt
---------
12,043
12,044
12,045
12,046
out_file.txt
----------
12.02.2012
13.02.2012
14.02.2012
15.02.2012
imagine... (5 Replies)
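A sketch in plain POSIX awk (no gawk `mktime`/`strftime` extensions needed), assuming the two-digit years mean 20yy: compute the month lengths for the year, then walk the day-of-year down month by month.

```shell
# Recreate the sample input from the post, then convert
printf '12,043\n12,044\n12,045\n12,046\n' > in_file.txt
awk -F, '{
    year = 2000 + $1
    leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
    split("31 " (leap ? 29 : 28) " 31 30 31 30 31 31 30 31 30 31", mlen, " ")
    day = $2 + 0; m = 1                       # $2+0 strips the leading zero
    while (day > mlen[m]) { day -= mlen[m]; m++ }
    printf "%02d.%02d.%04d\n", day, m, year
}' in_file.txt
```

For `12,043` this yields 12.02.2012 (2012 is a leap year, but day 43 is still in February either way), matching the expected output above.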
hello,
I have many files called day001, day002, day003, and I want to rename them to day20070101, day20070102, etc.
I need to do this for several years, handling leap years as well.
What is the best way to do it?
Thank you. (1 Reply)
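A sketch, assuming GNU date and that each batch of `dayNNN` files belongs to one known year (set `year` per batch; `date` then handles leap years automatically). A few sample files are created first so the loop has something to rename.

```shell
touch day001 day032 day365          # sample files for demonstration
year=2007
for f in day[0-9][0-9][0-9]; do
    [ -e "$f" ] || continue                       # skip unmatched glob
    doy=$(expr "${f#day}" + 0)                    # strip leading zeros safely
    mv "$f" "day$(date -d "$year-01-01 +$((doy - 1)) days" +%Y%m%d)"
done
```

`expr` forces decimal interpretation, so `032` does not get read as octal; day 1 maps to Jan 1 because day N of the year is Jan 1 plus N-1 days.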