Hi,
I need to extract the start time value under the '<LogEvent ID="Timer Start">' tag from a file with the following pattern. There are other LogEvent IDs listed in the file as well, which makes it harder to extract the specific start time I need.
.... (7 Replies)
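Without the full file pattern it is hard to be exact, but a minimal awk sketch, assuming (hypothetically) that the start time sits on the line immediately after the tag inside its own element, would be:
  # print the line following the Timer Start tag, with XML markup and leading blanks stripped
  awk '/<LogEvent ID="Timer Start">/ { getline; gsub(/<[^>]*>|^[ \t]+/, ""); print }' logfile.xml
If the value sits on the same line as the tag instead, a substitution on that one matching line would be needed.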
Hi, I have a script that starts a process and appends the process-related logs to a log file. Every line in the log file starts with a date in the format date +"%Y %b %d %H:%M:%S".
So, in the script, before I start the process, I store the date as DATE=`date +"%Y... (5 Replies)
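The post is cut off, but if the idea is to pull out only the log lines written after the process started, one rough sketch (assuming at least one line is stamped with that exact captured timestamp) is:
  DATE=$(date +"%Y %b %d %H:%M:%S")   # capture the start time before launching
  start_the_process                   # hypothetical placeholder for the real command
  # later: print every log line from the first one stamped with $DATE onward
  awk -v start="$DATE" 'index($0, start) == 1 { found = 1 } found' process.log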
I have a tab-delimited file that I need to extract data from into a file with specific field specs. Each field has to be a fixed number of characters. So the name field (from the delimited file) might have only 15 characters but needs to be 25 (in the new file), so I need to insert spaces... (5 Replies)
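awk's printf handles that kind of padding; a short sketch, assuming (hypothetically) that field 1 is the 25-character name and field 2 a 10-character field, with the widths adjusted to the real spec:
  # %-25.25s left-justifies, pads with spaces to 25 characters, and truncates anything longer
  awk -F'\t' '{ printf "%-25.25s%-10.10s\n", $1, $2 }' delimited_file > fixed_width_file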
My input:
Data name: ABC001
Data length: 1000
Detail info
Data Direction Start_time End_time Length
1 forward 10 100 90
1 forward 15 200 185
2 reverse 50 500 450
Data name: XFG110
Data length: 100
Detail info
Data Direction Start_time End_time Length
1 forward 50 100 50 ... (11 Replies)
Input file:
#abc_1
SAASFASFGGDSGDSGDSGSDGSDGSDGSDGSDGSDGSDGDS
Output file:
FASFGGDSGDS
I just want to print the read from position 5 through position 15 of the data.
Below is the code I tried, but it fails to produce my desired output:
grep -v '#' input_file | awk... (5 Replies)
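For fixed character positions, cut (or awk's substr) does this directly; positions 5 through 15 are 11 characters:
  grep -v '^#' input_file | cut -c5-15
  # or, in awk alone:
  awk '!/^#/ { print substr($0, 5, 11) }' input_file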
Hello everybody!
I am quite new here and hope you can help me.
Using an awk script I am trying to extract data from several files. The structure of the input files is as follows:
TimeStep parameter1 parameter2 parameter3 parameter4
e.g.
1 X Y Z L
1 D H Z I
1 H Y E W
2 D H G F
2 R... (2 Replies)
Bash scripting beginner here...
I have many folders, each folder representing one subject. Not all subjects have all the required files, so I need to somehow cycle through all the data and then extract the data only from subjects who have no files missing. I tried to output the ls command, but... (4 Replies)
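One rough bash sketch for the skip-incomplete-subjects part, assuming hypothetical required file names (replace them with the real ones):
  required="anat.nii func.nii events.tsv"   # hypothetical file names
  for subject in */; do
      complete=yes
      for f in $required; do
          [ -f "$subject$f" ] || complete=no
      done
      if [ "$complete" = yes ]; then
          echo "extracting from $subject"   # real extraction commands go here
      fi
  done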
Hi
This is my first post and I'm just a beginner. So please be nice to me.
I have a couple of html files where a pattern beginning with "http://www.site.com" and ending with "/resource.dat" is present on every 241st line. How do I extract this to a new text file?
I have tried sed -n 241,241p... (13 Replies)
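grep -o prints only the matched text, so one sketch (assuming the URL contains no quote characters, and using -h to drop filename prefixes when several files are searched) is:
  grep -oh 'http://www\.site\.com[^"]*/resource\.dat' *.html > urls.txt
  # or, to look only at every 241st line of one file:
  awk 'NR % 241 == 0' file.html | grep -o 'http://www\.site\.com[^"]*/resource\.dat'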
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
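The figures in the first example above fit together; a quick awk check of the arithmetic (a sketch using the rounded numbers the tool prints, so the last figure comes out slightly below the reported 4.19338e+18):
  # 160-bit SHA-1 minus 40 matching bits leaves 120 bits; at 1.94 bits per
  # doubling that is about 61.86 doublings, i.e. the repository could grow 2^61.86 times
  awk 'BEGIN { rem = 160 - 40; d = rem / 1.94; printf "%d bits, %.2f doublings, %.5e times larger\n", rem, d, 2^d }'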
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.