01-26-2013
Don't know how I spoiled that: append a } to the line NR>1...
9 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I am writing a script that reads in a file, and I just want it to print each element on a new line. Here is my code and the data that I want to read in:
#!/usr/bin/perl
use strict;
use CGI qw(:standard);
use CGI qw(:cgi);
my $data_file = "/tmp/results.txt";
my $configuration;
my... (3 Replies)
Discussion started by: nmeliasp
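The post's code is cut off, but splitting a file's whitespace-separated elements onto separate lines doesn't need Perl. A minimal shell sketch, assuming /tmp/results.txt (the path from the post) holds space- or tab-separated values:

```shell
# Turn runs of spaces/tabs into newlines, one element per line.
printf 'aaa bbb\tccc\n' > /tmp/results.txt   # stand-in sample data
tr -s ' \t' '\n' < /tmp/results.txt
# -> aaa
#    bbb
#    ccc
```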
2. Shell Programming and Scripting
Hi All,
I have been trying to re-arrange the below data using AWK or Perl.
Can anybody help me?
Thanks in advance.
Input:
111 222
333 444
AAA BBB
CCC DDD
555 666
777 888
EEE FFF
GGG HHH
Output: (6 Replies)
Discussion started by: Raynon
3. Shell Programming and Scripting
Hi
I have to convert the data in a file
*******
01-20-09 11:14AM 60928 ABC Valuation-2009.xls
01-20-09 11:16AM 55808 DEF GHI Equation-2009.xls
01-20-09 11:02AM 52736 ABC DF Valuation-2009.xls
01-20-09 11:06AM 89600 THE... (6 Replies)
Discussion started by: shekhar_v4
4. UNIX for Dummies Questions & Answers
How would I get this output to look
$ cat newfile
13114
84652
84148
LIKE THIS?:
13114,84652,84148
sed, cut, or awk?
syntax? (2 Replies)
Discussion started by: ddurden7
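For the record, a sketch of one way to get that output (newfile as in the post):

```shell
# Sample data matching the post.
printf '13114\n84652\n84148\n' > newfile

# paste -s serializes all lines into one, -d, joins them with commas.
paste -sd, newfile
# -> 13114,84652,84148

# awk equivalent: print a comma before every line except the first.
awk 'NR > 1 { printf "," } { printf "%s", $0 } END { print "" }' newfile
```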
5. Shell Programming and Scripting
I want to check whether any column's data has a +, -, or = prefixed to it, and if so convert it so that Excel does not read it as a formula.
echo "$DATA" | awk 'BEGIN { OFS="," } -F" " {print $1,$2,$3,$4,$5,$6,$7,$8.$9,$10,$11,$12}' (4 Replies)
Discussion started by: dinjo_jo
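For what it's worth, the quoted awk won't run as written (-F belongs outside the program, and $8.$9 concatenates rather than separates). A hedged sketch of one common fix, prefixing risky fields with an apostrophe so a spreadsheet treats them as text rather than a formula:

```shell
# Prefix any field beginning with +, - or = with an apostrophe (\047),
# then emit the row as CSV. Sample input stands in for "$DATA".
echo '= 2+2 -5 +7' | awk 'BEGIN { OFS="," }
  { for (i = 1; i <= NF; i++) if ($i ~ /^[+=-]/) $i = "\047" $i; print }'
# -> '=,2+2,'-5,'+7
```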
6. Shell Programming and Scripting
Hello everybody,
I have a file containing some statistics regarding CPU usage. The file has this syntax :
Fri Jul 16 14:27:16 EEST 2010
Cpu(s): 15.2%us, 1.4%sy, 0.0%ni, 82.3%id, 0.1%wa, 0.0%hi, 0.9%si, 0.0%st
Fri Jul 16 15:02:17 EEST 2010
Cpu(s): 15.3%us, 1.4%sy, 0.0%ni, 82.3%id, ... (9 Replies)
Discussion started by: spiriad
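A sketch of one way to pair each timestamp with a value from the following Cpu(s) line, assuming the alternating format shown (here extracting the %us figure):

```shell
# Remember each non-Cpu line as the timestamp; when a Cpu(s) line
# arrives, print the remembered timestamp with the user-CPU percentage.
printf 'Fri Jul 16 14:27:16 EEST 2010\nCpu(s): 15.2%%us, 1.4%%sy\n' |
awk '/^Cpu\(s\)/ { split($2, a, "%"); print ts, a[1] }
     !/^Cpu/    { ts = $0 }'
# -> Fri Jul 16 14:27:16 EEST 2010 15.2
```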
7. Shell Programming and Scripting
Hi,
I have data coming in like below. Not all data is like that; these are the problem records that cause the ETL load to fail. Can you please help me with combining these broken records?
001800018000000guyMMAAY~acct name~acct type~~"address part 1
address... (8 Replies)
Discussion started by: varman
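Assuming the records break because a quoted field contains embedded newlines, one common repair is to keep appending lines until the buffer holds an even number of double quotes (a sketch, not tailored to the full file):

```shell
# Accumulate lines; flush only when the quote count is even, i.e. every
# quoted field has been closed. A space rejoins the broken pieces.
printf 'a~"part 1\npart 2"~b\nc~d\n' |
awk '{
  buf = (buf == "") ? $0 : buf " " $0
  if (gsub(/"/, "\"", buf) % 2 == 0) { print buf; buf = "" }
}'
# -> a~"part 1 part 2"~b
#    c~d
```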
8. UNIX for Advanced & Expert Users
HI
I want to make it a single row if it starts with a brace, i.e. {. Any ideas?
{1:XXX2460275191}{2:SEC00687921131112201641N}{3:{58910}}{4:
:R:GENL
:C::xx//xx1
:20C::yy//yy1
:2S:xxT}
{1:XXX2460275190}{2:SEC00687921131112201641y}{3:{58911}}{4:
:z:GENL
:v::xx//xx1
:10C::yy//yy1
:4S:xxT
... (2 Replies)
Discussion started by: mohan705
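A hedged awk sketch of one approach: start a new output row whenever a line begins with {, and append every other line to the current row (sample input abbreviates the records above):

```shell
# Lines starting with "{" open a new record; all others are
# continuations. The END block flushes the last record.
printf '{1:XXX}{4:\n:R:GENL\n:2S:xxT}\n{1:YYY}{4:\n:z:GENL\n' |
awk '/^\{/ { if (rec != "") print rec; rec = $0; next }
           { rec = rec $0 }
     END   { if (rec != "") print rec }'
# -> {1:XXX}{4::R:GENL:2S:xxT}
#    {1:YYY}{4::z:GENL
```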
9. Shell Programming and Scripting
Need assistance with data extraction using awk.
Below is the format; I would like to extract the data in another format.
-------------------------------------------------------------------------------------------------
Minimum Temperature (deg F )
DAY 1 2 3 4 5 6 7 8 9 10 11... (4 Replies)
Discussion started by: ajayram_arya
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
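As a sketch, such a check could be scripted (hypothetical script; it assumes the first all-digit line of the output is the matching-prefix-bit count, as in the EXAMPLE below):

```shell
#!/bin/sh
# Hypothetical cron job: warn when SHA-1 headroom drops below a threshold.
WARN_AT=100   # arbitrary threshold, in bits of remaining margin

# Take the first line of output that is nothing but digits.
bits=$(bup margin | awk '/^[0-9]+$/ { print $1; exit }')
remaining=$((160 - bits))   # SHA-1 hashes have 160 bits total

if [ "$remaining" -lt "$WARN_AT" ]; then
    echo "bup margin: only $remaining bits of SHA-1 margin remain" >&2
fi
```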
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)