Hello,
I need help converting flat-file data to HTML table format.
I generate a flat file every day and want to convert it into an HTML table.
The format of my file is:
version host Total YRS NO APPS PSD
10 Sun 30 2 4 6 7
and flat... (11 Replies)
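A minimal awk sketch for this kind of conversion, assuming whitespace-separated columns with the header on the first line (the filename flatfile.txt is an assumption):

awk 'BEGIN { print "<table>" }
NR == 1 { printf "<tr>"; for (i = 1; i <= NF; i++) printf "<th>%s</th>", $i; print "</tr>"; next }
{ printf "<tr>"; for (i = 1; i <= NF; i++) printf "<td>%s</td>", $i; print "</tr>" }
END { print "</table>" }' flatfile.txt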
Please help.
I need a script which will do the following:
Search a fixed-width file, go to position (25,2), which means the 25th and 26th positions, and find whether any character there is in lower case.
For example, positions 25,2 can be (9T) or (9w) or (Ww) or (wW)....The two positions can be numeric or alpha...no... (13 Replies)
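A one-line awk sketch, assuming the goal is to print every line whose 25th or 26th character is a lower-case letter (the filename fixed.txt is an assumption):

awk 'substr($0, 25, 2) ~ /[a-z]/' fixed.txt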
Hi guys, can someone help me convert the following?
I have a flat text file with several thousand lines which I need to convert to a CSV. It has a consistent format, but basically I want, every time it hits "txt", to start a new line with the subsequent lines comma-delimited. For example:
... (6 Replies)
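A minimal awk sketch, assuming each record begins with a line containing "txt" and every following line belongs to that record (input.txt is an assumption):

awk '/txt/ { if (rec != "") print rec; rec = $0; next }
{ rec = rec "," $0 }
END { if (rec != "") print rec }' input.txt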
Hi... thanks for allowing me to start a discussion.
I am collecting USB usage details for all users and converting them into CSV files so that I can export them into a database.
The input text file is as follows:
USB History Dump
by nabiy (c)2008
(1) --- Kingston DataTraveler 130 USB... (2 Replies)
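A sed sketch, assuming each device entry starts with a line like "(1) --- device name" and you want the entry number and the name as two CSV fields (usbdump.txt is an assumption):

sed -n 's/^(\([0-9]*\)) --- \(.*\)/\1,\2/p' usbdump.txt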
Hi, could someone help me convert a CSV file (with double-quoted strings) to a pipe-delimited file?
Here you go with the sample data:
1,Friends,"$3.99 per 1,000 listings",8158
Here "$3.99 per 1,000 listings" should be a single field.
Thanks,
Ram (8 Replies)
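A gawk sketch using FPAT (gawk 4.0 or later), which treats a quoted string as a single field; the gsub strips the quotes after the fields are rejoined with | (input.csv is an assumption):

gawk -v OFS='|' 'BEGIN { FPAT = "([^,]*)|(\"[^\"]*\")" }
{ $1 = $1; gsub(/"/, ""); print }' input.csv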
hi
I have written a script for reading a CSV file and creating a flat file; please suggest whether this script can be optimized.
#----------------
FILENAME="$1"
SCRIPT=$(basename "$0")
#-----------------------------------------//
function usage
{
printf "\nUSAGE: %s file_to_process\n" "$SCRIPT"... (3 Replies)
Hi,
I have a simple text file with contents as below:
12345678900 971,76 4234560890
22345678900 5971,72 5234560990
32345678900 71,12 6234560190
The new CSV file should be like:
Column1;Column2;Column3;Column4;Column5
123456;78900;971,76;423456;0890... (9 Replies)
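A minimal awk sketch, assuming the first and third columns always split after their sixth digit, as in the sample (input.txt and the header names come from the example):

awk 'BEGIN { print "Column1;Column2;Column3;Column4;Column5" }
{ print substr($1, 1, 6) ";" substr($1, 7) ";" $2 ";" substr($3, 1, 6) ";" substr($3, 7) }' input.txt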
Hi all,
I need to find a way to convert Excel files into CSV or text files from the Linux command line. The reason is I have hundreds of files to convert. Another complication is that I need to delete the first 5 lines of each Excel file before conversion.
So for instance:
input.xls
description of... (6 Replies)
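A sketch assuming LibreOffice is installed; it converts each .xls headlessly and then drops the first 5 rows with tail (stripping rows after conversion, since editing the .xls itself would need a spreadsheet tool; the .trimmed.csv name is an assumption):

for f in *.xls; do
    libreoffice --headless --convert-to csv "$f"
    tail -n +6 "${f%.xls}.csv" > "${f%.xls}.trimmed.csv"
done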
Hi, I have a file like this:
a=1
b=2
c=3
a=4
b=2
d=3
a=3
c=4
How can I change this to CSV format?
a,b,c,d
1,2,3,,
4,2,,3
3,,4,,
Please use code tags next time for your code and data. Thanks (10 Replies)
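A minimal awk sketch, assuming a new record starts whenever a key repeats, that the keys are exactly a, b, c, d, and that missing keys print as empty fields (input.txt is an assumption):

awk -F= '
BEGIN { print "a,b,c,d" }
$1 in rec { flush() }            # key seen already: previous record is complete
{ rec[$1] = $2 }
END { flush() }
function flush(    k) {
    print rec["a"] "," rec["b"] "," rec["c"] "," rec["d"]
    for (k in rec) delete rec[k]
}' input.txt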
Hi All,
I have a CSV file which is comma-separated. I need to convert it to a flat file with a preferred column length:
country,id
Australia,1234
Africa,12399999
Expected output
country id
Australia 1234
Africa 12399999
The flat file should use a predefined length for the respective... (8 Replies)
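A minimal sketch, assuming two columns padded to illustrative widths of 15 and 10; adjust the printf widths to the required layout (input.csv is an assumption):

awk -F, '{ printf "%-15s%-10s\n", $1, $2 }' input.csv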
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                    General Commands Manual                    bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-                                                        bup-margin(1)