".dat" is a commonly used extension meaning "data"; it can be generated by many programs, and although I've seen many .dat file headers, I've never seen that header at the start.
It doesn't have an ASCII FourCC at the start, so that gives me no clues. I would guess it's just data from the start, since the values read as longs or shorts are very high, unless that's a record count or size of about 43k; I'm just guessing now.
Can you provide the name of the application that generated the data?
Is it possible the data is compressed, with the headers omitted (i.e. a raw gzip stream)?
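A quick way to test the compression guess (a sketch; the file name mystery.dat is a placeholder): a gzip stream starts with the magic bytes 1f 8b, which od can show without parsing anything else.
# dump the first four bytes as hex; 1f 8b at the front suggests a gzip stream
od -An -tx1 -N4 mystery.dat
# 'file' also recognises gzip and many other container formats
file mystery.dat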
Hi.
I want to attach a .xls or .dat file when sending mail through Unix.
I have come across different options for sending attachments, but all of those embed the content in the mail body. I want the attachment to be sent as a proper attachment.
Please help me out.
regards
Diwakar (1 Reply)
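The classic answer on a bare Unix box (a sketch, assuming uuencode and mailx are installed; the file name, subject, and address are placeholders) is to uuencode the file so the receiving client decodes it as an attachment:
# uuencode wraps the binary so most mail clients present it as an attachment
uuencode report.xls report.xls | mailx -s "Daily report" user@example.com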
Dear All,
I want to decode one of the fields in the file.
Input file:
9393939393|999|2009-02-20 00:00:01|2||4587|2007-02-28 00:00:01|0
9393939393|2001|2009-02-20 00:00:01|2||4587|2007-02-28 00:00:01|0
9393939393|1500|2009-02-20 00:00:01|2||4587| 2007-02-28 00:00:01|0... (1 Reply)
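The post doesn't say which field, but for pipe-delimited records like these, awk can isolate any column (a sketch; field 2 and the file name input.dat are assumptions):
# -F'|' sets the field separator; $2 is the second pipe-delimited column
awk -F'|' '{ print $2 }' input.dat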
Hello Gurus,
We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please share your suggestions.
Problem Definition:
A few of the load processes of our Finance Application hit the issue in UNIX when they use a shell script containing the below... (19 Replies)
Hi Experts,
Any idea how to decode a file handle in HP-UX? I am getting the following error continuously on my HP-UX 11.31 box :mad:
Apr 26 07:15:00 host62 su: + tty?? root-bb
Apr 26 07:15:00 host62 su: + tty?? root-abcadm
Apr 26 07:15:01 host62 vmunix: NFS write error on host peq9vs:... (1 Reply)
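On most servers the NFS file handle embeds the filesystem id and the file's inode number, so once the handle has been decoded the offending file can usually be located by inode (a sketch; the mount point /data and the inode number 12345 are assumptions, and the handle layout varies by OS and filesystem):
# search the suspect filesystem for that inode, without crossing mount points
find /data -xdev -inum 12345 -print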
Hi All
I have a .dat file in which the values are separated by ". I wish to identify all records whose field 14 is not '01-APR-2013' and then copy those records to a new file. Can anyone suggest the UNIX command required?
Thanks in advance
Andy (2 Replies)
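A sketch with awk, assuming the quote character really is the delimiter and taking the file names input.dat and mismatches.dat as placeholders:
# keep only the records whose 14th field is not 01-APR-2013
awk -F'"' '$14 != "01-APR-2013"' input.dat > mismatches.dat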
Hi All,
I am thinking about how to speed up checking each .dat file's record count against the count in its CTL file.
There are about 300 to 400 directories that contain both DAT and CTL files.
The DAT files contain the flat-file records.
The CTL file is the reference check file for the... (3 Replies)
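A minimal sketch of that check, assuming each .ctl holds the expected record count on its first line and sits beside a .dat of the same base name under /data (all of these layout details are assumptions):
# compare each DAT's line count with the count recorded in its CTL file
for ctl in /data/*/*.ctl; do
    dat=${ctl%.ctl}.dat
    expected=$(head -1 "$ctl")
    actual=$(wc -l < "$dat")
    [ "$expected" -eq "$actual" ] || echo "mismatch: $dat ($actual vs $expected)"
done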
hdr=$(cut -c1 "$path$file" | head -1)   # extract header "H"
trl=$(cut -c1 "$path$file" | tail -1)   # extract trailer "T"
SplitFile=$(cut -c50-250 "$path$newfile" | sed 's/ *$//' | head -1)   # trim trailing whitespace and extract the table name
if [ "$hdr" = "H" ]; then   # start loop if it is a header
    while read i   # read file
    do... (4 Replies)
Hi,
I have this file:
<?xml version="1.0" encoding="UTF-8"?>
<OnDemand xmlns="http://xsd.telecomitalia.it/Schema/crmws.entity.OnDemand"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xsd.telecomitalia.it/Schema/crmws.entity.OnDemand... (2 Replies)
Discussion started by: Francesco_IT
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
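The same idea can be reproduced outside bup on any sorted list of hex object ids, because the longest prefix shared by any two ids in a sorted list always occurs between neighbours (a rough sketch; git is used here only as a convenient source of SHA-1 ids, and counting whole hex digits rounds the answer down to a multiple of 4 bits):
git rev-list --objects --all | cut -c1-40 | sort -u |
awk '{ if (NR > 1) {                        # compare each id with its predecessor
           n = 0
           while (n < 40 && substr($0, n+1, 1) == substr(prev, n+1, 1)) n++
           if (n > max) max = n             # track the longest shared prefix
       }
       prev = $0 }
     END { print max * 4, "matching prefix bits (approximate)" }'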
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.