Full Discussion: Decode windows .dat file
Post 302966128 by wisecracker on Monday 8th of February 2016 01:21:28 PM
I agree with gull04...

A .dat extension is meaningless unless you know what app created it, and even then one might not be able to decode it easily because the file is _binary_...

The fact that it was sent from a Windows server is immaterial; it is the app that created it that is important...

So please give us more of a hex dump to look at, AND also use code tags as per the forum rules...
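For instance (a minimal sketch; file.dat stands in for the real filename), a useful first look at an unknown binary is:

    file file.dat                            # let file(1) guess the format from its magic number
    od -A x -t x1z -v file.dat | head -32    # portable hex dump of the first 512 bytes
    # or, where hexdump(1) is available:
    hexdump -C file.dat | head -32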

TIA.
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

How to attach an excel file/ dat file thru unix mails

Hi. I want to attach a .xls or .dat file while sending mail through UNIX. I have come across different options for sending attachments, but all of those embed the content in the body of the mail. I want the attachment to be sent as such. Please help me out. regards Diwakar (1 Reply)
Discussion started by: diwakar82
1 Reply
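A common way to get a real attachment rather than inline content (a sketch; the address, subject, and filename are placeholders) is to uuencode the file, or to use a MIME-aware mailer such as mutt:

    uuencode report.xls report.xls | mailx -s "Daily report" user@example.com
    # or, where mutt is installed, send a proper MIME attachment:
    mutt -s "Daily report" -a report.xls -- user@example.com < /dev/null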

2. Shell Programming and Scripting

DECODE file field is required in Bash

Dear All, I want to decode one of the file's fields. Input file:
9393939393|999|2009-02-20 00:00:01|2||4587|2007-02-28 00:00:01|0
9393939393|2001|2009-02-20 00:00:01|2||4587|2007-02-28 00:00:01|0
9393939393|1500|2009-02-20 00:00:01|2||4587|2007-02-28 00:00:01|0... (1 Reply)
Discussion started by: hanu_oracle
1 Reply
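Since the fields are pipe-separated, one way to decode a field is a lookup table in awk; a sketch, where the code-to-label mapping is invented purely for illustration:

    awk -F'|' 'BEGIN { OFS = "|"
                       map["999"]  = "DEFAULT"    # hypothetical mapping
                       map["2001"] = "PLAN_A"
                       map["1500"] = "PLAN_B" }
               { if ($2 in map) $2 = map[$2]; print }' input.dat > decoded.dat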

3. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello Gurus, We are facing a performance issue in UNIX. If anyone has faced this kind of issue in the past, please share your suggestions. Problem definition: a few of the load processes of our Finance Application face an issue in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA
19 Replies
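The script itself is truncated above, but the usual culprit in cases like this is a "while read" loop that forks external commands for every line; rewriting the per-line work as a single awk pass (a sketch with hypothetical field logic) often cuts the run time dramatically:

    # slow: one iteration, and often one fork, per line
    while IFS='|' read -r a b c rest; do
        echo "$a,$c" >> out.dat
    done < big.txt

    # fast: one process scans the whole file
    awk -F'|' '{ print $1 "," $3 }' big.txt > out.dat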

4. Red Hat

How to view .dat file?

What is the command that can be used to open or view a .dat file in Linux? I am unable to read the contents of the .dat file. (7 Replies)
Discussion started by: Rupaa
7 Replies
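There is no single command, because .dat is just a name; the usual sequence (a sketch; data.dat is a placeholder) is to identify the type first and then pick a viewer:

    file data.dat              # guess the format from magic numbers
    strings data.dat | less    # extract any printable text
    od -c data.dat | less      # raw byte-by-byte view if it is binary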

5. UNIX for Advanced & Expert Users

How to decode nfs file handle in HP-UX?

Hi Experts, Any idea how to decode an NFS file handle in HP-UX? I am getting the following error continuously in my HP-UX 11.31 box:
Apr 26 07:15:00 host62 su: + tty?? root-bb
Apr 26 07:15:00 host62 su: + tty?? root-abcadm
Apr 26 07:15:01 host62 vmunix: NFS write error on host peq9vs:... (1 Reply)
Discussion started by: vipinable
1 Reply
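On most systems an NFS file handle embeds the device and inode of the file, so the practical route (a sketch; the handle layout is filesystem-specific on HP-UX and the inode value here is hypothetical) is to extract the inode and then locate the file on the exporting host:

    # on the NFS server, once the inode has been decoded from the handle:
    find /exported/fs -xdev -inum 12345 2>/dev/null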

6. UNIX for Dummies Questions & Answers

ID incorrect field values in dat file and output to new file

Hi All, I have a .dat file; the values are separated by ". I wish to identify all field values in field 14 that are not '01-APR-2013' and then copy those records to a new file. Can anyone suggest the UNIX command required? Thanks in advance. Andy (2 Replies)
Discussion started by: aurum1313
2 Replies
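Assuming the fields are delimiter-separated (the delimiter character is garbled in the quote above, so '|' is used here as a stand-in), awk can do the whole job in one pass:

    awk -F'|' '$14 != "01-APR-2013"' input.dat > bad_dates.dat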

7. Shell Programming and Scripting

FASTEN count line of dat file and compare with the CTRL file

Hi All, I am thinking about how to speed up checking each DAT file's record count against the count in the CTRL file. There are about 300 to 400 directories that contain both DAT and CTL files. The DAT files contain all the flat-file records; the CTL is the reference check file for the... (3 Replies)
Discussion started by: ckwan
3 Replies
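One fork-light approach (a sketch; the /data path is a placeholder, and it assumes the expected record count is the first line of each CTL file, which may not match the real layout):

    for ctl in /data/*/*.CTL; do
        dat=${ctl%.CTL}.DAT
        expected=$(head -1 "$ctl")    # assumed: count stored on line 1 of the CTL
        actual=$(wc -l < "$dat")
        [ "$expected" -eq "$actual" ] || echo "MISMATCH: $dat has $actual, CTL says $expected"
    done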

8. Shell Programming and Scripting

Execution of loop :Splitting a single file into multiple .dat file

hdr=$(cut -c1 "$path$file" | head -1)    # extract header "H"
trl=$(cut -c1 "$path$file" | tail -1)    # extract trailer "T"
SplitFile=$(cut -c50-250 "$path$newfile" | sed 's/ *$//' | head -1)    # trim trailing whitespace and extract table name
if [ "$hdr" = "H" ]; then    # start the loop only if there is a header
while read -r I    # read file
do... (4 Replies)
Discussion started by: SwagatikaP1
4 Replies

9. Shell Programming and Scripting

How to use 'ls' command to list files like *.dat, not *.*.dat?

How to use 'ls' command to list files like *.dat, not *.*.dat (5 Replies)
Discussion started by: pmcginni777
5 Replies
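Plain globbing cannot say "no second dot", but bash's extended globbing or a grep filter can (a sketch):

    shopt -s extglob    # bash only
    ls !(*.*).dat       # names with no dot before the .dat suffix
    # portable alternative:
    ls | grep -E '^[^.]+\.dat$'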

10. UNIX for Beginners Questions & Answers

Decode a file

hi i have this file:
<?xml version="1.0" encoding="UTF-8"?>
<OnDemand xmlns="http://xsd.telecomitalia.it/Schema/crmws.entity.OnDemand" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xsd.telecomitalia.it/Schema/crmws.entity.OnDemand... (2 Replies)
Discussion started by: Francesco_IT
2 Replies
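If "decode" just means making the XML readable or pulling values out of it, xmllint (part of libxml2) is the stock tool; a minimal sketch, assuming the file is named response.xml:

    xmllint --format response.xml | less                           # re-indent for human reading
    xmllint --xpath '//*[local-name()="OnDemand"]' response.xml    # extract one element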