05-19-2010
I've had to create a differential list before for a similar task.
Records 1 to 100 would be sent, followed by records 90 to 300, then 250 onward.
Each time, I would keep a list of the last N lines captured; in my case, N = 5 was sufficient, but you may need more or fewer. On the next batch, I would search for those last captured lines and process everything after them. Upon completion, I would save a new 'last N lines' and repeat the next time 'round.
Ultimately, though, the real solution is to fix the distribution method so the batches are sent consistently.
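The overlap-trimming approach described above can be sketched roughly as follows. This is a minimal illustration in Python, not the poster's actual code; the function name, record format, and N = 5 tail size are assumptions for the example.

```python
def new_records(batch, last_seen):
    """Return only the records in `batch` that come after the
    previously captured tail `last_seen` (the last N lines of the
    prior batch). If the tail isn't found, assume no overlap and
    treat the whole batch as new."""
    if not last_seen:
        return list(batch)
    span = len(last_seen)
    # Scan the new batch for the saved tail sequence.
    for i in range(len(batch) - span + 1):
        if batch[i:i + span] == last_seen:
            # Overlap found: everything after it is new.
            return batch[i + span:]
    return list(batch)

# Usage: after processing a batch, remember its tail for next time:
#   last_seen = processed_batch[-5:]
```

With batches 1-100 and 90-300, saving the last 5 lines of the first batch lets the second call skip records 90-100 and return only 101-300.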