Full Discussion: Block processing using awk
Post 302756587 by RudiC, Wednesday 16 January 2013, 06:04 AM
For the duration you need proper date/time arithmetic that takes into account, e.g., intervals crossing midnight. There are quite a few threads on this available; please search this forum. (A minimal sketch of one way to do it follows the example output below.)
However, try this:
Code:
awk 'BEGIN     {print "ID|Start_Time|End_Time|Duration"}
     /START/   {st=$2}
     /END for/ {en=$2; ID=$NF; print ID"|"st"|"en"|"0; st=en=ID=""}
    ' file
ID|Start_Time|End_Time|Duration
123|18:07:50|18:07:52|0
124|18:07:52|18:07:53|0

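A minimal sketch of that duration arithmetic, extending the script above (my own addition, not part of the original post). It assumes the timestamp is always the second field in HH:MM:SS form, reports the duration in seconds, and treats a negative difference as a single midnight crossing:
Code:
awk 'function secs(t,  a) {                 # HH:MM:SS -> seconds since midnight
         split(t, a, ":")
         return a[1]*3600 + a[2]*60 + a[3]
     }
     BEGIN     {print "ID|Start_Time|End_Time|Duration_s"}
     /START/   {st=$2}
     /END for/ {d = secs($2) - secs(st)
                if (d < 0) d += 24*3600     # assume one midnight crossing
                print $NF"|"st"|"$2"|"d; st=""}
    ' file

With the sample data this would print 123|18:07:50|18:07:52|2 and 124|18:07:52|18:07:53|1.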

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk processing

Hi all, is there a way in awk to know that you are processing your final line of input if you do not know how many lines were in the input to begin with? (One common technique is sketched after this entry.) Thanks (7 Replies)
Discussion started by: pxy2d1
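A minimal sketch of one common technique for the question above (my own illustration, not from that thread): keep the previous record in a variable, so that when END fires the variable holds the final line of input, however many lines there were.
Code:
awk 'NR > 1 {print "not last:", prev}    # every buffered line except the final one
     {prev = $0}                         # remember the current line
     END    {print "last line:", prev}   # prev now holds the last input line
    ' file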

2. Shell Programming and Scripting

Processing with awk

I currently have a file whose contents are: RangingDates 1 20090224 20090225 17 0 Export Complete 2009-02-23 19:58:00 FutureModuleList 1 20090225 20090226 6 0 Export Complete 2009-02-24 06:00:00 StoreDescription ... (2 Replies)
Discussion started by: zainravi

3. Shell Programming and Scripting

Awk text processing

Hi, I would very much appreciate it if somebody could give me a clue. I understand that it could be done with awk, but I have limited experience. I have the following text in the file 1 909 YES NO 2 500 No NO . ... 1 ... (8 Replies)
Discussion started by: zam

4. Shell Programming and Scripting

processing the output of AWK

Hi my input file is <so > < Time > <Pid> <some ro><Job Name> 111004 04554447 26817 JOB03275 MBPDVLOI 111004 04554473 26817 JOB03275 MBPDVLOI 111004 04554778 26807 JOB03276 MBPDVAWD 111004 04554779 26807 JOB03276 MBPDVAWD 111004 04554780 26817 ... (4 Replies)
Discussion started by: rakeshkumar

5. Shell Programming and Scripting

Help with File Processing (AWK)

Input File: 1234, 2345,abc 1,24141,gw 222,rff,sds 2232145,sdsd,121 Output file to be generated: 000001234,2345,abc 000000001,24141,gw 000000222,rff,sds 002232145,sdsd,121 i.e., the first column is padded to 9 digits. (A padding sketch appears after this entry.) I tried the following: (3 Replies)
Discussion started by: karumudi7
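A minimal sketch for that padding request (my own illustration, not from that thread), assuming comma-separated fields with optional spaces around the commas; sprintf's %09d left-pads the first field with zeros to nine digits.
Code:
awk 'BEGIN {FS = " *, *"; OFS = ","}     # tolerate spaces around the commas
     {$1 = sprintf("%09d", $1); print}   # zero-pad field 1 to 9 digits
    ' infile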

6. Shell Programming and Scripting

processing with awk

I have many lines like the following in a file(there are also other kinds of lines) Host: 72.52.104.74 (tserv1.fmt2.he.net) Ports: 22/open/tcp//tcpwrapped///, 53/open/tcp//domain//PowerDNS 3.3/, 179/open/tcp//tcpwrapped/// Ignored State: closed (997) Seq Index: 207 IP ID Seq: All... (9 Replies)
Discussion started by: esolvepolito

7. Programming

awk processing / Shell Script Processing to remove columns text file

Hello, I extracted a list of files in a directory with the command ls. However, this is not my computer, so the ls functionality has been revamped so that it gives the file sizes in front. This is the output of the ls command; I stored it in a file filelist: 1.1M... (A column-stripping sketch follows this entry.) (5 Replies)
Discussion started by: ajayram
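A minimal sketch for stripping that leading size column (my own illustration, not from that thread), assuming each line is "<size> <filename>" with the size as the first whitespace-delimited field; filenames containing spaces are left intact.
Code:
awk '{sub(/^[^ ]+ +/, ""); print}' filelist   # drop the leading size field and its trailing spaces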

8. Shell Programming and Scripting

Text processing using awk

I have two tab-delimited files (the first column is the primary key): File 1 (there are multiple rows sharing the same key; I cannot merge them) A 28,29,30,31 A 17,18,19 B 11,13,14,15 B 8,9 File 2 (there is only one row beginning with a given key) A 2,8,18,30,31 B ... (3 Replies)
Discussion started by: dovah

9. Shell Programming and Scripting

Help with file processing using awk

Hello all, I'm new to awk programming and have taught myself a few things about processing a file and dealing with duplicate lines, but I have run into a scenario that I don't know how to handle. Here is the scenario.. Input file: user role ----- ---- AAA add AAA delete BBB delete CCC delete DDD ... (10 Replies)
Discussion started by: julearn

10. Shell Programming and Scripting

awk for text processing

Hi, my file is in this format ", \"symbol\": \"Rbm38\" } ]" I want to convert it to a more user-readable format _id pubmed text symbol 67196 18667844 Overexpression of UBE2T in NIH3T3 cells significantly promoted colony formation in mouse cell cultures Ube2t 56190 21764855 ... (3 Replies)
Discussion started by: biofreek