Shell Programming and Scripting: using awk to count no of records based on conditions
Post 302323407 by Franklin52 on Sunday 7th of June 2009, 03:13:10 PM
With the awk code below you can convert the format "2009 6 6" to "06-Jun-2009". Usage:

Code:
awk -v var="2009 6 6" '
BEGIN{
  # month number -> abbreviated month name
  split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ")
  split(var, d)                        # d[1]=year, d[2]=month, d[3]=day
  m = int(d[2])
  printf("%02d-%s-%s\n", d[3], month[m], d[1])
}'

You should now be able to adapt it to fit your code.
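If the date arrives as fields of each record rather than as a single variable, the same lookup can be applied per line. A minimal sketch, assuming a hypothetical whitespace-separated file dates.txt whose first three fields are year, month and day:

Code:
awk '
BEGIN{ split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ") }
{
  # $1=year, $2=month, $3=day; reprint the record with a dd-Mon-yyyy date up front
  printf("%02d-%s-%s", $3, month[int($2)], $1)
  for (i = 4; i <= NF; i++) printf(" %s", $i)
  print ""
}' dates.txt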
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Count No of Records in File without counting Header and Trailer Records

I have a flat file and need to count the number of records in the file, less the header and the trailer record. I would appreciate any and all assistance. Thanks, Hadi Lalani (2 Replies)
Discussion started by: guiguy
2 Replies
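A minimal sketch for this kind of count, assuming the file has exactly one header line and one trailer line (the file name data.txt is a placeholder):

Code:
awk 'END{print NR - 2}' data.txt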

2. Shell Programming and Scripting

Record count based on a keyword in the records

Hi, I have files with many records; I need to count and display the number of records based on a keyword in one of the columns of the records. For example, the file contains two records like: 200903031143150 0 1236060795054357lessrv1 BSNLSERVICE1 BSNLSERVICE1 ... (4 Replies)
Discussion started by: aemunathan
4 Replies
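One way to count by keyword, sketched here under the assumption that the keyword sits in a known column (the column number 5, the keyword and the file name are placeholders):

Code:
awk -v kw="BSNLSERVICE1" '$5 == kw {c++} END{print kw, c + 0}' records.txt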

3. Shell Programming and Scripting

Awk to Count Records with not null

Hi, I have a pipe-separated file and want to write code to display the count of lines whose 20th field is not null. nawk -F"|" '{if ($20!="") print NR,$20}' xyz..txt This also displays records whose 20th field is null. I would like output as: (4 Replies)
Discussion started by: pinnacle
4 Replies
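A sketch that counts rather than prints, and that treats fields containing only whitespace as null (an assumption; xyz.txt stands in for the real file name):

Code:
awk -F"|" '$20 !~ /^[[:space:]]*$/ {c++} END{print c + 0}' xyz.txt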

4. Shell Programming and Scripting

awk merging files based on 2 complex conditions

1. If the 1st row IDs of input1 (ID1/ID2.....) are equal to any IDNames of input2, print all relevant values together as defined in the output. 2. The slightly tricky part is IDno in the output. All we need to do is number the same kind of letters as 1 (aa of ID1) and different letters as 2 (ab... (4 Replies)
Discussion started by: ruby_sgp
4 Replies
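The usual awk pattern for this kind of merge is to load one file into an array keyed by ID and look the IDs up while reading the other. Only a rough sketch, since the exact output layout and the IDno numbering rule are not fully shown here (file names follow the post):

Code:
awk '
NR==FNR { seen[$1]; next }     # input2: remember its IDNames
$1 in seen                     # input1: keep lines whose ID was seen
' input2 input1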

5. Shell Programming and Scripting

Extract file records based on some field conditions

Hello Friends, I have a file (InputFile.csv) with the following columns (the columns are pipe-delimited): ColA|ColB|ColC|ColD|ColE|ColF Now for this file, I have to get those records which fulfil the following condition: If "ColB" is NOT NULL and "ColD" has one of the following values... (9 Replies)
Discussion started by: mehimadri
9 Replies
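A sketch of the filter, assuming ColB is field 2 and ColD is field 4 of the pipe-delimited file, with placeholder values standing in for the (truncated) ColD list:

Code:
awk -F"|" '
BEGIN{ ok["VAL1"]; ok["VAL2"] }   # placeholder ColD values
$2 != "" && ($4 in ok)
' InputFile.csv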

6. Shell Programming and Scripting

count the unique records based on certain columns

Hi everyone, I have a file result.txt with records as follows, and another file mirna.txt with a list of miRNAs, e.g. miR22, miR123, miR13, etc.: Gene Transcript miRNA Gar Nm_111233 miR22 Gar Nm_123440 miR22 Gar Nm_129939 miR22 Hel Nm_233900 miR13 Hel ... (6 Replies)
Discussion started by: miclow
6 Replies
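One possible reading is counting, for each miRNA listed in mirna.txt, how many distinct transcripts of result.txt carry it; a sketch under that assumption, with the column layout taken from the excerpt:

Code:
awk '
NR==FNR { want[$1]; next }                       # mirna.txt: the miRNAs of interest
FNR > 1 && ($3 in want) && !seen[$3 FS $2]++ {   # count each transcript once per miRNA
  count[$3]++
}
END{ for (m in count) print m, count[m] }
' mirna.txt result.txt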

7. Shell Programming and Scripting

awk to filter file based on separate conditions

The awk below will filter a list of 30,000 lines in the tab-delimited file. What I am having trouble with is adding a condition so that a SVTYPE=CNV line is only printed if CI is > .05. The other condition to add is that if SVTYPE=Fusion, then in order to print that line READ_COUNT must... (3 Replies)
Discussion started by: cmccabe
3 Replies
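A sketch of the two conditions, assuming SVTYPE, CI and READ_COUNT have already been split into their own tab-delimited columns (fields 4, 5 and 6 here); the real column positions and the Fusion read-count threshold would need to be filled in:

Code:
awk -F"\t" -v min_rc=10 '           # min_rc: placeholder read-count threshold
$4 == "CNV"    && $5 > 0.05         # CNV lines pass only when CI > .05
$4 == "Fusion" && $6 >= min_rc      # Fusion lines pass only with enough reads
' file.txt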

8. Shell Programming and Scripting

awk to update file based on 5 conditions

I am trying to use awk to update the below tab-delimited file based on 5 different rules/conditions. The final output is also tab-delimited, and each line in the file will meet one of the conditions. My attempt is below as well, though I am not very confident in it. Thank you :). Condition 1: The... (10 Replies)
Discussion started by: cmccabe
10 Replies
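Without the full list of conditions, only the general shape can be sketched: test each rule in turn, rewrite the relevant field, and print the line once (field 7, the tests and the values are placeholders):

Code:
awk -F"\t" 'BEGIN{OFS=FS}
{
  if      ($3 == "A")             $7 = "value1"   # condition 1 (placeholder test and value)
  else if ($3 == "B" && $5 > 10)  $7 = "value2"   # condition 2 (placeholder)
  # ...remaining conditions in the same pattern...
  print
}' input.txt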

9. Shell Programming and Scripting

Awk/sed/cut to filter out records from a file based on criteria

I have two files and need to filter out records based on certain criteria; these columns are of variable lengths, but the lengths are uniform throughout all the records of the file. I have shown a sample of three records below. Line 1-9 is the item number "0227546_1" in the case of the first... (15 Replies)
Discussion started by: MIA651
15 Replies
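For fixed-width records, substr() is usually the tool; a sketch assuming the item number occupies the first 9 characters of each line, as in the excerpt (the file name and positions are placeholders):

Code:
awk 'substr($0, 1, 9) == "0227546_1"' file1.txt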

10. Shell Programming and Scripting

awk to assign points to variables based on conditions and update specific field

I have been reading old posts and trying to come up with a solution for the below: Use a tab-delimited input file to assign points to variables that are used to update a specific field, Rank. I really couldn't find too much in the way of assigning points to variables, but made an attempt at an awk... (4 Replies)
Discussion started by: cmccabe
4 Replies
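A sketch of the points idea: accumulate a score per line from a few tests and write it into the Rank column (assumed here to be field 8; the tests themselves are placeholders for the real rules):

Code:
awk -F"\t" 'BEGIN{OFS=FS}
{
  points = 0
  if ($3 == "high") points += 2    # placeholder rule
  if ($5 > 100)     points += 1    # placeholder rule
  $8 = points                      # Rank, assumed here to be field 8
  print
}' input.txt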
clfmerge(1)                          logtools                          clfmerge(1)

NAME
       clfmerge - merge Common-Log Format web logs based on time-stamps

SYNOPSIS
       clfmerge [--help | -h] [-b size] [-d] [file names]

DESCRIPTION
       The clfmerge program is designed to avoid using sort to merge multiple web
       log files. Web logs for big sites consist of multiple files in the >100M
       size range from a number of machines. For such files it is not practical
       to use a program such as gnusort to merge the files because the data is
       not always entirely in order (so the merge option of gnusort doesn't work
       so well), but it is not in random order (so doing a complete sort would be
       a waste). Also the date field that is being sorted on is not particularly
       easy to specify for gnusort (I have seen it done but it was messy).
       This program is designed to simply and quickly sort multiple large log
       files with no need for temporary storage space or overly large buffers in
       memory (the memory footprint is generally only a few megs).

OVERVIEW
       It will take a number (from 0 to n) of file-names on the command line, it
       will open them for reading and read CLF format web log data from them all.
       Lines which don't appear to be in CLF format (NB they aren't parsed fully,
       only minimal parsing to determine the date is performed) will be rejected
       and displayed on standard-error.
       If zero files are specified then there will be no error, it will just
       silently output nothing; this is for scripts which use the find command to
       find log files and which can't be counted on to find any log files, it
       saves doing an extra check in your shell scripts.
       If one file is specified then the data will be read into a 1000 line
       buffer and it will be removed from the buffer (and displayed on standard
       output) in date order. This is to handle the case of web servers which
       date entries on the connection time but write them to the log at
       completion time and thus generate log files that aren't in order (Netscape
       web server does this - I haven't checked what other web servers do).
       If more than one file is specified then a line will be read from each
       file, and the file that had the earliest time stamp will be read from
       until it returns a time stamp later than one of the other files. Then the
       file with the earlier time stamp will be read. With multiple files the
       buffer size is 1000 lines or 100 * the number of files (whichever is
       larger). When the buffer becomes full the first line will be removed and
       displayed on standard output.

OPTIONS
       -b buffer-size
              Specify the buffer-size to use; if 0 is specified then it means to
              disable the sliding-window sorting of the data which improves the
              speed.
       -d     Set domain-name mangling to on. This means that if a line starts
              with www.company.com as the name of the site that was requested
              then that would be removed from the start of the line and the GET /
              would be changed to GET http://www.company.com/ which allows
              programs like Webalizer to produce good graphs for large hosting
              sites. Also it will make the domain name in lower case.

EXIT STATUS
       0      No errors
       1      Bad parameters
       2      Can't open one of the specified files
       3      Can't write to output

AUTHOR
       This program, its manual page, and the Debian package were written by
       Russell Coker <russell@coker.com.au>.

SEE ALSO
       clfsplit(1), clfdomainsplit(1)

Russell Coker <russell@coker.com.au>          0.06                     clfmerge(1)
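A typical invocation, sketched with placeholder log names, merges the per-server logs onto standard output with an enlarged buffer:

Code:
clfmerge -b 2000 access.log.web1 access.log.web2 access.log.web3 > merged.log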