Script Optimization - large delimited file, for loop with many greps
Post 302516131 by Corona688 on Thursday 21st of April 2011, 05:52:35 PM
How about this:
Code:
#!/bin/awk -f
# This section gets run only once, before anything's read.
# using it for variable setup.
BEGIN {
        # Don't have to check what the delimiter is, just split on
        # any single character that's not a-z, A-Z, 0-9, _
        FS="[^a-zA-Z0-9_]"
        # Print separated by commas
        OFS=","
}

# Each of the following expressions gets executed once for every
# line that matches the regex.

# Sometimes this one's column 11, sometimes it's column 12
/^IT1/  {       if(!FIRPONUM)
                {
                        FIRPONUM=$11
                        if(!(FIRPONUM ~ /^PONUM/))
                                FIRPONUM=$12;
                }
        }
# Matching these lines is easy
/^TDS/  {       INVTL=$2        }
/^N1.SE/{       SELLCD=$5       }
/^N1.SU/{       SUPCD=$5        }
/^GS/   {       GSCODE=$3       }
# Print on this only once we've read FIRPONUM
/^ST/   {
                if(FIRPONUM)
                        print GSCODE,SUPCD,SELLCD,FIRPONUM,INVTL;

                FIRPONUM=""
        }

# Have to print once on exit too, or we'd lose the last record
END {   if(FIRPONUM)
                print GSCODE,SUPCD,SELLCD,FIRPONUM,INVTL;
        }

It's not complete, but neither is your example; still, it's much more efficient than running grep | cut for every line, and it might be enough to get you started.
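
For contrast, here's a hedged sketch of the kind of per-record grep | cut loop this replaces. The file names, delimiter, and field positions are purely illustrative, not taken from your data; the point is that every pass through the loop re-reads the whole file and forks several processes, while the awk program above reads the file exactly once.
Code:
#!/bin/ksh
# Illustrative only: each iteration re-reads the data file and forks
# a grep plus a cut per field, which is what makes this approach so slow.
while read PONUM
do
        GSCODE=$(grep "^GS" datafile | cut -d"*" -f3)
        SUPCD=$(grep "^N1.SU" datafile | cut -d"*" -f5)
        SELLCD=$(grep "^N1.SE" datafile | cut -d"*" -f5)
        echo "$GSCODE,$SUPCD,$SELLCD,$PONUM"
done < ponum_list

Each $( ) there forks at least two processes, so over a large file the process overhead dwarfs the actual work.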

---------- Post updated at 03:46 PM ---------- Previous update was at 03:44 PM ----------

Quote:
I'm using Korn Shell on Microsoft Windows Services for UNIX 3.5
Blech. A poor imitation of a Korn shell.

And since you're not actually running UNIX, my awk script of course can't be run directly as a script the way I intended. Small difference, though: just run it like awk -f script.awk inputfile
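
For instance, assuming it's saved as script.awk and your data file is called invoices.dat (both names are just placeholders), the whole thing collapses to a single command, with the comma-separated output going wherever you redirect it:
Code:
awk -f script.awk invoices.dat > summary.csv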

---------- Post updated at 03:52 PM ---------- Previous update was at 03:46 PM ----------

Whoa, is your data actually indented like that? That changes things.
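If they really are indented, one hedged tweak would be to add a rule like this at the very top of the script, ahead of the anchored patterns, so the leading whitespace breaks neither the /^IT1/-style matches nor the field numbering:
Code:
# Runs for every line before the other rules: strip leading blanks/tabs.
# sub() works on $0 here, and changing $0 makes awk re-split the fields,
# so $3, $5, $11, $12 land where they would in unindented data.
{ sub(/^[ \t]+/, "") }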