Shell Programming and Scripting: need to remove duplicates based on key in first column and pattern in last column
Post 302449766 by script_op2a, Tuesday 31 August 2010, 12:33 PM
Quote:
Originally Posted by agama
I'm not quite sure what you mean by an if statement cannot be zero.
What I mean is that, in order to execute what follows the if statement's relational test, the value of that test cannot be 0.

For example:

Code:
#!/bin/sh
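# note: b is never assigned anywhere in this one-liner, so the if test below is never true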
awk '{split($4,a,"_"); if (b[$1]) {print "this is b[$1]" b[$1]}}' dups.txt

This returns absolutely nothing because b[$1] is 0. Or is it?
Or is it an empty string? How can you tell?
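As a quick probe, I could test how awk treats a variable that has never been assigned (just a sketch, assuming standard awk behaviour; x is a made-up variable name):

Code:
awk 'BEGIN {
  if (x == 0)  print "x compares equal to 0"
  if (x == "") print "x compares equal to the empty string"
  if (x)       print "x is true"   # never printed: an unassigned value is false
}'

If the first two tests both print, that would explain why if (b[$1]) on its own prints nothing.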

But if you do it like this (from the working script):
Code:
if (b[$1]<=a[4]a[5])

It does NOT evaluate to 0 (false), and awk continues to what follows the if.
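My reading of this (just my understanding, not confirmed): since b[$1] was never set, it acts like an empty string in the comparison, and an empty string is less than or equal to any non-empty string, so the test succeeds the first time a key is seen. A tiny check of that idea:

Code:
awk 'BEGIN { if (b["k"] <= "20100826010517.txt") print "unset element <= non-empty string" }'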


I guess my real question is: what value does b[$1] have by itself?

What value does an array element have in awk when no value has been assigned to it yet?

If it's 0, I don't understand it, because how can you say 0 >= 20100826010517.txt?

If it IS an empty string when no value has been assigned to it yet, then it would be "" >= 20100826010517.txt, which I can understand.

Could you or anyone point me to some online documentation that specifically states whether the value of an unassigned array element is 0, an empty string, or something else?
Or a quote from a specific book?
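For what it's worth, here is how I might restructure the working script so it never relies on the value of an unset element, using the in operator to test whether a key exists first. This is just a sketch: I am assuming the same layout as before, i.e. the sortable date/time ends up in a[4] and a[5] after splitting $4 on "_", and the input file is dups.txt.

Code:
awk '{
  split($4, a, "_")
  key = a[4] a[5]                        # sortable date/time string for this line
  if (!($1 in b) || b[$1] <= key) {      # first sighting of $1, or a newer key
    b[$1]    = key
    line[$1] = $0                        # keep the latest line for this key
  }
}
END { for (k in line) print line[k] }' dups.txt

The END loop prints in no particular order, so the output may need a sort afterwards.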

Last edited by Scott; 08-31-2010 at 01:45 PM. Reason: Code tags
 

9 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

joining files based on key column

Hi, I have to join two files based on the 1st column, where the 4th column of a2.txt = at; take the 2nd column of a1.txt and the 3rd column of a2.txt and check against the source files; if they match, list those source file names. a1.txt a1|20090809|20090810 a2|20090907|20090908 a2.txt a1|d|file1.txt|at... (9 Replies)
Discussion started by: akil

2. Shell Programming and Scripting

How can i delete the duplicates based on one column of a line

I have my data something like this (08/03/2009 22:57:42.414)(:) king aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbb (08/03/2009 22:57:42.416)(:) John cccccccccccc cccccvssssssssss baaaaa (08/03/2009 22:57:42.417)(:) Michael ddddddd tststststtststts (08/03/2009 22:57:42.425)(:) Ravi... (11 Replies)
Discussion started by: rdhanek

3. UNIX for Dummies Questions & Answers

Remove duplicates based on a column in fixed width file

Hi, how do I output the duplicate records to another file? A record is considered a duplicate based on a column that starts at position 2 and is 11 characters long. The file is a fixed-width file. Example record: DTYU12333567opert tjhi kkklTRG9012 The data in bold is the key on which... (1 Reply)
Discussion started by: Qwerty123

4. Shell Programming and Scripting

Remove duplicates based on the two key columns

Hi All, I need to fetch unique records based on a key column (i.e., the first column), and I also need to get the records having the max value in column2, in sorted order... and duplicates have to be stored in another output file. Input : Input.txt 1234,0,x 1234,1,y 5678,10,z 9999,10,k... (7 Replies)
Discussion started by: kmsekhar

5. Shell Programming and Scripting

remove duplicates based on single column

Hello, I am new to shell scripting. I have a huge file with multiple columns for example: I have 5 columns below. HWUSI-EAS000_29:1:105 + chr5 76654650 AATTGGAA HHHHG HWUSI-EAS000_29:1:106 + chr5 76654650 AATTGGAA B@HYL HWUSI-EAS000_29:1:108 + ... (4 Replies)
Discussion started by: Diya123

6. Shell Programming and Scripting

Request to check:remove duplicates only in first column

Hi all, I have an input file like this. Now I have to remove duplicates only in the first column, and nothing has to be changed in the second and third columns, so that the output would be ... Please let me know about scripting for this. (20 Replies)
Discussion started by: manigrover

7. Shell Programming and Scripting

Remove duplicates within row and separate column

Hi all I have following kind of input file ESR1 PA156 leflunomide PA450192 leflunomide CHST3 PA26503 docetaxel Pa4586; thalidomide Pa34958; decetaxel docetaxel docetaxel I want to remove duplicates and I want to separate anything before and after PAxxxx entry into columns or... (1 Reply)
Discussion started by: manigrover

8. Shell Programming and Scripting

Remove Duplicates on multiple Key Columns and get the Latest Record from Date/Time Column

Hi Experts, we have a CDC file from which we need to get the latest record for the key columns. The key columns will be CDC_FLAG and SRC_PMTN_I, and we fetch the latest record from CDC_PRCS_TS. Can we do it with a single awk command? Please help.... (3 Replies)
Discussion started by: vijaykodukula

9. Shell Programming and Scripting

Remove duplicates according to their frequency in column

Hi all, I have a huge tab-delimited file with the following format, and I want to remove the duplicates according to their frequency, based on Column2 and Column3. Column1 Column2 Column3 Column4 Column5 Column6 Column7 1 user1 access1 word word 3 2 2 user2 access2 ... (10 Replies)
Discussion started by: corfuitl