How to get the column number in awk?

Hi Guys,

I have a question on how I can get the column number of a field in a file and use it in awk.

I have a file that contains this header record:

Code:
SUB_ACC_NO|CLIENT_ID|ANUM|AKEY|ACCT_TYPE|FUND_ID|FUND_ABBR|TRADER_ID|TRADER_NAME|BROKER|VARIABLE_FIELDS_PRIMARY|CURRENCY|CLIENT_FUND_ID|ANAME|CASH_ANUM|ANUM_GROUP|VALUE_DATE|BALANCE|CUR_BALANCE|TRAN_STATUS|TRAN_TYPE|STMT_PG|SIDE|SIGN|REC_TYPE|BALANCE_TYPE|GIN2CXL|ETL_DATE

There are other files that have the same header names, but the fields are in different positions. What I want to achieve is to get the column number of the header field that starts with SUB_ACC_NO.

thanks
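
One way to do this, as a minimal sketch (assuming the header is the first line of the file and the fields are pipe-delimited as in the sample; the file name data.txt is only a placeholder):

Code:
awk -F'|' 'NR == 1 { for (i = 1; i <= NF; i++) if ($i ~ /^SUB_ACC_NO/) { print i; exit } }' data.txt

The printed index can be captured in a shell variable and passed back to awk, e.g. col=$(awk ...); awk -F'|' -v c="$col" '{ print $c }' data.txt.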
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk to select a column from particular line number

The awk command awk -F: '{print $1}' test1 gives the first column of all the lines in the file. Is there some command to get a particular column from a particular line? Any help is appreciated. Thanks, arif (4 Replies)
Discussion started by: mab_arif16
4 Replies
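
For the question above, a minimal sketch (the line number 5 and the file name test1 are placeholders; exit stops reading once the line has been printed):

Code:
awk -F: 'NR == 5 { print $1; exit }' test1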

2. Shell Programming and Scripting

column number, awk, help

All,
$ cat myf.txt
A|xyz|1000|mm
B|9000|xyz|ss
C|BDE|2000|kk
D|xyz|1000|nn
I am searching "xyz"
$ awk -F "|" '{ for (k = 1; k <= NF; k++) if ($k == "xyz") print "line=" NR "(column=" k ")" }' myf.txt
Output:
line=1(column=2)
line=2(column=3)
line=4(column=2)
(2 Replies)
Discussion started by: jkl_jkl
2 Replies

3. UNIX for Dummies Questions & Answers

Adding a column with the row number using awk

Is there any way to use awk to add a first column to my data that automatically goes from 1 to n, where n is the number of my rows? (4 Replies)
Discussion started by: cosmologist
4 Replies
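
A minimal sketch for the question above (assuming it is acceptable to prefix each existing line with its row number, separated by a space):

Code:
awk '{ print NR, $0 }' data.txt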

4. UNIX for Dummies Questions & Answers

how to count number of rows and sum of column using awk

Hi All, I have the following input which I want to process using awk:
Rows,NC,amount
1,1202,0.192387
2,1201,0.111111
3,1201,0.123456
I want the following output: count of rows = 3, sum of amount = 0.426954. Many thanks (2 Replies)
Discussion started by: pistachio
2 Replies
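
A sketch for the question above, assuming a one-line header and comma-separated fields with the amount in the third column (input.csv is a placeholder name):

Code:
awk -F, 'NR > 1 { n++; sum += $3 } END { printf "count of rows = %d , sum of amount = %.6f\n", n, sum }' input.csv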

5. UNIX for Dummies Questions & Answers

count number of distinct values in each column with awk

Hi ! input:
A|B|C|D
A|F|C|E
A|B|I|C
A|T|I|B
As the title of the thread says, I would need to get: 1|3|2|4
I tried different variants of this command, but I don't manage to obtain what I need:
gawk 'BEGIN{FS=OFS="|"}{for(i=1; i<=NF; i++) a++} END {for (b in a) print b}' input ... (2 Replies)
Discussion started by: beca123456
2 Replies
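
One way to get the per-column distinct counts asked for above, as a sketch (it assumes every row has the same number of fields, so NF is still meaningful in the END block):

Code:
gawk 'BEGIN { FS = OFS = "|" }
      { for (i = 1; i <= NF; i++) if (!seen[i, $i]++) cnt[i]++ }   # count a value only the first time it appears in column i
      END { for (i = 1; i <= NF; i++) printf "%s%s", cnt[i], (i < NF ? OFS : ORS) }' input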

6. Shell Programming and Scripting

Awk, appending a number in the first column of a row with a condition

Hi everyone, I have a data file in which the data is stored in event blocks. What I would like to get is the same file, with every data row starting with the number of its event block. Here are two event blocks from my file: <event> -2 -1 0 0 0 501 0.00000000000E+00 ... (2 Replies)
Discussion started by: hayreter
2 Replies
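
A hedged sketch for the event-block question above (it assumes each block starts with a literal <event> line; closing tags such as </event> would also be prefixed, so the pattern may need adjusting for the real file; datafile is a placeholder name):

Code:
awk '/<event>/ { n++; print; next }   # new block: bump the counter, keep the tag line as-is
     { print n, $0 }                  # prefix every other row with the current block number
' datafile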

7. Shell Programming and Scripting

awk - Print column number that return value comes from

I have the following awk script that I am using to find the max value in the file and print results.
awk 'BEGIN {MAX=-1E100} {for (x=2; x<=NF; x++) if ($x>MAX) {MAX = $x; C1 = $1}} END {print substr(C1,1,11), substr(C1,13,4), substr(C1,18,2), MAX}' ABC*
Input (ABC*) ... (6 Replies)
Discussion started by: ncwxpanther
6 Replies
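
To also report which column the maximum came from, one sketch that keeps the poster's structure but records the winning column index in COL (the substr() formatting of C1 is omitted here for brevity):

Code:
awk 'BEGIN { MAX = -1E100 }
     { for (x = 2; x <= NF; x++) if ($x > MAX) { MAX = $x; C1 = $1; COL = x } }
     END { print C1, MAX, "from column " COL }' ABC*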

8. Shell Programming and Scripting

awk to print column number while ignoring alpha characters

I have the following script that will print column 4 ("25") when column 1 contains "123". However, I need to ignore the alpha characters that are contained in the input file. If I were to ignore the characters my output would be column 3. What is the best way to print my column of interest... (3 Replies)
Discussion started by: ncwxpanther
3 Replies
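
Without the original script and input it is hard to be definitive, but one hedged sketch that skips non-numeric fields when counting columns might look like this (the key value 123, the target position 3, and the file name are all assumptions):

Code:
awk '$1 == 123 {
         n = 0
         for (i = 1; i <= NF; i++)
             if ($i ~ /^[0-9.]+$/ && ++n == 3) { print $i; break }   # print the 3rd purely numeric field
     }' file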

9. Shell Programming and Scripting

Count Repetitive Number in a column and renumbering using awk

Unable to get the desired output. I need only the rows that have repeated values in column 5. Input file (tab-separated):
chr1 3773797 3773797 CEP10 1
chr1 3773797 3773797 CEP104 2
chr1 3689350 3689350 SMIM1 2
chr1 3773797 3773797 CEP4 3
chr1 3773797 3773797 EP104 ... (7 Replies)
Discussion started by: himanshu
7 Replies
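
A sketch for the question above: two passes over the same file, the first counting each value of column 5 and the second printing only the rows whose column-5 value occurs more than once (input.txt is a placeholder name):

Code:
awk -F'\t' 'NR == FNR { count[$5]++; next } count[$5] > 1' input.txt input.txt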

10. Shell Programming and Scripting

awk split columns to row after N number of column

I want to split this after every 5 or 50 fields, depending on how much data the file will have, and remove the comma at the end.
The source file will have:
001,0002,0003,004,005,0006,0007,007A,007B,007C,007E,007F,008A,008C
I need output with every 5 fields per tab-separated row, with the comma removed from the end of each row ... (4 Replies)
Discussion started by: ranjancom2000
4 Replies
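
A sketch for the question above, assuming a comma-separated source and that the output should have 5 tab-separated fields per row (change 5 to 50 as needed); no trailing separator is printed because the fields are re-joined from scratch:

Code:
awk -F, -v OFS='\t' '{
    for (i = 1; i <= NF; i++)
        printf "%s%s", $i, (i % 5 == 0 || i == NF ? ORS : OFS)
}' source.txt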
funjoin(1)							SAORD Documentation							funjoin(1)

NAME
funjoin - join two or more FITS binary tables on specified columns

SYNOPSIS
funjoin [switches] <ifile1> <ifile2> ... <ifilen> <ofile>

OPTIONS
-a cols                # columns to activate in all files
-a1 cols ... -an cols  # columns to activate in each file
-b 'c1:bv1,c2:bv2'     # blank values for common columns in all files
-bn 'c1:bv1,c2:bv2'    # blank values for columns in specific files
-j col                 # column to join in all files
-j1 col ... -jn col    # column to join in each file
-m min                 # min matches to output a row
-M max                 # max matches to output a row
-s                     # add 'jfiles' status column
-S col                 # add col as status column
-t tol                 # tolerance for joining numeric cols [2 files only]

DESCRIPTION
funjoin joins rows from two or more (up to 32) FITS Binary Table files, based on the values of specified join columns in each file. NB: the join columns must have an index file associated with them. These files are generated using the funindex program.

The first argument to the program specifies the first input FITS table or raw event file. If "stdin" is specified, data are read from the standard input. Subsequent arguments specify additional event files and tables to join. The last argument is the output FITS file.

NB: Do not use Funtools Bracket Notation to specify FITS extensions and row filters when running funjoin or you will get wrong results. Rows are accessed and joined using the index files directly, and this bypasses all filtering.

The join columns are specified using the -j col switch (which specifies a column name to use for all files) or with -j1 col1, -j2 col2, ... -jn coln switches (which specify a column name to use for each file). A join column must be specified for each file. If both -j col and -jn coln are specified for a given file, then the latter is used. Join columns must either be of type string or type numeric; it is illegal to mix numeric and string columns in a given join.

For example, to join three files using the same key column for each file, use:

  funjoin -j key in1.fits in2.fits in3.fits out.fits

A different key can be specified for the third file in this way:

  funjoin -j key -j3 otherkey in1.fits in2.fits in3.fits out.fits

The -a "cols" switch (and -a1 "cols1", -a2 "cols2" counterparts) can be used to specify columns to activate (i.e. write to the output file) for each input file. By default, all columns are output. If two or more columns from separate files have the same name, the second (and subsequent) columns are renamed to have an underscore and a numeric value appended.

The -m min and -M max switches specify the minimum and maximum number of joins required to write out a row. The default minimum is 0 joins (i.e. all rows are written out) and the default maximum is 63 (the maximum number of possible joins with a limit of 32 input files). For example, to write out only those rows in which exactly two files have columns that match (i.e. one join):

  funjoin -j key -m 1 -M 1 in1.fits in2.fits in3.fits ... out.fits

A given row can have the requisite number of joins without all of the files being joined (e.g. three files are being joined but only two have a given join key value). In this case, all of the columns of the non-joined file are written out, by default, using blanks (zeros or NULLs). The -b 'c1:bv1,c2:bv2' and -b1 'c1:bv1,c2:bv2' -b2 'c1:bv1,c2:bv2' ... switches can be used to set the blank value for columns common to all files and/or columns in a specified file, respectively. Each blank value string contains a comma-separated list of column:blank_val specifiers. For floating point values (single or double), a case-insensitive string value of "nan" means that the IEEE NaN (not-a-number) should be used. Thus, for example:

  funjoin -b "AKEY:???" -b1 "A:-1" -b3 "G:NaN,E:-1,F:-100" ...

means that a non-joined AKEY column in any file will contain the string "???", the non-joined A column of file 1 will contain a value of -1, the non-joined G column of file 3 will contain IEEE NaNs, while the non-joined E and F columns of the same file will contain values -1 and -100, respectively. Of course, where common and specific blank values are specified for the same column, the specific blank value is used.
To distinguish which files are non-blank components of a given row, the -s (status) switch can be used to add a bitmask column named "JFILES" to the output file. In this column, a bit is set for each non-blank file composing the given row, with bit 0 corresponding to the first file, bit 1 to the second file, and so on. The file names themselves are stored in the FITS header as parameters named JFILE1, JFILE2, etc. The -S col switch allows you to change the name of the status column from the default "JFILES".

A join between rows is the Cartesian product of all rows in one file having a given join column value with all rows in a second file having the same value for its join column, and so on. Thus, if file1 has 2 rows with join column value 100, file2 has 3 rows with the same value, and file3 has 4 rows, then the join results in 2*3*4=24 rows being output.

The join algorithm directly processes the index file associated with the join column of each file. The smallest value of all the current columns is selected as a base, and this value is used to join equal-valued columns in the other files. In this way, the index files are traversed exactly once.

The -t tol switch specifies a tolerance value for numeric columns. At present, a tolerance value can join only two files at a time. (A completely different algorithm is required to join more than two files using a tolerance, something we might consider implementing in the future.)

The following example shows many of the features of funjoin. The input files t1.fits, t2.fits, and t3.fits contain the following columns:

  [sh] fundisp t1.fits
         AKEY    KEY      A      B
  ----------- ------ ------ ------
          aaa      0      0      1
          bbb      1      3      4
          ccc      2      6      7
          ddd      3      9     10
          eee      4     12     13
          fff      5     15     16
          ggg      6     18     19
          hhh      7     21     22

  fundisp t2.fits
         AKEY    KEY      C      D
  ----------- ------ ------ ------
          iii      8     24     25
          ggg      6     18     19
          eee      4     12     13
          ccc      2      6      7
          aaa      0      0      1

  fundisp t3.fits
          AKEY    KEY        E        F           G
  ------------ ------ -------- -------- -----------
           ggg      6       18       19      100.10
           jjj      9       27       28      200.20
           aaa      0        0        1      300.30
           ddd      3        9       10      400.40

Given these input files, the following funjoin command:

  funjoin -s -a1 "-B" -a2 "-D" -a3 "-E" -b "AKEY:???" -b1 "AKEY:XXX,A:255" -b3 "G:NaN,E:-1,F:-100" -j key t1.fits t2.fits t3.fits foo.fits

will join the files on the KEY column, outputting all columns except B (in t1.fits), D (in t2.fits) and E (in t3.fits), and setting blank values for AKEY (globally, but overridden for t1.fits) and A (in file 1) and G, E, and F (in file 3). A JFILES column will be output to flag which files were used in each row:

          AKEY    KEY      A       AKEY_2  KEY_2      C       AKEY_3  KEY_3        F           G   JFILES
  ------------ ------ ------ ------------ ------ ------ ------------ ------ -------- ----------- --------
           aaa      0      0          aaa      0      0          aaa      0        1      300.30        7
           bbb      1      3          ???      0      0          ???      0     -100         nan        1
           ccc      2      6          ccc      2      6          ???      0     -100         nan        3
           ddd      3      9          ???      0      0          ddd      3       10      400.40        5
           eee      4     12          eee      4     12          ???      0     -100         nan        3
           fff      5     15          ???      0      0          ???      0     -100         nan        1
           ggg      6     18          ggg      6     18          ggg      6       19      100.10        7
           hhh      7     21          ???      0      0          ???      0     -100         nan        1
           XXX      0    255          iii      8     24          ???      0     -100         nan        2
           XXX      0    255          ???      0      0          jjj      9       28      200.20        4

SEE ALSO
See funtools(7) for a list of Funtools help pages

version 1.4.2                          January 2, 2008                               funjoin(1)