Thanks Don, the solution is working fine, but I need clarification on the points below.
IAm=${0##*/} - I understand that this is something like $0, but that we remove the leading part of the string if it matches the pattern */.
printf '%s\n' "$@" - It converts the horizontal argument list to a vertical one (like columns to rows), but it would be helpful if you could explain how awk works after these values are passed to it.
I tried to understand it from the manual pages but could not.
It looks like you're doing well.
The script sets IAm to the basename of the script. The shell expands $0 to the pathname used to invoke your script. Let's assume your script is named removecols and is located in your $HOME/bin directory. If you invoke your script with the command line:
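for example (with file standing in for the name of your data file):

```shell
$HOME/bin/removecols file 4 2
```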
the script sets IAm by throwing away everything from the beginning of the string through the last / character leaving just removecols as the string to be substituted in the usage message. And, if $HOME/bin is in your PATH variable and you invoke your script using the command:
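for example:

```shell
removecols file 4 2
```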
The shell sets $0 to removecols when it starts and IAm gets the same string since there are no / characters in the expansion of $0.
The shift command throws away the 1st positional parameter after file=$1 has saved the name of the file to be processed (which you passed to your script as the 1st positional parameter). So, after the shift, all of the remaining positional parameters are the numbers of the fields you want to delete. And "$@" expands to a list of the positional parameters, each one a separately quoted argument. So, for the 1st command line shown above, the shell executes the command:
printf '%s\n' "4" "2"
which produces the output:
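```
4
2
```

(the two field numbers, one per line)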
which is fed as standard input through the pipeline into the awk script. And the - as the 2nd operand passed to the awk script tells it to read its standard input as its 1st input file.
The awk code:
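presumably (reconstructed from the description that follows, since the snippet is not quoted here):

```awk
BEGIN { FS = OFS = "," }
```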
tells awk to set the input field separator (FS) and the output field separator (OFS) to a comma before reading any lines from any input files.
The awk code:
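presumably (reconstructed from the description that follows):

```awk
FNR == NR { d[$1]; next }
```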
says that for each line where the line number in the current input file (FNR) is the same as the number of lines read so far from all input files (NR), which is true only for lines read from the 1st input file, create an array element using the value of the 1st field on that line as a subscript (d[$1]); the awk next command then tells the script to read the next input line and start over.
For lines read from the second input file (where FNR is not the same value as NR) we execute the remaining commands in the awk script:
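presumably (reconstructed from the description that follows):

```awk
ofs = ""
for (i = 1; i <= NF; i++)
    if (!(i in d)) {
        printf "%s%s", ofs, $i
        ofs = OFS
    }
```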
which sets ofs to an empty string (note that awk variable names are case sensitive). Then, for each field in the current input line, starting with the first field (i = 1), continuing while i is less than or equal to the number of fields on that line (NF), and incrementing i by one (i++) each time through the loop, it checks whether the field number is a subscript in the array being used as the list of fields to delete ((i in d)); if it is not (!), it prints the current value of ofs followed by the current field ($i) and then sets ofs to the current output field separator (OFS). (So the first field printed will not have a field separator printed before it, and every field printed after it will be preceded by a comma as a field separator.) After the loop completes, the awk command:
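presumably:

```awk
print ""
```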
prints a trailing <newline> character.
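Putting the pieces together, here is a minimal sketch of the whole script written as a shell function (the function form is an assumption for illustration; the awk logic is exactly what is described above):

```shell
# Hypothetical reconstruction of the removecols script under discussion
# (the full script is not quoted in this excerpt, so details are assumed).
removecols() {
    file=$1            # CSV file to process
    shift              # remaining arguments: numbers of fields to delete
    printf '%s\n' "$@" | awk '
        BEGIN { FS = OFS = "," }      # read and write comma-separated fields
        FNR == NR { d[$1]; next }     # 1st file (stdin): save field numbers to drop
        {   ofs = ""
            for (i = 1; i <= NF; i++)
                if (!(i in d)) {      # print only fields not marked for deletion
                    printf "%s%s", ofs, $i
                    ofs = OFS
                }
            print ""                  # terminate the output line
        }' - "$file"
}
```

Called as removecols data.csv 4 2, it prints each line of data.csv with the 2nd and 4th comma-separated fields removed.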