Good afternoon Sirs, I would like to ask your help regarding this scenario using sed and awk.
********************************************************
Host: CDRMSAPPS1
Operating System: Linux 2.6.9-42.ELsmp
Machine Type: UNIX
Host Type: Client
Version: 5.1... (2 Replies)
The problem I have is that I have 23,000 records I need to sort through to pull out the LEN: XXXX XX XX XX XX and NCOS: XXX entries so I can insert them into a database. But some of my records include TYPE: ISDN, THE DN IS UNASSIGNED, or INVALID entries in between some records, and I would like... (2 Replies)
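A minimal awk sketch for this kind of extraction, assuming the LEN:/NCOS: entries each start at the beginning of a line as in the masked samples (the file name and sample values below are hypothetical):

```shell
# Hypothetical sample mirroring the described record layout.
cat > records.txt << 'EOF'
LEN: 1234 01 02 03 04
TYPE: ISDN
NCOS: 100
THE DN IS UNASSIGNED
LEN: 5678 05 06 07 08
INVALID
NCOS: 200
EOF

# Keep only the LEN:/NCOS: lines; TYPE: ISDN, THE DN IS UNASSIGNED,
# INVALID, and anything else is dropped.
awk '/^LEN:/ || /^NCOS:/' records.txt
```

The surviving lines are then easy to load into a database, e.g. by piping them through a second awk that strips the labels.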
Hello folks,
I have 2 files: one (file1) contains the DDL for a view, and file 2 contains the view definition/alias columns.
I want to merge the 2 into a third file using awk/sed as follows:
cheers !
:b:
FILE1
-----
PROMPT FIRST_VIEW
CREATE OR REPLACE FORCE VIEW FIRST_VIEW
AS
SELECT... (2 Replies)
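Since the thread is truncated, here is only a generic sketch of awk's two-file idiom for this kind of merge, assuming the alias list should land right after the CREATE line (all file contents below are hypothetical stand-ins):

```shell
# Hypothetical stand-ins for the two input files.
cat > file1 << 'EOF'
PROMPT FIRST_VIEW
CREATE OR REPLACE FORCE VIEW FIRST_VIEW
AS
SELECT * FROM T
EOF

cat > file2 << 'EOF'
(COL_A, COL_B)
EOF

# First pass (NR==FNR) buffers file2; second pass prints file1 and
# emits the buffered alias lines right after the CREATE line.
awk 'NR==FNR {a[++n]=$0; next}
     {print}
     /^CREATE/ {for (i = 1; i <= n; i++) print a[i]}' file2 file1 > file3
cat file3
```

Adjust the /^CREATE/ anchor to whatever line the alias columns really belong after in your DDL.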
Hi All,
I have a huge trade file with millions of trades. I need to remove duplicate records (e.g. I have the following records):
30/10/2009,trdeId1,..,..
26/10/2009,tradeId1,..,..,,
30/10/2009,tradeId2,..
In the above case I need to filter out duplicate records, and I should get the following output.... (2 Replies)
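A common awk idiom for this, assuming the tradeId in the second comma-separated field is the duplicate key and the first occurrence should be kept:

```shell
# Sample data from the question (trailing fields abbreviated as "..").
cat > trades.csv << 'EOF'
30/10/2009,tradeId1,..,..
26/10/2009,tradeId1,..,..,,
30/10/2009,tradeId2,..
EOF

# !seen[$2]++ is true only the first time a tradeId (field 2) appears,
# so each trade survives once and later duplicates are dropped.
awk -F, '!seen[$2]++' trades.csv
```

This is a single pass and needs memory only for the set of distinct keys, which matters with millions of trades. If the record to keep is chosen by date rather than file order, sort first.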
Hi,
I am new to unix and would greatly appreciate some help.
I have a file containing multiple columns with different sets of data, e.g.
File 1:
John Ireland 27_December_69
Mary England 13_March_55
Mike France 02_June_80
I am currently using the awk... (10 Replies)
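Since the question is truncated, this is just a sketch of one common manipulation on such a file: splitting the underscore-separated date in column 3 into separate fields (the file name below is hypothetical):

```shell
# Hypothetical file holding the sample rows from the question.
cat > names.txt << 'EOF'
John Ireland 27_December_69
Mary England 13_March_55
Mike France 02_June_80
EOF

# split() breaks the third column on "_" into day/month/year,
# which can then be printed or compared independently.
awk '{split($3, d, "_"); print $1, $2, d[1], d[2], d[3]}' names.txt
```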
Hi,
I am currently filtering a file that has multiple sets of data. An example of some of the data is as follows:
Sat Oct 2 07:42:45 2010 01:33:46 R1_CAR_12.34
Sun Oct 3 13:09:53 2010 00:02:34 R2_BUS_56.78
Sun Oct 3 21:11:39 2010 00:43:21 R3_TRAIN_COACH_90.12
Mon Oct 4... (1 Reply)
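The thread is cut off, so this is only a sketch of one plausible filter: selecting records by the identifier in the last column, which avoids false hits on the timestamp fields (the file name below is hypothetical):

```shell
# Hypothetical file holding the sample log lines.
cat > sets.log << 'EOF'
Sat Oct 2 07:42:45 2010 01:33:46 R1_CAR_12.34
Sun Oct 3 13:09:53 2010 00:02:34 R2_BUS_56.78
Sun Oct 3 21:11:39 2010 00:43:21 R3_TRAIN_COACH_90.12
EOF

# Match only against the last field ($NF), the identifier column.
awk '$NF ~ /_CAR_/' sets.log
```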
Experts Good day,
I want to reduce multiple lines of the same error on the same day to only one error per day: the first line from the log.
Here is the file:
May 26 11:29:19 cmihpx02 vmunix: NFS write failed for server cmiauxe1: error 5 (RPC: Timed out)
May 26 11:29:19 cmihpx02 vmunix: NFS... (4 Replies)
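A sketch of the usual awk first-occurrence idiom, keyed on the month and day fields so the first error line of each day survives (the third sample line below is hypothetical; if different errors can occur on the same day, add the error text to the key):

```shell
# Sample log; the May 27 line is a hypothetical extra day.
cat > nfs.log << 'EOF'
May 26 11:29:19 cmihpx02 vmunix: NFS write failed for server cmiauxe1: error 5 (RPC: Timed out)
May 26 11:29:19 cmihpx02 vmunix: NFS write failed for server cmiauxe1: error 5 (RPC: Timed out)
May 27 08:01:02 cmihpx02 vmunix: NFS write failed for server cmiauxe1: error 5 (RPC: Timed out)
EOF

# Key on month+day ($1 and $2): the first line of each day passes,
# later lines for the same day are suppressed.
awk '!seen[$1 FS $2]++' nfs.log
```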
Hello,
Does anyone know an easy way to filter this type of file? I want to get everything that has a score (column 2) of 100.00 and get rid of duplicates (for example gi|332198263|gb|EGK18963.1| below), so I guess uniq can be used for this?
gi|3379182634|gb|EGK18561.1| 100.00... (6 Replies)
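A sketch combining the score test with awk's own deduplication, so a separate uniq pass isn't needed, assuming column 1 is the identifier and column 2 the score (the sample rows below are hypothetical completions of the truncated data):

```shell
# Hypothetical sample rows in the described two-column layout.
cat > hits.txt << 'EOF'
gi|3379182634|gb|EGK18561.1| 100.00 foo
gi|332198263|gb|EGK18963.1| 100.00 bar
gi|332198263|gb|EGK18963.1| 100.00 baz
gi|111111111|gb|AAA11111.1| 98.50 qux
EOF

# Keep rows whose second column is exactly 100.00, then let each
# identifier (column 1) through only once.
awk '$2 == "100.00" && !seen[$1]++' hits.txt
```

Unlike `sort | uniq`, this preserves the original line order and works in one pass.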
Hello Members,
I have a file with the following contents:
<KEYVALUE>$4,0,1,4,966505098999-->RemoteSPC: 13 SSN: 146</KEYVALUE>
<KEYVALUE>$4,123,1,4,966505050198-->RemoteSPC: 1002 SSN: 222,Sec:RemoteSPC: 1004 SSN: 222</KEYVALUE>
<KEYVALUE>$4,123,1,4,966505050598-->RemoteSPC: 1002 SSN:... (9 Replies)
Discussion started by: umarsatti
9 Replies
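A sketch for pulling the RemoteSPC/SSN pairs out of such lines with awk's match() loop, assuming every pair follows the `RemoteSPC: <n> SSN: <n>` shape shown (the file name below is hypothetical):

```shell
# Hypothetical file holding the first two complete sample lines.
cat > keys.txt << 'EOF'
<KEYVALUE>$4,0,1,4,966505098999-->RemoteSPC: 13 SSN: 146</KEYVALUE>
<KEYVALUE>$4,123,1,4,966505050198-->RemoteSPC: 1002 SSN: 222,Sec:RemoteSPC: 1004 SSN: 222</KEYVALUE>
EOF

# A line can carry more than one pair, so loop with match(),
# printing each hit and advancing past it.
awk '{ s = $0
       while (match(s, /RemoteSPC: [0-9]+ SSN: [0-9]+/)) {
           print substr(s, RSTART, RLENGTH)
           s = substr(s, RSTART + RLENGTH)
       } }' keys.txt
```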
LEARN ABOUT DEBIAN
igawk
IGAWK(1) Utility Commands IGAWK(1)
NAME
igawk - gawk with include files
SYNOPSIS
igawk [ all gawk options ] -f program-file [ -- ] file ...
igawk [ all gawk options ] [ -- ] program-text file ...
DESCRIPTION
Igawk is a simple shell script that adds the ability to have "include files" to gawk(1).
AWK programs for igawk are the same as for gawk, except that, in addition, you may have lines like
@include getopt.awk
in your program to include the file getopt.awk from either the current directory or one of the other directories in the search path.
OPTIONS
See gawk(1) for a full description of the AWK language and the options that gawk supports.
EXAMPLES
cat << EOF > test.awk
@include getopt.awk
BEGIN {
    while (getopt(ARGC, ARGV, "am:q") != -1)
        ...
}
EOF
igawk -f test.awk
SEE ALSO
gawk(1)
Effective AWK Programming, Edition 1.0, published by the Free Software Foundation, 1995.
AUTHOR
Arnold Robbins (arnold@skeeve.com).
Free Software Foundation Nov 3 1999 IGAWK(1)