agama said he found a # by itself on a line as the section separator. But when I copied the sample input and fed it through od -cb, I found that the separator line actually contained the octal byte values 343, 200, and 200 (the UTF-8 encoding of the U+3000 IDEOGRAPHIC SPACE character) terminated by the <newline> character.
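You can reproduce that check without the original file by generating the three bytes with printf and dumping them (a quick sketch; od's column spacing varies slightly between implementations):

```shell
# Generate the separator bytes seen in the sample input and dump them.
# 343 200 200 is UTF-8 for U+3000 (IDEOGRAPHIC SPACE); 012 is the <newline>.
printf '\343\200\200\n' | od -cb
```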
I believe the following meets the criteria specified, but nothing will be printed for the sample input because none of its section headers has $12 > 3.
Code:
awk 'BEGIN {line1 = 1} # Next line with no alpha-numeric is a section header.
!/[0-9a-zA-Z]/ { # Found what is assumed to be a blank line.
# The sample input had three bytes with octal values 343, 200, and 200
# followed by a <newline> as the separator between sections.
# The submitter described this as a "blank line".
# This script will use empty lines as section separators no matter what
# section separator lines are found in input files.
copy = 0 # Turn off copy mode.
line1 = 1 # The next non-"blank" line is a section header.
next
}
copy {print;next} # Copy any lines found before the next "blank" line.
line1 {if(($1 ~ /^2011/) && ($12 > 3)) {
# The text in the first post in this thread said sections were
# to be printed only for the year 2011 and $12 is > 3.
# The script in the first post was looking for years 2010-2019.
# All entries in the sample input were for 2011, but no entries
# had $12 > 3 (the only entries had $12 set to 2.1 and 1.7),
# so no entries match the criteria.
copy=1 # Turn on copy mode for the rest of the section.
# Add an empty line as a section separator, except before the 1st
# section to be printed.
if(found++ > 0) print ""
print # Print the 1st line of the section.
}
# Whether a match was found or not, don't look for another section
# header until we find another separator line.
line1 = 0
}' input