Unfortunately, the script above is functionally equivalent to:
which only writes the 2nd field from the last line of your input file into a file named junk. But I think I understand what you're trying to do now.
To be sure that I do understand what you want, please confirm or correct the following statements:
You want files named file_x for 1 <= x <= 173, each containing copies of the lines from the file named file where the value in the first field of the line falls in the corresponding range.
Then for each file named file_x you want a file named file_x_f that contains the same number of lines as the file_x file, but only contains the contents of the 2nd field of each line instead of the entire line.
In your description above you sometimes talk about files named file_x and at other times talk about files named output_x. Am I correct in assuming that "output_" was a typo and you meant "file_"?
Is this correct?
Note that since you're creating up to 346 output files from this script, the script is going to have to open and close files while it is running rather than opening everything and letting awk automatically close them when the script terminates.
Please also answer the following questions:
Do you want empty files created for files that don't have any lines that will be directed to those files?
Do existing file_x and file_x_f files need to be removed when this script starts?
If not, should lines to be written by this script replace the contents of existing files or append lines to them?
I'm hoping that you want all existing files to be either removed or overwritten by the script rather than appended to. The file handling logic is much more difficult in an awk script if you want to portably append to existing files. Given the statement print >> file_x, some systems will create file_x if it doesn't already exist; others will only create a file with print > file_x and will give an error if you try print >> file_x when file_x doesn't already exist.
The script Chubler_XL provided in the message before this should work fine as long as you don't care about the order in which lines appear in the output files and don't want to append to existing files. If you want to append rather than replace, or if you want all entries in the output files to be in the same order that they appeared in the input file, the script will be more complex.
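To make the file-handle concern concrete, here is a minimal sketch of the kind of script being discussed. The bucket width of 1000 is a pure assumption (the actual ranges weren't stated), and leftover files are removed up front so every run overwrites rather than appends:

```shell
#!/bin/sh
# Remove leftovers so a rerun starts clean (overwrite, not append).
rm -f file_[0-9]*

awk '{
    x  = int(($1 - 1) / 1000) + 1   # bucket number; width of 1000 is assumed
    f  = "file_" x
    ff = f "_f"
    print $0 >> f;  close(f)        # close after each write so we never hold
    print $2 >> ff; close(ff)       # all 346 descriptors open at once
}' file
```

Closing after every single write is slow but keeps the open-file count at two; the usual speed-up is to cache the handle and close only when the bucket changes.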
================
I apologize. Chubler_XL's script does indeed maintain order, and (as he said) you can just replace > with >> if you want to append rather than overwrite. (It is w >> file in ex that may fail if file doesn't already exist. In awk >> file is guaranteed to create the file if it didn't exist and append to it if it did exist.)
The script Chubler_XL wrote works great. I am not in need of specific ordering of lines, so it's all sorted! I was impressed by your use of arrays for the problem.
Do I understand correctly that this is a 2D array keyed by the nearest integer computed by the bucket function, with v[bucket]++ doing the counting?
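Chubler_XL's actual script isn't quoted in this thread, but the v[bucket]++ idiom generally looks like this one-dimensional sketch: a bucket function maps each value to an integer key, and the array counts hits per key (the bucket width of 10 here is purely illustrative):

```shell
printf '3\n7\n12\n15\n27\n' | awk '{
    bucket = int($1 / 10)   # map the value to an integer bucket (width 10 assumed)
    v[bucket]++             # count how many values landed in that bucket
} END {
    for (b in v) print b, v[b]
}'
```

A two-dimensional version uses a compound subscript such as v[bucket, field]; note that the order of `for (b in v)` iteration is unspecified in awk.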
Can somebody give me a cleaner way of writing the following script? I was thinking that I could use a loop in the awk statement. It works fine the way it is, but I just want the script to be cleaner.
#!/usr/bin/sh
for r in 0 1 2 3 4 5 6
do
DAY=`gdate --date="${r} days ago" +%m\/%d\/%y`... (3 Replies)
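The rest of the script is truncated above, so only the visible part can be tidied: $() is easier to read than backticks, and the slashes in the format string don't need escaping. GNU date is assumed here (installed as gdate on some systems, plain date on Linux):

```shell
#!/bin/sh
# Cleaner form of the visible loop; GNU date assumed (gdate on some systems).
for r in 0 1 2 3 4 5 6
do
    DAY=$(date --date="${r} days ago" +%m/%d/%y)
    echo "$DAY"   # placeholder for whatever the truncated script does with $DAY
done
```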
Hi,
I tried to do this in a Korn shell on an HP-UX 9000/800...
var1="a b c"
var2="d e f"
vars="var1 var2"
for i in $vars
do
for j in $i
do
echo $i $j
done
done
When run, this would output
var1 var1 (1 Reply)
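The reply is truncated above, but the cause is visible in the script: $i expands to the *name* var1, not its contents, so the inner loop iterates over that name. A sketch of the usual fix is eval-based indirection (one common approach, not the only one):

```shell
#!/bin/sh
var1="a b c"
var2="d e f"
vars="var1 var2"
for i in $vars
do
    # $i holds a variable *name*; eval substitutes its value for the inner loop
    for j in $(eval echo "\$$i")
    do
        echo "$i $j"
    done
done
```

This prints "var1 a" through "var2 f", one pair per line.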
I am having a problem with awk when I run it with a loop. It works perfectly when I echo a single line from the commandline. For example:
echo 'MFG009 9153852832' | awk '$2 ~ /^[0-9]{10}$/{print $2}'
The Awk command above will print field 2 if field 2 matches 10 digits, but when I run the loop... (5 Replies)
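Two things commonly go wrong here: invoking awk once per line from a shell loop resets awk's state on every iteration, and {10} interval expressions aren't supported by every awk. A sketch that lets awk do the looping itself, using only portable constructs (the filename data.txt is hypothetical):

```shell
# Let awk iterate over the file instead of a shell loop re-invoking it per line.
# "no non-digit characters and length 10" avoids the {10} interval expression.
awk '$2 !~ /[^0-9]/ && length($2) == 10 { print $2 }' data.txt
```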
I am pretty new to this, but I imagine what I am trying to do is possible.
I am trying to make an automated DB comparison tool that selects all columns in all tables and compares them to the same thing in another DB.
Anyway, I have created 2 files to help with this
the first file is a... (13 Replies)
If I have a file with a bunch of various numbers in one column, how can I make a script to take each number in the file and put it into a command line?
Example:
cat number_file
2
5
8
11
13
34
55
I need a loop to extract each of these numbers and put them into a command line... (1 Reply)
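A minimal sketch of that loop: read each number from the file and hand it to a command (echo stands in for the real command, which the post doesn't name):

```shell
# Read each number and pass it to a command; echo is a placeholder here.
while IFS= read -r n
do
    echo "processing $n"   # replace with your_command "$n"
done < number_file
```

For simple cases, xargs -n1 your_command < number_file does the same job in one line.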
Hi
I have a file containing the following text. The file is in tabular form with 5 fields, i.e., field1, field2 ... field5 are its columns, and there are many rows in it; say COUNT is the number of rows.
Field 1 Field2 Field3 Field4 Field5
------- ------- ... (8 Replies)
I'm trying to parse a configuration text file using awk. The following is a sample from the file I'm searching. I can retrieve the formula and recipe names easily but now I want to take it one step farther. In addition to the formula name, I would like to also get the value of the attribute... (6 Replies)
I have the data like this:
PONUMBER,SUPPLIER,LINEITEM,SPLITLINE,LINEAMOUNT,CURRENCY
IR5555,Supplier1,1,1,83.1,USD
IR5555,Supplier1,1,3,40.4,USD
IR5555,Supplier1,1,6,54.1,USD
IR5555,Supplier1,1,8,75.1,USD
IR5556,Supplier2,1,1,41.1,USD
IR5556,Supplier2,1,3,43.1,USD
... (3 Replies)
I am trying to parse a text file and send its output to another file but I am having trouble conceptualizing how I am supposed to do this in awk.
The text file is organized like so:
Name
Date
Status
Location (city, state, zip fields)
Where each of these is on a separate line in... (1 Reply)
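Since the sample is truncated, a blank line between records is an assumption; under that assumption, awk's paragraph mode handles this layout directly (the filename records.txt and the field order are taken from the description above):

```shell
# Paragraph mode: RS="" makes each blank-line-separated block one record,
# and FS="\n" makes each line of the block one field.
awk 'BEGIN { RS = ""; FS = "\n" } {
    print "name=" $1 ", date=" $2 ", status=" $3 ", location=" $4
}' records.txt
```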