Looking to improve the output of this awk one-liner
I have the following awk one-liner that I came up with last night to gather some data, and it works pretty well (apologies, I'm quite new to awk and don't know how to pretty-print this). You can see the output below.
My input file for this example: (this is uniq'd but there are 75 lines total)
and my output is:
The code logic is pretty simple:
My primary objective is to format the output as a CSV that I can just send off as a report, like this (the headers are illustrative; I'm not looking to actually print them out... unless I can):
My secondary objective is to clean up the code. For example, having to check the 8th column twice for 41015 to increment both counters seems wasteful.
Any advice is welcome, but please keep in mind this is my first time doing anything more complex than awk '{print $2,$4,$8}' file, so I'd appreciate explanations as well as code snippets.
Last edited by DeCoTwc; 08-01-2013 at 04:06 AM.
Reason: cleaning
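Since the full one-liner isn't quoted here, this is only a sketch of the "check $8 once" idea, assuming the day is in $1 and 41015 is the code in $8 as the post suggests: a single pattern can increment both counters in its action, and a BEGIN block can emit the CSV header you asked about.

```shell
# Hypothetical sample input standing in for the real 75-line file:
printf '%s\n' \
  'day1 x x x x x x 41015' \
  'day2 x x x x x x 41015' \
  'day1 x x x x x x 9' > /tmp/sample.in

awk 'BEGIN { print "day,matches,total" }          # optional CSV header
     $8 == 41015 { total++; count[$1]++ }         # one test, both counters
     END { for (d in count)
               printf "%s,%d,%d\n", d, count[d], total }' /tmp/sample.in
```

Note that `for (d in count)` visits keys in an unspecified order; pipe the result through `sort` if the report needs a stable order.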
Hello
I wrote a simple one-liner that takes the runtime *.exe files and links them to the compilation output:
find ~/DevEnv/. -name "*.exe" | xargs ls -l | awk '{ x=split($9,a,"/"); print "ln -s " $9 " " a[x] }'
and it gives me the desired output, but how can I execute this ln command on every... (1 Reply)
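A sketch of one way to run the generated commands, using hypothetical paths: `find` already prints full paths, so the `ls -l | xargs` step isn't needed; `a[n]` (the last path component) is the link name, and piping the generated command lines to `sh` executes them.

```shell
# Build a sample tree so the sketch is self-contained:
mkdir -p /tmp/DevEnv/bin
touch /tmp/DevEnv/bin/app.exe
rm -f app.exe                      # so the sketch can be re-run

find /tmp/DevEnv -name "*.exe" |
  awk '{ n = split($0, a, "/"); print "ln -s " $0 " " a[n] }' |
  sh                               # execute each generated ln -s command
```

An alternative that avoids generating shell text at all is `find ... -exec ln -s {} . \;`, though that links under the original basename only.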
Hi!! I've finished an awk exercise. Here it is:
#!/bin/bash
function calcula
{
# Print the largest file size
ls -l $1 | awk '
BEGIN {
max = $5; # Initialize the variable that will hold the maximum with the first file's size
}
{
if ($5 > max){ #... (8 Replies)
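One issue worth flagging in the posted script: BEGIN runs before any input line is read, so $5 is empty there and `max = $5` in BEGIN cannot seed the maximum. A corrected sketch of the exercise, with hypothetical demo files (in `ls -l` output, $5 is the size and $9 the name):

```shell
# Demo directory so the sketch is self-contained:
mkdir -p /tmp/demo_dir
printf '12345' > /tmp/demo_dir/big.txt
printf '1'     > /tmp/demo_dir/small.txt

ls -l /tmp/demo_dir | awk '
    NR > 1 && $5 > max { max = $5; name = $9 }   # NR > 1 skips the "total" line
    END { print name, max }'
# -> big.txt 5
```

An uninitialized awk variable compares as 0, so the first data line seeds `max` without any BEGIN block at all.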
Hello,
I have two files...
File #1
1 3
2 5
File #2
3 5 3
1 3 7
9 1 5
2 5 8
3 3 1
I need to extract all lines from File #2 where the first two columns match each line of File #1. So in the example, the output would be:
1 3 7
2 5 8
Is there a quick one-liner that would... (4 Replies)
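There is a standard two-file awk idiom for exactly this: while reading File #1 (where NR == FNR), remember each column pair as an array key; then print the File #2 lines whose first two columns were seen.

```shell
# The example files from the post:
printf '1 3\n2 5\n' > /tmp/file1
printf '3 5 3\n1 3 7\n9 1 5\n2 5 8\n3 3 1\n' > /tmp/file2

# NR == FNR is only true for the first file; a bare "(key) in keys"
# pattern with no action prints the matching line.
awk 'NR == FNR { keys[$1,$2]; next } ($1,$2) in keys' /tmp/file1 /tmp/file2
# -> 1 3 7
#    2 5 8
```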
Thanks for giving your time and effort to answer questions and helping newbies like me understand awk.
I have a huge file, millions of lines, so perl takes quite a bit of time; I'd like to convert these perl one-liners to awk.
Basically I'd like all lines with ISA sandwiched between... (9 Replies)
Hello experts,
I've been stuck on this script for three days now. Here's what I need:
I need to split a large comma-delimited file into 2 files based on the value present in the last field.
Sample: Something.csv
bca,adc,asdf,123,12C
bca,adc,asdf,123,13C
def,adc,asdf,123,12A
I need this split... (6 Replies)
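A sketch, assuming the trailing letter of the last field (C vs A in the sample) is the value being split on; the /tmp/split_* names are hypothetical. awk opens each output file once and keeps writing to it, so this stays a one-pass job.

```shell
# The sample rows from the post:
printf '%s\n' \
  'bca,adc,asdf,123,12C' \
  'bca,adc,asdf,123,13C' \
  'def,adc,asdf,123,12A' > /tmp/Something.csv

# substr($NF, length($NF)) is the last character of the last field;
# the parentheses around the filename expression are required.
awk -F, '{ print > ("/tmp/split_" substr($NF, length($NF)) ".csv") }' /tmp/Something.csv
```

If the split is on the whole last field instead, use `$NF` directly in the filename expression.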
The code below is a simplified, modified sample from a file with millions of lines, each containing hundreds of extra xxx="yyy" columns ...
<app addr="1.2.3.4" rem="1000" type="aaa" srv="server1" usr="user1"/>
<app usr="user2" srv="server2" rem="1001" type="aab" addr="1.2.3.5"/>
What's the most efficient awk... (2 Replies)
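Since the attributes appear in a different order on each line, one approach is to collect the xxx="yyy" pairs into an array keyed by attribute name, then print whichever fields you need; addr and usr below are just example picks, not fields named in the post.

```shell
# The two sample lines from the post:
printf '%s\n' \
  '<app addr="1.2.3.4" rem="1000" type="aaa" srv="server1" usr="user1"/>' \
  '<app usr="user2" srv="server2" rem="1001" type="aab" addr="1.2.3.5"/>' > /tmp/apps.txt

awk '{
    split("", kv)                                  # reset the map per line
    s = $0
    while (match(s, /[A-Za-z]+="[^"]*"/)) {        # next key="value" pair
        attr = substr(s, RSTART, RLENGTH)
        eq   = index(attr, "=")
        kv[substr(attr, 1, eq - 1)] = substr(attr, eq + 2, length(attr) - eq - 2)
        s = substr(s, RSTART + RLENGTH)            # consume the matched pair
    }
    print kv["addr"], kv["usr"]
}' /tmp/apps.txt
# -> 1.2.3.4 user1
#    1.2.3.5 user2
```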
Hi,
OS = Solaris
Can anyone advise if there is a one-liner to print specific output from a df -k output?
Running df from the command line sometimes gives me 2 lines for some volumes; redirecting the output to a file always gives 1 line for each.
Below is an example output,... (4 Replies)
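The usual cause is df wrapping a long device name onto its own line when writing to a terminal. A sketch of the standard fix: a wrapped device line has a single field, so stash it and glue it to the next line before printing. The canned sample below stands in for real `df -k` output, and the printed columns ($5 capacity, $6 mount point, per Solaris df -k) are just an example.

```shell
# Canned sample: the wrapped third/fourth lines are one logical entry.
printf '%s\n' \
  'Filesystem            kbytes    used   avail capacity  Mounted on' \
  '/dev/dsk/c0t0d0s0    1000000  400000  600000      40%  /' \
  '/dev/dsk/averyveryverylongdevicename' \
  '                     2000000  500000 1500000      25%  /data' > /tmp/df.out

awk 'NF == 1 { dev = $1; next }          # wrapped device name: remember it
     dev     { $0 = dev " " $0; dev = "" }   # rejoin with the numbers line
     { print $1, $5, $6 }' /tmp/df.out
# -> Filesystem capacity Mounted
#    /dev/dsk/c0t0d0s0 40% /
#    /dev/dsk/averyveryverylongdevicename 25% /data
```

On a live system, replace `/tmp/df.out` with `df -k |` in front of the awk command.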
Gents,
Using the following script, I got the desired changes in the output file (spread_2611.x01.new), with the complete file (spread_2611.x01) as input.
Can you please have a look at my script and improve it? :b:
Also I would like to get an additional option selecting only the records... (21 Replies)
I have a very inefficient awk script below that I need some help improving. Basically, there are three parts that, ideally, could be combined into one search and one output file. Thank you :).
Part 1:
Check if the user-inputted string contains + or - in it, and if it does, the input is written to a... (4 Replies)
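A sketch of part 1 only, since the rest of the post is truncated: a shell case pattern tests whether the string contains + or -, and routes it accordingly. The input value and the two output file names are hypothetical stand-ins.

```shell
input="abc+def"                     # hypothetical user input

case $input in
    *[+-]*) dest=/tmp/with_sign.txt ;;   # contains + or -
    *)      dest=/tmp/without_sign.txt ;;
esac
printf '%s\n' "$input" > "$dest"
```

The same test in awk would be `$0 ~ /[+-]/`; doing it in the shell avoids spawning a process for a single string.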