I'm just in my second week of working with awk, and I need a hint for the following tasks.
I want to limit my logfile from the very outset to 200 lines. All I have until now is
head -c 10K >> /home/uplog.txt | awk 'END{print NR " swap " NF$5; exit}' /home/uplog.txt;
Once the file has been read, it should print the fifth field of the very last record (" some text ") and exit right after that, because of the size of the file.
How can I set the limit to a certain number of lines?
From my self-made logfile, only the last n lines should be read, e.g. 10. I've been typing something like NR-1, but that gives just the line before the very last one. So how can I set a range like NR>=1 && NR<=5, or something even more sophisticated like a pattern range /^start/,/^stop/?
If someone can give me a hint that would be great, thanks in advance.
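To illustrate the kinds of ranges meant above, here is a small sketch (the sample file path and its contents are made up for demonstration):

```shell
# Create a small sample file for demonstration (hypothetical path).
printf '%s\n' start one two three stop four five > /tmp/uplog.txt

# Read only the last 3 lines of the file (tail is the simplest tool):
tail -n 3 /tmp/uplog.txt

# An explicit line-number range in awk: print lines 1 through 5.
awk 'NR>=1 && NR<=5' /tmp/uplog.txt

# A pattern range: print everything from /^start/ through /^stop/.
awk '/^start/,/^stop/' /tmp/uplog.txt
```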
Not sure I understand the logic of your code snippet. After appending the stdout of the head command to /home/uplog.txt, you pipe stdout (which will be empty) to awk's stdin, but at the same time make awk read the recently appended-to file /home/uplog.txt? That can't work.
Does it have to be awk? Then you need e.g. a circular buffer that you print in the END section. Did you try the tail command?
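A minimal sketch of what such a circular buffer could look like in awk (the sample file, its contents, and n=10 are just examples):

```shell
# Sample data: 25 numbered lines (hypothetical path).
seq 1 25 > /tmp/uplog.txt

# Keep the last n lines in a ring buffer and print them in the END section.
awk -v n=10 '
    { buf[NR % n] = $0 }                    # overwrite the oldest slot
    END {
        for (i = NR - n + 1; i <= NR; i++)  # oldest to newest
            if (i > 0) print buf[i % n]
    }' /tmp/uplog.txt
```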
It is not homework; I am not an IT student, it's purely for my machine here. Yes, there are people out there who write scripts for themselves, just like me. And I've been searching various sites to find a solution.
---------- Post updated at 07:10 PM ---------- Previous update was at 07:07 PM ----------
I tried the tail command as well, but I'd need that as a further step inside awk.
I agree with RudiC. Your code:
doesn't seem to be related to what you said you're trying to do. Depending on what OS you're using, this code will either:
- give you a diagnostic for an invalid head -c option-argument,
- give you a diagnostic saying that head doesn't have a -c option, or
- append the first 10000 or 10240 characters from this script's standard input to the end of /home/uplog.txt, while simultaneously having awk read whatever it finds in /home/uplog.txt.
In that last case, awk will print the number of lines it found in the file (before head started adding data to it, at some point in time while head is writing to it, or after head has finished writing to it), followed by the string swap, followed by (again depending on what OS you're using) the number of fields in the last record of the file concatenated with the contents of the 5th field of the last line, or nothing, or one but not both of those values.
Your requirements are ambiguous.
Limiting a file to 200 lines from the outset is not the same thing as reading it at some later time and discarding all but the 1st 10k characters (without checking for line boundaries).
Please give us a clear English description of what you are trying to do.
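For illustration only, keeping a file trimmed to its last 200 lines (as opposed to its first 10k characters) could be sketched like this; the path and sample data here are made up, and this may or may not be what you actually want:

```shell
log=/tmp/uplog.txt      # stand-in for /home/uplog.txt
seq 1 500 > "$log"      # sample: 500 numbered lines

# Keep only the last 200 lines: write them to a temp file,
# then replace the original in one step.
tail -n 200 "$log" > "$log.tmp" && mv "$log.tmp" "$log"

wc -l < "$log"          # the file now holds 200 lines (301..500)
```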
@Don Cragun
Much ambition means many errors; no errors means no trouble at all. I agree that this is a task for me.
From the very outset I want to cap this file at just 200 entries, nothing more. Then I want to catch the last seven, and in a further step the last thirty, values of the fifth field for a calculation. The text string " swap " could be anything else. BTW, I am not a pro, so I don't know anything about circular buffers, excuse me.
My OS here is Debian Wheezy 7.5, no server.
So I cut out the first statement, to direct it after the awk statement to stdout. As you may see, this could be a beginner's mistake, but I assure you I am right in the middle of it, because this is my third week around with awk.
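To make the fifth-field calculation concrete, here is a sketch of averaging $5 over the last seven lines; the sample data, the path, and the averaging step itself are assumptions, not the original code:

```shell
# Build a sample log: five whitespace-separated fields per line (made-up data).
awk 'BEGIN { for (i = 1; i <= 20; i++) print "a b c swap", i }' > /tmp/uplog.txt

# Average the 5th field over the last 7 lines, using tail to take the slice.
tail -n 7 /tmp/uplog.txt |
awk '{ sum += $5 } END { if (NR) printf "avg of last %d: %s\n", NR, sum / NR }'
```

The same idea works for the last thirty lines by changing the tail -n argument.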