"p" is just a variable I am using to control the execution of the code within the curly braces. If p=0, it will not enter, if p=1 it will enter.
So, we need to execute the code in the first iteration and hence "p" is initialized to 1 in the BEGIN block. Once it enters the block, we will turn it off i.e. p=0.
When the range limit is reached, we need to re-initialize end, increment n and decrement howmanytimes and so I set p=1 within the if loop.
Why not split the whole file in one go? With this line of awk, you split the whole file into files with filenames like 00000000-00001000.exp for the record range below 1000, 00001000-00002000.exp for the next range and so on.
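A minimal sketch of that flag logic on toy data (the variable names per, end, p are illustrative, and the real script also tracks n and howmanytimes):

```shell
printf '%s\n' 1 3 4 6 8 9 12 > numbers.txt
awk -v per=5 '
  BEGIN { p = 1 }                         # run the init code on the first record
  $1 >= end { p = 1 }                     # range limit reached: re-initialize
  p {
    if (fn != "") close(fn)               # avoid piling up open files
    while ($1 >= end) end += per          # advance to the range holding $1
    fn = sprintf("%08d-%08d.exp", end - per, end)
    p = 0                                 # turn the flag off again
  }
  { print > fn }
' numbers.txt
grep '' 0*.exp                            # show which record landed in which file
```

With per=5 the records 1, 3, 4 land in 00000000-00000005.exp, 6, 8, 9 in 00000005-00000010.exp, and 12 in 00000010-00000015.exp.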
My input:
Command:
$ grep '' *exp
I used a per (the period, i.e. the width of each range) of 5 since my file is very small. Change this value to 1000 for your application.
If your file grows from time to time you really need a way to start exporting where you left off the last time. That is not difficult to do.
Greetings/Groeten,
Eric
---------- Post updated at 07:03 PM ---------- Previous update was at 06:46 PM ----------
While composing my reply I left my PC for a while and did not see there was already an answer given in the meantime. Sorry, Ahamed! I hope I didn't offend you.
Still, I chose a different approach, so maybe it has some use for anyone anyway.
The OP, from what I understand, needed to split the original file into manageable chunks, hence the question ...
however ...
even if your solution is not needed here, it may be more appropriate for another problem, or it may give the OP another option that he/she has not yet thought of ... so never apologize for offering another solution ... we are all here to learn ...
I'm new on this forum (my second day!), but this was already the second time I gave a solution to a problem Ahamed already solved. I thought it was time to point out I was not competing or anything.
On the other hand: maybe I do want to compete, but not when it comes to producing the most brilliant solution (I would definitely lose that battle...), but in providing the solution with the best explanation, to give the OP knowledge to build upon instead of mere awe.
@edehont - Thanks, I will try it out soon; I am just looking at it.
@ahamed - What I had planned was something like this:
1. get the range
2. store the values within the range to a file, and save the last matching line number
3. get a new range, and start looking from the stored line number
4. repeat over the file
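The steps above can be sketched as a loop that exports one range per pass and remembers the resume point (toy data; per and the file names are illustrative):

```shell
printf '%s\n' 1 2 6 7 11 > data.txt
total=$(wc -l < data.txt)
start=0; low=0; per=5
while [ "$start" -lt "$total" ]; do
  hi=$((low + per))
  # steps 2-3: write values in [low,hi) to a range file, and remember the
  # last line number consumed so the next pass resumes there
  start=$(awk -v s="$start" -v lo="$low" -v hi="$hi" '
    NR <= s  { next }
    $1 >= hi { exit }
    { print > sprintf("%08d-%08d.exp", lo, hi); n = NR }
    END { print (n ? n : s) }             # report the resume point
  ' data.txt)
  low=$hi                                 # step 1 again: the next range
done
grep '' 0*.exp
```

Each awk pass skips the already-consumed lines (NR <= s), stops at the upper bound, and reports where it left off; step 4 is the surrounding while loop.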
The following is a shell command with multi-line output:
$ cargo build
Compiling prawn v0.1.0 (/Users/ag/rust/prawn)
error: failed to resolve: could not find `setup_panix` in `human_panic`
--> src/main.rs:14:22
|
14 | human_panic::setup_panix!();
| ... (2 Replies)
All, I appreciate any help you can offer here as this is well beyond my grasp of awk/sed...
I have an input file similar to:
&LOG
&LOG Part: "@DB/TC10000021855/--F"
&LOG
&LOG
&LOG Part: "@DB/TC10000021852/--F"
&LOG Cloning_Action: RETAIN
&LOG Part: "@DB/TCCP000010713/--A"
&LOG
&LOG... (5 Replies)
I need to search the file using the strings "Request Type", "Request Method" and "Response Type", and from the result set find the XML tags and convert them into a single line. Below are the scenarios.
cat test
Nov 10, 2012 5:17:53 AM
INFO: Request Type
Line 1.... (5 Replies)
Hi,
i have a file say file1 having following data
/abc/def:ghi/jkl/ some other text
Now I want to extract only ghi/jkl/ using sed; can someone please help me.
Thanks
Sarbjit (2 Replies)
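One hedged take with sed, assuming the pattern from the single sample line (keep what sits between the first ":" and the next space, drop the rest):

```shell
echo '/abc/def:ghi/jkl/ some other text' |
  sed 's|^[^:]*:\([^ ]*\).*|\1|'
```

Here `^[^:]*:` eats everything up to and including the first colon, `\([^ ]*\)` captures the non-space run after it, and `.*` discards the remainder, leaving ghi/jkl/.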
Hi,
I have written a shell script to get the previous line based on a pattern.
For example if a file has below lines:
----------------------------------------------
#UNBLOCK_As _per
#As per
205.162.42.92
#BLOCK_As_per
#-----------------------
#input checks
abc.com... (5 Replies)
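A hedged awk sketch for this, assuming "previous line" means exactly one line back: remember every line, and when the pattern (here the IP from the sample) matches, print the remembered one.

```shell
printf '%s\n' '#UNBLOCK_As _per' '#As per' '205.162.42.92' '#BLOCK_As_per' > rules.txt
awk '/^205\.162\.42\.92$/ { print prev }   # pattern hit: emit the stored line
     { prev = $0 }                          # always remember the current line
' rules.txt
```

On the sample above this prints "#As per", the line immediately preceding the IP.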
I'll try to explain this as best I can. Let me know if it is not clear.
I have large text files that contain data as such:
143593502 09-08-20 09:02:13 xxxxxxxxxxx xxxxxxxxxxx 09-08-20 09:02:11 N line 1 test
line 2 test
line 3 test
143593503 09-08-20 09:02:13... (3 Replies)
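A hedged sketch of one common goal with such data, assuming a record starts whenever the first field is an all-digit record number and the lines after it are continuations to be folded onto the same record (toy data adapted from the sample):

```shell
cat > records.txt <<'EOF'
143593502 09-08-20 09:02:13 xxxxxxxxxxx xxxxxxxxxxx 09-08-20 09:02:11 N line 1 test
line 2 test
line 3 test
143593503 09-08-20 09:02:13 N single line
EOF
awk '
  $1 ~ /^[0-9]+$/ { if (buf != "") print buf; buf = $0; next }  # new record
  { buf = buf " " $0 }                                          # continuation
  END { if (buf != "") print buf }                              # flush the last
' records.txt
```

This joins the three "line N test" fragments onto the 143593502 record, yielding one output line per record.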
Dear All,
I have a file with the syntax below (composed of several <log ..... </log> stanzas)
I need to search this file for a number e.g. 2348022225919, and if it is found in a stanza, copy the whole stanza/section (<log .... </log>) to another output file.
The numbers to search for are... (0 Replies)
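A hedged awk sketch for the stanza search (the sample stanzas are invented for illustration; only the <log ... </log> delimiters come from the description, and a list of numbers could be fed through this in a loop):

```shell
cat > logfile <<'EOF'
<log id="a">
nothing of interest
</log>
<log id="b">
MSISDN 2348022225919 charged
</log>
EOF
awk -v num='2348022225919' '
  /<log/    { buf = ""; hit = 0 }                   # start a fresh stanza
  { buf = buf $0 "\n"; if (index($0, num)) hit = 1 }
  /<\/log>/ { if (hit) printf "%s", buf }           # emit matching stanzas only
' logfile > matched.out
cat matched.out
```

Each stanza is buffered as it is read; when the closing tag arrives, the buffer is written to the output file only if the number was seen inside it.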
Hello,
I am hoping someone can provide some guidance on context-based search and replace: searching for a pattern and then doing a search and replace in the line that follows it. For example, I have a file that looks like this:
<bold>bold text
</italic>
somecontent
morecontent... (3 Replies)
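A hedged sed sketch for "replace in the line after the match": find the pattern, pull in the next line with the n command, and substitute there. The particular replacement shown (turning the stray </italic> into </bold>) is only an illustration, since the post is truncated before stating the desired result.

```shell
printf '%s\n' '<bold>bold text' '</italic>' 'somecontent' 'morecontent' > page.txt
sed '/<bold>/{n; s|</italic>|</bold>|;}' page.txt
```

After `n`, the pattern space holds the line following the match, so the s command applies only there; all other lines pass through untouched.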
I have a file (status.file) of the form:
valueA 3450
valueB -20
valueC -340
valueD 48
I am tailing a data.file, and need to search and modify a value
in status.file...the tail is:
tail -f data.file | awk '{ print $3, ($NF - $(NF-1)) }'
which will produce lines that look like this:
... (3 Replies)
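A hedged sketch of the update half, assuming each "key value" pair produced by the tail | awk pipeline should overwrite the matching line of status.file (done via a temporary file, since awk cannot edit in place):

```shell
cat > status.file <<'EOF'
valueA 3450
valueB -20
valueC -340
EOF
update() {  # $1 = key, $2 = new value; names are illustrative
  awk -v k="$1" -v v="$2" '$1 == k { $2 = v } { print }' status.file > status.tmp &&
    mv status.tmp status.file
}
update valueB 17
cat status.file
```

In a live setup the tail | awk output would be read line by line (e.g. `while read key val; do update "$key" "$val"; done`) rather than called once by hand.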