Nice and concise, as usual. But since the original poster has a large file, I wondered whether opening and closing a file for every line of the input file is as time-efficient as more traditional code:
You're right - it's just a habit of mine to close the output descriptors when dealing with multiple files (knowing that awk implementations have a fairly low limit on the number of simultaneously open output descriptors).
As in the OP's situation there will be only TWO output descriptors, there's no need to keep closing and reopening one.
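For illustration, here's a minimal sketch of the two styles being compared. The pattern and the file names (`input.txt`, `good.txt`, `bad.txt`, etc.) are assumptions for the example, not taken from the thread:

```shell
# Sample input (assumption for the sketch: lines containing "ok" are "good").
printf 'ok line 1\nbad line 2\nok line 3\n' > input.txt
rm -f good.txt bad.txt good2.txt bad2.txt

# Style 1: close() after every write. Safe when MANY distinct output files
# are involved, since awk limits how many descriptors can be open at once.
# Note the ">>": after close(), a plain ">" would truncate on reopen.
awk '{
    out = (/ok/ ? "good.txt" : "bad.txt")
    print >> out
    close(out)    # forces a reopen on the next write; costs time on large inputs
}' input.txt

# Style 2: keep both descriptors open. Fine here, as only TWO files are
# ever written; awk closes them automatically at exit.
awk '{ print > (/ok/ ? "good2.txt" : "bad2.txt") }' input.txt
```

Both runs produce identical output files; only the number of open/close system calls differs.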
Timing the scripts WITH the 'close' and withOUT the 'close' on the 100K-line file:
Quote:
with 'close':
real 0m3.280s
user 0m1.259s
sys 0m2.019s

without 'close':
real 0m1.491s
user 0m0.746s
sys 0m0.739s
Quote:
Originally Posted by ripat
Speaking of speed..... This implementation will try to do TWO pattern matches for EVERY input line. In essence, you need to do just ONE, and the result of your match is binary: either "Good" OR "Bad".
<modestyON>
I believe my implementation should perform a bit better
</modestyON>
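The one-match-versus-two point can be sketched like this (the pattern `alpha` and the output file names are illustrative assumptions, not the OP's actual data):

```shell
# Sample input for the sketch.
printf 'alpha\nbeta\nalpha beta\n' > sample.txt

# Two matches per line: each input line is tested against the pattern twice,
# once per rule.
awk '/alpha/  { print > "good.out" }
     !/alpha/ { print > "bad.out" }' sample.txt

# One match per line: the single test's binary result picks the output file.
awk '{ print > (/alpha/ ? "good1.out" : "bad1.out") }' sample.txt
```

Both commands split the input identically; the second simply halves the number of regex evaluations, which adds up on large files.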
Quote:
Originally Posted by ripat
Timing the two scripts on a 500,000-line file gives me this: