Search Results

Search: Posts Made By: brenoasrm
Posted By brenoasrm
What I did was stop the decompression overhead...
What I did was stop the decompression overhead and stop reading the file when the timestamp was higher than what I needed.
Another thing I realized was that every file covers only 3 consecutive hours, so I...
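A minimal sketch of that early-exit idea, assuming gzipped logs and the line layout shown in the question further down; the file name, field position, and window boundary here are illustrative:

# $3 looks like "[13/Jul/2018:21:51:31" in the sample line;
# strip the bracket and compare as a string, which is valid while
# the date prefix is constant (each file covers one 3-hour span)
zcat access.log.gz | awk -v end='13/Jul/2018:23:00:00' '
    {
        ts = substr($3, 2)
        # past the window: stop reading; exit also ends zcat via SIGPIPE,
        # so the rest of the file is never decompressed
        if (ts > end) exit
        print
    }'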
Posted By brenoasrm
I am not hitting the disk limit, I saw with iotop and...
I am not hitting the disk limit, I saw with iotop and htop: it's using less than 10 MB/s the majority of the time; just in the beginning it reaches 60 MB/s, I guess. It's a spinning disk, SATA. I will write to...
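For reference, iotop can take that kind of measurement non-interactively; the sample count and interval here are arbitrary:

# -o: only processes doing I/O, -b: batch mode (non-interactive),
# -n 5: five samples, -d 2: two seconds apart (needs root)
sudo iotop -o -b -n 5 -d 2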
Posted By brenoasrm
I have ~120GB of compressed files per day. It's...
I have ~120GB of compressed files per day. It's taking many hours to process:


real 217m15.559s
user 763m17.030s
sys 87m40.926s


I didn't know using tail -n 1 would decompress...
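The reason tail -n 1 is expensive here is that gzip streams are not seekable, so producing even the last line forces a full decompression pass; the pattern in question (file name illustrative):

# the entire file is decompressed just to yield one line;
# tail throws away everything but the final line
zcat access.log.gz | tail -n 1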
Posted By brenoasrm
File sizes vary; for instance, some files are...
File sizes vary; for instance, some files are ~300MB, others ~1GB or ~2GB.
My disk is not operating at full capacity while running awk, so I think your solution to read multiple files at once...
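Since the disk has headroom, several files can be decompressed and filtered concurrently; a minimal sketch with xargs -P, where the job count is arbitrary and filter.awk stands in for whatever per-file script is used:

# run up to 4 decompress-and-filter jobs at a time,
# writing one output file per input file
ls *.gz | xargs -P 4 -n 1 sh -c 'zcat "$1" | awk -f filter.awk > "$1.out"' sh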
Posted By brenoasrm
First, like you've said, comparing strings should be...
First, like you've said, comparing strings should be faster than converting to time for comparing later; that's exactly what I think happens, that's why my code is the way it is, and it's working just...
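For contrast, a gawk sketch of the approach being avoided: converting every timestamp to epoch seconds before comparing (mktime is a gawk extension; the epoch value and field position are illustrative). The per-line split and mktime calls are exactly the overhead a plain string comparison sidesteps:

# map month names to numbers once, then convert each line's timestamp
gawk -v start=1531518660 '
    BEGIN {
        split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
        for (i in m) mon[m[i]] = i
    }
    {
        # $3 looks like "[13/Jul/2018:21:51:31" in the sample line
        split(substr($3, 2), t, "[/:]")
        ts = mktime(t[3] " " mon[t[2]] " " t[1] " " t[4] " " t[5] " " t[6])
        if (ts >= start) print
    }'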
Posted By brenoasrm
How to make an awk command faster for a large amount of data?
I have nginx web server logs with all the requests that were made, and I'm filtering them by date and time.
Each line has the following structure:

127.0.0.1 - [13/Jul/2018:21:51:31 +0000] xyz.com GET...
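A baseline of the kind of filter being discussed, assuming the layout above; the window boundaries are illustrative, and the string comparison is only safe while the date prefix is constant:

# keep lines whose timestamp falls inside a same-day window;
# $3 is the bracketed timestamp in the sample line above
zcat access.log.gz | awk -v s='13/Jul/2018:21:00:00' -v e='13/Jul/2018:23:59:59' \
    '{ ts = substr($3, 2); if (ts >= s && ts <= e) print }'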
Showing results 1 to 6 of 6

 