Hi
This helps.
But a concern here is that I would need a while loop to read characters in bulk until I hit a "\n" character, since my aim is to read the file line by line.
The big advantage of using streams is that they are buffered, whereas raw file descriptors are not. What this means is that with the unbuffered functions, every call to read() has to go out to the physical disk and fetch data.
With the buffered functions, the library allocates a block of memory internally (I believe 8 KB, but I'm not sure) and when you call fread() or fgets() it only hits the disk if there isn't enough data already in the buffer. This is much faster.
By the way, you can change the buffer size with setvbuf(), and you can use fgets() to get the next line (up to the next '\n') rather than a fixed number of characters.
To get the fastest possible speed, as mentioned above, you would have to use a big buffer, read a large chunk of the file at once, and then go through it looking for line ends. This avoids an extra copy of the data: with stdio it is copied from disk into the library's buffer and then out again into yours.
But I'd try just using fgets() first as it probably is fast enough.
Stevens's book on advanced UNIX programming has a table showing read performance on files.
Since you are returning lines, somewhere down inside the C stdio library something like fgets() is being called, and it in turn calls read() to fill a buffer. Stevens's table shows that a buffer size of 4096 is probably close to optimal. Other examples show that using struct statvfs's f_frsize member (the block size of the filesystem in question) will also help.
See man setvbuf.
The other components of speed are the I/O request queue length, on-board disk caching, and how "far above" the native read() call your code operates. The first two are system related. If you call the low-level read() routine directly and parse out your own lines, it will probably speed things up; use 4096 or f_frsize as the number of bytes to read.