I have a log file which has 5K lines in it. I need to send only 200 lines at a time to an application, and delete those 200 lines from the log file after they have been fed to the application.
The script should keep running until all 5K lines have been processed.
If the log file might still grow while you are handling it, doing this in a shell script sounds rather dangerous. From Perl or Python or C, you might be able to use some file locking primitives to coordinate with processes which might be writing to the file concurrently.
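For illustration only, here is a sketch of that kind of coordination done from the shell with the flock(1) utility from util-linux. The lock file name and the writer command are placeholder assumptions, and every writer would have to be changed to take the same lock for this to help:

    # Writer side: append to the log only while holding an exclusive lock
    flock /tmp/app.log.lock -c 'echo "new log entry" >> app.log'

    # Reader side: hold the same lock while slicing lines off the file,
    # so no append can land between the read and the delete
    flock /tmp/app.log.lock -c 'head -n 200 app.log; sed -i 1,200d app.log'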
Having said that, consider this (lack of) proof of concept:
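The original snippet is not reproduced here, so what follows is a minimal reconstruction of the idea, not the author's actual code. It assumes GNU sed for in-place editing and uses feed_app as a stand-in for whatever program consumes the lines:

    #!/bin/sh
    # Repeatedly peel 200 lines off the top of the log, feed them to the
    # application, then delete exactly those lines from the file.
    # app.log and feed_app are placeholders for the real file and program.
    while [ -s app.log ]; do              # loop while the file is non-empty
        head -n 200 app.log | feed_app    # send the next chunk (or whatever is left)
        sed -i 1,200d app.log             # drop the lines just sent (GNU sed)
    done

Note that this is exactly the unsafe pattern warned about above: any line appended between the head and the sed is silently lost, which is why the locking shown earlier matters if the file can still grow.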