You're running grep, awk, and sed hundreds of thousands of times to process thousands of lines. I think your script may need a rewrite. You could probably do it all in one awk instance.
Can you show the input data you have, and the output you want?
Here's the input data. Note that what follows is what I refer to as a chunk; in this particular case, the chunk belongs to the service called "MEMORY_CHECK".
So there's a file that contains several of these chunks for several services.
Let's say the file is called status.log. For processing, I want to pull out all chunks whose service_description=MEMORY_CHECK, and then pull out other attributes of each chunk as needed.
An attribute might be, for example: what is the hostname that this MEMORY_CHECK is on? In this case, the hostname would be "sky.log.net".
Or: what is the plugin output? In this case, it would be "CRITICAL: Process was not found".
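Assuming each chunk in status.log is a blank-line-separated block of key=value lines (the real layout may differ), one awk pass can collect each chunk's attributes and print the ones wanted, instead of re-running grep/sed per attribute. The sample file below just mirrors the values quoted above:

```shell
# Illustrative sample; the real status.log layout may differ.
cat > status.log <<'EOF'
host_name=sky.log.net
service_description=MEMORY_CHECK
plugin_output=CRITICAL: Process was not found

host_name=other.host
service_description=DISK_CHECK
plugin_output=OK: disk fine
EOF

# One awk pass: gather each chunk's key=value pairs into an array,
# print the attributes we want when the chunk is for MEMORY_CHECK.
result=$(awk '
    /^[[:space:]]*$/ {                 # blank line ends a chunk
        if (attr["service_description"] == "MEMORY_CHECK")
            print attr["host_name"] " : " attr["plugin_output"]
        split("", attr)                # portable way to empty the array
        next
    }
    /=/ {
        key = $0; sub(/=.*/, "", key)        # text before the first =
        val = $0; sub(/^[^=]*=/, "", val)    # text after it (may contain =)
        attr[key] = val
    }
    END {                              # flush the last chunk, if any
        if (attr["service_description"] == "MEMORY_CHECK")
            print attr["host_name"] " : " attr["plugin_output"]
    }
' status.log)
```

Because the whole chunk is buffered before printing, the attribute order inside a chunk doesn't matter.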
Hello Everybody,
Could anyone please tell me how to get ssh to work without asking for passwords? (I want to do an ssh <hostname> without getting a request for a password, but getting connected straight away.)
I have attempted the following but to no avail :( ...
I tried to generate a SSH... (5 Replies)
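The standard approach is public-key authentication: generate a key pair, install the public key on the remote host, and ssh then logs in without a password prompt. A sketch only; `user@hostname` is a placeholder, and keys normally live in `~/.ssh` rather than a temp directory:

```shell
# Scratch directory so this sketch doesn't touch a real ~/.ssh.
tmp=$(mktemp -d)

# 1. Generate a key pair non-interactively (-N '' = empty passphrase,
#    which is what allows password-less login; -q = quiet).
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_ed25519"

# 2. Install the public key on the remote host (run manually; it appends
#    the key to ~/.ssh/authorized_keys over there):
#      ssh-copy-id -i "$tmp/id_ed25519.pub" user@hostname
# 3. After that, this connects without a password prompt:
#      ssh -i "$tmp/id_ed25519" user@hostname
```

An empty passphrase trades security for convenience; ssh-agent is the usual middle ground if the key should stay encrypted on disk.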
Dear all
I have a group of input lines which look like this
These input lines are placed in a file named phonelines.txt, and there is a script which matches $4 and $5 against country codes placed in another file named country-codes.txt, whose contents are:
Italy 39
Libyana 21892
Thuraya... (12 Replies)
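The whole lookup fits in one awk pass: load country-codes.txt into an array keyed by code, then test $4 and $5 of each phone line against it. Since the actual phonelines.txt lines were not shown, the sample lines below are hypothetical, with the codes placed in fields 4 and 5 as described:

```shell
# Hypothetical input; only the $4/$5 positions are taken from the post.
cat > phonelines.txt <<'EOF'
line1 a b 39 0612345 x
line2 c d 0987654 21892 y
EOF
cat > country-codes.txt <<'EOF'
Italy 39
Libyana 21892
EOF

# NR==FNR is true only while reading the first file: build code->country.
# Then each phone line is matched with two array lookups, no grep needed.
result=$(awk '
    NR == FNR { code[$2] = $1; next }
    {
        if ($4 in code)      print $0, code[$4]
        else if ($5 in code) print $0, code[$5]
    }
' country-codes.txt phonelines.txt)
```

Array membership (`$4 in code`) is a hash lookup, so this stays fast even with thousands of codes.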
One of our servers runs Solaris 8 and does not have "ls -lh" as a valid command. I wrote the following script to make the ls output easier to read and emulate "ls -lh" functionality. The script works, but it is slow when executed on a directory that contains a large number of files. Can anyone make... (10 Replies)
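The usual cause of that slowness is spawning a converter process per file; rewriting the size column of `ls -l` in a single awk pass keeps it to one process for the whole directory. A sketch, assuming the size is field 5 of `ls -l` output (true for common ls implementations, though rebuilding the line through awk collapses the original column alignment):

```shell
# Emulated "ls -lh": one ls, one awk, regardless of file count.
lslh() {
    ls -l "$@" | awk '
        NF >= 5 && $5 ~ /^[0-9]+$/ {
            split("B K M G T", unit, " ")
            n = $5; i = 1
            while (n >= 1024 && i < 5) { n /= 1024; i++ }
            $5 = sprintf("%.1f%s", n, unit[i])
        }
        { print }
    '
}

# The same arithmetic on a single number, for illustration:
hum() {
    awk -v n="$1" 'BEGIN {
        split("B K M G T", u, " "); i = 1
        while (n >= 1024 && i < 5) { n /= 1024; i++ }
        printf "%.1f%s", n, u[i]
    }'
}
```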
I am processing some terabytes of information on a computer having 8 processors (each with 4 cores) with a 16GB RAM and 5TB hard drive implemented as a RAID. The processing doesn't seem to be blazingly fast perhaps because of the IO limitation.
I am basically running a perl script to read some... (13 Replies)
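When the work splits naturally per file, `xargs -P` (a GNU/BSD extension) is a simple way to keep several cores busy. A sketch: `wc -l` stands in for the real perl command, and the sample files are illustrative:

```shell
# Two tiny stand-in input files.
mkdir -p work
printf 'a\nb\n' > work/f1.txt
printf 'c\n'    > work/f2.txt

# -n 1: one file per worker invocation; -P 8: up to 8 workers at once.
results=$(printf '%s\n' work/*.txt | xargs -n 1 -P 8 wc -l)
```

For an I/O-bound job like the one described, more workers may not help past the point where the RAID saturates; lowering or raising `-P` while watching throughput is the practical way to find that point.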
Hi all,
In bash scripting, this is how I usually read files:
cat $file | while read line; do
...
done
However, it's a very slow way to read a file line by line.
E.g. in a file that has 3 columns and fewer than 400 rows, like this:
I run the next script:
cat $file | while read line; do ## Reads each... (10 Replies)
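Redirecting the file into the loop avoids the extra `cat` process, and in bash it also avoids the subshell a pipeline creates (so variables set in the loop survive). For simple column work on a few hundred rows, one awk call is faster still. A minimal sketch; data.txt and its column layout are illustrative:

```shell
# Illustrative 3-column input.
printf 'a 1 x\nb 2 y\n' > data.txt

# Redirection instead of cat|while: no pipeline, no subshell,
# so $count keeps its value after the loop.
count=0
while read -r c1 c2 c3; do
    count=$((count + 1))
done < data.txt

# One awk process handles all rows at once, e.g. summing column 2:
total=$(awk '{ sum += $2 } END { print sum }' data.txt)
```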
Hi,
I have a large number of input files with two columns of numbers.
For example:
83 1453
99 3255
99 8482
99 7372
83 175
I only wish to retain lines where the numbers fulfil two requirements. E.g.:
column 1 = 83
1000 <= column 2 <= 2000
To do this I use the following... (10 Replies)
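Both requirements fit in a single awk condition, assuming they mean "column 1 equals 83 and column 2 lies between 1000 and 2000 inclusive"; awk prints every line for which the condition is true:

```shell
# The sample data from the post.
cat > nums.txt <<'EOF'
83 1453
99 3255
99 8482
99 7372
83 175
EOF

# All conditions in one expression; no loop, no temporary files.
kept=$(awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' nums.txt)
```

On the sample data, only `83 1453` survives: `83 175` matches the first condition but fails the range.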
Hi All,
I'm new to the forum and to bash scripting. I did some stuff with VB.net, Batch, and VBScripting in the past, but because I shifted over to Linux, I am learning to script in Bash at this moment. So bear with me if I seem to script like a newbie, that's just because I am ;-)
OK, I... (9 Replies)
Discussion started by: cornelvis
LEARN ABOUT NETBSD
pack_fopen_chunk(3alleg4)                 Allegro manual                pack_fopen_chunk(3alleg4)

NAME
pack_fopen_chunk - Opens a sub-chunk of a file. Allegro game programming library.
SYNOPSIS
#include <allegro.h>
PACKFILE *pack_fopen_chunk(PACKFILE *f, int pack);
DESCRIPTION
Opens a sub-chunk of a file. Chunks are primarily intended for use by the datafile code, but they may also be useful for your own file routines. A chunk provides a logical view of part of a file, which can be compressed as an individual entity and will automatically insert and check length counts to prevent reading past the end of the chunk. The PACKFILE parameter is a previously opened file, and `pack' is a boolean parameter which will turn compression on for the sub-chunk if it is non-zero. Example:
PACKFILE *output = pack_fopen("out.raw", "w!");
...
/* Create a sub-chunk with compression. */
output = pack_fopen_chunk(output, 1);
if (!output)
abort_on_error("Error saving data!");
/* Write some data to the sub-chunk. */
...
/* Close the sub-chunk, recovering parent file. */
output = pack_fclose_chunk(output);
The data written to the chunk will be prefixed with two length counts (32-bit, a.k.a. big-endian). For uncompressed chunks these will both
be set to the size of the data in the chunk. For compressed chunks (created by setting the `pack' flag), the first length will be the raw
size of the chunk, and the second will be the negative size of the uncompressed data.
To read the chunk, use the following code:
PACKFILE *input = pack_fopen("out.raw", "rp");
...
input = pack_fopen_chunk(input, 1);
/* Read data from the sub-chunk and close it. */
...
input = pack_fclose_chunk(input);
This sequence will read the length counts created when the chunk was written, and automatically decompress the contents of the chunk if it
was compressed. The length will also be used to prevent reading past the end of the chunk (Allegro will return EOF if you attempt this),
and to automatically skip past any unread chunk data when you call pack_fclose_chunk().
Chunks can be nested inside each other by making repeated calls to pack_fopen_chunk(). When writing a file, the compression status is
inherited from the parent file, so you only need to set the pack flag if the parent is not compressed but you want to pack the chunk data.
If the parent file is already open in packed mode, setting the pack flag will result in data being compressed twice: once as it is written
to the chunk, and again as the chunk passes it on to the parent file.
RETURN VALUE
Returns a pointer to the sub-chunked PACKFILE, or NULL if there was some error (e.g. you are using a custom PACKFILE vtable).
SEE ALSO
pack_fclose_chunk(3alleg4), pack_fopen(3alleg4)

Allegro version 4.4.2                                                   pack_fopen_chunk(3alleg4)