Rules for these UNIX forums:
- Don't ask about homework
- Don't ask about homework
- Don't ask about homework
Nonetheless, this is a really important question that should be answered, because, by golly, I was working on it today.
Actually, my problem is slightly different. I'm augmenting a shared-library routine that parses the command-line arguments. My task is to pass those arguments to sprintf(). How do I make sure:
- I have enough space allocated for all the arguments?
- I don't write past the end of memory?
- I don't run out of memory?
Imagine a directory with 10,000 files in it. (This happens quite a lot in bioinformatics and cluster computing.) Then I do an "echo *". Let's say I'm augmenting the command "echo". I have to make sure it can handle 10,000 arguments, which means dynamically allocating a very long string.
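One safe way to do that is two passes over argv: measure first, then allocate exactly once. Here's a rough sketch in C; join_args and the space-separated output format are my own choices for illustration, not anything the shell or libc dictates:

```c
#include <stdlib.h>
#include <string.h>

/* Join argv[1..argc-1] into one space-separated string.
 * Pass 1 measures the total size needed; pass 2 copies.
 * Returns a malloc'd string the caller must free, or NULL
 * if the allocation fails. */
char *join_args(int argc, char *argv[])
{
    size_t total = 1;                  /* room for the trailing '\0' */
    for (int i = 1; i < argc; i++)
        total += strlen(argv[i]) + 1;  /* argument plus a separator */

    char *buf = malloc(total);
    if (buf == NULL)
        return NULL;                   /* out of memory: caller decides */

    char *p = buf;
    for (int i = 1; i < argc; i++) {
        size_t len = strlen(argv[i]);
        memcpy(p, argv[i], len);
        p += len;
        *p++ = (i < argc - 1) ? ' ' : '\0';
    }
    if (argc <= 1)
        buf[0] = '\0';                 /* no arguments: empty string */
    return buf;
}
```

Because the buffer is sized from the actual arguments, you can never write past the end no matter how many files the glob expands to.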
One way is to statically allocate a large chunk of memory, and if it's not enough, report that the program is out of memory or that there are too many arguments and process what can be processed.
Another way is to dynamically allocate memory byte-by-byte (or, in my case, argument-by-argument). You can do this easily enough with the realloc() call. A good realloc() implementation is very efficient and actually reserves memory in pools, so realloc'ing one more byte often does nothing more than increment an internal counter. The realloc() call can be used on a NULL pointer in place of malloc(), so you can just use it from the start.
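The argument-by-argument version looks something like this. The append_arg helper and its space separator are my own framing; the two things worth copying are starting from a NULL pointer (realloc(NULL, n) is defined to act like malloc(n)) and never assigning realloc's result straight back into the buffer pointer:

```c
#include <stdlib.h>
#include <string.h>

/* Append src to a growing space-separated string.
 * *lenp tracks the current length, excluding the '\0'.
 * Start with buf == NULL and *lenp == 0: realloc(NULL, n)
 * behaves exactly like malloc(n). */
char *append_arg(char *buf, size_t *lenp, const char *src)
{
    size_t add = strlen(src);
    char *tmp = realloc(buf, *lenp + add + 2); /* +1 space, +1 '\0' */
    if (tmp == NULL) {
        free(buf);   /* on failure the old block is still live, */
        return NULL; /* so free it here rather than leak it     */
    }
    if (*lenp > 0)
        tmp[(*lenp)++] = ' ';
    memcpy(tmp + *lenp, src, add + 1);         /* copies the '\0' too */
    *lenp += add;
    return tmp;
}
```

On failure realloc() returns NULL but leaves the original block allocated, which is why the result goes into tmp first; writing buf = realloc(buf, ...) would leak the old block on failure.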
If the realloc() call is inefficient, you can create the same effect yourself by allocating memory in chunks. You allocate, say, 256 bytes at a time, and you keep one counter tracking how long the buffer actually is and another tracking how much of it is in use. Every time you read in (or, in my case, copy in) a new value (or argument), you check whether you have enough space in the buffer. If you don't, you go out and allocate some more.
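A minimal sketch of that chunked scheme, assuming a 256-byte growth step; the strbuf names and the struct layout are invented for the example:

```c
#include <stdlib.h>
#include <string.h>

#define CHUNK 256   /* grow the buffer this many bytes at a time */

struct strbuf {
    char  *data;
    size_t cap;     /* how long the buffer actually is  */
    size_t len;     /* how much is in use, sans the '\0' */
};

/* Make sure there is room for `need` more bytes,
 * growing in CHUNK-sized steps; 0 on success, -1 on OOM. */
static int strbuf_reserve(struct strbuf *sb, size_t need)
{
    if (sb->len + need <= sb->cap)
        return 0;                    /* already enough space */
    size_t newcap = sb->cap;
    while (sb->len + need > newcap)
        newcap += CHUNK;
    char *tmp = realloc(sb->data, newcap);
    if (tmp == NULL)
        return -1;
    sb->data = tmp;
    sb->cap  = newcap;
    return 0;
}

/* Append a string, growing only when the chunk runs out. */
int strbuf_append(struct strbuf *sb, const char *src)
{
    size_t add = strlen(src);
    if (strbuf_reserve(sb, add + 1) != 0)
        return -1;
    memcpy(sb->data + sb->len, src, add + 1);  /* includes '\0' */
    sb->len += add;
    return 0;
}
```

Most appends now cost only a length check and a memcpy; realloc() is hit once per 256 bytes instead of once per argument.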