I am working on an idea where I want to process some information from a command dump.
Using the dump, I will search for a string. If the string is found, a
different/associated string must be posted to an output/logfile, e.g.
'Find "cookie jar"' then 'echo "carpool/tomorrow" > logfile'
It does not stop at one string to search for; there will be around 100 different
strings, and each one has some other content assigned to it, so it
should be some clever loop.
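For the "100 strings, each with its own message" part, AWK's associative arrays behave much like PERL hashes. Here is a minimal sketch of what I mean, assuming a tab-separated map file; all file names here are invented for illustration:

```shell
#!/bin/sh
# Sketch only: file names and the tab-separated map format are my
# assumptions. patterns.map holds lines like:
#   search string<TAB>message to post

# Sample data so the sketch runs on its own:
printf 'cookie jar\tcarpool/tomorrow\n' > patterns.map
printf 'noise line\nthe cookie jar is here\n' > dump.txt

# AWK's associative array is the hash table: the first file loads the
# map, then every dump line is checked against every remaining key.
awk -F'\t' '
    NR == FNR { msg[$1] = $2; next }   # reading patterns.map
    {
        for (key in msg)
            if (index($0, key) > 0) {  # fixed-string match, no regex
                print msg[key]
                delete msg[key]        # post each message only once
            }
    }
' patterns.map dump.txt >> logfile
```

One pass over the dump handles all the keys, so adding a new string/message pair is just one more line in the map file rather than another grep.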
I COULD have written one grep for each value, but since the list is growing by the day,
I find that a waste of lines and bytes.
To prevent an infinite loop, it should remove what it has already found from the
source list of what to search for, or somehow mark it as processed.
(The source list is overwritten with an updated one each time the PC boots.)
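For the "mark as processed" part, since the source list is rebuilt at boot anyway, one option I can imagine is a separate "done" file that filters the map before each pass. Again a sketch, with all file names invented:

```shell
#!/bin/sh
# Sketch only: file names are placeholders. done.list records keys
# already handled, so a later run will not report them again.

printf 'cookie jar\tcarpool/tomorrow\nred alert\tcall home\n' > patterns.map
printf 'cookie jar\n' > done.list   # pretend this key was found earlier

# Drop already-processed keys from the map before searching.
awk -F'\t' '
    NR == FNR { done[$1] = 1; next }   # load done.list
    !($1 in done)                      # pass only unprocessed map lines
' done.list patterns.map > pending.map
```

After each hit, the core script would append the key to done.list; deleting done.list (or rebooting, which rewrites the source list) resets everything.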
The tricky bit for me is how to arrange it with a HASH table. I've done
something similar in PERL, but in BASH/AWK I am unable to do this. The system is very
limited, and at the moment it is not permitted to expand its available apps by a single byte.
I've drawn out how I think the design is going to work.
In itself, there are several commands whose output it will search, but I am used to
having a launch script that then calls a "core search" script with the variables that
tailor it to that task: its output file, a predefined variables file, and a logfile.
It gives control and scalability ;-)
(The pink area is where the core script loops in itself to search for every available word)
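The launcher/core split I am describing could look roughly like this; every file and variable name below is invented for the sketch:

```shell
#!/bin/sh
# Sketch of the launcher/core split; all names are placeholders.

# --- vars.conf: the predefined variables file, one per task ---
cat > vars.conf <<'EOF'
DUMP=dump.txt
MAP=patterns.map
LOG=logfile
EOF

# --- what the core search script would do with it ---
. ./vars.conf                  # load the predefined variables
echo "searching $DUMP with $MAP, logging to $LOG"
```

The launcher only picks which vars.conf to hand over; the core script stays identical for every command dump, which is where the control and scalability come from.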
Is this hard to accomplish?
I've written some BASH scripts in the past, as well as some BATCH and PERL scripts,
though nothing close to this complexity.