Hi.
If you want to use the single-process perl code as is, you could make use of
GNU parallel. From the GNU project page, in part:
Quote:
If you use xargs and tee today you will find GNU parallel very easy to use as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel.
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
--
GNU Parallel - GNU Project - Free Software Foundation
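To illustrate the xargs-like interface the quote describes, a trivial example (the -k option keeps output in input order):
Code:
seq 3 | parallel -k echo item
# prints "item 1", "item 2", "item 3", in that order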
From my point of view, you'd split the input file of URLs into smaller files, then feed each chunk to its own process via parallel. You could instead have each process skip to its own range of lines in the single input file, but that design seems error-prone.
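For instance, a minimal sketch; the file names, the 100-line chunk size, and the 4-way job count are all assumptions, and fetch.pl stands in for your perl script, taken here to accept a file of URLs as its argument:
Code:
# Split the URL list into 100-line chunks: chunk.aa, chunk.ab, ...
split -l 100 urls.txt chunk.
# Run one copy of the perl script per chunk, at most 4 at a time.
parallel -j 4 ./fetch.pl ::: chunk.*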
Not knowing any more about the process except that it uses curl, it would seem to be network-bound. Whether parallelism gains anything depends on where the bottleneck is: if the link's bandwidth is already saturated, concurrent fetches just add overhead and might make the situation worse; if the cost is mostly per-request latency, running several fetches at once can help considerably. Seems like a good example for a benchmark.
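A first-order comparison needs nothing more than time(1), with the same assumed names as above:
Code:
# Baseline: one process works through the whole list.
time ./fetch.pl urls.txt
# Candidate: four jobs, one chunk each.
time parallel -j 4 ./fetch.pl ::: chunk.*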
Best wishes ... cheers, drl