Jim's suggestion is a good one; however, if the files are large and each client reads them more than a few times, you may just create a bottleneck for yourself. If each client only needs to read a file once, then Jim's suggestion is fine.
Perhaps your write to the central file server would be best done with rsync, so that only the data that has changed gets sent across the network each time you need to update it (assuming the source file is not removed and rewritten from scratch each time).
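As a minimal sketch of that push, assuming the data lives in /srv/source/datafile on the writing machine and the central server is reachable over ssh as "fileserver" (all names and paths here are placeholders, not your actual setup):

    import subprocess

    # rsync's delta-transfer algorithm means only the changed parts of the
    # file cross the network; -a preserves attributes, -z compresses,
    # --partial keeps an interrupted transfer so it can be resumed.
    subprocess.run(
        ["rsync", "-az", "--partial",
         "/srv/source/datafile",
         "fileserver:/export/shared/datafile"],
        check=True,
    )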
You also have to consider how your clients will behave if the file is incomplete when they try to read it. Perhaps use a flag file on the central server, so that clients only read the data file while the flag is present. Your write process would then delete/rename the flag file before it starts rewriting the data file, and recreate it once the write is complete.
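A rough sketch of that dance, assuming the central area is mounted at /export/shared on both the writer and the clients (again, the paths are only placeholders):

    import os
    import shutil

    DATA = "/export/shared/datafile"   # placeholder path on the shared area
    FLAG = DATA + ".ready"             # readers only trust DATA while this exists

    # Writer: pull the flag, rewrite the data, then put the flag back.
    if os.path.exists(FLAG):
        os.remove(FLAG)
    shutil.copy2("/srv/source/datafile", DATA)   # or the rsync step above
    open(FLAG, "w").close()

    # Client: only read the data file while the flag is present.
    if os.path.exists(FLAG):
        with open(DATA, "rb") as f:
            payload = f.read()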
When you say that the files are remote, if you mean physically remote (different city/country etc.) then your biggest issue will be the network link.
Some things to consider:-
- Where is the data source?
- Where are the clients?
- How much data are we talking?
- How often will it be written?
- How often will it be read?
- Will a file be ignored if it has not been updated?
- What is the network like?
These need to be answered however you choose to implement this.
There are (probably expensive) technologies that can replicate data between remote sites if that is your need, but it then depends on what you already have available, e.g. SANs, ZFS/NAS, etc.
Can you expand a little more on these?
Kind regards,
Robin