Are you saying that all your servers have direct internet access and that they all download patches when the command is issued?
As others have already said, there needs to be an agreed set of patches that you are installing; otherwise your testing does not match what you put into production.
For AIX and HP-UX, I pull down a block of fixes to a directory for testing and then copy that same directory to the production servers. I don't do a fresh download, for exactly that reason. There may be a neater way, but it's not a huge overhead.
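The idea is easy to sketch in a few lines of shell. This is only an illustration of the "freeze one fix set, copy that exact set everywhere" pattern; the directory and file names are made up, and stand-in files take the place of real AIX/HP-UX fixes:

```shell
set -eu

# Staging area on the test box: the fix block is downloaded here ONCE.
stagedir=$(mktemp -d)
stage="$stagedir/fixes-set1"
mkdir -p "$stage"

# Stand-ins for the downloaded fixes (real ones would come from the vendor).
printf 'fix-one\n' > "$stage/IV12345.bff"
printf 'fix-two\n' > "$stage/IV67890.bff"

# After testing against $stage, copy that exact directory to production --
# no second download, so production gets byte-for-byte what was tested.
prodparent=$(mktemp -d)
cp -R "$stage" "$prodparent/fixes-set1"

# Verify the production copy is identical to the tested set.
diff -r "$stage" "$prodparent/fixes-set1" && echo "production matches tested set"
```

In real use the copy step would be an `rsync` or `scp` to each production host, but the point is the same: one download, one frozen set.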
For Red Hat Linux we use their Satellite Server, which means that servers with no business need for it are not directly on the internet, and it reduces our public traffic (which we pay for by usage). Satellite lets us set up cloned channels (as they call them) into which we move fixes as we require; each OS still does a network pull, but from this controlled list, so we can be sure that production gets the same as testing. We then update the patches in the cloned channel, start testing the next set of updates, and round we go again.
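The clone-and-promote cycle can be mimicked with plain files to show why it works. This is a toy model, not Satellite's actual tooling; the channel names and errata IDs are invented:

```shell
set -eu

work=$(mktemp -d)

# The upstream channel keeps receiving new errata over time.
printf 'RHSA-1\nRHSA-2\nRHSA-3\n' > "$work/upstream"

# Clone a frozen copy for the test servers to pull from.
cp "$work/upstream" "$work/clone-test"

# Upstream moves on -- but the test clone stays frozen.
printf 'RHSA-4\n' >> "$work/upstream"

# Testing passed: promote the SAME frozen list to the production clone.
cp "$work/clone-test" "$work/clone-prod"

# Production sees exactly what was tested, not the newer upstream content.
diff "$work/clone-test" "$work/clone-prod" && echo "prod channel matches tested channel"
```

The key property is that the later `RHSA-4` never reaches production until it has been through its own test cycle in the next iteration.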
I think CentOS has the same facility, and I'm sure others do too.
You can even use Red Hat Satellite for Solaris patching (no roll-back, though).
Of course, all of this only happens if anyone agrees that we will actually do some patching. Let's not get into that debate here, though.
Robin
As a DBA, you must have some software patching responsibilities too. It's just common sense, and you are right.