I have a similar story involving a UPS, a diesel generator and a not-so-automatic transfer switch.
Needless to say, I got a call at 3 in the morning, jumped on my bike and rode to the DC, where I found sweaty engineers with laptops everywhere.
Senior management was also there, breathing down our necks, which did not speed up the process; if anything, it slowed it down.
So I joined the party and started breaking a sweat, only to find that some clustered NFS-exported UFS filesystems needed fsck (big filesystems), and the fsck ran long enough to trip the clusterware timeout.
Every time the timeout hit, the package failed over to another node, which started fsck from scratch ... a nightmare loop, to say the least.
Luckily, a few days earlier I had prepared ZFS, already clustered, for a migration to new storage, so I offlined and unmanaged the package in question, mounted the UFS filesystems read-only and ran a couple of rsyncs.
By the time everything else came back up, all but a couple of TB-sized filesystems, mostly used for archive, were done.
The NFS shares were accessible immediately so the apps could write, but data kept pouring in for the rest of the day, with checksumming in the days that followed.
It went pretty well (no major consequences in general, apart from the downtime), which I largely credit to the enterprise storage batteries that held the data in cache and flushed it to disk once power came back. That mattered all the more because this was an extremely mixed environment, both hardware- and operating-system-wise.
Regards
Peasant.