Quote:
While it seems to be a reflex both new and seasoned Linux admins fall for, and while information can be gleaned from existing files, killing processes without recording details first does not help or speed up the fact-finding process, as clues like deleted files on open file descriptors and environment information like user details, working directory and connection data are lost.
While I would love to leave a port scanner running on my system while I ineptly gather details, I must disagree with the generalization of this statement. My first priority is to stop whatever malicious activity may be occurring on my server and affecting the well-being of someone else's server. In this case, my regard for other system administrators trumps my love of data.
Quote:
the best thing to do is do nothing.
Again, when AT&T, abuse networks and other sysadmins are emailing me, this is actually the opposite of what anyone should do.
Quote:
as "anything obviously fishy" doesn't convey much
I agree. Data trumps anecdotes. However, I'm not asking anyone else to diagnose the problem. That statement was merely an indication that the log files aren't flashing "WARNING: INTRUDER" type messages. I was hoping someone might suggest which logs were most likely to contain information, and what this type of problem might look like in them.
Your suggestion about utmp, wtmp, lastlog, etc. is sound, and that will certainly be a step I take.
The "last" command revealed two logins without recorded IPs under my personal login. Perhaps that's meaningless, but the limited number of places I log in from all have IPs recorded.
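For anyone else walking this path, the login-record checks above look roughly like this (paths are the usual Linux defaults; adjust for your distro):

```shell
# Recent logins with numeric IPs instead of hostnames (reads
# /var/log/wtmp); local/console logins show an empty or 0.0.0.0 field
last -i | head -20

# Most recent login recorded for every account (reads /var/log/lastlog)
lastlog | head -20

# Accounts that still have an interactive login shell
awk -F: '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd
```

Any login from an unexpected source, or a recent login on a system account that should never log in interactively, is worth a closer look.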
I also realized that this production server had many settings cloned from a development server, which means a non-root user had sudo access, and SSH was accepting password and PAM authentication.
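For reference, the misconfiguration described above maps to a handful of sshd_config directives; a hardened, key-only setup would look something like this (exact defaults vary by distro):

```
# /etc/ssh/sshd_config — allow public-key authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
# Disabling PAM removes another password path, but note that some
# setups rely on PAM for session/account handling — test carefully
UsePAM no
```

After editing, validate the file with `sshd -t` and keep an existing session open while restarting the daemon, so a typo doesn't lock you out of the box.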
I have since switched SSH to key authentication only, removed all non-system users from sudo-enabled groups, and revisited my iptables firewall. I haven't been able to lock down the OUTPUT chain without killing web services, but I'll keep researching.
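One approach to the OUTPUT-chain problem is stateful matching: replies to inbound connections (your web traffic) are allowed through, while new outbound connections are restricted to a short whitelist. A sketch, assuming the legacy `-m state` match and destinations chosen purely as examples:

```shell
# Always allow loopback, and allow replies to connections that were
# initiated from outside (this is what keeps web services working)
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Permit only the new outbound connections this server legitimately
# needs — DNS here, plus HTTP/HTTPS for updates; adjust to taste
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT

# Log anything else before the default policy drops it, so a rogue
# process's connection attempts show up in the kernel log
iptables -A OUTPUT -m state --state NEW -j LOG --log-prefix "OUTPUT-DROP: "
iptables -P OUTPUT DROP
```

The LOG rule is the interesting part for an incident like this: even with the chain locked down, you get a record of what the malware tried to reach.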
At this point, I have seen no other logins, no rogue processes and the victims have reported the port scanning as ceased. That's enough for a tentative declaration of "fixed" while I dig deeper.
---------- Post updated at 03:24 PM ---------- Previous update was at 09:49 AM ----------
Here's another interesting development. I have found that the system looks to be sending out requests on port 8080 that computers all over the internal network are answering. When I plug the network cable in, the flood begins; when I unplug it, the flood stops.
When I moved all functionality to another server, and booted into a LiveCD to reinstall the OS from scratch? It's still doing it. Plug network in, traffic surge. Unplug, traffic stops.
I'm in the process of capturing the outbound data (so far I've only captured the inbound answers) to get more information. But it seems that whatever this infection is, it runs at boot time. Has anyone ever experienced something like this?
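In case it helps anyone following along, capturing both directions of that port-8080 traffic might look like this (run as root; `eth0` and the filename are placeholders for your interface and output file):

```shell
# Write full packets in both directions to a capture file for later
# analysis in wireshark/tcpdump (-s 0 = don't truncate packets)
tcpdump -i eth0 -n -s 0 -w port8080.pcap port 8080

# Or watch live which hosts are talking, without name resolution
tcpdump -i eth0 -n port 8080

# While the cable is plugged in, also check which local process (if
# any) owns sockets involving port 8080
netstat -tunap | grep 8080
```

If `netstat -tunap` shows no local process owning the traffic while tcpdump still sees it leaving the interface, that in itself is a useful data point about how deep the compromise goes.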