Using fail2ban to mitigate Apache POST flooding

Every so often a bot has a go at my poor little Linode, triggering a notification about CPU/memory/IO usage, and I have to come up with a way of defending it. Today’s notification was about high CPU usage, as usual.

linode CPU graph

I toddled off to my handy (free) Splunk instance and saw that this time it was an attack against my WordPress instance’s login page. I could see there were lots of 302 responses to POST requests, so it was an easy case for fail2ban to handle – something I’ve used before with great success.

I found some information in an article about protecting Apache with fail2ban and ran with it. They weren’t using virtualhosts in their configuration and it seemed to be missing a declaration for identifying the host, but I got there with some regex tweaking after referring to the documentation for fail2ban filters.

The really handy thing I learnt was to use fail2ban-regex to test my filters before putting them into production. You can quite easily check whether your filter is going to work, inspect its output and matches, and all sorts of things.

An example command is shown below. I pipe it out to a file because I learnt the hard way that there’s a LOT of output sometimes.
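Something along these lines (the log and output paths here are just examples – point it at your own access log and filter file):

```
# Run the candidate filter over the live access log and capture
# the (potentially enormous) report in a file for review.
fail2ban-regex /var/log/apache2/access.log \
    /etc/fail2ban/filter.d/apache-postflood.conf > /tmp/postflood-test.txt
```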

An example of the start of the output:
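The original report wasn’t preserved here, but fail2ban-regex output begins roughly like this (the paths, hit counts, and regex shown are illustrative, not my real figures):

```
Running tests
=============

Use   failregex filter file : apache-postflood, basedir: /etc/fail2ban
Use         log file : /var/log/apache2/access.log

Results
=======

Failregex: 120 total
|-  #) [# of hits] regular expression
|   1) [120] ^<HOST> -.*"POST .*HTTP.*" 302
`-
```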

The configuration files I used are listed below:

/etc/fail2ban/filter.d/apache-postflood.conf
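The filter’s job is to pull the client IP out of each matching access-log line; fail2ban substitutes its own IP-matching pattern wherever `<HOST>` appears. A minimal sketch of what such a filter can look like (the regex assumes Apache’s combined log format and is illustrative – tune it to your own logs):

```
[Definition]
# <HOST> is fail2ban's placeholder for the client IP/hostname.
# Match any POST request that was answered with a 302 redirect.
failregex = ^<HOST> -.*"POST .*HTTP.*" 302
ignoreregex =
```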

/etc/fail2ban/jail.conf
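And a matching jail stanza to wire the filter up (the thresholds and log path below are illustrative – findtime, maxretry, and bantime want tuning to the traffic you actually see):

```
[apache-postflood]
enabled  = true
port     = http,https
filter   = apache-postflood
logpath  = /var/log/apache2/access.log
findtime = 60
maxretry = 20
bantime  = 600
```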

So far it’s worked out pretty well – CPU usage dropped off and so did the attacks on the script – even after the bans were automatically removed.

I can’t find anything in the logs that regularly uses POST AND expects a 302 response, but if something does turn up, I can change the regular expressions to match the “wp-login.php” filename as well.
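If I do need to tighten it, the quickest way to sanity-check a new expression outside fail2ban is with plain Python. This sketch (the log line is made up, and a simple IP pattern stands in for fail2ban’s `<HOST>` tag) narrows the match to POSTs against wp-login.php:

```python
import re

# fail2ban expands <HOST> to a host/IP pattern; approximate it for offline testing.
HOST = r'(?P<host>\S+)'

# Tightened filter: only POSTs to wp-login.php that were answered with a 302.
failregex = re.compile(HOST + r' -.*"POST /wp-login\.php[^"]*" 302')

# A made-up access-log line in Apache's combined format.
sample = ('203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] '
          '"POST /wp-login.php HTTP/1.1" 302 512 "-" "Mozilla/5.0"')

match = failregex.search(sample)
print(match.group('host'))  # prints 203.0.113.7 -- the IP fail2ban would ban
```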

Splunk timechart of fail2ban working its magic