Topic: Security challenge, rejecting specific requests (not IP)
James Silva
Joined: 21 Jan 2014 Posts: 1 Location: UK, London
Posted: Tue 21 Jan '14 20:35 Post subject: Security challenge, rejecting specific requests (not IP)
Hello
I have been trying to solve a big problem for the last 2 weeks with one of our servers (Apache 2.2, Windows, PHP).
The client using our system is a contact-center firm.
They have about 120 operators, all connecting to our web server from the same outgoing IP.
We have been suffering DoS attacks from some of these operators.
These are simple browser attacks: 5 or 10 operators will just hold down the
F5 key and bombard the server with requests when they shouldn't.
There is very little we can do to improve the performance of the specific URLs the attackers are using. This is an application, not a public portal, so many screens involve a good amount of processing and real-time querying.
We did manage to build a PHP protection that recognizes the repeated requests and blacklists the user.
We use the user ID in the system to decide who should be blacklisted, so this all depends on our own authentication.
It works like this (a minimal sketch follows the list):
- the user logs in to our software, and we write their ID in a cookie
- a control file is created using that ID as the unique key
- from there, if the cookie exists, we track whether they are hitting the same URL repeatedly
- after x requests on the same URL, the script dies and a message is displayed
- the control cookie is erased when the user logs off, or after a 24-hour lifetime
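A minimal sketch of that mechanism in PHP, assuming a cookie named user_id, a temp-directory control file, and a threshold of 20; none of these names come from the actual implementation:
Code: |
<?php
// Minimal sketch of the control-file protection described above.
// Cookie name, file location, and threshold are assumptions.
$maxRequests = 20; // the "x requests" threshold

if (isset($_COOKIE['user_id'])) {
    // Sanitize the ID so it is safe to use in a file name
    $userId  = preg_replace('/[^A-Za-z0-9_-]/', '', $_COOKIE['user_id']);
    $urlKey  = md5($_SERVER['REQUEST_URI']);
    $ctlFile = sys_get_temp_dir() . "/ctl_{$userId}_{$urlKey}.cnt";

    // Count this user's hits on this URL
    $hits = is_file($ctlFile) ? (int) file_get_contents($ctlFile) : 0;
    file_put_contents($ctlFile, ++$hits);

    if ($hits > $maxRequests) {
        // Blacklisted: stop before any expensive processing
        header('HTTP/1.1 429 Too Many Requests');
        die('Too many requests - please wait or log in again.');
    }
}
|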
This works to some extent, but it is a little "too late", since the request has already been sent to and processed by the web server.
Even after trimming the handling down to a bare minimum, it is still a PHP request that gets queued and processed by the handler.
So the attackers now have to "hold F5" for much longer, but they are still keen to do it anyway.
Ideally, we need something EXACTLY like mod_evasive, but rejecting individual requests instead of blocking the IP.
For example: if a user calls the same URL 5 times within a 3-second span, we reject every subsequent request for 30 seconds, but only the requests by that user (a sketch of this window logic follows).
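A hedged sketch of that window logic, again in PHP with assumed names and simple file storage (a production version would need locking and cleanup):
Code: |
<?php
// Sketch: after 5 hits on the same URL within 3 seconds, reject that
// user's requests for 30 seconds. All names here are assumptions.
$window = 3; $limit = 5; $lockout = 30;

if (!isset($_COOKIE['user_id'])) { return; } // no session, skip

$key  = md5($_COOKIE['user_id'] . '|' . $_SERVER['REQUEST_URI']);
$file = sys_get_temp_dir() . "/win_{$key}.dat";
$now  = time();

$data = is_file($file) ? unserialize(file_get_contents($file))
                       : array('hits' => array(), 'locked_until' => 0);

if ($now < $data['locked_until']) {
    header('HTTP/1.1 429 Too Many Requests');
    die('Locked out, retry later.');
}

// Record this hit and keep only hits inside the 3-second window
$data['hits'][] = $now;
$data['hits'] = array_filter($data['hits'], function ($t) use ($now, $window) {
    return $t > $now - $window;
});

if (count($data['hits']) >= $limit) {
    $data['locked_until'] = $now + $lockout; // start the 30-second lockout
}
file_put_contents($file, serialize($data));
|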
Also, we can only run Apache on Windows so far, but Linux-only solutions are also of interest if there are any.
Any help, suggestion, or idea for brainstorming this issue is greatly appreciated.
James Blond Moderator
Joined: 19 Jan 2006 Posts: 7371 Location: Germany, Next to Hamburg
Posted: Thu 23 Jan '14 21:57
I thought about this for a few days. The only chance you have is to get the computer name of the person doing it and send yourself an email via AJAX, or save it to a DB.
If they use IE, then it is easy.
Code: |
function GetComputerName()
{
    try
    {
        // Works only in IE with ActiveX scripting enabled
        var network = new ActiveXObject('WScript.Network');
        return network.computerName;
    }
    catch (e)
    {
        // Not IE, or ActiveX is blocked
        return null;
    }
}
|
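If it works, the name could then be reported back to the server, e.g. with a plain XMLHttpRequest; the /log_pc_name.php endpoint below is a made-up placeholder:
Code: |
var pcName = GetComputerName();
if (pcName) {
    // Report the name to a hypothetical server-side logger
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/log_pc_name.php', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('pc_name=' + encodeURIComponent(pcName));
}
|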
With other browsers I haven't come up with an idea yet.
James Blond Moderator
Joined: 19 Jan 2006 Posts: 7371 Location: Germany, Next to Hamburg
Posted: Mon 27 Jan '14 21:06
Another idea would be blocking the client via the cookie content (the cookies need to have HttpOnly set to false).
Code: |
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_COOKIE} ^.*(<|>|'|%0A|%0D|%27|%3C|%3E|%00).* [NC]
RewriteRule ^(.*)$ - [F,L]
</IfModule>
|
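In the same spirit, the application could set a marker cookie for blacklisted users and let mod_rewrite reject them before PHP is even invoked; the blacklisted=1 cookie name here is an assumption, not something the application sets today:
Code: |
<IfModule mod_rewrite.c>
RewriteEngine On
# Reject requests carrying a (hypothetical) blacklist marker cookie
RewriteCond %{HTTP_COOKIE} (^|;\s*)blacklisted=1 [NC]
RewriteRule ^(.*)$ - [F,L]
</IfModule>
|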
maba
Joined: 05 Feb 2012 Posts: 64 Location: Germany, Heilbronn
Posted: Tue 28 Jan '14 13:19 Post subject: Throttling via iptables
If your server is running on Linux, you could throttle the rate of incoming new requests down to a reasonable value.
iptables can be used to do this.
You could either limit the rate of NEW connections or limit the absolute number of requests. What I typically do in such situations is something like this:
(Note: in the example I am using a dedicated chain called GOOGLESEARCH; you can replace that with whatever name you like.)
Code: | iptables -N GOOGLESEARCH
|
This creates the chain.
Code: | iptables -A INPUT -s 10.110.10.45 -p tcp -m state --state NEW -j GOOGLESEARCH
iptables -A INPUT -s 10.16.61.168 -p tcp -m state --state NEW -j GOOGLESEARCH
|
This means I want two source IP addresses to be throttled. All NEW connection requests from them are passed on to the separate GOOGLESEARCH chain.
Code: | iptables -A GOOGLESEARCH -p tcp -m hashlimit --hashlimit 30/min --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name google_search -j ACCEPT
iptables -A GOOGLESEARCH -j LOG --log-prefix "Excessive requests: "
iptables -A GOOGLESEARCH -p tcp -j REJECT --reject-with tcp-reset
|
The first line says: allow 30 new connections per minute, with a burst of up to 10, limited per source IP.
The second line logs a message when the limit is exceeded. It will show up in the standard system log files.
The third line rejects the request with a TCP reset.
So if you don't want to see any log entries, skip the second line.
There are other ways to filter at the network connection layer. For example, you can also tag specific packets and then filter on the tags, and so on; a rough sketch follows.
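As a rough illustration of the packet-tagging idea, using the mangle table's MARK target and then matching on that mark (addresses, port, and mark value are placeholders):
Code: |
iptables -t mangle -A PREROUTING -s 10.110.10.45 -p tcp --dport 80 -j MARK --set-mark 7
iptables -A INPUT -p tcp -m mark --mark 7 -m state --state NEW -j GOOGLESEARCH
|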