DDoS, Fail loop, Rate limits, Compression Bombs, Quality, Handshake
Cloudflare New DDoS Landscape? - New? I've written about this over and over again. This is exactly the method that has been used for ages to troll sites and users, especially where rate limits aren't well implemented, because it's extremely trivial to bring down sites with targeted heavy requests. It also works very well against cloud and hosted services: bulk DDoSing those would require lots of bandwidth, but bringing a hosted service down within its very limited quotas is trivial with well-selected attacks. This has been the chosen way to annoy admins and bring down sites and services for decades: the resource consumption attack. It also works in the real world, especially when services are built inefficiently and can't take shortcuts to deal with the load. If the attack is well designed you don't need a lot of network resources; some DoS attacks work with amazingly little traffic, like WinNuke did. I'm not naming the applications, but I've seen at least one database server that is easily crashable with malformed requests. You'd probably need just around one kilobyte of network traffic to crash the server. If I had the time and interest, I would play more with that scenario. Thanks to Shodan, it would be trivial to crash large numbers of public instances in minutes with one VPS, or even via Tor. Should I try it? Everyone said it can't be done, so just for lulz? I know the APIs very well; I could leave key message fuzzing running for days or weeks until I find what I want. Some services are also quite susceptible to attacks which leave a connection in some specific state: the attacker takes it to that point and then discards the "session", leaving the other end to keep the resources reserved. I'm sure someone is already doing that fuzzing thing: record valid traffic and modify it on purpose so that it's likely to crash any application which doesn't handle every kind of exception perfectly, and so many other things.
Which are trivial and well known, but still not perfectly handled by many less popular applications. Of course, instead of fuzzing, you can also make your best guess at the cases the programmer probably hasn't bothered to handle properly. Because nobody's going to do this. Ahem. When I hunt for trouble, I'll take several complex things and bring them together in some way which isn't "normal". That quite often leads to trouble with many programs. Anyway, nothing new in that article at all.
In my own code, if I encounter some unexpected situation, I usually handle it so that the whole service won't crash, but there's usually a little delay combined with it, so if the error repeats, it doesn't bring the whole system down (via resource consumption) by running into that error over and over. Yet that delay might itself be enough to bring the system down, if the exceptions are intentionally triggered at massive scale.
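A minimal sketch of that pattern, with hypothetical names (`handle_request`, `process`): catch the unexpected error, log it, and back off briefly so a repeating error can't spin the service at full speed. The comment also notes the trade-off I described, since the same delay becomes a lever if an attacker can trigger the exception at will.

```python
import logging
import time

def handle_request(payload, process, error_delay=1.0):
    """Process one request; on an unexpected error, log it and back off
    instead of letting the whole service crash. Hypothetical example names."""
    try:
        return process(payload)
    except Exception:
        logging.exception("unexpected error while processing request, backing off")
        # The delay throttles a tight error loop, but note the trade-off:
        # if an attacker can trigger this exception at will, the sleeping
        # worker itself becomes a resource-consumption target.
        time.sleep(error_delay)
        return None
```

In a real service the delay would usually be per-worker or per-client and ideally grow with repeated failures (exponential backoff), so one bad request stream can't stall everything.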
I once took an image hosting site down accidentally. I just wanted to download all the images on the site, and wrote a multi-process / async fetcher which used 1024 parallel HTTP requests to pull content. It was immediately obvious that response times shot up to around two minutes and then lots of requests started dropping. I had of course set an extremely long timeout for myself so my requests wouldn't time out, but everyone else surely was timing out. All that was needed was my home gigabit connection, a standard desktop and Python. Boom. Of course I stopped it and stepped it down to a more reasonable 32 parallel requests, but even that caused a huge performance drop on their side. This kind of massive harvesting is also a problem because even if they have caching, fetching all content in this way is guaranteed to cause a cache miss almost every time. Depending on their cache solution, this could also flush all the hot data from the cache, so even the requests normally served from cache aren't served from cache anymore, etc. Some sites run out of sockets when you do stuff like that and they haven't tweaked the relevant settings, and the list goes on. Some sites have bandwidth quotas; all you need to do is download 10 terabytes from them, and boom, they're down, and so on. Sometimes Tor is perfect for clogging up connections: you can have a high number of TCP connections coming from a large number of exit nodes, all acting extremely slowly. Blocking slow Tor clients is problematic, because slowness is unfortunately often normal for Tor.
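The fix on my side was simple: bound the concurrency. A minimal sketch of the semaphore pattern, using only the standard library; the worker here is a stand-in, and in a real harvester an HTTP client call would take its place. The `peak` counter just demonstrates that the cap actually holds:

```python
import asyncio

async def bounded_gather(items, worker, limit=32):
    """Run worker() over many inputs with at most `limit` in flight.
    The semaphore is what separates a polite harvester from an
    accidental DoS; 1024 unbounded parallel requests was the mistake."""
    sem = asyncio.Semaphore(limit)
    active = 0
    peak = 0  # highest observed concurrency, for demonstration

    async def run(item):
        nonlocal active, peak
        async with sem:
            active += 1
            peak = max(peak, active)
            try:
                return await worker(item)
            finally:
                active -= 1

    results = await asyncio.gather(*(run(i) for i in items))
    return results, peak
```

A real client should also add per-request timeouts and a small delay between batches; the semaphore only caps concurrency, not total request rate.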
Many image hosting sites are susceptible to compression bombs and other kinds of intentionally malformed content, which is then processed by their code until something fails. Having thumbnails is fun, but generating them can also kill your server. Some sites check the image file size and the reported dimensions, but when decompressing the data, don't stop once enough data to fill the image bitmap has been extracted. That's an implementation fail, and it allows a compression bomb to consume, or in some cases even overwrite, memory. These things are one area where the list of endless tricks goes on. I'm just a hobbyist, but I'm sure the pros have extensive lists of much meaner tricks up their sleeves.
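The defense is to cap the decompressed output size, not trust the headers. A sketch with Python's stdlib `zlib`, whose `decompressobj` takes a `max_length` argument for exactly this; image libraries need the same idea (Pillow, for instance, enforces a pixel ceiling via `Image.MAX_IMAGE_PIXELS`):

```python
import zlib

def safe_decompress(data, max_output=10 * 1024 * 1024):
    """Decompress at most max_output bytes and abort if there is more,
    instead of letting a compression bomb eat all the memory. The cap
    applies regardless of what the stream or any header claims."""
    d = zlib.decompressobj()
    out = d.decompress(data, max_output)
    if d.unconsumed_tail:
        # Input left over after hitting the output cap: bomb suspected.
        raise ValueError("decompressed output exceeds limit")
    return out
```

The key point is that the limit is enforced *during* decompression; checking sizes before or after is exactly the implementation fail described above.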
Even if some software works well in production, it doesn't mean it was designed to handle all the ingenious abuse that can be thrown at it. Yet handling all that trouble requires lots of extra work. Often it's hard enough to get the software "barely working", then working in a stable way, and then even somewhat usable. The road from that point to 'bulletproof' status is long and expensive. That's why it often isn't taken.
After checking a few protocol details, the server I'm talking about doesn't send anything when it accepts a connection, so it seems the server isn't indexed by Shodan. I guess I'll have to run my own scanner, which sends a specific query on connect, to find out how many potentially vulnerable instances are out there. Scanning the whole IPv4 address space is a trivial job, so I'll do it just for fun. I'll just capture the handshake and use a custom Python scanner, instead of trying to connect to every instance with the official client implementation.
This post about game servers - all the classic attacks, which I've been using against P2P / mesh and DHT networks, just to see how easy it is to fiddle with them. Or using the networks to direct strange traffic at IPs which aren't running the client software, as a reflection attack. And usually it's easy.