Integrity & cleanup code design patterns, Crypto-Gram, DDoS

Post date: Oct 16, 2016 2:44:06 PM

  • Anything new here? Nope, it's all just very basic data management stuff, which should be familiar to all of us. Bit messy? Ok, this is just random night rambling about the topic, prompted by a few cases where things weren't done properly.
  • Mostly it starts with the question: should we use a fast & leaky method or a logically guaranteed correct method? And should that be executed bottom up or top down? Of course it's also possible to shard / strictly partition these processes to make quick partial runs.
  • When is such code required? Whenever any storage with complex relations is being verified or cleaned up. As an example, you've got a standard object / blob storage where blobs get a UUID and that's it. How do you know whether all the relations are actually in place, the data isn't corrupted, and there aren't loose / extra objects left in the blob storage? Verifying all of this can be quite an expensive operation. Of course some of the parts are done at runtime, but it's almost inevitable that it'll leak at some point, so the fast and quick top-down method during runtime doesn't actually work perfectly, even if it works very well most of the time. If there are additional optimization tables, like quick reference tables which back-link objects to the data structure or can be used to verify the existence of objects in object storage without accessing the actual object storage, those tables / data structures can also get out of sync and need to be verified every now and then. (A rough sketch of this kind of cross-check is included after this list.)
  • I mostly prefer the guaranteed and foolproof method, aka full logical verification, but it might be too expensive to run even if sharded. In this case all links are verified for the whole set or a strictly defined subset. As an example, back-checking all objects from a single object storage module to the metadata servers and internally verifying the links in the metadata system (sketched after this list). This is just like a full check with fsck or chkdsk, or git gc, git prune or git fsck. This method will clean up any mess potentially left in the system, or simply remove unnecessary / loose data.
  • But for the normal operations which happen most of the time, it might be better to implement runtime execution, where the metadata and the actual objects are deleted when the user deletes an object from object storage. Yet, because this all happens in steps and isn't an atomic operation, it's entirely possible that something leaks, for multiple different reasons. Of course this method can also be journaled and each step rolled backward or forward to improve processing integrity (see the journaled delete sketch after this list). But that adds a lot of complexity.
  • Best solution? Normal operation uses the fast and leaky "it should work" approach, and it's totally acceptable that there is some leakage. Plus full sharded checkups at whatever interval is required in that environment. This is quick and easy to implement and still gives good overall results.
  • The mark and sweep runs can also be implemented top down or bottom up: do we start from the metadata and check the objects, or do we start from the objects and look for the metadata? For that there's no perfect answer. The level of verification required can vary too. Do we actually verify the object / blob hashes, or is it enough that the data seems to be there? Sharding and pruning can be done in levels too. Instead of verifying the whole stack, it can be verified in layers, and the hash checks and other stuff can be naturally optimized. Because the data hash is verified on access, there's no need to re-verify hashes for objects which have been accessed within the last N time units, etc. (see the last sketch after this list).
  • It's a bit like a mostly counter (reference count) based garbage collector with additional mark and sweep runs.
  • Last words: Just pruned a lot of dangling blobs from a git repository. ;)
  • Read the last 20 or so Crypto-Gram newsletters from my backlog. Many interesting stories there, like taking down the Internet and zero-day NSA exploits, etc.
  • Verisign reports that Application Layer Attacks as a form of DDoS are going up. Well, of course. Well-crafted application layer attacks are indistinguishable from getting some extra traffic. Another noteworthy thing is that even a 100 Gbps (Gbit/s) pipe won't be enough to protect you from even simple DDoS flood attacks.
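
Below are a few rough Python sketches of the integrity / cleanup ideas above. They're illustrations only; every helper name in them is a hypothetical placeholder, not any real system's API.

First, the cross-check mentioned in the object / blob storage bullet: compare the UUIDs referenced by the metadata layer against the UUIDs actually present in blob storage, which reveals both dangling references and leaked / loose blobs.

```python
# Hypothetical consistency cross-check: metadata references vs. blobs actually stored.
# list_metadata_uuids() and list_blob_uuids() are placeholder callables.

def cross_check(list_metadata_uuids, list_blob_uuids):
    referenced = set(list_metadata_uuids())  # UUIDs the metadata layer points to
    stored = set(list_blob_uuids())          # UUIDs actually present in blob storage

    dangling = referenced - stored           # metadata points to a missing blob -> lost data
    orphans = stored - referenced            # blobs nothing points to -> leaked space, prunable
    return dangling, orphans
```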
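Second, a sketch of the full bottom-up verification of a single object storage module (shard), in the fsck / git fsck spirit: every blob in the module is hash-checked and back-checked against the metadata servers, and unreferenced blobs get quarantined. Again, iter_blobs, metadata_references and quarantine_blob are assumed placeholders.

```python
import hashlib

def verify_module(iter_blobs, metadata_references, quarantine_blob, verify_hashes=True):
    """Bottom-up check of one storage module; all callables are hypothetical."""
    for blob_id, data, expected_sha256 in iter_blobs():
        if verify_hashes and hashlib.sha256(data).hexdigest() != expected_sha256:
            print("corrupted blob:", blob_id)   # corruption found, needs repair / re-replication
            continue
        if not metadata_references(blob_id):
            quarantine_blob(blob_id)            # loose data nothing links to; safe to prune later
```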
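Third, a sketch of the journaled runtime delete: the intent is written to a journal before any destructive step, so a crash halfway through can later be rolled forward instead of leaving a silent leak. delete_metadata and delete_blob are placeholders and are assumed to be idempotent.

```python
import json
import os

JOURNAL = "delete.journal"  # illustrative path

def journaled_delete(uuid, delete_metadata, delete_blob):
    with open(JOURNAL, "a") as j:                                   # 1. record the intent first
        j.write(json.dumps({"op": "delete", "uuid": uuid}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    delete_metadata(uuid)                                           # 2. drop the reference
    delete_blob(uuid)                                               # 3. drop the data
    # A real implementation would also checkpoint / truncate the journal here.

def roll_forward(delete_metadata, delete_blob):
    # Replay all recorded intents after a crash; idempotent steps make this safe.
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            delete_metadata(entry["uuid"])
            delete_blob(entry["uuid"])
    os.remove(JOURNAL)
```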
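And last, the hash-check optimization from the mark and sweep bullet: objects whose hash was already verified on access within the last N time units are skipped, so a full sweep only pays for the cold ones. last_verified_at and full_hash_check are placeholders.

```python
import time

RECHECK_AFTER = 7 * 24 * 3600  # "N time units"; one week in seconds as an example

def sweep(uuids, last_verified_at, full_hash_check):
    now = time.time()
    for uuid in uuids:
        if now - last_verified_at(uuid) < RECHECK_AFTER:
            continue                  # hash already verified on a recent access
        full_hash_check(uuid)         # expensive check only for cold objects
```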