Side channel attack, EMV weakness, PyClockPro, SARC, Project management, OpenStack, MongoDB, Recycled bits

Post date: Nov 13, 2012 3:40:42 PM

  • Read this white paper about a CPU cache eviction based side-channel attack, which allows stealing crypto keys from other VMs running on the same hardware. A more compact article on the topic is at Threatpost.
  • Read these articles about Chip and PIN (EMV) card copying: Article 1, Article 2
  • I have proceeded quite nicely with the PyClockPro project. The first version was written while reading the specification and trying to figure out the details, so the code ended up as an experimental, entangled mess, which was naturally expected from the beginning. Now I'll rewrite it completely so it's releasable. There isn't much code, about 12 kilobytes of which roughly 50% is comments, but it's very rich in detail. The original test code was based on three lists, where each list contains entries waiting for the next hand: a cold list, a hot list and a test list. Basically it's really simple, but when you add adaptivity to the picture, it's really easy to get off-by-one errors when the allocation is modified at the same time the eviction code runs. Suddenly the cache is just one entry smaller or larger than it was supposed to be. Not too bad in reality, but it's not perfect, and I really can't accept that. I have also benchmarked and profiled all key sections and design choices. It became really clear that even though many people recommend using a linked list with CLOCK-Pro, it's a really bad idea with Python; it's simply slow. A native Python list is quick to access by index, even quicker than a dictionary by key, and "walking the list" with for is faster still. Compared to walking a recursive linked list, it's super fast. I now use one list for metadata and one dictionary for data and reference bits, which seems to be the optimal design as far as I know. As soon as the code is released, I really hope to get feedback; I'm very sure it's not perfect yet. I also have a few questions about the internal logic that neither the specification nor any other documentation I have read really answers. I'll release those questions with the first source.
I think the biggest questions are whether hand hot should be allowed to delete all test pages (non-resident cold pages), and whether hand hot is allowed to "dive into" the cold page pool in front of hand cold when the memory allocation for hot pages is really low. I personally would prefer to skip deleting test pages, and instead of looking for hot pages in the cold list, I want to expire cold pages rather than hot pages, even if this leads to a situation where we have more hot pages than we should. The main problem with hand hot moving "too far" forward is that hand hot is actually the primary hand which coordinates adding new entries to the clock. Therefore it's important that it really can't move too far forward, and something else should be done instead. I have run multiple test runs with virtual clock operation debug output, and it's very clear that if the clock runs without some of these minor modifications, the results aren't going to be optimal.
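The list-versus-linked-list point above can be checked with a micro-benchmark along these lines. This is a minimal sketch; the sizes and the tuple-based linked-list cells are just illustrative, not PyClockPro's actual structures:

```python
import timeit

N = 100_000
meta = list(range(N))                  # flat native list of "metadata" slots

# Build a singly linked list of (value, next) tuple nodes, head first
linked = None
for i in range(N - 1, -1, -1):
    linked = (i, linked)

def walk_list():
    total = 0
    for entry in meta:                 # direct iteration over the native list
        total += entry
    return total

def walk_linked():
    total = 0
    node = linked
    while node is not None:            # pointer chasing through the nodes
        total += node[0]
        node = node[1]
    return total

print("native list:", timeit.timeit(walk_list, number=100))
print("linked list:", timeit.timeit(walk_linked, number=100))
```

On CPython the pointer-chasing loop does one extra indexing operation and one extra name rebind per element, which is typically what makes the linked walk noticeably slower.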
  • I have also studied Sequential Prefetching in Adaptive Replacement Cache (SARC). Unfortunately it's yet another nice algorithm which is patented by IBM, and I have written more experimental code based on the specification. SARC is a really novel solution which allows using the same storage space for random read caching as well as for prefetching data for sequential reads, minimizing storage latency to applications under different mixed workloads. I just wonder if I could rip parts from SARC and incorporate them into CLOCK-Pro, like the prefetch detection counter system. I have already implemented mixed random read and writeback modes using the same shared memory. Adding prefetching to that could be a really interesting idea. That's definitely something to think about.
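As a rough illustration of what I mean by a prefetch detection counter, here's a toy sequential-run detector. This is my own sketch for illustration only, not the patented SARC logic, and the threshold value is arbitrary:

```python
class SeqDetector:
    """Toy sequential-read detector: flags a stream as sequential once
    enough consecutive block numbers have been seen in a row."""

    def __init__(self, threshold=3):
        self.threshold = threshold     # consecutive hits before prefetching
        self.last_block = None
        self.run_length = 0

    def access(self, block):
        if self.last_block is not None and block == self.last_block + 1:
            self.run_length += 1       # extends the current sequential run
        else:
            self.run_length = 0        # random access resets the counter
        self.last_block = block
        return self.run_length >= self.threshold   # True => issue prefetch
```

A real implementation would track many concurrent streams (one counter per stream), but the counter idea is the part that seems portable to CLOCK-Pro.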
  • Finished reading the Proportional Rate Reduction for TCP (TCP PRR) specification. It makes fast retransmit more reliable and reduces retransmit bursts.
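The core idea of PRR is easy to sketch: during recovery, send new data in proportion to how much the receiver reports delivered, scaled by the target window reduction. This is just the proportional-reduction branch from the spec (variable names follow it, units are segments here); the slow-start/safeguard branch for when pipe drops below ssthresh is omitted:

```python
import math

def prr_sndcnt(prr_delivered, prr_out, ssthresh, recover_fs):
    """Proportional Rate Reduction send quota for one incoming ACK.

    prr_delivered: data delivered to the receiver since recovery started
    prr_out:       data sent out during recovery so far
    ssthresh:      target congestion window after recovery
    recover_fs:    flight size (pipe) at the start of recovery
    """
    # Send at ssthresh/recover_fs of the delivery rate, minus what was
    # already sent; clamped to zero here as a guard in this sketch.
    return max(math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out, 0)
```

For example, with recover_fs = 10 and ssthresh = 5 (halving the window), after 4 segments are delivered and 1 has been sent, the sender may transmit ceil(4 * 5 / 10) - 1 = 1 more segment: transmission is spread over the recovery instead of bursting.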
  • Had a very long discussion with colleagues about whether it's better to have wider and shorter roles or narrower and longer roles in projects. I personally think that having short roles, where you only do one step of a project, might be harmful. You can't even learn from the mistakes you made if you're never informed about them. When you have a longer role (in time), you're part of the project from the beginning to the very end. If any decisions made at the beginning turn out to be bad, you know exactly why they were bad and you can avoid them in the future. This approach also makes people think in detail about how things should be done. If I know that someone else is going to operate the system I'm writing in production, I can make it really crappy; it's not my problem, it got delivered and accepted. But if I know that I'm responsible for operating the whole project, I'll make it easy to maintain and preferably very reliable, so I'm not bothered by random problems arising from a poorly designed or implemented project. But this is just my personal opinion, as is everything else on this blog. This is also linked to the DevOps role: if designers make a bad design, or devs write poor code, it's ultimately the operators' problem. But if you're DevOps, then it's your own problem.
  • Read even more stuff about OpenStack. Maybe I should install an OpenStack test environment just for fun. It would be great if there were nice cloud services above the IaaS level which do not cause vendor lock-in.
  • Checked out a few tools for documenting business processes. Also refreshed my memory by reading about business process management, business process modeling, continuous improvement process and total quality management. Nothing new, but it's good to remind yourself about some things from time to time.
  • Read this great post about MongoDB gotchas which new developers encounter. Nothing new in this article either.
  • Encountered (via a friend) an interesting piece of PHP malware which was obfuscated using recursive encoding: eval(gzinflate(base64_decode("Payload"))). The guy who had the script said it was encrypted. Duh! That's not encryption! It's funny to call gzipped and base64 encoded data encrypted. I wrote a small Python script, and after about 50 decoding iterations I got the source out. It was some kind of proxy tool which they can use to scan, attack and exploit other systems, as well as carry data back to the source. The file was dropped from a German server host, and inside the script there were IP addresses referring to another German hosting site. I'm sure those servers were hacked too. Unfortunately there's no reply to the abuse report from either hosting company yet.
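The decoding loop looked roughly like this (a reconstruction of the idea, not the exact script I used). The key detail is that PHP's gzinflate() is raw DEFLATE, which zlib handles with negative wbits:

```python
import base64
import re
import zlib

def peel(php_source, max_rounds=100):
    """Repeatedly unwrap eval(gzinflate(base64_decode("..."))) layers,
    returning the cleartext payload and the number of layers removed."""
    for rounds in range(max_rounds):
        m = re.search(r'base64_decode\(["\']([A-Za-z0-9+/=]+)["\']\)',
                      php_source)
        if not m:
            return php_source, rounds          # no more layers left
        raw = base64.b64decode(m.group(1))
        # gzinflate() == raw DEFLATE stream == zlib with wbits=-15
        php_source = zlib.decompress(raw, -15).decode("latin-1")
    return php_source, max_rounds
```

Each iteration strips one eval() wrapper; with the roughly 50 layers in that sample, the loop bottoms out quickly at the actual proxy-tool source.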
  • Finally, just to cheer you up, I would like to announce that all content on this site is produced using recycled bits.