Integration, NoScript, RFID, RAID, Cloudflare, Data Breach, Data Corruption

  • Classic integration projects: ever-growing amounts of work, extra testing, extra documentation, extra requirements, and nobody wants to pay for anything. - Oh yeah, this is called a 'normal project'; it's very exceptional for things to be otherwise.
  • Had a long discussion about Bitcoin, world trade, currencies, etc. We were discussing what the best "internal" currency would be for software where currency conversions aren't done in real time for efficiency reasons, but values are updated daily and should stay as close as possible to real market prices. It was instantly obvious that using Bitcoin as the internal currency wouldn't be a great idea. After thinking about it for quite a while, I started to think that Special Drawing Rights (SDR / XDR) could be the best currency to use. Isn't that exactly what it is designed for? Yet the currency rate API we're using doesn't provide direct XDR rates.
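One way around a missing XDR rate: the SDR is defined by the IMF as fixed amounts of a few basket currencies, so a daily XDR/USD value can be approximated from plain USD cross-rates, which pretty much every rate API does provide. A minimal sketch; the basket amounts and the rates dict below are illustrative assumptions, not values from any particular API, so check the current IMF basket before relying on anything like this:

```python
# Approximate the XDR (SDR) value in USD from its currency basket.
# The SDR is defined as fixed amounts of basket currencies; these
# amounts are illustrative placeholders only -- verify against the
# current IMF valuation basket before real use.
ILLUSTRATIVE_BASKET = {
    "USD": 0.57813,
    "EUR": 0.37379,
    "CNY": 1.0993,
    "JPY": 13.452,
    "GBP": 0.080870,
}

def xdr_in_usd(usd_rates, basket=ILLUSTRATIVE_BASKET):
    """usd_rates maps currency code -> USD per one unit of that currency."""
    return sum(amount * usd_rates[ccy] for ccy, amount in basket.items())

# Hypothetical daily rates fetched from whatever rate API is in use:
rates = {"USD": 1.0, "EUR": 1.08, "CNY": 0.14, "JPY": 0.0067, "GBP": 1.27}
print(round(xdr_in_usd(rates), 4))
```

Since the basket amounts only change every few years, caching them and recomputing daily from the ordinary cross-rates fits the "updated once a day" requirement nicely.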
  • Aah, finally, NoScript for Firefox Quantum is here. - Thanks! Decided to revoke all existing permissions and build the whitelist from scratch. A great thing to do every now and then; you'll find lots of sites you're not using anymore. Just like cleaning up the credential storage, whichever one you're using. When you do a yearly cleanup, you'll find plenty of sites to remove.
  • More mobile / RFID payment, loyalty, order, discount, promotion and prepaid voucher stuff. But it's almost always the same, with only minor twists. I still dislike the concept that every business should have its own app.
  • One organization mailed DIMMs in normal mail without any protection, enclosed only in a paper envelope. No, not even that flimsy light plastic packaging was used. Great move. They weren't damaged, but it wouldn't have been surprising if they had been totally destroyed.
  • Had an extremely boring discussion with one organization's IT administration, who really didn't get the point of off-site backup at all. They thought that RAID is as good as off-site backup with history. Well, this is absolutely normal. Then at some point in the future, we can wonder how the disaster happened. Especially if we combine that with the fact that you don't need to swap out a broken disk from the RAID, because it still works. Hahaha.
  • Awesome article from Cloudflare about multi-vendor cloud. I totally agree about the negotiation power and the business continuity risk of not being able to deploy with a new vendor quickly and easily.
  • Really liked that The Register posted an article comparing data breaches and plane crashes. Yep, it's the same thing: multiple overlapping reasons why something bad happened after all. The only difference is that plane crashes are investigated, and something is usually done about the problem. Ehh, probably not immediately, but especially if the pattern repeats. In computer and network security, that's not often happening. Why is that? What should we learn from the aviation industry? Here's another great CRE post from Google. Or is it better to think of the failures as being so shameful that it's better to ask people not to talk about them at all?
  • Fixed one project. Initially I heard the claim that the "data corruption" was caused by system reboots, and that I should remove the weekly reboot. I won't even bother to say what I thought about that. After a while, I dug into the source code of the project suffering from data corruption. Yep, it was obvious. All the classic fails. First of all, no proper signal handling, so the program didn't react to requests to quit. And as a bonus, no proper transactional implementation, so depending on the action, data was duplicated or lost if the process got terminated. After my fixes, it works as it should. If transactionality is correctly implemented, everything rolls back or forward after unexpected termination. As well, all parts of the key data processing now listen for signals and abort gracefully (or finish, if it's a short task) when termination is requested. This code has now been in production for three months and there hasn't been a single problem with it. Earlier it was a weekly chore to manually inspect what had been processed twice and what data hadn't been processed at all. Even if a process is abruptly killed, it shouldn't lead to data corruption.
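The two fixes above can be sketched roughly like this. This is a minimal Python / SQLite illustration of my own, not the actual project code; the `queue` table and the `handle` work step are made-up placeholders:

```python
import signal
import sqlite3

# Flag flipped by the signal handler; the processing loop checks it
# between work items, so termination always happens at a safe point.
stop_requested = False

def request_stop(signum, frame):
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGTERM, request_stop)
signal.signal(signal.SIGINT, request_stop)

def handle(conn, payload):
    # Hypothetical work step; in a real project this would do the
    # actual data processing inside the open transaction.
    pass

def process_pending(db_path="queue.db"):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT id, payload FROM queue WHERE done = 0").fetchall()
        for item_id, payload in rows:
            if stop_requested:
                break  # graceful abort between items
            # One transaction per item: the work and the 'done' mark
            # commit together, or not at all. If the process is killed
            # mid-item, SQLite rolls the open transaction back, so the
            # item is neither lost nor processed twice.
            with conn:
                handle(conn, payload)
                conn.execute(
                    "UPDATE queue SET done = 1 WHERE id = ?", (item_id,))
    finally:
        conn.close()
```

The key point is that marking an item done belongs in the same transaction as the work itself; doing them separately is exactly what produces the duplicated-or-lost data described above.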