Marketing, Linux, Duplicati, SFTP, CSP, Materialize, Crypto, FS

  • Marketing = BS; it's the official disinformation and lies department, propaganda and ministry of truth. Our local telco is now marketing "5G Ready" mobile subscriptions, even though the technology isn't available and isn't even licensable yet. It's just like when they marketed "mobile payments" by sticking an NFC payment sticker on the phone. I could tell them where they should stick that sticker, and then pay with it. It would be really fun to market it that way; I'm sure the campaign would get global attention if they were just rude enough about it. Maybe it's not the best marketing, but it would certainly get attention. Anyway, 4G+ (LTE Advanced Pro) is not 5G, just like the 3G DC they marketed as 4G wasn't 4G. Never trust the sales guy or the marketing team. Their job is to be full of it.
  • Hardening Linux server configuration: securing paths, adding ACLs and groups, setting file permissions, and so on. Lots of work to get everything right, but it's the same pain with Windows too. It's pretty bad if anyone can write to a file which later gets executed as root. An extremely common mistake. Hackers looking to take over systems certainly know to look for such files and misconfigurations; you don't need any privilege-escalation exploit if you can do it the very easy way.
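A minimal sketch of the kind of check mentioned above: scan directories whose contents often end up executed as root for world-writable files. The directory list is just an illustrative assumption; adjust it for your own distribution.

```python
import os
import stat
import sys

def world_writable(paths):
    """Yield regular files under the given directories that anyone can write to."""
    for root_dir in paths:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                full = os.path.join(dirpath, name)
                try:
                    mode = os.lstat(full).st_mode
                except OSError:
                    continue  # file vanished or is unreadable; skip it
                # Regular file with the "others may write" bit set
                if stat.S_ISREG(mode) and mode & stat.S_IWOTH:
                    yield full

if __name__ == "__main__":
    # Example directories only -- these commonly contain root-executed
    # scripts, but the right list depends on the system.
    suspect_dirs = sys.argv[1:] or ["/etc/cron.d", "/etc/init.d"]
    for path in world_writable(suspect_dirs):
        print(path)
```

Finding a hit here doesn't automatically mean compromise, but any world-writable file in a root-executed path is worth fixing immediately.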
  • Sigh, as expected: n+1 backup sets are broken and Duplicati 2 now fails. I'll try the recovery procedures once again, but it seems those are broken at the code level too. If recovery worked, I would automate it so that all the backup sets would get fixed automatically, with minimal or no manual intervention. Currently the process is pretty much catastrophic, because it involves deleting the whole backup set and pushing all the data to the backup system again, and that's a lot of data. AFAIK, the recovery process should work much better than it currently does: accept the fact that some data is lost and continue, instead of getting 100% stuck and failing. - From an integration point of view this is the classic question: do we accept that something isn't exactly right, or do we full stop and force someone to deal with it, and how well can we recover automatically when such a situation is encountered?
  • I wonder how many people don't realize that SFTP, FTPS and FTPES have absolutely nothing to do with each other. SFTP is file transfer over SSH (typically port 22), FTPS is FTP wrapped in implicit TLS (typically port 990), and FTPES is plain FTP explicitly upgraded to TLS with AUTH TLS on port 21. It seems that in many cases even IT departments are totally clueless about this.
  • Took a while, but got Content-Security-Policy (CSP) straight for a few sites which I administer and which lacked it.
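As an illustration only (not the actual policy used on those sites), a fairly strict CSP header for a site that serves all of its own assets might look like this:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'; base-uri 'self'
```

Deploying it as `Content-Security-Policy-Report-Only` first is a reasonable way to see what would break before enforcing it.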
  • Had amazing fun (not) with the latest Materialize and jQuery. So many things broken, again, due to new versions and incompatibility issues requiring migration.
  • Daily integration WTF. The customer complains that feature XYZ isn't working. I check the specification; it doesn't say anything about XYZ. My reply is very short and absolutely correct: I just confirmed that feature XYZ is working exactly as requested, specified and implemented. - Isn't that a correct and perfect answer? Such statements are a true joy to give. Very simple and 100% accurate.
  • NIST is looking for lightweight cryptography. That's a very welcome approach; something like AES is total overkill for many truly low-power devices, making them expensive and power-hungry, or dead slow.
  • Someone asked what's the point of small-file inlining in file systems. Well, there's a lot to it. I just finished an integration project where the customer absolutely wanted one file per transaction, and on top of that a very compact and space-efficient data format inside the file. The files being generated are typically between 50 and 200 bytes, and there are tens of thousands of them daily. In this case, inlining data in the file system makes perfect sense. Sure, I didn't recommend using individual files, but they insisted on it, so be it. With this design the payload efficiency is very low and most of the computing & I/O power is wasted on small-file handling overhead. It also made the integration over 25x (2500%) slower than it used to be.
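A quick back-of-the-envelope estimate of why this hurts. The numbers below are hypothetical (4 KiB blocks, 50,000 files of ~120 bytes per day), chosen to match the ranges mentioned above; without inlining, each tiny file still consumes at least one full filesystem block.

```python
# Rough estimate of on-disk overhead when storing many tiny files
# on a filesystem without small-file inlining.
# Assumed (hypothetical) numbers: 4 KiB blocks, 50 000 files/day,
# ~120 bytes of payload per file.
BLOCK_SIZE = 4096
FILES_PER_DAY = 50_000
AVG_FILE_BYTES = 120

payload = FILES_PER_DAY * AVG_FILE_BYTES   # useful data written per day
allocated = FILES_PER_DAY * BLOCK_SIZE     # at least one block per file
efficiency = payload / allocated

print(f"payload per day:   {payload / 1024:.0f} KiB")
print(f"allocated per day: {allocated / (1024 * 1024):.0f} MiB")
print(f"payload efficiency: {efficiency:.1%}")
```

Under these assumptions the payload efficiency is about 3%, i.e. over 97% of the allocated space is padding, before even counting inode and directory overhead or the per-file open/close syscall cost.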
  • Managed to (probably) repair some backup sets created by Duplicati by running purge-broken-files and then repair (twice), to make sure everything is OK. Have to verify the results and do random restore tests tomorrow. - Worryingly, one purge-broken-files run seems to have destroyed all data from one backup set. That must mean there was pre-existing hidden corruption. Ouch! - Or maybe those are cases where compaction happened during the "lost" section; that would produce exactly this kind of result. Yeah, I could check that from the logs.
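For reference, the repair sequence described above can be sketched with the Duplicati 2 command-line client roughly like this. The storage URL is a placeholder, and real runs typically also need passphrase and database-path options:

```shell
# Hedged sketch of the Duplicati 2 repair sequence; URL is a placeholder.
BACKEND="sftp://backup.example.com/duplicati/set1"

# 1. See what the engine considers broken before touching anything
duplicati-cli list-broken-files "$BACKEND"

# 2. Purge references to data that can no longer be restored
duplicati-cli purge-broken-files "$BACKEND"

# 3. Repair the local database and recreate missing remote index files
duplicati-cli repair "$BACKEND"

# 4. Verify by downloading and checking a sample of remote volumes
duplicati-cli test "$BACKEND"
```

The final verification step is what actual restore tests would confirm; a passing `test` run is still no substitute for restoring real files.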

2019-09-22