Duplicati, Briar, Git, Race condition, foilChat, RDP

  • Duplicati failure story continued: covertly broken backups. Restore fails, everything else looks ok - repair, test, purge-broken-files, backup -> ok - restore -> says it's successful, but in reality no files were restored. Steps to reproduce? (1) Generate backups as usual. (2) Delete one backup set, let's say the "last one". (3) Run purge-broken-files and repair. (4) Run more backups. (5) Try to restore -> fail (repro sketch below). This is very dangerous: now I've got an unknown number of backups which are officially in a good state but in reality totally useless. I think I'll have to do the full restore test & reset, where I restore every backup fully with the latest version. If that fails, I delete the state file (local SQLite database) as well as the files from the backup storage. This triggers the backups to be completely rebuilt, but it seems to be the only way to recover reliably from missing files. Afaik the purge-broken-files & repair functions are broken, because this shouldn't be necessary after such a "simple" failure. It's also possible that the restore-from-repository function, which rebuilds the database for restore, includes some invalid references in the process and that's why the restore isn't working - so it's not obvious that repair / purge is the broken part. It's a definition question how this kind of mismatch should be handled when restoring directly from the repository without a local database. But to summarize, this is a very serious and dangerous situation. Ref: GitHub / Duplicati Issue #3037
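    A minimal repro sketch of the steps above using Duplicati's command-line interface. The backend URL, passphrase and paths are placeholders, not the real setup, and step 2 is done by deleting files straight from the backend storage:

      import subprocess

      URL = "file:///mnt/backups/test"          # placeholder backend URL
      PW = "--passphrase=test-passphrase"       # placeholder passphrase

      def dup(*args):
          subprocess.run(["duplicati-cli", *args, PW], check=False)

      dup("backup", URL, "/home/user/data")     # 1. generate backups as usual
      # 2. manually delete the newest dlist/dblock files from the backend here
      dup("purge-broken-files", URL)            # 3. purge + repair both report ok
      dup("repair", URL)
      dup("backup", URL, "/home/user/data")     # 4. further backups run fine
      dup("restore", URL, "*",                  # 5. claims success, restores nothing
          "--restore-path=/tmp/restore-test")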
  • Tried the latest Briar 1.0 beta. Yes, it's nice and it works. But it would still be preferable to be able to add contacts by exchanging identity keys directly, as long as the keys are transported securely. That's trivial for security-conscious people, so the app should allow it, of course with the required warnings, for people who know what they're doing and can make the exchange happen securely (fingerprint sketch below).
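    A hedged sketch of what manual key exchange could look like: derive a short, human-comparable fingerprint from the identity key and compare it over a secure channel. This is not Briar's actual format; the key bytes and the grouping are illustrative assumptions:

      import base64
      import hashlib

      def key_fingerprint(public_key: bytes, groups: int = 4) -> str:
          # Hash the raw key and render a short base32 string for manual comparison.
          digest = hashlib.sha256(public_key).digest()
          b32 = base64.b32encode(digest).decode("ascii").rstrip("=")
          return "-".join(b32[i:i + 4] for i in range(0, groups * 4, 4))

      print(key_fingerprint(b"\x01" * 32))  # placeholder 32-byte identity key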
  • Ran git gc as a batch job on the repository server. Over 200k loose object files got removed and packed into pack files. Nice! (Batch sketch below.)
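    A minimal batch sketch, assuming the bare repositories live under /srv/git (the path is a placeholder):

      import pathlib
      import subprocess

      REPO_ROOT = pathlib.Path("/srv/git")  # placeholder repository root

      for repo in sorted(REPO_ROOT.glob("*.git")):
          print(f"gc: {repo}")
          # A plain 'git gc' packs loose objects and prunes unreachable ones;
          # 'git gc --auto' would only run when the loose-object threshold is hit.
          subprocess.run(["git", "-C", str(repo), "gc"], check=False)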
  • Fixed a few serious race conditions in one project. In the test environment it always worked, but in production there's much more parallelism and the I/O performance per thread can be much worse, so the race conditions started to cause actual problems. (Illustration below.)
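    An illustration of the classic read-modify-write race (not the project's actual code): the unsafe version reads, modifies and writes in separate steps, so under enough parallelism two threads can interleave and lose updates; the fix is to make the whole sequence atomic:

      import threading

      counter = 0
      lock = threading.Lock()

      def unsafe_increment():
          global counter
          value = counter   # read
          value += 1        # modify
          counter = value   # write - another thread may have written in between

      def safe_increment():
          global counter
          with lock:        # read-modify-write as one atomic critical section
              counter += 1

      threads = [threading.Thread(target=safe_increment) for _ in range(1000)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(counter)  # always 1000 with the lock; the unsafe version can lose updates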
  • Played a bit with the secure chat application foilChat. Also went through their Encryption Protocol Overview in detail. Hmm, no date or version is mentioned in the documentation; documents like these usually contain both. Anyway, the documentation PDF file itself was created on 13.11.2018 (DD.MM.YYYY). They also required an email address to receive the file, so let's see if it's fingerprinted or something - no, it wasn't. The first four bytes of the current version's SHA-256 are dd b6 0b d2 (foilChat-Encryption_Protocol_Overview.pdf), and I compared that against a second copy of the document. Yup, it's the same, even when sent to a different recipient, so it's not a recipient-specific version. I also used a different device, IP address, and domain to receive it, so it isn't keyed to any of those parameters either, if there were some kind of fingerprinting (hash check sketched below). The documentation itself is just basic stuff: RSA 4096, Fortuna PRNG, AES-256 CBC and so on. Some aspects of the document are very inaccurate and even slightly misleading. "RSA creates and then publishes" - what? "AES allows encrypted communication with multiple parties" - what? Only obvious stuff in this documentation, and some of the most interesting parts are absolutely vague: the group chat documentation is bad and the key storage documentation is nonexistent. VVoIP, DTLS, SRTP, PBKDF2 - what? "When the user logs off, the container is encrypted and the key is removed from local and persistent memory." What if the user doesn't log off and the device is powered off abruptly? And how do you make sure something is removed from persistent memory? It's nearly impossible to be sure about that. The document also mentions the technologies being used: Redis, MySQL and MogileFS. In general, very bad documentation which doesn't contain any of the required details, only light and partly misleading stuff.
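    The hash comparison above, as a minimal sketch (the two file names are placeholders for the separately received copies):

      import hashlib

      def sha256_hex(path: str) -> str:
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(65536), b""):
                  h.update(chunk)
          return h.hexdigest()

      a = sha256_hex("copy1/foilChat-Encryption_Protocol_Overview.pdf")
      b = sha256_hex("copy2/foilChat-Encryption_Protocol_Overview.pdf")
      # prefix ddb60bd2 matches the dd b6 0b d2 noted above
      print(a[:8], b[:8], "identical" if a == b else "DIFFERENT")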
  • Remote Desktop (RDP) CredSSP errors (CVE-2018-0886) - lots of problems. Yet it's quite obvious that the systems should have been updated before getting to this stage. (The usual client-side registry workaround is sketched below.)
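    The widely documented temporary workaround is to relax the CredSSP encryption oracle policy on the client; a sketch only, and patching both ends is the correct fix, as noted above. Value meanings: 0 = force updated clients, 1 = mitigated, 2 = vulnerable:

      import winreg  # Windows only

      KEY_PATH = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
                  r"\Policies\System\CredSSP\Parameters")

      # Allow connecting to unpatched servers again (temporarily!).
      with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
          winreg.SetValueEx(key, "AllowEncryptionOracle", 0, winreg.REG_DWORD, 2)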

2019-10-06