Keybase - TOFU is bad (?)

Ref: Keybase TOFU Article

Long article, let's see. Here are my thoughts and comments on first read. No links on keywords, because I believe anyone reading this already knows the basics I'm referring to in this post. If needed, you can always ddg.gg the keywords.

If user identity is tied to user encryption keys, then of course losing the keys means that the trust needs to be rebuilt using alternate methods. That's obvious. Can you call that a flaw? I think the real flaw is the users who lose the keys. Also, when that weakness is known, any smart user can implement the required fallback / backup / recovery / restore process. Like exchanging the new identity, or verifying its "security numbers" / "security codes" (whatever those are called) over an alternate cryptographically secure channel. As an example, sending a signed OpenPGP message containing that information, with the other party confirming that it's valid.
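
As a minimal sketch of that fallback, assuming the python-gnupg package and long-term OpenPGP keys both sides already trust and have in their keyrings. The key ID and fingerprint strings below are placeholders, not real values:

```python
# Sketch: confirming a reset identity over an independent OpenPGP channel.
import gnupg

gpg = gnupg.GPG()

# The party with the new account signs a statement binding the new
# identity key to their well-known long-term OpenPGP key.
statement = "My new identity key fingerprint is: NEW-FINGERPRINT"
signed = gpg.sign(statement, keyid="ALICE-LONGTERM-KEYID")

# The other side verifies the signature against the known long-term key
# and only then trusts the new fingerprint.
verified = gpg.verify(str(signed))
if verified.valid and verified.fingerprint == "ALICE-LONGTERM-FPR":
    print("New identity confirmed over an alternate secure channel.")
```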

For things to work securely, there are so many things that have to be done correctly. It doesn't matter at all if one layer is secure, when all the other layers or "users" are totally broken. This means that even if the security technology were absolutely perfect, you're still screwed if your users are brain dead. Lack of OpSec can't be easily fixed with any single technology.

The claim that comparing security numbers requires meeting in person is totally false. No, it doesn't. This is the thing I've been criticizing hard with BriarApp as well.

They also forget that meeting someone in person can put you in very great danger. A strong pseudonymous identity can be a great and valuable thing, even if it can't be connected to "you" as a person.

When communicating, always have multiple trusted channels with different pre-agreed authorization / identity keys, as well as a pre-agreed protocol to deal with such situations. Don't rely on a single "secure" channel alone. This also includes duress protocols, which hint that the questions should be answered, but with valid-looking false information, while the other parties are alerted.

Having such strict protocols in place is also a great test. Are the people able to follow the protocols or not? If they aren't, it's obvious that those parties shouldn't be trusted with anything important anyway, because they're a security risk. I'm sure everyone working in the field is shocked how little (most) people care about security after all.

If someone loses their identity keys and the account is reset, you can always verify the new identity using other strong methods.

I do agree that server trust and SMS trust are bad, but as mentioned, those shouldn't ever be trusted anyway. The next question is whether you should trust any chat app, or even a mobile operating system, with serious secrets. I would be very hesitant to do that if I had such secrets. After very long consideration, the very best way to protect serious secrets is to have none. Secondly, if there were such secrets, the best approach is still not to tell anyone. -> No need for secure communication.

Their break-into-communication example is bad. - When people are careful about security, any such questions raise immediate alarm. If it's not alarming, then it's probably not a very secure community to begin with. Sometimes people are annoyed by the fact that if they ask stupid or "wrong" questions, they'll also get stupid answers and pure disinformation. - That's something you shouldn't have asked at all, so I gave you an answer which will be entrapment, or simply waste your time with disinformation.

About large group chats: people using random devices & mobiles with bad security, in random locations with random people - a room with 20 such people and serious secrets is a guaranteed failure anyway. "I'm not using a screen lock, and I think I forgot the phone in some toilet in some random bar last night when I was drunk." - Absolutely great. Also, because people do not follow security protocols, they'll probably wait two weeks before telling anyone that they lost the device and keys, because they'll just be waiting to see if it happens to turn up somewhere. - Yep, that's totally normal. So, no secrets in large group chat rooms. Only on a need-to-know basis and with properly trained and competent people.

Keybase is a bit of an annoying company, similar to Cloudflare or Google. They push their own agenda as great innovation, totally forgetting to tell about all the downsides related to their products. Many posts aren't honest technical articles, but cleverly disguised marketing. Make no mistake here.

"If you lose access to all of your devices." What kind of stupid assumption is that, that all of my devices would be linked to my Keybase identity? Deep sigh.

Device-to-device provisioning is tricky, which could also easily mean that having a single device compromised can lead to complete loss of identity and security. Is that preferable? Depends on the situation, again. Now the attacker, let's say the one who stole the phone in a bar, can get rid of all of your other devices and add one new device via the trust. So even if you manage to lock, wipe, or whatever the one device they stole, it's useless. They still got your account, encryption keys and access.

In general, the full security picture sounds good.

Anyway, I would personally prefer a recovery key instead of multi-device support. About "registration locks": of course those are used, always. At least WhatsApp and Telegram provide such locks. Hijacking (or phone-porting) the telephone number alone isn't enough. "Ephemeral or self-destructing messages" won't work, because there's no guarantee that the client follows the protocol(s). Account resets wouldn't be necessary if there were a recovery key. Afaik, lack of such a key is bad design.

Yes, I made it to the end and read the whole article. ;) Btw. When they say device keys never leave the device, are they using an HSM? If not, it's just "application data", so what guarantees that it can't leave the device or be cloned? Cloning the device would allow returning the temporarily "borrowed" phone back to you, without you realizing it has been cloned.

In general, it would be preferable to allow different security levels for different use cases. This is something I've been thinking about when reading and considering the Megolm protocol of Matrix. Sometimes automatic key provisioning between devices is great, sometimes it should be an absolute no-no. The same applies to group chats. Should new devices be allowed? What's the registration policy? TOFU (Trust On First Use), TADA (Trust After Device Additions), N existing users from the room, or N administrators from the room? N users from the room is problematic, as mentioned, because people are stupid. Who guarantees that the authentication was done properly? My guess is that it wouldn't be done in most cases, even if it were technically required. Users would just click "Ok, I can vouch for that", without properly questioning the request. Even N administrators would be problematic for the same reason (see the sketch after this paragraph). Also, with lower security settings, it would be really handy if, after accepting a user's new device to a room, the user could get encryption keys for past messages from other users. Currently this isn't possible with Megolm / Matrix. The keys can only be obtained from my other devices, which might not have all the required keys.
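
A hypothetical sketch of what such a per-room device-admission policy could look like. The enum values, the Vouch structure and the n parameter are my own invention for illustration, not any real Matrix API. Note how the weak point is visible right in the code: nothing can verify that a voucher actually did an out-of-band check instead of just clicking "Ok":

```python
# Sketch: per-room device-admission policy with different security levels.
from dataclasses import dataclass
from enum import Enum, auto

class AdmissionPolicy(Enum):
    TOFU = auto()        # accept any new device on first sight
    N_MEMBERS = auto()   # require vouches from N existing room members
    N_ADMINS = auto()    # require vouches from N room administrators

@dataclass
class Vouch:
    member_id: str
    is_admin: bool
    verified_out_of_band: bool  # did they actually compare keys?

def admit_device(policy: AdmissionPolicy, vouches: list[Vouch], n: int = 2) -> bool:
    if policy is AdmissionPolicy.TOFU:
        return True
    # Only count vouches where the voucher claims a real out-of-band check.
    # Of course nothing here can prove they didn't just click "Ok".
    real = [v for v in vouches if v.verified_out_of_band]
    if policy is AdmissionPolicy.N_MEMBERS:
        return len(real) >= n
    return sum(1 for v in real if v.is_admin) >= n
```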

Some afterthoughts after sleeping over it and discussing with friends & colleagues:

As I discussed this article with friends, many mentioned that they have anonymous "trusted" contacts. Which means that the meeting-in-the-real-world authentication wouldn't work anyway. Anyone could pretend to be the contact. Unless there's a secondary authentication protocol agreed upon, as I described. It could be something very simple. But I'm not going to elaborate on this, because the security protocol is something you've agreed with a specific peer / contact, and it's not public information.

Many friends also mentioned using long-term OpenPGP keys for identification. Those provide convenient pseudonymity if the key pair is created for a specific purpose, as I've mentioned in many of my posts. As well as having multiple "strong" identities, which of course aren't interlinked. Actually, an occasion-specific, non-interlinked identity can be a very important security factor.

Friends also mentioned that if you ask the wrong questions, you're just going to get disinformation. In a specific context, there are only specific things you can talk or ask about. Going outside that scope just leads to zero professional peer trust. Building confidence with pseudonymous contacts is a long process. But it can lead to good results, while providing the security that people won't even know each other, even if they would accidentally meet in real space.

Some of my friends were also worried about the "trusted device key". How well is that key really protected? Are all "trusted devices" working as security anchors? Compromising a single one... Oh, I wrote about that already.

Having the user root trust anchor encrypted and stored separately of course came up in the discussion. Right, in this case there wouldn't be a need to TOFU the user identity key, because it could be restored, and it could then be used to sign the device keys. But this is a process which needs to be "hard enough", so you can't just use it to add rogue devices and then unregister all existing devices. - Yet one more secret to keep.
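
A minimal sketch of that encrypted root anchor idea, assuming the PyNaCl library. The passphrase is illustrative only. The root signing key lives offline, encrypted under a passphrase-derived key, and devices only ever hold their own keys plus a root-signed certificate:

```python
# Sketch: offline root trust anchor signing per-device keys.
import nacl.signing
import nacl.secret
import nacl.pwhash
import nacl.utils

# One-time setup: generate the user's root identity key.
root_key = nacl.signing.SigningKey.generate()
root_public = root_key.verify_key  # published as the user identity

# Encrypt the root key for offline storage (USB stick, safe, ...).
passphrase = b"correct horse battery staple"  # illustrative only
salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
kek = nacl.pwhash.argon2id.kdf(nacl.secret.SecretBox.KEY_SIZE, passphrase, salt)
stored_blob = nacl.secret.SecretBox(kek).encrypt(bytes(root_key))

# Adding a device later: decrypt the anchor and sign the device's public key.
recovered = nacl.signing.SigningKey(nacl.secret.SecretBox(kek).decrypt(stored_blob))
device_key = nacl.signing.SigningKey.generate()
device_cert = recovered.sign(bytes(device_key.verify_key))

# Anyone holding root_public can verify the device without TOFU.
root_public.verify(device_cert)  # raises BadSignatureError on forgery
```

The "hard enough" part is exactly the manual step of fetching and decrypting the blob; that friction is what keeps the same mechanism from being abused to quietly add rogue devices.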

Some friends brought up the matter that nobody cares about security. Caring about security is just annoying. So if new encryption keys are derived, great, let's then use those. It's still better than no encryption, and it throws off passive attackers. Of course it won't help in the case of an active attacker, but who would bother with that anyway. - This is of course in the context of daily non-sensitive chitchat. - This is also part of the problem, because in this scenario it's highly unlikely that even if there were a strong secondary re-authentication protocol, users would follow through with it.

Others claimed that the authentication challenge/response can't be done in band. Well, it depends on how you do it. In this case, the challenge message needs to contain the new identity key, bound with some pre-agreed secret. If there's an active MITM attack, they can't modify the message so that it would still pass. Assuming that the "security numbers" themselves aren't falsified on both ends to look the same. This can be done in many different ways; as an example, one trivial way would be to take a screenshot of the "security numbers" and encrypt and sign it with the recipient's key. Then they verify the key and send back a signed confirmation that the check passed. - This just leads to the next question of how the "security numbers" are derived, and whether there's a way to falsify that information at the client end. Of course, other simpler shared-secret authentication protocols could be used as well. HMAC, Password-authenticated key agreement (PAKE) and Kerberos come to mind immediately. After thinking a while: actually, the communication doesn't even need to be encrypted, signatures alone are enough for mutual authentication. And this can be done in band while the MITM attacker is between the parties.
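
As a minimal sketch of the HMAC variant, using only the Python standard library. The secret, fingerprint string and function names are illustrative; the point is that the proof binds the fingerprint each side actually sees to the pre-agreed secret, so a MITM substituting keys can't forge a matching proof:

```python
# Sketch: in-band re-authentication of a new identity key via a
# pre-agreed shared secret and HMAC.
import hashlib
import hmac
import os

def make_proof(shared_secret: bytes, new_fingerprint: str, nonce: bytes) -> bytes:
    """Prover binds the new identity key fingerprint to the secret."""
    return hmac.new(shared_secret, nonce + new_fingerprint.encode(),
                    hashlib.sha256).digest()

def verify_proof(shared_secret: bytes, seen_fingerprint: str,
                 nonce: bytes, proof: bytes) -> bool:
    """Verifier recomputes the MAC over the fingerprint their own client
    displays; a substituted key yields a mismatch."""
    expected = hmac.new(shared_secret, nonce + seen_fingerprint.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# Usage: Bob sends a fresh nonce as the challenge (prevents replay),
# Alice answers with a proof over her new key's fingerprint.
secret = b"pre-agreed out-of-band secret"
nonce = os.urandom(16)
proof = make_proof(secret, "NEW-KEY-FINGERPRINT", nonce)
assert verify_proof(secret, "NEW-KEY-FINGERPRINT", nonce, proof)
```

Run the same exchange in the other direction and you have mutual authentication, with nothing encrypted and everything passing through the potentially hostile channel.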

Final words: A good post, lots of thoughts. Nothing new. It just sparked me to ask a few technical questions about the Riot.im / Matrix.org Megolm implementation and practical matters. Like how to read the history of an encrypted discussion, if messages are encrypted with device-specific keys and I'm not logged in all the time with any device.

2019-04-07