AppImage, Scuttlebutt, Dat Project, Client / Server Security
- AppImage - I like the concept of AppImage. It's a bit like Docker, but lighter. Yet using AppImages instead of lighter distribution methods often leads to distribution bloat, and dependency management becomes a real question. Sure, without images it's a big mess, but a wrapped image can be bloated too. At least it's very easy to install, use and get rid of. Application packaging.
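The "easy to install, use and get rid of" part is really the whole lifecycle. A sketch of it, using a stand-in script with a hypothetical file name instead of a real downloaded AppImage:

```shell
# Simulate the AppImage lifecycle with a stand-in file (hypothetical name):
printf '#!/bin/sh\necho hello from app\n' > MyApp.AppImage
chmod +x MyApp.AppImage        # "install" is just marking it executable
./MyApp.AppImage               # run it directly, no package manager involved
rm MyApp.AppImage              # "uninstall" is deleting one file
```

No system-wide state is touched, which is exactly why the bundled dependencies inside the image are the price you pay for that simplicity.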
- Scuttlebutt - It seems like yet another 'StatusNet' or similar, but with a technically somewhat different implementation: the system is distributed, but supports "Pubs" for consistent online presence. Some concepts of the network seem slightly problematic to me. It's nice that it adds end-to-end encryption, and for some use cases the data replication protocol might be desirable. But that's also where the problems lie: what data is stored, where, for how long, and so on. None of the light documentation made that instantly obvious, and getting that information would probably require digging into the specification or implementation-specific documentation / source code / default values and so on. No, not for me this time. Another problem seems to be, once again, flood casting, and message routing isn't exactly specified. Those are traditional pitfalls of P2P networks: everything might work very well with dozens of users, but add more posts and users, and when someone decides to use the network to share huge files, boom. That's it. Suddenly everyone's out of bandwidth and disk space, RAM and CPUs are pinned, and the network starts to lag seriously. Been there, done that, over and over again. OpenBazaar 1.0 ran into that classic trap, even though it used a DHT, which is already a pretty good option for routing and limiting data duplication. Unfortunately I have no time or interest to deep dive into this protocol, because it seems to have so many undesirable aspects.
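The replication cost problem above is easy to put in numbers. A back-of-the-envelope sketch (all figures are my own assumptions, not measured Scuttlebutt values) of what full-feed replication costs a single peer, and what happens once one followed feed starts pushing big blobs:

```python
# Rough sketch (not the actual Scuttlebutt protocol) of why full-feed
# replication hurts: every peer stores the complete log of every feed it
# follows, so one user sharing huge files inflates everyone's disk use.
follow_count = 50                    # feeds each peer replicates (assumed)
posts_per_feed = 1000                # assumed
avg_post_bytes = 2_000               # mostly short text posts (assumed)
one_heavy_feed_bytes = 5 * 1024**3   # someone starts sharing huge files

per_peer = follow_count * posts_per_feed * avg_post_bytes
print(per_peer / 1024**2, "MiB of text logs per peer")  # manageable, ~95 MiB
print((per_peer + one_heavy_feed_bytes) / 1024**3,
      "GiB per peer once one followed feed goes heavy")
```

The text-only case is harmless; the single heavy feed multiplies every replicating peer's storage by orders of magnitude, which is the "suddenly everyone's out of disk space" failure mode.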
- Dat Project - A distributed data community - A quick look at the Dat docs. So far it seems that it's nothing new at all, but let's still check the key concepts to be sure. This is something that could be worth considering for some specific use cases, yet currently I've got no use for a project like this. My current efficiency problems usually fall into two cases: encrypted remote copy with limited history, which Duplicati is used for, and remote synchronized copy without history (normal data replication), which has so far been handled with robocopy / rsync. I know that's not the most efficient way, but it has been good enough. Many files are compressed anyway, which makes de-duplication quite useless in most cases. Also, larger files are quite static; for non-static larger files Duplicati is used for backup purposes. If I need something more efficient, then I guess the file synchronization article @ Wikipedia is a good start.
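The point about compression defeating de-duplication is easy to demonstrate. A minimal sketch with synthetic data and zlib as the compressor: two versions of a "file" share almost all of their content, but after compression the shared bytes largely vanish, because one changed byte shifts the rest of the compressed bit stream:

```python
# Minimal sketch of why de-duplication across compressed files rarely helps.
import zlib

lines = [f"record {i}: value {i * i}\n" for i in range(5000)]
v1 = "".join(lines).encode()
lines[2500] = "record 2500: EDITED\n"   # change a single line in the middle
v2 = "".join(lines).encode()

def common_suffix(x, y):
    # Count identical trailing bytes - the part a sync/dedup tool could reuse.
    n = 0
    while n < min(len(x), len(y)) and x[-1 - n] == y[-1 - n]:
        n += 1
    return n

raw_shared = common_suffix(v1, v2)      # tens of kilobytes still identical
comp_shared = common_suffix(zlib.compress(v1), zlib.compress(v2))  # near zero
print(raw_shared, comp_shared)
```

On the raw files, everything after the edited line is byte-identical and a chunk-based tool can skip it; on the compressed files, the streams diverge from the edit onward, so each version transfers and stores as if it were brand-new data.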
- Checked one random project and found out that it has fatal design flaws. It seems that its developers don't clearly understand the security benefits of the client-server approach. Instead they do lots of stuff in the client, which then directly accesses the database with R/W credentials. This is inherently, extremely insecure, even if the client controls data visibility for the end user. What if someone takes those credentials and writes their own custom client, simply sends alternate SQL queries over any generic SQL client, or modifies the operation of the client? This is actually an attack vector I've used over and over again. Whatever the client does doesn't matter if the server side doesn't enforce security. This is very bad. I've reported the issues to the project's developer team, but unfortunately it's highly likely that nobody cares, because everything is working just fine. For trolling purposes it would be fun to encrypt the data in their database every now and then; I'd decrypt it when the bug has been fixed. Eh... Sure, they're not using (global) default credentials, but it's not hard to figure out the hard-coded credentials every client is using. Because the credentials are hard coded, it also means that if they ever need to be changed, it will break all previous versions. Which makes it even more unlikely they'll want to change / fix this issue, because it would create a new problem. Unless the situation becomes really bad.
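The flaw in miniature: a minimal sketch (hypothetical table and user names, SQLite standing in for whatever database the project uses) showing why filtering in the client is not access control when every client holds the same R/W credentials:

```python
# Sketch: client-side filtering vs. actual access control. Anyone holding
# the client's shared hard-coded R/W credentials can bypass the official
# client entirely and run whatever SQL they like.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
db.executemany("INSERT INTO notes VALUES (?, ?)",
               [("alice", "alice's private note"),
                ("bob", "bob's private note")])

def official_client(user):
    # The shipped client only *shows* the user's own rows - but that
    # restriction lives in the client, not in any server-side check.
    return db.execute("SELECT body FROM notes WHERE owner = ?",
                      (user,)).fetchall()

# An attacker with the same credentials opens any generic SQL client:
leaked = db.execute("SELECT owner, body FROM notes").fetchall()  # reads it all
db.execute("UPDATE notes SET body = 'vandalized'")  # full write access, too
```

The fix is to put a server between the client and the database: the server authenticates each user and enforces authorization on every query, so credentials held by a client grant nothing beyond that user's own data.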