SQL Server, FF, CA, Beacons, IPv4, Backups

  • Had a long battle to recover one seriously messed up Microsoft SQL Server. As usual, there were many broken layers stacked on top of each other to fix. First, of course, the setup was corrupted; running the installation repair fixed that up. Then the file permissions were messed up. Fixed that too. Next the tempdb creation failed, because its path was invalid. Fixed it. Then I changed the default file locations in the service parameters and finally ran repair on the database files. It took quite a while to get all this done, but the most important thing is that now I know so much more about SQL Server recovery, configuration, and the requirements to get the service started and data served. Job done. The most important trick I had to learn was to use trace flags to get SQL Server started even when the configuration data in the master database is invalid. (A minimal sketch of those startup steps is after this list.)
  • Firefox configuration guide for privacy and performance. Great read. Personal comments and views: cache in RAM, of course. "Shame on you if you're running Windows." is well said. A nice list of stuff, but nothing new there, and many, many configuration parameters weren't discussed at all.
  • GitHub hit by DDoS. Nothing new here either: quite a massive reflection attack and the standard procedures to mitigate it.
  • How not to run a CA. That's funny. But to be honest, they just did what customers wanted and tried to make it as user friendly as possible. Yep, it also happened to be totally insecure, but that's life. Security is unfortunately something most people hate, which often leads to obvious trade-offs.
  • EPIRB, Search and Rescue Transponder (SART) and AIS-SART. Emergency locating devices; the EPIRB side uses the Cospas-Sarsat satellite system.
  • IPv4 is on its way out, but what about users who want to keep using IPv4? It seems that the protocols designed to allow that are developing fast: 464XLAT (RFC 6877) and DS-Lite (RFC 6333) are something I've covered earlier. But I hadn't read about 4rd (RFC 7600) and MAP, which got split into two RFCs: MAP-T (RFC 7599) and MAP-E (RFC 7597). (See the small address-translation sketch after this list.)
  • Some interesting observations from backup software tests. Some of the programs build larger blocks of data and de-duplicate everything by default; other programs send files one by one to the remote end, don't de-duplicate, utilize delta compression, and so on. After these tests I have to say that Duplicati 2 is actually awesome, even when compared to these basic commercial backup software alternatives. Backup is just one of those computing problems which is absolutely full of different trade-offs: do you utilize data blocks, how is de-duplication done, what's the block size, are small or large chunks used, are files compressed / stored independently? Compressing small files independently is very inefficient, yet restore times might grow if you use large data blocks and the stuff you're restoring is distributed across very many blocks, and so on. Using large blocks without compacting the backups at the remote server is also inefficient and time consuming, etc. But to summarize, Duplicati 2 does extremely well on all of these scales. I guess I have to make a personal donation to the Duplicati 2 project. One of the programs was particularly painful to watch while it was building backup sets: it compressed the same files over and over again and sent them individually to the backup service, even though those files were repeated tens of times in different directories with exactly the same content. Those files were also very small, and therefore the transfer rate was something ridiculous; I guess the data transfer overhead was larger than the payload. Just a few kilobytes per second. (A toy de-duplication sketch is after this list.)
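
On the SQL Server item: the post doesn't include the exact commands, so this is only a minimal sketch of the standard "start with trace flags, then fix tempdb" routine I believe it refers to. The instance name MSSQLSERVER, the D:\SQLData path and the default logical file names tempdev/templog are assumptions, not taken from the post.

```python
# Hedged sketch: bring up an instance whose tempdb path recorded in the master
# database is invalid, then point tempdb somewhere that exists. Run as admin.
import subprocess

# Start in minimal configuration (/f) with trace flag 3608, which recovers only
# master at startup, so the service comes up despite the broken tempdb location.
subprocess.run(["net", "start", "MSSQLSERVER", "/f", "/T3608"], check=True)

# Move tempdb to a valid directory; new files are created on the next restart.
fix_tempdb = (
    "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\\SQLData\\tempdb.mdf');"
    "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\\SQLData\\templog.ldf');"
)
subprocess.run(["sqlcmd", "-E", "-Q", fix_tempdb], check=True)

# Restart normally, then check/repair the user databases (DBCC CHECKDB etc.).
subprocess.run(["net", "stop", "MSSQLSERVER", "/y"], check=True)
subprocess.run(["net", "start", "MSSQLSERVER"], check=True)
```
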
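On the IPv4 transition item, a small reminder (not from the linked material) of how the v4-in-v6 part of these mechanisms works: embedding a 32-bit IPv4 address under the RFC 6052 well-known prefix 64:ff9b::/96, the translation step that NAT64/464XLAT builds on. MAP-T and MAP-E additionally encode port-set information, which isn't shown here.

```python
import ipaddress

def embed_ipv4(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Place the 32-bit IPv4 address in the low 32 bits of the /96 translation prefix."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(v4)))

# RFC 6052's own example address:
print(embed_ipv4("192.0.2.33"))  # -> 64:ff9b::c000:221
```
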
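And on the backup item, a toy sketch of the block-level de-duplication trade-off described above; this is made up for illustration and isn't how any of the tested products actually work. Data is split into fixed-size blocks, each block is hashed, and only previously unseen blocks would be uploaded, so identical files repeated across directories cost almost nothing after the first copy. Larger blocks mean fewer hashes and less index overhead, but coarser de-duplication and restore granularity.

```python
import hashlib
from pathlib import Path

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks; a real tool would make this configurable
seen = set()              # block hashes already stored; real tools keep this in a local database

def backup_file(path: Path) -> tuple:
    """Return (bytes read, bytes that would actually be uploaded after de-duplication)."""
    read = uploaded = 0
    with path.open("rb") as f:
        while block := f.read(BLOCK_SIZE):
            read += len(block)
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen:
                seen.add(digest)          # first time we see this block
                uploaded += len(block)    # it would be compressed and sent
    return read, uploaded

# Usage (assuming example.dat exists): repeated identical content adds ~0 upload bytes.
total_read, total_sent = backup_file(Path("example.dat"))
print(f"read {total_read} bytes, would upload {total_sent} bytes")
```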

2019-07-07