
Security Headers, JSON feed, Postgres, SCRAM, Google.ai, Duplicati, DB, 4G, TFS, HAMMER

posted May 27, 2018, 4:04 AM by Sami Lehtinen   [ updated May 27, 2018, 4:05 AM ]
  • HTTP security headers - A nice list of HTTP security headers. I think I posted a lot about HTTP security headers and their implications about a month ago, like how getting CSP right can be challenging.
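As a minimal sketch of what applying those headers looks like in practice, here's a hypothetical WSGI middleware. The header names are the standard ones, but the values are illustrative defaults, not a recommendation for any particular site:

```python
# Hypothetical sketch: WSGI middleware adding common HTTP security headers.
# Values below are illustrative defaults only.
SECURITY_HEADERS = [
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ("X-Content-Type-Options", "nosniff"),
    ("X-Frame-Options", "DENY"),
    ("Content-Security-Policy", "default-src 'self'"),
    ("Referrer-Policy", "no-referrer"),
]

def security_header_middleware(app):
    """Wrap a WSGI app so every response carries the headers above,
    without overriding headers the app already set itself."""
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            extra = [(n, v) for n, v in SECURITY_HEADERS
                     if n.lower() not in present]
            return start_response(status, headers + extra, exc_info)
        return app(environ, start_with_headers)
    return wrapped
```

The nice part of doing it in middleware is that the policy lives in one place instead of being repeated per view.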
  • About customer tracking, as mentioned earlier: one 'entity' gave up its loyalty program, because it could already track people at least as well without it, using cameras, mobile tracking, and all the technologies the tracking industry is pushing. This might also be one of the reasons why basically every shopping mall provides free WiFi / WLAN. Providing some Internet bandwidth is virtually free, but at the same time you get so much other information about the devices being used: where they are used, what they are used for, etc.
  • Implemented a JSON feed for one site / project. Why? Because it was fun and I can.
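Generating a JSON feed is mostly just assembling a dict in the shape the jsonfeed.org spec describes. A sketch against the JSON Feed 1.1 format; the feed-level keys come from the spec, but the input field names on the `posts` side are hypothetical:

```python
import json

def build_json_feed(title, home_page_url, posts):
    """Assemble a minimal JSON Feed (jsonfeed.org, version 1.1) document.
    'posts' is a list of dicts with hypothetical keys:
    'id', 'url', 'title', 'text'."""
    feed = {
        "version": "https://jsonfeed.org/version/1.1",
        "title": title,
        "home_page_url": home_page_url,
        "items": [
            {
                "id": p["id"],
                "url": p["url"],
                "title": p["title"],
                "content_text": p["text"],
            }
            for p in posts
        ],
    }
    return json.dumps(feed, indent=2)
```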
  • Studied PostgreSQL 10's new features. Awesome stuff like: Logical Replication, Native Table Partitioning, Additional Query Parallelism, Quorum Commit for Synchronous Replication, Multi-host failover, Crash-safe hash indexes, Multi-column correlation statistics, XMLTABLE, and FTS with JSON and JSONB (Binary JSON).
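Of that list, native table partitioning is probably the most immediately useful one. A hedged sketch that just generates PostgreSQL 10 range-partition DDL for monthly child tables; the table and column names here are made up for illustration:

```python
import datetime

# Parent table, declared once (PostgreSQL 10 declarative syntax):
#   CREATE TABLE measurements (created_at date, value int)
#   PARTITION BY RANGE (created_at);

def monthly_partition_ddl(parent, year, month):
    """Generate DDL for one monthly child partition of 'parent'.
    Range partitions use a half-open interval [FROM, TO)."""
    start = datetime.date(year, month, 1)
    end = (datetime.date(year + 1, 1, 1) if month == 12
           else datetime.date(year, month + 1, 1))
    return (f"CREATE TABLE {parent}_{start:%Y_%m} PARTITION OF {parent} "
            f"FOR VALUES FROM ('{start}') TO ('{end}');")
```

Rows inserted into the parent are routed to the matching child automatically, which is the big improvement over the old inheritance-plus-triggers approach.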
  • Salted Challenge Response Authentication Mechanism (SCRAM) - A pretty traditional solution, nothing new afaik.
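The underlying idea is indeed old: the server stores only a salted, derived key (never the password), and the client proves knowledge of the password by signing a fresh nonce. A heavily simplified sketch of that challenge-response shape, not the actual RFC 5802 SCRAM message flow:

```python
import hashlib
import hmac
import os

def register(password):
    """Server side at sign-up: store salt + derived key, not the password."""
    salt = os.urandom(16)
    stored_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 4096)
    return salt, stored_key

def prove(password, salt, nonce):
    """Client side: re-derive the key and sign the server's nonce."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 4096)
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()

def verify(stored_key, nonce, proof):
    """Server side: recompute the expected proof and compare safely."""
    expected = hmac.new(stored_key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)
```

The nonce means a captured proof can't be replayed, and the salt means identical passwords don't produce identical stored keys.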
  • Google.ai - Federated Learning will change how smart AI systems are. It's just like a collective mind, as seen in many cases. Collaborative Machine Learning. kw: Distributed Learning, Stochastic Gradient Descent (SGD), Federated Averaging algorithm, Secure Aggregation protocol.
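The core of the Federated Averaging algorithm is simple to state: each client trains locally on its own data, and the server combines the resulting weights as a sample-count-weighted average. A toy sketch of just that aggregation step, with plain lists of floats standing in for model weights:

```python
def federated_average(client_updates):
    """FedAvg aggregation step (toy version).
    client_updates: list of (num_samples, weights) pairs, where
    weights is a list of floats from one client's local training.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    avg = [0.0] * dim
    for n, weights in client_updates:
        for i in range(dim):
            avg[i] += (n / total) * weights[i]
    return avg
```

The point of the weighting is that a client that trained on 3x more samples pulls the global model 3x harder; the raw training data itself never leaves the device.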
  • Other comments about the Google.ai post. Another ridiculous engineering fail. Why do they need a WiFi connection for updates? Isn't 300 Mbit/s of unlimited 4G LTE data enough? No, it seems. Maybe they're expecting to get unlimited 1 gigabit WiFi? You know what, guys: in many cases WiFi / WLAN is slower than the 4G LTE network. Especially in congested areas, 4G is often much faster than WiFi, because WiFi networks very rarely cut down transmit power to improve density, while 4G networks handle all kinds of small cells very efficiently, which WiFi unfortunately doesn't in most cases. It's so smart to connect a mobile phone with unlimited data to a WiFi network which is itself connected to a mobile network with unlimited data. Does that sound like sane engineering to you? It's only sane if you're selling 4G access points or data plans. Duh! For a long time I've recommended that normal users have no reason to get anything other than a single 4G data plan, for their phone. Everything else can use the phone's tethering. No need for extra devices, no extra SIMs, no extra plans. Just get the data moving simply and sanely.
  • Found an issue with Duplicati 2 which prevented backups from running. Reported it on GitHub issues. It also got fixed incredibly fast. That's what I really do like.
  • Mind-blowing discussions about data cardinality, indexing, etc. Nothing to comment about this. But yep, some things can be done correctly, or absolutely wrong. I think I've written about this several times too.
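One of the classic "correct vs. absolutely wrong" points here is index selectivity: an index on a high-cardinality column is selective and gets used, while querying a two-value column tends to end up as a full table scan regardless of indexing. A small illustration of that using SQLite's query planner (the table and data are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, status TEXT, user_id INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(i, "ok" if i % 2 else "fail", i) for i in range(1000)])
con.execute("CREATE INDEX idx_user ON events(user_id)")

# High cardinality (user_id is effectively unique): index is worth using.
plan_indexed = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42").fetchall()

# Low cardinality (status has only two values): planner falls back to a scan.
plan_scan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE status = 'ok'").fetchall()
```

The same principle applies to any database: an index that matches half the table mostly just adds write overhead.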
  • Finland is planning a national AI strategy: how it's going to change technology and, even more importantly, the whole society.
  • One major Finnish telco doesn't know the difference between bytes and bits. I wonder if it's intentionally misleading, or if they just honestly think their customers are stupid.
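The difference is a plain factor of eight, which is exactly why it's so tempting for marketing: megabits per second look eight times bigger than megabytes per second.

```python
def megabits_to_megabytes(mbit_per_s):
    """Convert a link speed in megabits/s (marketing 'Mb/s') to the
    payload rate in megabytes/s ('MB/s'): 1 byte = 8 bits."""
    return mbit_per_s / 8

# A "300 Mb/s" 4G plan tops out at 37.5 MB/s, before protocol overhead.
```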
  • Another silly article, talking about WiFi / WLAN usage versus 4G usage. Don't they realize that at least half of people are using 4G WiFi routers, which create a WiFi network from the 4G network? So saying 'WiFi' versus '4G' doesn't really make any difference. Well, fixed wireless is interesting. Yet it's not going to replace good old single-mode fiber.
  • Checked out the TFS and HAMMER file systems, which are under development. This one made me smile; it's almost like something from a B-class sci-fi movie: "implementing pulse-width modulated time-domain multiplexer on B-tree cursor operation". Wow, that must be something. Maybe we could fit in quantum fusion and dimensional parallel universes too? Yet they do have some interesting points, especially on the TFS side. But as said, everything is a trade-off: something gets better and something else gets worse. It's interesting to see if this ever gets off the ground. File system development is an extremely demanding and tedious task. Of course some things can be simplified on purpose, making naive implementations, but that usually leads to other trade-offs, like poor performance in some or even many cases. The page and cluster compression scheme is very similar to the one I used with my archive system.
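For what it's worth, the per-cluster compression idea can be sketched in a few lines: compress each fixed-size cluster independently, and store a cluster raw whenever compression doesn't actually shrink it. This is a generic sketch using zlib, not TFS's or my archive system's actual on-disk format:

```python
import zlib

CLUSTER = 4096  # hypothetical cluster size in bytes

def compress_clusters(data):
    """Split data into fixed-size clusters and compress each one
    independently. Each entry is (compressed_flag, payload); the
    store-vs-deflate decision is made per cluster."""
    out = []
    for off in range(0, len(data), CLUSTER):
        chunk = data[off:off + CLUSTER]
        packed = zlib.compress(chunk, 6)
        if len(packed) < len(chunk):
            out.append((True, packed))   # compression paid off
        else:
            out.append((False, chunk))   # store raw, no gain
    return out

def decompress_clusters(clusters):
    """Inverse of compress_clusters: rebuild the original byte stream."""
    return b"".join(zlib.decompress(p) if flag else p
                    for flag, p in clusters)
```

Independent clusters cost some compression ratio versus one big stream, but they buy random access: you can rewrite or read one cluster without touching the rest, which is the trade-off that matters for a file system.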