Files, Compression, ARTS, MitM, Google CDN, IPv6, OpenAI, Flash, USB, Internet Users

Post date: Jan 10, 2016 6:13:12 AM

First post in 2016, which is a blast from the past. I've got a huge backlog right now. One day, when I've got the right feeling, I'm going to make 2015 mega dump post(s) again.

  • Files are hard - Yep, that's the truth. Most users have been there, they just might not realize it. People administering systems and servers most certainly have, and they know that things do fail, things do get corrupted, and getting everything to actually work properly is very hard. (There's a sketch of one classic defensive pattern, the atomic write, after this list.)
  • The Squash Compression Benchmark is an awesome source! It reminded me about the different compression algorithms and the speed & resources required for compression & decompression. Also installed lz4 for Python 2 and Python 3. It's really nice to notice that lz4 compression seems "immediate" and still compresses text by 90%; gzip compresses twice as well but takes a lot more time, and lzma compresses yet twice as well as gzip but takes a considerable while. Yeah, this just makes the charts on the site concrete: better compression, longer processing time. I often prefer lzma, but with a light compression preset; it still seems to compress both better and faster than gzip, making it the generally superior algorithm. (There's a small timing sketch after this list.)
  • Re-studied ARTS NRF digital receipt (e-receipt) descriptions: ARTS XML Digital Receipt Charter v 2.0 - Charter, Schema
  • Applications doing man-in-the-middle attacks. This is a good question: should it be done or not? - Here's one story about Avast. The update says that the original certificates aren't even being checked, which is horrible. Now we can finally conclude that "HTTPS" in the practical sense is just horribly broken. Which actually isn't any news; complex things are complex and can be messed up in a multitude of ways. (After the list there's a sketch of what proper upstream certificate verification looks like.)
  • Wrote aggregate statistics collection code, which generates UTC timestamps for hour, day, week, month and year, pulls data out of the database, and updates the statistics for aggregate views etc. It's also optimized so that the source data is only used to generate the hour aggregates. The hour aggregates are then used to build the daily aggregates, the daily aggregates build the weekly and monthly data, and the monthly data builds the yearly data. Why so? Well, the source data can be removed quickly, yet the statistics still work perfectly. (There's a minimal sketch of this rollup idea after the list.)
  • Slush 2015 - The future of payments, Cybersecurity session, Cybersecurity and IoT, China's new wave of entrepreneurship.
  • Checked out Google Cloud CDN - Neat, CDNs are clearly mainstream and there is more and more competition coming to the market. Google has got quite a dense network of edge servers, so it can probably deliver really neat performance, comparable to Akamai. Many other CDN networks have only 'continental' servers, not a cluster in every big city, or even their own server(s) at every ISP.
  • Google's Global IPv6 Adoption chart is awfully close to 10%.
  • Introducing OpenAI. - It's great that we get libraries like this.
  • Internet World Stats says it out loud: ~50% of Internet users are in Asia. - That famous North America has fewer users than Africa. That's also a good figure to remember. Yet when you check many maps, you'll see tons of data centers / POPs in the US, a few in Central Europe, and just one or two in Asia, usually in Singapore, maybe Hong Kong and Tokyo.
  • Nice. The AprilBeacon (Eddystone compatible) / AprilBrothers guys confessed that the beacon configuration software had a serious bug, which I found, and that it didn't work properly at all. They also swiftly provided an updated version of the software during Sunday. At least that's a good attitude; I've seen so many companies which take months or years to fix trivial stuff. Yet there are other issues with the fixed version. Just so typical: we'll fix issue 1 and leave the same issue in several other places of the app. How about getting all of those fixed at once? Just plz? No? Ok, I've seen that happening over and over again, nothing new.
  • Daily tech and storage fun. I got a USB stick which was supposed to be a lot faster, a 64 GB USB 3.0 stick. Yet in reality it's much slower than my old 8 GB stick. Why? Well, it uses a larger erase block. Therefore stuff like git commits, storing these blog writings, and other daily work which handles a large number of small files is very slow compared to the much faster and smaller USB 2.0 stick. I've seen situations before where 'faster' is a lot slower. It's like in telecommunications: the 'faster' solution bumps your ping from under 1 ms to over 20 ms, and at the same time they claim it's faster? - Just sigh. (A small benchmark sketch for this is after the list.)
  • Google Sites seems to have repeated issues, as does Google+. When you try to save / post, it just fails. Hello there? How about fixing your s... systems. {'Messages': ['Server Error', 'Unable to save the page at this time, please try again later.']}
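
For the "files are hard" item: here's a minimal sketch of the classic atomic-write pattern in Python. This is just my own illustration of the general technique (not anything from the linked article); the file name is made up and error handling is kept minimal. The point is that readers see either the old or the new content, never a torn half-written file.

```python
import os
import tempfile

def atomic_write(path, data):
    """Durably replace the file at path with the given bytes."""
    directory = os.path.dirname(os.path.abspath(path))
    # Write to a temp file in the same directory; a rename must not
    # cross filesystems to stay atomic.
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # push the data to the device
        os.replace(tmp_path, path)     # atomic rename on POSIX
        # fsync the directory so the rename itself survives a crash
        dir_fd = os.open(directory, os.O_RDONLY)
        try:
            os.fsync(dir_fd)
        finally:
            os.close(dir_fd)
    except BaseException:
        try:
            os.unlink(tmp_path)
        except FileNotFoundError:
            pass
        raise

atomic_write('notes.txt', b'blog backlog, 2015 mega dump pending\n')
```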
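And here's roughly how the lz4 vs. gzip vs. lzma observation can be reproduced. gzip and lzma are in the Python standard library; the lz4.frame call assumes a reasonably recent python-lz4 package, so treat that import as an assumption. The exact numbers will of course depend on your data and machine.

```python
import gzip
import lzma
import time

import lz4.frame  # assumption: the python-lz4 package with the frame API

data = open('/usr/share/dict/words', 'rb').read()  # any biggish text file

def bench(name, compress):
    start = time.perf_counter()
    out = compress(data)
    took = time.perf_counter() - start
    print('%-10s %7.3f s  -> %5.1f %% of original' %
          (name, took, 100.0 * len(out) / len(data)))

bench('lz4',     lz4.frame.compress)
bench('gzip -9', lambda d: gzip.compress(d, compresslevel=9))
bench('lzma -1', lambda d: lzma.compress(d, preset=1))
bench('lzma -9', lambda d: lzma.compress(d, preset=9))
```

The "light lzma" point above is the preset=1 line: on text it tends to beat gzip -9 on both ratio and time, which is exactly what the benchmark site's charts show.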
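On the MitM point: an intercepting proxy has to re-validate the upstream server's certificate itself, because the browser no longer can. As a reference for what "checking the original certificate" means, this is basic chain + hostname verification with Python's standard ssl module; an AV product doing interception should be doing at least this much on its outbound leg.

```python
import socket
import ssl

def check_server_cert(host, port=443):
    """Open a TLS connection that actually verifies the certificate
    chain and hostname; raises ssl.SSLError on any failure."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(host, 'OK:', tls.version())
            return tls.getpeercert()

check_server_cert('www.google.com')
# check_server_cert('self-signed.badssl.com')  # would raise, as it should
```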
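The aggregate statistics item is about my own code, so here's just a minimal sketch of the rollup idea with made-up names: events are bucketed into UTC hours, and each coarser level is built only from the level below it, so the raw rows can be purged without breaking any of the views. (Weekly buckets would need ISO week keys rather than a plain prefix, so they're left out here.)

```python
from collections import Counter
from datetime import datetime, timezone

def hour_key(ts):
    """Truncate a UTC epoch timestamp to its hour bucket."""
    return datetime.fromtimestamp(ts, timezone.utc).strftime('%Y-%m-%d %H')

def rollup(counts, prefix_len):
    """Build a coarser aggregate purely from a finer one."""
    out = Counter()
    for key, n in counts.items():
        out[key[:prefix_len]] += n
    return out

# Raw events (UTC epoch seconds); in the real thing these come from
# the database and can be removed once the hour level exists.
events = [1452412392, 1452412395, 1452415992, 1452502392]

hours = Counter(hour_key(ts) for ts in events)   # 'YYYY-MM-DD HH'
days = rollup(hours, 10)                         # 'YYYY-MM-DD'
months = rollup(days, 7)                         # 'YYYY-MM'
years = rollup(months, 4)                        # 'YYYY'
print(hours, days, months, years, sep='\n')
```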
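Finally, for the USB stick item, a quick-and-dirty way to see the small-file penalty yourself: write the same amount of data once as many small synced files and once as a single big file. On flash with large erase blocks the first case falls off a cliff. The mount path and sizes are made up for illustration.

```python
import os
import time

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print('%-13s %.2f s' % (label, time.perf_counter() - start))

def many_small(root, count=1000, size=4096):
    os.makedirs(root, exist_ok=True)
    payload = os.urandom(size)
    for i in range(count):
        with open(os.path.join(root, 'f%04d' % i), 'wb') as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each tiny write to the device

def one_big(path, count=1000, size=4096):
    with open(path, 'wb') as f:
        f.write(os.urandom(size * count))
        f.flush()
        os.fsync(f.fileno())

# Point these at the mounted stick, e.g. /media/stick
timed('small files', lambda: many_small('/media/stick/bench'))
timed('one big file', lambda: one_big('/media/stick/bench.bin'))
```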