posted Dec 27, 2014, 9:23 AM by Sami Lehtinen
updated Mar 25, 2015, 10:49 AM
Stuff isn't in any particular order. Some of the most interesting items start around the middle of the post.
- Internal Versus External BLOBs in SQLite - Should data blobs, like images, be stored in the actual database or in individual files? Good question. Here's an SQLite3 benchmark. This is just a good starting point for discussion, because there are extremely many parameters which all affect this question in several ways.
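Just to make the two strategies concrete, here's a minimal sketch (Python standard library only, table and file names made up for illustration) of storing a blob inside the database versus storing it as a file with only the path in the database:

```python
# Compare the two BLOB storage strategies from the SQLite benchmark:
# 1) blob inside the database, 2) blob in a file, path in the database.
import os
import sqlite3
import tempfile

blob = os.urandom(64 * 1024)  # a fake 64 KiB "image"

db = sqlite3.connect(":memory:")

# Strategy 1: store the blob directly in a BLOB column.
db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, data BLOB)")
db.execute("INSERT INTO images (data) VALUES (?)", (blob,))
internal = db.execute("SELECT data FROM images WHERE id = 1").fetchone()[0]

# Strategy 2: write the blob to a file, keep only the path in the database.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "image_1.bin")
with open(path, "wb") as f:
    f.write(blob)
db.execute("CREATE TABLE image_files (id INTEGER PRIMARY KEY, path TEXT)")
db.execute("INSERT INTO image_files (path) VALUES (?)", (path,))
stored_path = db.execute("SELECT path FROM image_files WHERE id = 1").fetchone()[0]
with open(stored_path, "rb") as f:
    external = f.read()
```

Which one is faster depends on blob size, page size, filesystem and access pattern, which is exactly why the benchmark is only a conversation starter.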
- Why India outsourcing is Doomed - Nothing to add really. I just think I've covered similar aspects in my blog over and over again.
- Why we're so busy - The Economist - Crunch time, busy, lack of time, constant hurry, time is money, efficiency, optimization. Busy at work? Even busier in your free time? How to perfectly optimize vacations, etc. How much money is enough? Btw. I have to warn you, it's a lengthy article. I'm right now wondering who has time to read such long articles? Of course it's a good question whether I'm working or not when I spend evenings and weekends studying stuff which is highly related to my work and hobbies.
- Roaring bitmaps - Roaring Bitmaps, yet another bitmap implementation with compression. As the comparison graphs show, there's no single best solution available. The use cases are often different, as are the situations where data is frequently modified or accessed in very small blocks or even as individual bits.
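The core Roaring idea can be sketched in a few lines. This is a toy illustration, not the real library: 32-bit integers are partitioned into chunks by their high 16 bits, and each chunk stores its low 16 bits either as a sorted array (sparse chunk) or as a 64 KiB bitmap (dense chunk), switching at a cardinality threshold:

```python
# Toy sketch of Roaring's two container types (array vs. bitmap per chunk).
from array import array

ARRAY_LIMIT = 4096  # the real format switches container type at this cardinality

class ToyRoaring:
    def __init__(self):
        # high 16 bits -> ("array", array of low 16 bits) or ("bitmap", bytearray)
        self.containers = {}

    def add(self, x):
        hi, lo = x >> 16, x & 0xFFFF
        kind, data = self.containers.get(hi, ("array", array("H")))
        if kind == "array":
            if lo not in data:
                data.append(lo)
            if len(data) > ARRAY_LIMIT:  # chunk became dense: convert to bitmap
                bmp = bytearray(8192)  # 2^16 bits = 8 KiB
                for v in data:
                    bmp[v >> 3] |= 1 << (v & 7)
                kind, data = "bitmap", bmp
        else:
            data[lo >> 3] |= 1 << (lo & 7)
        self.containers[hi] = (kind, data)

    def __contains__(self, x):
        hi, lo = x >> 16, x & 0xFFFF
        if hi not in self.containers:
            return False
        kind, data = self.containers[hi]
        if kind == "array":
            return lo in data
        return bool(data[lo >> 3] & (1 << (lo & 7)))

bm = ToyRoaring()
for i in range(5000):   # a dense run: container 0 converts to a bitmap
    bm.add(i)
bm.add(1_000_000)       # a lone value: its container stays a tiny array
```

This also shows why no single representation wins: the dense chunk pays a flat 8 KiB regardless of count, while the sparse chunk pays 2 bytes per value.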
- Had a long discussion about hardware reliability - HDDs, SSDs, computers, displays in general - on one Finnish forum. My point is that in most cases, with current IT & electronics products, there's no way to know whether a product is high quality or not. The only actual way to know is to buy a large number (thousands) of the products from multiple manufacturing batches and then keep using those in different environments way beyond the warranty period. Only at that point do you know if you bought a high quality and durable product, or if the product had some kind of durability issues. It doesn't matter if it's a printer, a display or whatever. This is especially hard in "consumer" forums, where they don't have detailed data, nor enough samples, and so on. Forums and normal consumer users simply can't provide any kind of reliable source for this kind of information.
- Watched Google Cloud Platform Event videos. Keywords: Building a best-in-class Cloud Platform application, Hot topics in Cloud computing, Design your Cloud app, technologies, scalability, portability, productivity, Virtualization, Managed VMs, Containers and Kubernetes, design, concept, working, viable product, APIs, Cloud Platform, efficient, Mobile Development, biggest trends shaping technology, Firebase, Deploy, Operate, monitoring, logging, continuous deployment, staged rollouts, Privacy, security, data center design, Optimize, Analyze, Hadoop, BigQuery, meaningful insights, debugging, Cloud Debugger, and Cloud Trace and a few surprises.
- Thought about the concept of "Test Driven System Administration". What does it mean? It's just Test Driven Development (TDD) taken from software development, applying the same methodologies to system administration. Basically it means that you'll write a set of tests and check whether the system works as expected. I guess this actually falls into a subset of configuration management and monitoring, which can be done using Configuration Management (CM) or Software Configuration Management (SCM) tools. Any test failures after system configuration changes or software version updates will trigger immediate alarms on the monitoring system.
This simple approach fulfills a criterion required by agile software development: "Continuous Integration (CI)". Everything can be tested and monitored in the dev/test environment and then deployed into production. If problems arise there, it's also really quick to roll back any changes. Immediate feedback also makes it much easier to find problems. In software development, unit tests and automated build tests with automatic feedback have been used for a long time. Test Driven System Administration is a natural part of the typical agile way of working, alongside TDD and continuous integration. Errors are discovered automatically in real time, manual debugging is reduced, and the operating model saves a lot of time and costs. If the checks are run from a "clean slate", it also provides an easy way to make sure that restore procedures and deployment scripts are all working: everything from source code to configuration files is automatically deployed to test nodes, packages are installed and software configured, until the system is production ready and passes all of the tests.
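A minimal sketch of what such system tests might look like, using only the Python standard library. The specific checks, hosts and thresholds here are made up; the point is that the "unit" under test is the running system, and any failure feeds the monitoring system:

```python
# Test Driven System Administration sketch: system checks written like unit
# tests, with failures collected for the monitoring system to alert on.
import shutil
import socket

def check_port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds, i.e. the service is up."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_disk_headroom(path="/", min_free_ratio=0.10):
    """True if the filesystem holding `path` has at least 10% free space."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_ratio

def run_checks(checks):
    """Run all named checks; return (name, passed) pairs for monitoring."""
    return [(name, fn()) for name, fn in checks]

results = run_checks([
    ("ssh reachable", lambda: check_port_open("127.0.0.1", 22)),
    ("root disk headroom", lambda: check_disk_headroom("/")),
])
failures = [name for name, ok in results if not ok]
# In a real setup, every entry in `failures` would trigger a monitoring alert.
```

The same check suite can be pointed at a freshly provisioned node, which is what makes the "clean slate" verification of deployment and restore scripts cheap.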
- The previous part is also related to the discussions about how expensive it is to change cloud service providers. If everything works as described, you can practically launch the system very quickly with any of the (at least) thousands of different IaaS service providers. Of course Docker and related projects will make this even simpler in the future. Or, thought of in reverse, part of building a Docker container is just a set of tests and scripts being run whenever any changes are made to the system.
- When the systems trigger alerts, it should be reasonable to react to the problems with a sense of urgency. This can happen only a few times per day before people start ignoring the alerts. Every alert should be such that immediate action can be taken. Alerts which can't be acted on are absolutely horrible distractions. If an issue has a known, simple remediation plan which can be automated, then it should be automated, and there shouldn't be any alert in the first place.
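That alert policy boils down to a small dispatch rule, sketched below. The event names and the remediation table are hypothetical; the point is that anything with a known automated fix is handled and logged silently, and only events needing a human produce an alert:

```python
# Alert policy sketch: auto-remediate known issues, alert only on the rest.
def handle_event(event, remediations, alert, log):
    """Route a monitoring event: run an automated fix if one exists,
    otherwise raise an actionable alert for a human."""
    fix = remediations.get(event)
    if fix is not None:
        fix()
        log(f"auto-remediated: {event}")
    else:
        alert(f"needs human action: {event}")

alerts, logs = [], []
remediations = {
    "service_down": lambda: None,  # e.g. restart the service (stubbed here)
}

handle_event("service_down", remediations, alerts.append, logs.append)
handle_event("disk_failing", remediations, alerts.append, logs.append)
```

Keeping the remediation table explicit also gives a natural review point: any alert that keeps firing is a candidate for a new entry in that table.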
- I would love to write more about DevOps and general intra-corporate communication issues, but I don't have time for that. I'll just briefly say that I'm the guy who has been, and is, actively part of the whole organization, at all levels.
Investors, Executive Board, CEO, CTO, Stakeholders, Branch Managers and Units, Programmers, Project Managers, Product Managers, Developers, down to the individual help desk guys. As well as individual customers' IT staff, project managers, decision makers, integrators, developers, business analysts, and even the actual end users of the system. If there's an issue I need to solve, I'm able to get direct information as well as pass any directions directly. This makes it easy to solve and push through even the most complicated matters. Compartmentalization in organizations often makes communication, decision making and information passing the bottleneck, and problems aren't getting solved efficiently. Misunderstandings and bad communication, among other things, often lead to miserable failure, even if the actual technical problem which needs to be solved were really easy to fix.
- Checked out MaidSafe (Massive Array of Internet Disks - Secure Access For Everyone). I can see several problems with this design. If it's not being used with cloud servers or NAS devices, the rest of the devices produce such massive node churn that the system will simply fall apart. This is a problem with many P2P networks, but in this case it would be an especially bad problem. Even if the nodes do provide storage space, if the nodes aren't available all the time, the storage space isn't especially usable. When some of the nodes storing the data go offline, replication from the remaining nodes to new nodes must be triggered. Unfortunately, as usual, most of these new nodes will also be short lived, and the cycle goes on. For this system to work, a large percentage of the nodes must be stable and provide enough bandwidth and storage space. Short-lived nodes, or nodes which aren't connected 100% of the time, just cause trouble. In many P2P networks the number of stable nodes is actually really low compared to the number of short term nodes. With this kind of storage model, it's a very destructive pattern which also wastes a lot of bandwidth.
Keywords: Safecoin, redundancy, offline, data stored, encrypted, cryptographically signed, network storage infrastructure, Single Sign On (SSO), Decentralized Public Key Infrastructure, Self Authentication. Safecoin is essentially an independent peer-to-peer payment system and digital currency which makes use of a distributed blockchain approach. Faster transactions, anonymity and privacy, data handling network layer for secure structured data, unstructured data, communications, cryptographic signature.
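The churn argument above is easy to demonstrate with a toy model. This is a deterministic back-of-the-envelope sketch with made-up numbers, not a simulation of MaidSafe itself: one chunk is held by a fixed number of replicas, and every time a holder's lifetime expires, the chunk must be copied to a fresh node:

```python
# Toy churn model: count the replication copies (repair traffic) forced by
# node departures for a single stored chunk over a fixed number of time steps.
def repair_traffic(node_lifetime_steps, total_steps=1000, replicas=4):
    holders = [node_lifetime_steps] * replicas  # remaining lifetime per holder
    copies = 0
    for _ in range(total_steps):
        holders = [t - 1 for t in holders]
        departed = sum(1 for t in holders if t <= 0)
        copies += departed  # each departure forces one copy to a fresh node
        holders = [t for t in holders if t > 0] + [node_lifetime_steps] * departed
    return copies

stable = repair_traffic(node_lifetime_steps=500)  # long-lived nodes
flaky = repair_traffic(node_lifetime_steps=10)    # short-lived nodes
```

With these numbers the short-lived population generates fifty times the repair traffic of the stable one for exactly the same amount of durable storage, which is the bandwidth-wasting cycle described above.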
- Studied: How To Validate Your Business Idea By Testing A Hypothesis.
Just random stuff because the year is nearing its end. I'll try to get one "mega dump" out soon, which will just dump here the stuff I don't have time to write about properly.