CCC, Jugaad, RDS/VDI/DaaS, Outsourcing, Backdoors, Crypto, Digital Ocean

Post date: Jan 12, 2014 4:09:13 PM

  • Studied: Gecko: Contention-Oblivious Disk Arrays for Cloud Storage video (increased virtualization leads to increased seeks and kills performance). The video I watched a few days ago was so good that I really had to read the paper too. I really liked their design and analysis, and the final test results were quite impressive.
  • Studied Intel RdRand (aka Bull Mountain) and how it's being utilized in the Linux kernel.
  • Studied several articles, including this one about VDI and boot storms caused by Windows updates occurring on all virtual machines at the same time. Nothing new, but it was a nice article, as were the points about memory ballooning during boot storms.
  • CCC.de videos: HbbTV Security, Persistent, Stealthy, Remote-controlled Dedicated Hardware Malware, Security of the IC Backside, Lasers in space, Y U NO ISP, taking back the net, Europe, the USA and Identity Ecosystems.
  • Wondered about the new features included in Android 4.3, because it has now actually been released by Samsung for phones. It seems that new Android releases are delivered to devices quite slowly.
  • Troubleshot a choppy USB mouse on the Xubuntu platform. For some reason mouse events seem to get lost when the system is heavily loaded, whether on the CPU side or due to swapping etc.
  • Gave away a bunch of old (3 years) XP computers to the local Linux association. Those computers, with 2 gigs of RAM and dual-core Pentium processors, work just fine with Lubuntu, or as servers without a GUI / desktop.
  • A nice blog post about playing with HTML5 localStorage, even if it's not a very advanced use of it.
  • A nice post about SSL/TLS and its current state, aka the TLS Survey.
  • Jugaad made me really laugh. But hey, I have to be honest: I'm good at it. Thing X needs to be done with very limited resources? I'll get it done. Yes, for sure I'm cutting some corners to get what's required done, but usually it works out just fine. Maybe it isn't an elegant or high-tech solution, but it simply does what's required. Quickly, cheaply and efficiently.
    • Some examples are situations where two systems communicate with each other, both have stuff hard-coded, and then some update breaks the interoperability. Both teams managing their integrations are stubborn, and it's otherwise hard to get any fix delivered quickly. What will I do? Well, I'll write a simple message proxy which fixes the problem. It's not a perfect solution, it's a hack, but it just works. The system is back in production, and it might take half a year, or be ridiculously expensive, to get an official fix into either of the integration partners' software.
    • In one case, we were already delivering data X to Y, but then there was a requirement to also send the same data to Z. The engineers said it's impossible, it's so complex and hard, we have only one transmission flag in the database, etc. Blah blah. I did just what I did in the previous project: I added a proxy with its own database. Now it stores the messages and forwards them asynchronously to two different destinations. It doesn't add much delay, guarantees retransmissions in case of temporary failure, and maintains separate queues for Y and Z. It wasn't hard at all, and at least it wasn't nearly impossible, as the engineers claimed. A hack, yet another layer, yes, but it works out beautifully.
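The store-and-forward proxy above can be sketched in a few lines. This is only a minimal illustration, not the actual implementation: the destination names, the `deliver` stub and the SQLite schema are all made up for the example. The key point is one queue row per (message, destination) pair, so a temporary failure towards Z never blocks delivery to Y.

```python
import sqlite3

# Assumed toy schema: one queue row per (message, destination) pair.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE queue (
    id INTEGER PRIMARY KEY,
    dest TEXT,                    -- 'Y' or 'Z'
    payload TEXT,
    delivered INTEGER DEFAULT 0)""")

def enqueue(payload, destinations=("Y", "Z")):
    # Store once per destination; commit before acking to the sender.
    for dest in destinations:
        db.execute("INSERT INTO queue (dest, payload) VALUES (?, ?)",
                   (dest, payload))
    db.commit()

def deliver(dest, payload):
    # Hypothetical transport; returns False to simulate Z being down.
    return dest != "Z"

def pump():
    # Forward pending rows; failed rows stay queued for retransmission
    # on the next pump() run.
    pending = db.execute(
        "SELECT id, dest, payload FROM queue WHERE delivered = 0").fetchall()
    for row_id, dest, payload in pending:
        if deliver(dest, payload):
            db.execute("UPDATE queue SET delivered = 1 WHERE id = ?",
                       (row_id,))
    db.commit()

enqueue("data X")
pump()
# Y's copy is now delivered; Z's copy waits in its own queue.
```

Because the queues are independent rows, retries and per-destination backlog come for free from ordinary SQL queries.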
  • Played with isapi-wsgi. But after all, the integration APIs being used are so low-traffic that I'll probably use traditional CGI. Not a perfect solution, but a very simple one, easy to configure and understand. And it doesn't really make any difference in this situation.
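For reference, the application interface that isapi-wsgi (or any WSGI gateway) expects is tiny, which is part of why swapping the gateway for plain CGI is painless. A minimal sketch of a WSGI callable, exercised here without any server:

```python
def app(environ, start_response):
    # A WSGI application: receives the request environment dict,
    # calls start_response with status and headers,
    # and returns an iterable of byte strings.
    body = b"Hello from WSGI"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Quick smoke test with a fake start_response, no server needed:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

result = b"".join(app({"REQUEST_METHOD": "GET"}, fake_start_response))
print(captured["status"], result)
```

The same callable runs unchanged under isapi-wsgi, wsgiref, or a CGI adapter, so the hosting choice stays a deployment detail.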
  • For one service, which is a real-time user service and therefore latency-critical, I made a prepared denormalized data set, and now queries run at least 100x faster.
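The idea behind that prepared data set can be shown with a toy SQLite schema (the table and column names here are invented for the example, not the real service): the joins and aggregates are computed once into a flat table, so the latency-critical path becomes a single primary-key lookup.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);

    -- Precomputed, denormalized read model: one row per user,
    -- aggregates baked in ahead of time.
    CREATE TABLE user_summary AS
        SELECT u.id, u.name, COUNT(o.id) AS order_count,
               COALESCE(SUM(o.total), 0) AS total_spent
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id;
""")

# The hot path: no joins, no aggregation, just one keyed read.
row = db.execute(
    "SELECT name, order_count, total_spent FROM user_summary WHERE id = 1"
).fetchone()
print(row)  # ('alice', 2, 15.0)
```

The trade-off is the usual one: the summary table must be rebuilt or incrementally maintained when the source tables change.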
  • There's something strange with Ultra Defrag. Quick optimize and full optimize totally hang on the CPU, and practically nothing happens; running a full optimization would take a very long time, even though I have only about 20,000 files on the system. I think there has to be some kind of problem with the algorithm they're using. One factor which might be affecting it is the use of NTFS compression, but it really shouldn't, and all other defraggers run just fine with that disk / file system.
  • Read a bit of MS documentation: Remote Desktop Services Overview, What's New in Remote Desktop Services in Windows Server 2012 R2, Test Lab Guide: Virtual Desktop Infrastructure Quick Start, Test Lab Guide: Remote Desktop Services Session Virtualization Quick Start, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment, Test Lab Guide: Remote Desktop Services Publishing, Test Lab Guide: Remote Desktop Services Licensing, and the Windows Server 2012 Capacity Planning for VDI White Paper. All can be found on the MS virtual desktop infrastructure (VDI) solution page. Also see: RDS as DaaS replacement.
  • Modern operating systems and SWAP. I sometimes love to put things in "other words", so it's easier to understand what it's all about.
    • Using swap doesn't mean that you're running out of memory. It means that it's more efficient to actually use RAM than to keep it reserved for things that aren't used. Just like you maintain your home: why do you keep stuff in the cellar or the attic? Why don't you have everything in the middle of the living room? That's right. There are programs which reserve potentially a lot of RAM but do not actually use it, and there are things like the disk cache, which can potentially utilize as much memory as you have disk space on your system. So it makes sense to literally SWAP: you'll put in the cellar the stuff that has been sitting in your living room for two months and that your friend is going to pick up tomorrow, and you'll get the baby or dog supplies from the attic and bring them to the hallway. Now space is more efficiently utilized, even if you didn't exactly run out of space before that.
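On Linux, the numbers behind this story (swap in use, cache size) are all in /proc/meminfo. A small parser sketch, run here against sample text since the available fields vary by kernel version:

```python
def parse_meminfo(text):
    # Turn lines like "SwapTotal:  2097148 kB" into a {name: kB} dict.
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[name.strip()] = int(fields[0])
    return info

# Sample data; on a real Linux box you would instead do:
#   parse_meminfo(open("/proc/meminfo").read())
sample = """MemTotal:       8000000 kB
Cached:         3000000 kB
SwapTotal:      2097148 kB
SwapFree:       2000000 kB"""

mem = parse_meminfo(sample)
swap_used_kb = mem["SwapTotal"] - mem["SwapFree"]
print(swap_used_kb)  # 97148 on this sample
```

Swap in use alongside a large Cached value is exactly the healthy "cellar and attic" situation described above, not a sign of memory pressure.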
  • Studied the Python 3.4b2 release notes. The interesting parts are pip and asyncio, plus the feature freeze.
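asyncio is the headline addition there: cooperative concurrency on a single thread. A minimal sketch of concurrent coroutines, written with the later async/await syntax (3.4 itself used yield from with @asyncio.coroutine, a style since removed):

```python
import asyncio

async def fetch(n):
    # Stand-in for an I/O operation; while this coroutine "waits",
    # the event loop runs the others.
    await asyncio.sleep(0.01)
    return n * 2

async def main():
    # All three "fetches" overlap, so this takes ~0.01 s, not ~0.03 s.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

results = asyncio.run(main())
print(results)  # [2, 4, 6]
```

For low-traffic integration APIs like the ones mentioned earlier, plain CGI still wins on simplicity; asyncio pays off when many connections are open at once.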
  • The Pirate Bay is again building a distributed solution, but not yet a fully distributed one. This is actually quite an interesting hybrid solution: it distributes a fixed version of the site, which is cached locally and can be updated easily. I wonder how search functions etc. will work with this solution, or if it's more like a fixed version. If it's more like a fixed version, then Freenet would have been just as good a solution. The good thing about this solution is that it can also be used to distribute sites other than TPB, which currently hide in Tor, I2P or Freenet land. I'm just not sure if its security is anonymous enough; I assume it doesn't provide proper anonymity, based on the information I have read so far about the project.
    • What I still would have liked to see is a fully distributed TPB client solution, which runs locally, updates data using a DHT and communicates with other clients, securing the data with PKI signatures etc. So it could be just like the TPB website, but written completely as a distributed client.
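The "secure data with PKI signatures" part of that wish boils down to: every record fetched from the DHT must be verifiable without trusting the peer it came from. The sketch below uses a plain SHA-256 content digest as a stand-in, purely to stay stdlib-only; a real design would replace the digest with a public-key signature (e.g. Ed25519) so only the publisher can mint valid records. The record fields are invented for the example.

```python
import hashlib
import json

def digest(payload):
    # Canonical serialization first, so the digest is reproducible.
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def make_record(payload):
    # What a publisher would push into the DHT. In a real PKI design,
    # "digest" would be a signature over the payload instead.
    return {"payload": payload, "digest": digest(payload)}

def verify_record(record):
    # Peers recompute and drop anything tampered with in transit.
    return digest(record["payload"]) == record["digest"]

record = make_record({"torrent": "example", "seq": 42})
print(verify_record(record))   # True
record["payload"]["seq"] = 43  # tampered in transit
print(verify_record(record))   # False
```

With signatures instead of bare digests, the same verify step also proves authorship, which is what lets untrusted peers relay updates safely.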
  • Can you outsource your IT? A great question; it's also a good point what is considered to be outsourcing.
    • Outsourcing and outsourcing, using subcontractors etc.; it's a great overall question. Why are you outsourcing your operating system? Can't you build one in house? It's always a question of efficiency and scale, and benefits versus problems. We have seen this in many, many projects: why are so many mobile operating systems based on the Linux kernel? They're outsourcing their kernel development. Can't they build one themselves? In the light of the NSA stuff, it's a really great question. Do you really trust the firewall, network equipment, operating system and hardware manufacturers? Or should you do that stuff in house too? Is it any different if you run Exchange 'in house' on a dedicated hardware server, or in a 'private' or 'public' cloud? At any time there can be a remotely accessible backdoor, and even if there isn't, the next software update can deploy one. And if you hastily build a system in house, it's probably even worse by security standards. Tough decisions.
  • About Python threads vs processes: well, it depends on what the best solution is. Threading can consume a lot fewer resources on I/O-intensive tasks with a large number of threads, and cross-thread communication is much lighter than cross-process. But as we all know, there's the GIL, which makes multiprocessing essential when utilizing the CPU. That's why I don't even consider using threading for ETL tasks.
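The I/O side of that trade-off is easy to demonstrate: the GIL is released while a thread blocks, so ten 0.2-second "waits" overlap and finish in roughly 0.2 seconds total, not 2 seconds. A minimal sketch (CPU-bound work like ETL transforms would show no such speedup with threads, which is where multiprocessing comes in):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(_):
    # Stand-in for a blocking network or disk call; the GIL is
    # released during the sleep, so other threads keep running.
    time.sleep(0.2)
    return 1

start = time.time()
with ThreadPoolExecutor(max_workers=10) as ex:
    results = list(ex.map(fake_io, range(10)))
elapsed = time.time() - start

# Ten 0.2 s waits overlap: elapsed is ~0.2 s, not ~2 s.
print(sum(results), round(elapsed, 1))
```

Swap ThreadPoolExecutor for ProcessPoolExecutor when the work is CPU-bound and the GIL would otherwise serialize it.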
  • Wondered again about horrible mobile pages, which are absolutely crappy. Many sites forward users to the wrong destination: if I try to access example.com/page-a, there's a pop-up asking if I want to use the mobile version. Yeah, why not. Why, why, why do they then redirect me to m.example.com/? I just lost the reference to the article I was going to read. Such poor usability, it's horrible. Also had a few discussions about the topic on UbuntuForums & Google+.
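What makes this so annoying is that the correct redirect is trivial: swap the host, keep the path and query, and the reader lands on the mobile version of the same article. A sketch with the stdlib (the mobile hostname is just an example):

```python
from urllib.parse import urlsplit, urlunsplit

def mobile_redirect(url, mobile_host="m.example.com"):
    # Swap only the host; path, query string and fragment survive,
    # so the reader keeps the article they were trying to open.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, mobile_host, parts.path,
                       parts.query, parts.fragment))

print(mobile_redirect("http://example.com/page-a?ref=rss"))
# http://m.example.com/page-a?ref=rss
```

Sites that instead redirect every deep link to the mobile front page are throwing this information away for no reason.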
  • Studied the TCP-32764 backdoor case. Interesting stuff; does a *hardware* firewall make your networks safe? - Nope.
  • The year 2013 in crypto slides (PDF).
  • Finished reading Remote. I'll blog a few highlights a bit later in a separate post.
  • Digital Ocean seems to fail at isolating client data properly. - That's bad, and things like that give a bad reputation to all cloud services. I have been seriously considering moving my private server to Digital Ocean, but maybe it's not such a great idea after all.
  • Secure Erase isn't always so secure. Those in high-risk, high-sensitivity situations should assume that a “secure erase” of a card is insufficient to guarantee the complete erasure of sensitive data. Therefore, it's recommended to dispose of memory cards through total physical destruction (e.g., grind them up with a mortar and pestle).