Blog

My personal blog about stuff I do, like and am interested in. If you have any questions, feel free to mail me! My views and opinions are naturally my own and do not represent anyone else or any other organization.

[ Full list of blog posts ]

Gamification, MyData, Twitter, Docker, PyPy, Hidden Tor Server, Management, AI, Citizenfour

posted Jan 25, 2015, 3:40 AM by Sami Lehtinen   [ updated Jan 25, 2015, 6:59 AM ]

  • Read long article about benefits of Gamification
  • Developing mobile applications and utilizing mobile applications in business. Product-market fit. Mobile is personal, always with you, real-time, context aware, used when decisions are made, bi-directional, location aware, followed, social and connected with all the sensors. Simplicity is beautiful and beneficial. What are the features expected from a great mobile application? How the user should be guided to use the product, so that separate instructions or a manual aren't needed. MyData, mHealth, mPayments, Application Lifecycle Management, Continuous Integration, Automatic Testing, Version Control, Communication, Issue Management, Documentation.
  • Checked out: Pgcli 
  • Security stuff, crypto, key exchange, DH, ECDH, PFS (FS), authentication, (client & server), asymmetric & symmetric ciphers, message authentication (MAC), system hardening, traffic analysis resistance, playback attacks, storing keys securely, logging, monitoring, configuration management.
  • Checked out SSD interface NVM Express
  • GPGPU, CUDA, OpenCL
  • Something different: Hamina-class missile boat & Stealth Ship
  • Implementing just the functionality that was required meant a much simpler system, which led to higher availability and reliability. Any way to win is a good way to win. - Over-engineering adds complexity which can easily make systems less robust.
  • Studied new datacenter networks and architectures, including MinuteSort, flat datacenter storage, north-south vs east-west traffic and the Clos network topology.
  • How do 'new' CPU features affect code performance? Does it affect programmers?
  • Project management best practice steps: Initiative, Concept, Projection, Planning, Execution, Testing, Piloting, Production. At the very beginning it's important to validate the business case, and a bit later it's important to verify it.
  • Something different: MLRS Tornado and its rocket loads
  • Excellent post: Why Remote Engineering Is So Difficult?
  • Started to use uBlock instead of AdBlock Plus. This reminded me of the fact that there currently isn't a Finnish adblock filter list. I think there's a need for one. Which led to a secondary question: what is the best line-based collaboration tool? Like Wikipedia or GitHub, but much simpler to use, yet allowing guest posts, moderators (accept & confirm guest posts) and collaborators / contributors who can update content directly. As well as allowing efficient downloading of raw content and history features. If there isn't such a tool, could there be a global need for one among a group of techies? I could write one easily. But I'm unfortunately already fully booked with my side projects, so I don't want to start something new unless it's a "sure hit".
  • Created my first realtime Twitter integration for one hobby project which still remains secret.
  • Played with Docker. Checked out what it takes to create, share, download and run custom Docker containers. How data separation is done etc.
    So far I've used LXC for isolation, but it might be reasonable to use Docker. So if I rent a heavy-duty server for my systems, I would use Docker to run my systems and leave the host only as a hardened virtualization platform. Yet LXC has provided this portability, so I've been moving systems on and off servers easily into testing and staging environments, and so on. LXC also offered an easy way to limit resources, but Docker does it too. Actually Docker is using LXC anyway. - lxc-ls vs docker ps, cpu.shares
  • Had again long discussions about users who are so .... that it's practically impossible to get them to use proper passwords. The only solution so far is giving users a proper random password used as an "authentication token", which they naturally can't change. If they want, they can of course get a new authentication token, which isn't user selectable. So far it has worked very well for securing systems.
  • Best way to learn Docker is to Try It.
  • Want to learn JavaScript and play a logic game at the same time? Try out Elevator Saga. This was an excellent game for one evening, figuring out how to optimize elevator action. Yet it still lacked one feature which is being used by the most modern elevator systems. What's that? Well, it's the option to select the right people into the car. Now the problem is that when you arrive at a floor, you'll get a random set of floors which you have to visit (of course slightly narrowed down by the up and down button requests). But especially on the lowest floor, efficiency would go up drastically during high demand times if you could select that this elevator car should be filled only with people going to floors 11, 12, 13, 14. Then straight up and efficient drop off, and during the trip down you could collect people going down or just to the lowest floor. The current version of the game doesn't allow this most efficient optimization trick. (15.1.2014) aka destination-controlled elevators.
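The destination-control idea above can be sketched in a few lines. Here's a hypothetical Python sketch (the game itself is JavaScript): group waiting passengers by destination floor and fill one car with the highest-demand destinations first. The `fill_car` function and the sample demand are my own illustration, not part of the game's API.

```python
from collections import defaultdict

def fill_car(waiting, capacity):
    """Destination-controlled dispatch sketch: given waiting passengers
    (as a list of destination floors), fill one car with the passengers
    whose destinations have the highest demand."""
    groups = defaultdict(int)
    for floor in waiting:
        groups[floor] += 1
    # Take destinations in order of demand until the car is full.
    car, load = [], 0
    for floor, count in sorted(groups.items(), key=lambda kv: -kv[1]):
        take = min(count, capacity - load)
        car.extend([floor] * take)
        load += take
        if load == capacity:
            break
    return sorted(car)

print(fill_car([11, 12, 11, 13, 2, 11, 14, 12], capacity=6))
# → [11, 11, 11, 12, 12, 13]
```

One car now serves a narrow band of floors, which is exactly the "straight up, efficient drop off" pattern described above.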
  • Want to learn Data Science and Python in your browser? Try out DataQuest.io.
  • Discussions about data privacy are getting interesting throughout the world, including Europe and Finland. Finland isn't currently doing mass Internet surveillance. But some are demanding that it should be done; others say it shouldn't. In the news there have been mentions that in Finland police should also have access to all encryption keys and data. But these are hard things to balance out correctly, and in some cases technically infeasible or basically impossible. Should Finland be a privacy safe haven for data centers, or should this be the ultimate police state where we don't have any secrets at all? Good luck balancing that out.
  • Tried PyPy with some of my (20+) Python projects. Even if many say it's "fully compatible", well, it isn't. The first issue will be third-party binary libraries, which would all require recompiling and potential tuning for PyPy. If there's a project which absolutely needs PyPy due to performance reasons, great. It'll be worth it. But with projects which don't require PyPy there's no point in going through that trouble. Most of my projects run with standard CPython just fine on Windows and on Linux, yet using PyPy presented a problem.
    Based on this I posted this discussion into the LinkedIn Python group: "I've been looking into PyPy and other ways of making the Python runtime faster for a long time. Yet I'm using standard CPython all the time. Why? Even if it should be quite trivial to use PyPy, that's not the case. It's just like the Python 2.X to 3.X migration: it should be trivial, yet it might require quite an effort.
    I tested about 20 of my projects with PyPy and only two of those ran without modifications. Most of the projects hung on third-party binary libraries (Windows), which I don't have any intent to recompile to gain PyPy compatibility. As well, the truth is that in most cases CPython isn't the performance bottleneck on x86 computers; it's databases or communication, aka the I/O-bound parts.
    Any opinions, views, experiences around here?"
  • Helped one person to build a layered hidden Tor service server solution. First all traffic is tunneled via Tor. Then it's tunneled from the Tor hidden service over SSH to the primary server. The primary server is connected to the internet over an anonymous 4G connection. All that the final server needs is power and 4G network coverage. Even on the final server, everything is isolated using virtualization. So it should be quite hard to find the actual server. The server location has nothing to do with the person administering it, so any traditional connection tracing won't work. It's also in an area with enough client density, so it's not trivial to locate it based on base station / sector information. It should be obvious that if the Tor relay gets raided, it's an immediate hint to simply shut down the server and connections remotely. Everything on the server itself is encrypted, so if the system is powered down, it's completely worthless. All networking hops are also configured so that even if they gain full root access to the relay or the actual virtual host serving the final hidden service, it won't help them. The only way in and out is via Tor. No, I don't know its hidden service address, nor do I know the SIM card, phone number, location or even the operator, and I certainly don't have access to it after making the preliminary configuration and testing that everything works. I really don't know what the server will be used for, if anything at all. All I said was to drop it somewhere populated where it can get powered up without the "hosting location" knowing anything about it, if possible. Actually this makes hosting some ug services quite interesting, because those can be basically anywhere where there is mobile network and power available. After the initial drop off, it's possible never to visit the site again. Hardware is cheap, and it will eventually be discovered, but at that point it's totally useless anyway.
    There are often many places where you can enter without authorization, gain access to power and hide a small server.
  • Managing projects & companies using information. Well, the world is full of information, actually way too full. It's really important to utilize the right analytic methods to trim the amount of data down into meaningful information. Many smaller companies operate on gut feeling, without any actual data to show direction for their decision making. Another really important factor is the quality of data. I've seen it so many times: garbage in, garbage out. After this comes measuring; everything needs to be measured so we know how the changes we made affect things.
  • Had an interesting meeting with a private cloud service provider. It seems that in some cases Microsoft licensing terms could make private cloud cheaper than public cloud. Otherwise it's hard to see how private cloud could provide benefits over public cloud for quite generic computing tasks.
  • About passwords and authentication. Isn't one authentication token enough? It's much safer than a username and password, especially if the user can freely select those and ruin the entropy of the password.
    Here's one of the authentication tokens generated by my app: fwhBza5CJOhIU_F1. Yes, it's just like most of the API keys you're going to see. You can't change it. You have to deal with it. Of course login information is saved, so you don't need to enter it unless you need to get in from a new device. For most users that seems to be fine. You can get a new token using email recovery if required. (This is due to the fact that the service isn't "high security".) If it were a more secure service, getting a new token could require identification using the official national online identification scheme (mobile or paper based OTP list). Which is very reliable; all banks and official authorities use it too.
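For illustration, a token shaped like the one above can be generated with Python's secrets module (available since Python 3.6; back then os.urandom would have served the same purpose). The function name is my own, not from the actual app.

```python
import secrets

def new_auth_token(nbytes=12):
    # 12 random bytes -> 16 URL-safe base64 characters, about 96 bits of
    # entropy, the same shape as the example token above.
    return secrets.token_urlsafe(nbytes)

print(new_auth_token())  # e.g. a random 16-character token, new each call
```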
  • What's the best hard drive, by Backblaze. Did I mention measuring this earlier? I guess I did. Here's a great example of what kind of results you can get if you just measure things. Without measuring, engineers would just say "I think these drives are bad". But how bad? Here are the facts.
  • The road to superintelligence (AI Revolution) - It's very interesting to see when the future will actually be here.
  • Docker is an excellent addition to my LXC / VirtualBox solutions. I think I'll use it quite often in the future. I might even convert some of my LXC setups to Docker. But right now I don't see any reason to do so.
  • Watched Citizenfour.

Topic mega dump 2013 (2 of 2)

posted Jan 24, 2015, 5:41 AM by Sami Lehtinen   [ updated Jan 25, 2015, 6:03 AM ]

Dumping stuff from the backlog. ;) As fast as possible... (2 of 2)

  • One invalid path caused the system to hang without any error messages, even in logs. Logging doesn't reveal when a task was finished, even if it was started. Lack of proper exception logging and handling. Caused a huge mess that was hard to debug, even if the reason was a slight misconfiguration. As we know, making a program that mostly works requires 1 unit of time; making a program which will work and handle all kinds of exceptions correctly and also give clear logs / error messages can take 10-100 times the work amount. But some cloud environments require you to write programs so that the process / server can be killed at ANY TIME without any warning, and it should not cause any problems whatsoever. (Amazon, Google, etc…) If a program works correctly, uses journaling and transactions, this should be the end result. If a program is written badly without those primitives, the result will be a program which works well in the test environment, but will end up causing total disaster in production when loads go up.
    How is scheduler state (done state / running state) defined? Does execution "end" when the process tells it's done? What if the process aborts / crashes? This actually is one of the parts that I have been especially working on in my ERP projects. Because I have error handling for the most basic errors which I'm expecting. But as we know, there are, well, about 1000 times more errors which are possible, but which I'm simply not expecting. I'm usually expecting errors like slight misconfiguration or invalid server name, login/password, invalid paths, file locking, network errors, data structure exceptions etc. But if there are other issues outside that circle, I'm catching ALL other exceptions and logging those. If the logged error message had been clear enough, we would have immediately caught the problem.
    In some cases I have gone to extreme lengths with checking all those listed errors, because I have wanted to make a solution so robust that even if the process is killed at ANY TIME, the next run will get things right. It checks for all temp files, doesn't ever write to the "final file" directly, always uses temp files & lock files and prevents parallel execution by checking lock file time stamps. Naturally even this can fail if the clock is played with, or the locking process becomes exceptionally slow and is then being executed again in parallel. It's hard to make programs which work correctly in all situations.
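A minimal sketch of the temp-file plus lock-file pattern described above. This is my own simplified illustration, not the actual ERP code; the check-then-create lock below still has a small race window that real code would close with O_EXCL or fcntl locking.

```python
import os
import time
import tempfile

LOCK = "job.lock"
STALE = 3600  # seconds after which a leftover lock is considered dead

def acquire_lock():
    """Prevent parallel runs; a crash leaves a lock whose timestamp
    eventually marks it stale, so the next run can recover."""
    if os.path.exists(LOCK) and time.time() - os.path.getmtime(LOCK) < STALE:
        return False
    with open(LOCK, "w") as f:
        f.write(str(os.getpid()))
    return True

def write_atomically(path, data):
    """Never write the final file directly: write a temp file in the
    same directory, fsync it, then atomically rename over the target."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)
```

If the process is killed mid-write, the target file is either the old complete version or the new complete version, never a truncated mix.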
  • NoSQL, Lock free algorithms (pitfalls), random bit flips
  • Project budgeting, coordination and cost estimation, the customer's point of view, user experience, usability. Technical sales support tasks. Key customer responsibility. Customer contacts. Service desk experience. Wide-ranging experience scope. Let's-get-this-done attitude. Don't spend days thinking why we can't do it, if there's no real reason. Self guided; I'll study whatever I think I need to know to get the job done. Risk assessment.
  • Wondered about WiFi DropBox-like designs such as WiDrop. I guess I've blogged about this already. The New Yorker's StrongBox.
  • I should have written a longer analysis about different anonymous P2P applications and the pros and cons of each design. Sorry, no time for that. (Freenet, GNUnet, Perfect Dark, Rodi, RShare, Share, Sumi, Winny, Nodezilla, Mute, Marabunta, OFF System, GigaTribe, Alliance, Direct Connect, Hybrid Share, OnShare, anoNet, OneSwarm, RetroShare, Galet, Turtle F2F, Waste, Kerjodando, I2P, JAP, Psiphon)
  • It needs to be carefully monitored when Big Data analytics is beneficial and when it isn't. Should be quite obvious to any business. Article.
  • Comodo provides S/MIME certs (via CSR) for free.
  • Played with SciKit-Learn
  • How Facebook shares your data with data brokers
  • Operational Transformation
  • No time to play with Derby.js and Meteor even if those are interesting.
  • Tor & HTTPS by EFF
  • SIGINT
  • Reminded myself about Off-the-Record messaging, PFS, deniable authentication, deniable encryption, malleable encryption.
  • SSD device block generations could reveal encrypted hidden container(s), because places in files are written repeatedly which aren't expected to be written often. Copy-on-write file systems might have similar problems.
  • Gitian - Verify that delivered binaries correspond to the source code.
  • smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
    Content:
    ferraro.net     encrypt
    gmail.com       encrypt
    mailsuomi.com   encrypt
  • The state of Outlook & email security is here: status=deferred (TLS is required, but was not offered by host mx1.hotmail.com[65.54.188.94])
  • Gmail TLS cert fails.
  • When certificate fingerprint monitoring is on, delivery fails if the fingerprint doesn't match: status=deferred (Server certificate not verified) - So email can't get delivered to the wrong destination or without encryption even if there's a MitM attack. Successful negotiation: xxx postfix/smtp[805]: Verified TLS connection established to s.sami-lehtinen.net[]:25: TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits) - fingerprint & public key pinning.
  • We should all have something to hide.
  • Ad networks are delivering malware, nothing new. But still very annoying problem. Encountered malware at GrooveShark and SendSpace.
  • Nonblocking Algorithms and Scalable Multicore Programming - Exploring some alternatives to lock-based synchronization
  • Piledriver microarchitecture, Integer Factorization Optimization, MariaDB
  • Reminded myself about: Mixmaster anonymous remailer, Anonymous remailer, Nym server, Van Eck Phreaking
  • ITaaS, Software Defined Data Center
  • Check your browsers Cipher Suite Details.
  • OWASP TOP 10 list, legendary. 
  • Misconfigurations affect all security solutions, not only computer security. One building just had its lock system misconfigured. Anyone having a key to the building could access all areas. I really mean all areas, like telecom and power distribution rooms and any spaces occupied by other businesses. Nothing new, just a minor configuration issue. It's ridiculous to first spend a lot of money on high end security system(s), and then totally misconfigure those.
  • VSRE standard - Yes!
  • Unhosted ideology, I kind of like it. But I doubt normal users will like it too much.
  • Played with compressing data on paper. Practically useless, but a nice exercise. See data compression
  • EAX mode is better than CCM yet GCM is really complex. AEAD, AE.
  • TLS/SSL RC4 beast ephemeral keys, ecdh edh, session resumption.
  • EU data protection requirement
  • Coroutines and concurrency
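As a minimal illustration of the coroutines-and-concurrency topic, here's a sketch using today's asyncio (which landed in Python 3.4, shortly after this backlog): coroutines let a single thread interleave many I/O waits. The fetch function is invented for the demo.

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an I/O-bound task; real code would await a socket
    # or HTTP call instead of sleeping.
    await asyncio.sleep(delay)
    return name

async def main():
    # gather() runs both coroutines concurrently: total time is the max
    # of the delays, not the sum, and results keep the argument order.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(main()))  # → ['a', 'b']
```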
  • Ladder of Abstraction - Yes, check it out. It's worth it.
  • HubSpot SaaS 101 lessons
  • Bootstrapping a software product
  • SifterApp, simpler bug & issue tracking
  • STARTTLS - No comments, just dumping
  • How much does the selected algorithm change the end result? Maybe a lot. Jump point search. It just makes a huge difference!
  • At least I learned a long time ago to avoid doing something complex, cool and smart. Doing the simplest working solution is usually also the most reliable solution. Any smart tricks which add complexity to the logic of a program are going to cause a lot of trouble later. I have very bad personal experiences even with the simplest solutions. Like comparing a few fields to a few other fields, with logic. It was too complex for programmers to implement correctly and led to annoying bugs that persisted for months. All this could have been very simply avoided by dropping the comparison logic and using a simple unique ID on every record. It's worth noting that most probably anyone later modifying the code would break that logic again, even if it worked.
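The unique-ID point above fits in a couple of lines; the record shape here is a made-up example:

```python
import uuid

def make_record(payload):
    # Give every record a unique ID at creation time...
    return {"id": str(uuid.uuid4()), **payload}

def same_record(a, b):
    # ...so matching is a single comparison, not fragile multi-field logic.
    return a["id"] == b["id"]

r1 = make_record({"name": "Widget", "qty": 3})
r2 = make_record({"name": "Widget", "qty": 3})  # same fields, different record
print(same_record(r1, r1), same_record(r1, r2))  # → True False
```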
  • zRAM / Compcache - Compressed memory, nothing new afaik.
  • Chromium Disk Cache structure - Small data is put in blocks - Yet I remember seeing some other article (later?) where they implemented SIMPLE DISK CACHE which always used files for simplicity. It's funny how optimizations can be bad at times. Just like: should the MORK file format (msf) be used as an index file for standard mailbox data? Or is the simple maildir approach better? I'm using both. Or maybe data should simply be in an SQLite database? Who knows. Does it really matter if it works out?
  • It's nothing new that telcos require all confidential information to be encrypted... I wonder why; I'm sure their security staff must have known about this. Also, encrypting everything which isn't classified as public just makes sense anyway.
  • Recursive PostgreSQL queries
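A hedged example of recursive queries: the WITH RECURSIVE syntax below is the same one PostgreSQL accepts, run here against SQLite via Python's sqlite3 so it's self-contained. The emp table is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, boss INTEGER);
    INSERT INTO emp VALUES (1, 'ceo', NULL), (2, 'lead', 1), (3, 'dev', 2);
""")
# Walk the management chain upward from 'dev' with a recursive CTE.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, boss) AS (
        SELECT id, name, boss FROM emp WHERE name = 'dev'
        UNION ALL
        SELECT e.id, e.name, e.boss FROM emp e
        JOIN chain c ON e.id = c.boss
    )
    SELECT name FROM chain
""").fetchall()
print([r[0] for r in rows])  # → ['dev', 'lead', 'ceo']
```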
  • Studied documentation on why Skype abandoned the original P2P model and started to use cloud servers instead. Everything was done in a very reasonable manner from the product managers' and normal end users' points of view. - Sorry, no link.
  • Securing the Cloud, Cloud Computer Security Techniques and Tactics - Vic J.R. Winkler - Cloud Security, I really liked Epic Fail sections which showed what kind of epic failures there has been with each kind of security issue type. Made me smile. - type 1, type 2 vm. VirtualBox, Parallels, Virtual PC, VMware Fusion, VMware Server, Xen, XenServer. LXC (Linux Containers), BSDjails, OpenVZ, Linux-VServer, Parallels Virtuozzo
  • Gallery of how different machine learning algorithms detect patterns. Excellent stuff!
  • FileDropper advertises 5 GB files, but actually a 1.7 GB file is too big. - Btw. they'll still have similar issues in 2015. Upload large files and those somehow disappear, or something similar.
  • Cross & double monitoring of services - Always use an external and independent system to monitor other systems. Also deliver alerts "out of band" if possible. I've seen it happen so many times that when systems go down, there's no alert... Because the monitoring & alerting system went down too.
  • Elliptic Curve by NSA
  • That's why they take a memory snapshot first, which is trivial with a VPS, and then pick the encryption keys from it to access encrypted volumes. This is a well known method and works with pure hardware machines too, given physical access. It's a great question, when you get to a server, whether to shut it down or leave it on. If left on, it could destroy data; if turned off, the encryption keys are gone. I think it would require some individual case analysis before deciding which approach is better.
  • Nightweb protocol
  • Actually it seems that I blogged this later. But back on 1.7.2013 I also made a memo which clearly showed that when you browse using DuckDuckGo with Dolphin Browser, data flows over non-secure (non-HTTPS) channels. I do have packet logs. But because sanitizing those would take too much time, I'm not going to post them.
  • Excellent chart from the XKCD guys. Is it worth doing? - I actually have this one printed on my wall.
  • Motorola is listening in. Android phones do spy on users.
  • Never trust Facebook. - Uhhm? Who would be foolish enough to trust it?
  • Functionality and differences between the BeiDou, GLONASS, Compass, IRNSS, GPS and Galileo satellite navigation systems, signaling and design. As well, checked out legacy eLoran and TIMU, aka simple inertial navigation.
  • When it comes to the price tag for BI, a lot of people are shocked to learn how much some companies charge. For large companies 50k is kind of a low end installation, and BI can cost over half a mil. However, the question shouldn't be how much BI is going to cost me, but how much can I expect to save / increase profits through this tool?
  • Android Master Key - Android security is seriously broken.
  • Studied the documented priority inversion software problem with NASA's Mars Pathfinder spacecraft software. - Nothing new.
  • USB Hell - Drivers, sockets (4 different), charging & power issues etc, different cable connector types etc.
  • "One thing has become clear to me over time, especially in the current financial crisis," she said. "No matter the job, most of us no longer have job security. Our labor is replaceable." - Yet another reason to keep studying.
  • PGP Tutorial - Yes, you should already know everything mentioned in this document. If you don't, just go and read it.
  • Prism Break
  • Additional VPN transport - TCP over WebSockets over HTTPS. This is a great way to access all services when guest networks have TCP port restrictions. Yet much faster and more efficient (less protocol overhead) than tunneling over DNS, which I set up earlier.
  • SIM card insecurities
  • Liked Hong Kong, Singapore, Sydney, etc. Beijing airport WiFi was strongly authenticated (passport), but poorly secured. Which means that I could have hijacked anyone's identity and made them look guilty. Afaik, this is a very bad way of doing things.
  • Death of public key encryption? - Sounds even more viable in 2015.
  • Applied practical cryptography
  • The AEAD modes combine stream cipher modes with MAC constructions that developers don’t have to think about.
  • In computer security, a covert channel is a hidden signaling mechanism. Attackers exploit covert channels to leak messages across security boundaries. Side channels are the flip side of covert channels; they're unintentional signaling that leaks information unexpectedly.
  • Bank allows reusing authentication codes in case of some errors...
  • Tox.im - The way of secure (?) future messaging?
  • Dynamic real-time pricing
  • Probabilistic programming and Bayesian methods for hackers
  • HTTP/2.0 initial draft
  • NSA monitors everything  
  • GSMA's Wallet-POS APDU, GlobalPlatform Security Domain
  • SSL/TLS broken - Gone in 30 seconds
  • Checked out Mifare Ultralight and Ultralight C variant with crypto
  • Enjoyed psftp key management using registry editor... So wonderful.
  • SQL injections - UNION/join query-based injection, squeal, error-based injection, boolean-based injection, time-based blind injection.
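All of those injection variants share one cure: bound parameters. A small self-contained demonstration using Python's sqlite3 (the table and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "' OR '1'='1"  # classic boolean-based injection payload

# Vulnerable: string formatting lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % evil).fetchall()

# Safe: a bound parameter is always treated as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

print(len(unsafe), len(safe))  # → 1 0
```

The formatted query matches every row; the parameterized one matches none, because no user is literally named `' OR '1'='1`.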
  • True Hacking, hacking hard drives! - Excellent stuff!
  • Private / public surveillance relationship - Just what I have said: if something can't be done officially, you'll just find a way to get it done unofficially.
  • Hidden Services, Current Events, and Freedom Hosting - Multiple hidden Tor services taken down and some of those tried to plant malware on visitor systems. Afaik, some of those pages planted back then do still exist (2015) with that yellowish background and some kind of frame near the top.
  • Watched BBC's What's Killing Bees
  • Unprofitable SaaS business model trap
  • Studied a nice list of Linux TCP related parameters
  • Why web apps are slow (?)
  • Studied & configured a firewall: ICSA certified ipsec & firewall, USG (Unified Security Gateway), SPI, SNAT, DNAT, BWM, ADP, IDP, AV, AP, CF, AS, Routing, HA. In long format: destination NAT, routing, stateful firewall, anomaly detection and prevention, application classifier, intrusion detection and prevention, anti-virus, application patrol, content filter, authentication, system management, logging & monitoring, anti-spam, threat database, high availability, VLAN, IPsec, SSL, L2TP, real-time dynamic malware protection, IPv6, security zones, dual stack.
  • Computer insecurity
  • Enabled PFS for my own server https://s.sami-lehtinen.net/
  • Lavabit, Silent Circle, Lavaboom, Pidgin, Empathy
  • SSD Endurance myths
  • End to end (e2e), pre internet encryption (pie)
  • Python Statistics - PEP-0450
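PEP 450 became the statistics module in Python 3.4. A quick taste with made-up latency numbers:

```python
import statistics

latencies = [12.1, 11.8, 12.4, 13.0, 55.2]  # ms, with one outlier

print(statistics.mean(latencies))    # pulled up by the outlier, ~20.9
print(statistics.median(latencies))  # robust middle value → 12.4
print(statistics.stdev(latencies))   # sample standard deviation
```

The median shrugging off the outlier while the mean gets dragged up is exactly why the module offers both.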
  • Watched a GNUnet video - Nothing new in this video; I have liked the GNUnet project for several years. But if you don't know the GNUnet project, check it out.
  • The GNU Alternative Domain System (GADS) (GNUnet)
  • Of course checked out HyperLoop!
  • ENISA
  • I really like teams which get stuff done. Quick iterations, meetings, let's get it really done. Push and do. Test, iterate, quickly! I hate projects where every single question takes days and test iterations and stuff take weeks or months. I love the feeling when stuff really gets done, and everyone's got the same mood: I can do it. - Not the attitude of "I don't know if I really need to get it done, maybe someone else could do it some day or..." Aww!
  • Seems that 42registry is working again. My postfix also handles mails to myaddress @ s-l.42 or myaddress @ samilehtinen.geek properly now.
  • I've got a separate email address for GPG-only messages. If any email which isn't encrypted to my public key is sent to the address, it bounces back with a link to my public key.
  • Windows server access policy management.
  • Yes, not using cache at all would make everything very slow. I'm now of course talking about using an in-session memory cache. If it's too small you can reconfigure it using the browser.cache.memory.capacity parameter with Firefox. With fiber I never use caching. But yes, with a 512 kbit/s connection I unfortunately had to use disk caching too, to avoid re-downloading anything I simply could. But of course in that kind of situation and configuration you have to be really aware that you're not destroying all data between sessions. For privacy, a virtual machine with hardened configuration + Tor is a good idea. Otherwise there's no reasonable expectation of privacy anyway, as they're saying. In technical terms, there are so many ways to track users who do not harden their setups that there's no reason to expect any privacy. As we have seen with all these NSA discussions, all the technical options were pre-known already. You don't know if sites use some techniques or not, but it's reasonable to expect that they use at least all publicly known techniques. And possibly some unknown ones. So making the attack (or tracking) surface as small as possible when looking for privacy is reasonable. Maintaining any data between sessions is just stupid. Always booting a clean virtual machine, which is similar to other virtual machines, is the best approach. Otherwise there are tons of things they can do to track you.
    Btw. even if the browser keeps a cache, you can always clear the storage paths.
    One of the things that doesn't seem to be known to many users is that many databases contain 'deleted' data for a long time. They just don't think about it. Just go through all the files stored by the browser; you'll end up finding stuff that you wouldn't expect to be there, if you're naive. The right attitude is to expect everything to be stored always, and take proper care to destroy data when required. This is just like the issue with SSD drives. If you write something on the drive, you want to be able to destroy it. There's no sure way to destroy the data on the drive without totally physically destroying the drive. You simply don't know if the controller has written data to cell XYZ and then re-mapped XYZ to somewhere else. The just-overwrite-it approach does not work in this case. And you can't even guarantee that the manufacturer's tool could properly erase that cell.
    Just final words. Etag doesn't have anything to do with "images"; it's not tied to content-type at all. Next week I could release a "CSS" tracking exploit which uses etags, which is kind of a CSS checksum. Uh...
  • RFID readers, NFC tags
  • PostgreSQL basics by example
  • IronPigeon - Yet another secure communication protocol
  • SEO and Google+ correlations
  • Low level flaws in 3G modems might allow hijacking computers. SMS shell, fuzzing USB internet modems.
  • Circumventing Network Blackouts @ Schneier
  • Traps were especially important when processing money. Of course you can work around those using alternate code, but why bother, when there's an existing library. Python Decimal, yes. I've known this for 20+ years, but this is just a reminder. Yet another coder used floats for money and, as we know, that's a sure way to fail.
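The classic demonstration of why floats fail for money, and what Decimal does instead (the price and the 24% rate are just illustrative):

```python
from decimal import Decimal

# Binary floats can't represent most decimal fractions exactly:
print(0.1 + 0.2 == 0.3)  # → False

# Decimal does exact decimal arithmetic, which is what money needs.
price = Decimal("19.99")
vat = (price * Decimal("0.24")).quantize(Decimal("0.01"))
print(vat, price + vat)  # → 4.80 24.79
```

Note the string constructor: `Decimal("0.1")` is exact, while `Decimal(0.1)` would faithfully inherit the float's rounding error.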
  • People often laugh at bad password policies, but sometimes password policies are just catastrophically bad. Like in some cases, shared passwords are used so widely that it becomes impossible to renew the password, because nobody knows how to inform everybody who needs it. That's one of the ultimate fails I have seen happening. No no, we can't change the password, because we don't have any idea how to inform the people who need it. Afaik, the password should have been changed ages ago, and should be changed monthly or so. Of course it would be better not to use a shared password at all, but in some cases that's also too difficult a goal to reach.
  • An OTR main key collision would mean that the random numbers are bad, really bad.
  • IT security stuff: SSH shell accounts enabled when only SFTP should be available, and when informed about the matter, they don't care, it works. Rights elevation exploits. Passwords stored on end users' workstations, not in a secure environment. Exe files which are run with the system account, but which all system users have full write access to. Etc... Business as usual.
  • Lean software development, Lean Integration
  • Multi-process listening sockets
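    A common way to implement multi-process listening on Linux is SO_REUSEPORT: each worker process opens its own listening socket on the same port and the kernel load-balances accepted connections between them. A minimal sketch (port 0 picks a free port; the guard is needed because SO_REUSEPORT isn't available on every platform):

```python
import socket

def make_listener(port=0):
    # Each worker process calls this independently; with SO_REUSEPORT
    # (Linux 3.9+) they can all bind the same port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "SO_REUSEPORT"):   # not present on all platforms
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

listener = make_listener()
print("listening on port", listener.getsockname()[1])
```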
  • Reddit lessons learned scaling to 1 billion users
  • SQLite3 - Partial Index, Next Generation Query Planner
  • Checked out Zoined Retail Analytics in detail (with demo account)
  • Bruce Schneier's blog: " We're always making trade-offs between security and efficiency. "
  • iDRAC IDRAC DELL Server stuff, Remote management, etc, VNLP.
  • IPsec is the crappiest protocol ever, at least in the real world... Yes, I'm having serious problems with it AGAIN. No words for this "#¤"#4.
  • F-Secure securemail requires that the password contains a symbol, but " isn't a valid symbol, how lame!
  • Dissent accountable anonymous group communication
  • I received email from F-Secure: "
    This is a secure message, click that link right now!
    That's just one more reason to use an individual email address for every service & communication instance. You'll immediately notice when a message is out of context for the address it was sent to. I've also been receiving "UPS can't deliver your package" notifications and other scams, which only ask you to click the link provided.
    In this case the message actually was legit. Anyhow, the first impression was very much: yet another scam, delete.
    I just had to spend quite a while analyzing the message headers, body content and the link destination & source servers, AS numbers and IP addresses, before I decided to open the link with my browser."
  • Watched many PyCon CA 2013 videos
  • A great example of how accurately people can be tracked using mobile phone data, even without GPS; see the cell sectors marked on the map.
  • Finished reading book(s): Securing The Cloud, Innovator's Dilemma, ENISA Cloud Security, PCI DSS Cloud, HIPAA checklist, PCI SSC Quick Reference, The Dangers of Surveillance, "Bitmessage: A Peer-to-Peer Message Authentication and Delivery System, Jonathan Warren", Go FAQ, 57 startup lessons.
  • Replaced my encryption key with a 4096 bit RSA key; my signing key is still the old 1024 bit DSA. Still waiting for Elliptic Curve keys to appear in the main release of GnuPG.
  • Studied Linux Kernel 3.11; especially Zswap seems to be a very good trade-off between slow I/O and fast CPU.
  • Studied the distributed file system Lustre.
  • Found a few sites which looked like a scam. I had to use wget to examine headers and content before opening the content with traditional browser.
  • Nice post about using LXC for network isolation.
  • Played with DebianDB in a VirtualBox environment. See dbtechnet.org for more information. Databases used: MySQL, DB2 Express-C, Oracle XE, PostgreSQL, Pyrrho.
  • A few minor fixes in one project using Tomcat / Hibernate / MySQL made the whole project 16x faster.
  • One coder used a framework he didn't completely understand. Originally there was no password requirement for users, but when the site got more confidential data, a password feature had to be added. How was it done? When the user entered a login and clicked OK, a page came up asking for the password. But if you changed the URL at this point, everything worked anyway. When I checked the actual code, the login form logged the user in, and the password form logged the user out if the password was incorrect. Oh boy, what a joy.
  • Keccak (SHA-3) slides at NIST
  • Toyed with different index types and partial indexes and stuff like that with SQLite3 and PostgreSQL just for fun. Because PostgreSQL 9.3 was released.
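    For illustration, a partial index in SQLite3 indexes only the rows matching its WHERE clause, which keeps the index small when most rows don't match. A minimal sketch (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")

# Partial index: only rows WHERE status = 'open' are indexed, so the
# index stays small even when the table fills up with closed orders.
con.execute("CREATE INDEX idx_open ON orders(status) WHERE status = 'open'")

con.executemany("INSERT INTO orders (status, total) VALUES (?, ?)",
                [("open", 10.0), ("closed", 5.0), ("open", 7.5)])

# The query planner can use idx_open for queries restricted to open orders:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'open'").fetchall()
print(plan)
```

    Partial indexes require SQLite 3.8.0 or newer; PostgreSQL has supported the same syntax for much longer.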
  • Blog more about ... ERP stuff like: inventory reporting, financial reporting, sales commissions reporting, inventory management, purchase orders, sales statistics, customer accounts, distribution logistics, EDI, credit checks, electronic invoicing, invoice processing, EDI, stock management, dynamic (time-based automatic) pricing, web shops, system architecture, enterprise resource planning, SaaS, delivery reliability analysis, supply chain management, SCOR-model (Supply-chain operations reference), performance measurements.
  • Great source for studies edX
  • SSEBITDA for SaaS business profitability measurement
  • Beyond passwords, new biometric identification methods.
  • A very nice checklist for authentication & passwords.
  • Some customers just have a total illusion about project scope, schedule, costs, etc... But I guess this isn't news to anyone.
  • I'm using SQLite3 in many of my small utility projects to track processed messages, just as Bitmessage does... Why? Because usually the main database doesn't have any flags, tables or reserved fields I could use. So I use another database to track and compare changes in the main one. Isn't that great? Why won't I add fields or tables? Well, it would break several compatibility things, that's why.
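    A minimal sketch of that side-database idea (table and ID names are made up): the tracking database lives next to the utility, and the main database stays completely untouched:

```python
import sqlite3

# Separate tracking database; in practice a file next to the utility,
# here in-memory for demonstration.
track = sqlite3.connect(":memory:")
track.execute("CREATE TABLE IF NOT EXISTS processed (msg_id TEXT PRIMARY KEY)")

def is_processed(msg_id):
    return track.execute("SELECT 1 FROM processed WHERE msg_id = ?",
                         (msg_id,)).fetchone() is not None

def mark_processed(msg_id):
    track.execute("INSERT OR IGNORE INTO processed VALUES (?)", (msg_id,))
    track.commit()

# "a1" appears twice but gets handled only once:
for msg_id in ["a1", "b2", "a1"]:
    if not is_processed(msg_id):
        print("processing", msg_id)
        mark_processed(msg_id)
```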
  • OpenVPN uses OpenSSL - a Security Now episode said that OpenVPN is not OpenSSL and is more secure. I guess they don't know too much about OpenVPN at all, because it's directly based on OpenSSL. Ouch.
  • I don't know what Hushmail's web front-end does, but it doesn't idle properly. If you leave a Hushmail tab open in a mobile browser (Android), it'll drain 50% of the battery in 45 minutes.
  • It's better to verify a certificate's fingerprint than that it's signed by an "official trusted authority". Fake certs could easily have a trusted signature, but it's much less likely that they have the right fingerprint.
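    For illustration, here's roughly how a certificate fingerprint can be computed and pinned with Python's standard library; the hostname in the commented-out part is just a placeholder:

```python
import hashlib
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER certificate, colon-separated."""
    digest = hashlib.sha256(der_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Pinning sketch: fetch the live certificate and compare against a
# value recorded earlier over a trusted channel.
# pem = ssl.get_server_certificate(("example.com", 443))
# der = ssl.PEM_cert_to_DER_cert(pem)
# assert fingerprint(der) == EXPECTED_FINGERPRINT

print(fingerprint(b"example"))
```

    A fake cert with a trusted signature would still fail this comparison, which is the point of pinning the fingerprint instead of trusting the CA chain alone.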
  • Studied Ulam Spiral for network node partitioning purposes.
  • Checked the list of goals for the Phantom project and read its white paper. The goals of the project sound really great: [ Complete Decentralization, Maximum Resistance Against All Kinds of DoS Attacks, Secure Anonymization, Secure End-to-End Encryption, Isolation from the Normal Internet, Protection against Protocol Identification / Profiling, High Traffic Volume and Throughput Capacity ]. The documentation covers important aspects like routing paths, addressing, the network database, encryption and authentication, and a technical description, as well as the most important part: what makes Phantom better(?) than I2P, Tor or other existing anonymizing file sharing software.
  • Hosting backdoors in hardware by Oracle.
  • SQLite4 LSM Design - A very interesting read.
  • Lightly checked out Incapsula - Yet another CDN.
  • Checked out zBase - Yet another large distributed key value storage.
  • Reminded myself about Kademlia, Chord, Tapestry and Pastry internals.
  • Studied: Gossip Protocol
  • BitMessage streams / partitioning: how to prevent all data from spreading to the whole network, while maintaining the benefit of broadcasting not revealing who's the actual recipient of the message.
  • Verified TLS connection established to s.sami-lehtinen.net [x.x.x.x]:25: TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits) - Seems to be working.
    Received: from x.x (x.x) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "x.x", Issuer "x" (verified)) by s.sami-lehtinen.net (Postfix) with ESMTPS id X for  <my-mail@sami-lehtinen.net>; Thu, 19 Sep 2013 21:58:07 +0300 (EEST).
    Anyway, even without a cert check / fingerprint, using STARTTLS (SMTPS) still foils passive attacks.
  • Donated money to EFF, PBS and SomaFM. It was actually quite a funny process, because PBS didn't allow donations from outside the US. I had to send them feedback that I'm having a problem: I can't donate money to you, nor can I watch your shows legally, so I have to pirate them. I'll download your shows and donate money, that should be OK, everyone should be happy. They fixed it; now they allow global donations using a credit card.
  • Downloaded Dell iDRAC6 software and had to run it as a standalone app using Web Start, because it simply didn't work properly with the browser. Anyway, I'm glad I have a Java background, so it was quite trivial to start the app using the javaws .jnlp file from the command line. The main trick is that the credentials needed for access are inside that launcher packet, so it has to be fresh.
  • Google knows WiFi passwords. - Isn't the "central command cloud" just such a beautiful thing?
  • Interestingly, if x is None is faster than if x == None when using Python 3.x
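    The reason is that "is None" is a single identity comparison, while "== None" may invoke a user-defined __eq__ method. A quick sketch showing both the timing setup and the semantic difference:

```python
import timeit

# Identity check vs equality check against None:
t_is = timeit.timeit("x is None", setup="x = 0", number=1_000_000)
t_eq = timeit.timeit("x == None", setup="x = 0", number=1_000_000)
print(f"is None: {t_is:.4f}s   == None: {t_eq:.4f}s")

# The semantic difference: __eq__ can lie, identity cannot.
class Weird:
    def __eq__(self, other):
        return True  # claims equality with everything, even None

w = Weird()
print(w == None)   # True -- misleading
print(w is None)   # False -- the correct test
```

    This is also why PEP 8 recommends comparisons to None always be done with "is" or "is not".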
  • Caused accidental downtime. Ouch! One image hosting site had interesting stuff; using wget was way too slow, so I wrote my own fetcher. I had it configured to work in two steps: first I go through all the index pages and collect image URLs, and then the next phase fetches the actual images. Simple stuff. Because I've been playing with an 8-thread CPU lately (i7), I had to try what I can do. So I wrote a small Python app which utilizes both multiprocessing and threading. The only problem was that I was running 16 processes and 32 threads per process. No, it wasn't a problem for my system, but it was clearly a major problem for that small image hosting site. 512 concurrent HTTP connections fetching large images over keep-alive connections shouldn't be a problem, but in this case it clearly brought their site down. I wonder why they don't have rate limits or a connections-per-IP limit etc. Of course I stopped my fetcher and tuned it down a bit. But it was funny to notice how easily some sites simply stop working, even if you're not even trying to distribute the requests causing the overload.
  • Thoroughly studied 1024cores - concurrency, parallelism, wait-freedom, lock-freedom, obstruction-freedom, algorithms.
  • Had a Google I/O 2012 and 2013 marathon. It's much more informative than watching random TV soap operas or other "junk".
  • Read ARM64 and you.
  • Studied GCM mode. AESNI
  • I just hate some Android bloat, as much as the laptop vendors which pre-install all kinds of crap. No thank you. I want as bare a system as possible, and then I'll select just the few applications which I actually use.
  • Had long discussions with developers about whether UI buttons should have a single role or whether they can be multi-role buttons depending on the user's situation. Some key buttons should be static, but the rest can well be dynamic. Just like the few key buttons on Android (back, home, properties), while the touch screen is a "dynamic multi-role button section".
  • A lot of discussion and articles about fingerprint security and so on. But as said, this brings it all together: fingerprints alone shouldn't be used for authentication. A fingerprint is only an identity.
  • Win-Win Consulting - How to make deals so everyone is happy. Been there, done that.
  • Article on how to switch to HTTPS for free. But as said, StartSSL certs aren't free for businesses.
  • Use CheckTLS to confirm your SMTPS secure email settings; yes, you can also use it to check your service provider even if you aren't administering the systems.
  • People who are able to manage private and public cloud as well as communication between departments and understand business needs, integration and system architecture, will be in high demand.
  • Privacy and Trust - Bruce Schneier
  • Facebook email had its problems: conversation with msgin.t.facebook.com[66.220.155.11] timed out while sending end of data -- message may be sent more than once.
  • Detector.io project - Detecting possible MITM attacks with Tor
  • It's true: after I posted my latest blog post and shared it publicly on G+, it was indexed by Google in less than 15 minutes, even though I hadn't updated my blog for months. It's clear that Google+ posts trigger Google crawler and indexing activity. Posted a bunch of other stuff to see if it works, and yes, it seems to be working.
  • NSA generates a map of the population's social connections? - Ugh, no news. Sociogram analysis isn't anything new.
  • Amazon RedShift What do you need to know? 
  • Telehash yet another P2P secure MESH protocol.
  • Freelan yet another VPN software.
  • tinc yet another VPN software. Compression & ECC.
  • Robots2 - Extended standard for robot exclusion - Updated my own robots.txt files according to it.
  • The RoundCube WebMail installation contains 2700 files in all... OMG! No wonder there's some "random seek". That's one of the reasons why I had to get that SSD: starting even a simple app like VLC took forever, due to the sheer number of additional library files to load.
  • How NSA attacks Tor.
  • Installed latest RoundCube and converted it to use SQLite3 instead of needlessly heavy MySQL. - Everything works well.
  • Had deep technical and analytical discussions about SQRL with a few friends. 
  • The problem with SQRL article which our discussions were based on.
  • Lol, found yet another case where the Windows IIS FTP service was continuously used to access a server worldwide over the Internet using the Administrator account. RDP was also available. Hmm, so much security fail here it's hard to describe.
  • Had to tinker for a while to get Windows 2008R2 virtual domain support to work with IIS FTP.
  • Anonymity is hard - a great reminder. You're probably deceiving yourself if you think you're being anonymous. OPSEC
  • The problem with timestamps - Nice post. Time is hard, even if it's easy to think it's quite a simple thing.
  • /dev/sdb1 on /mnt/s1 type ext4 (rw,relatime,data=writeback,commit=300) - A great way to get better-than-SSD burst write performance on traditional disks. Just use writeback mode and a long enough commit time. Btw, this also works great for slow USB sticks and the like.
  • Checked out NFTables, which is going to replace (?) good old IPtables at some point in the future.
  • Checked out quantum algorithms - made my head hurt. No, I'm not a mathematician; too deep and theoretical for me.
  • Great TED talk, why privacy matters.
  • About buggy software: I just remembered the classic Laser Squad targeting fail. In the game, when you were shooting far away, it computed the hit accuracy based on distance, weapon and shooter. But the main trick was whether there was a straight linear line from you to the target. You didn't aim 100 blocks away; you aimed at the first available block in that direction. When the shot was fired, it continued to travel through that point and onward, because it didn't hit anything. This made it virtually impossible to miss. Let's make an ASCII demonstration.
    YT------------------------------------------------------------------------E
    So instead of You shooting at the Enemy, you aimed and shot at Target point T, and the projectile then just traveled on to the Enemy and hit it for sure. Nice fail, right?
  • Added a custom HTTPS Everywhere rule for my own server. In this case, if I forget to write the HTTPS and HSTS isn't primed, my client will still use HTTPS.
    <ruleset name="Sami Lehtinen">
      <target host="s.sami-lehtinen.net" />
      <rule from="^http://s\.sami-lehtinen\.net/" to="https://s.sami-lehtinen.net/"/>
    </ruleset>
    Using custom rules allows you to guarantee HTTPS access for any other domain you wish.
  • Played with Python BitString, just for fun. I haven't needed it so far, but if I do, it'll be a nice tool. It's good for situations where there's a long list of entries in a contiguous ID space, with a boolean state per entry. With Python I often just use a dictionary keyed by integer ID, but I know that's not a space-efficient representation when the data set gets larger. The good part of that approach is that it handles gaps without problems in case the IDs aren't contiguous. If the space is contiguous, a list of booleans would work too, but it uses about 8 times the memory for the same representation.
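    As a rough illustration of the memory difference, here's a tiny bitset backed by a bytearray, compared with a plain set of integer IDs holding the same information (all names are made up):

```python
import sys

class BitSet:
    """Minimal bitset over a contiguous integer ID space."""
    def __init__(self, size):
        self.bits = bytearray((size + 7) // 8)  # 1 bit per ID

    def set(self, i):
        self.bits[i // 8] |= 1 << (i % 8)

    def get(self, i):
        return bool(self.bits[i // 8] & (1 << (i % 8)))

n = 1_000_000
dense = BitSet(n)
for i in range(0, n, 2):        # mark every even ID
    dense.set(i)

sparse = set(range(0, n, 2))    # same data as a set of ints

print(sys.getsizeof(dense.bits), "bytes vs", sys.getsizeof(sparse), "bytes")
```

    The bitset needs one bit per possible ID, while the set pays full hash-table overhead per stored ID, so for dense contiguous spaces the bitset wins by a wide margin.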
  • This is old stuff of course, but I reminded myself about it. It's still a very nice optimization. A perfect example of how things can be made faster, if you just bother to think for a while.
  • I just noticed that LG's servers accept absolutely huge HTTP POST data. Several 25 gigabyte files were posted successfully in parallel. So who says you would only send them the filename or TV channel information? You can stream them full HD TV shows and send full Blu-ray rips in real time: curl -F "file=@your-show.extension" http://their-http-post-url/ I just sent them a lot of data and they accepted it happily and didn't even ban my IP. Which is nice; no reason even to use multiple IPs for sending. Maybe I'll send more? Or what if a million users sent them a few terabytes each, or even a bit less, like hundreds of gigabytes? From now on I'm sending them everything I'm watching. So if you've got a seedbox with free (or nearly free) bandwidth, just send them a few copies of everything you're seeding.
    I'm personally currently paying for about 30TB of traffic per month, and I'm only using something like hundreds of gigabytes. So now I've got a great destination for my excess bandwidth.
  • This reminds me of a time in the mid 90s, when I asked my friend for OS/2, and he uuencoded the whole CD image and sent it to my email box from university servers. My ISP wasn't quite ready to receive about 1 gigabyte of email, which was an absolutely huge amount of data back then; they got quite mad. Btw, that was the time when even ISPs were using 10 megabit coaxial Ethernet networks.
  • Yet again I held an Internet engineer's hand and explained how to configure the routing table and interfaces, because "things just won't work". Of course they won't work, because you have an invalid configuration. It's not so hard after all. Or it is hard, if you don't have a clue what you're doing, even if you're supposed to.
  • Decompiled some .class files and inserted my own parasite thread inside the program to do the stuff I wanted. Worked great. I even added automatic scoring features, spam text messaging and so on. Because it was a "flood cast" game, which sent messages to all other peers, it nearly crashed the whole game: they didn't have proper rate limits, and my flooding of X Mbit/s to the server caused it to flood out thousands of times X the bandwidth. Duh! I'm sure there were many clients which couldn't even handle that data flow, even if the server hadn't gotten swamped. Then the queues would simply have eaten the server's resources, if they hadn't been properly limited. So the server itself created an amplification and resource consumption attack against itself.
    Well, one of my friends did even better. He reverse engineered the whole application's communication protocol, including some protections they had, and wrote his own client in C.
    It's a good question which solution was better; at least mine was simpler to execute, and it allowed me to still use the full client with just my own tweaks.
  • Studied covert channels.
  • Cloud services are the natural way to go. Exactly the same thing has happened in other sectors much earlier, and it's nothing new: drinking water production, electricity production, banking, etc. Self-producing something is only viable if the scale is large enough or the needs are very specific and outside commonly provided services. For this reason some industrial plants which require a lot of water or electric power do have their own systems for producing what's needed. But for most, it's a good idea to outsource. It's just like running your own data center in the office closet: you can do it, but it's probably a much worse option than outsourcing it to the cloud. Of course Facebook or Google could use cloud servers from Amazon, but their scale is huge enough that it doesn't make sense. At least in Finland, many companies selling server hosting and server management are currently also promoting cloud services from Google and Microsoft, even though those directly compete with their own services. The only difference is that Google and Microsoft probably give them an economic incentive, and the services are still cheaper for the end user than those provided by a local service provider.
  • 2FA won't protect against cookie stealing.
  • Microsoft-provided Office 365 is OK'ish. But the SharePoint which came with it is ridiculously slow.
  • I love encryption, but I usually use it only for backups or highly private information, because an error in a key, loss of keys, etc. could render backups / data totally inaccessible / unusable. For that reason, I usually take a backup copy of the most private data to another off-site location, encrypted using a different key & technology. So I'm hopeful that if something bad happens, at least one copy is restorable.
    I never trust cloud services with the only copy of the data. Either it's local data, which is backed up to the cloud (encrypted), or it's cloud data which is backed up to the office (encrypted). Both the backup data and the connections are of course encrypted, and all with different keys. Whether the data is encrypted at the original source is a secondary question; it might or might not be, depending on what it is.
    Even my own email server is backed up off-site daily, with encryption and a proper backup history (for months).
  • One message routing protocol was written so badly that it was only able to handle one remote connectivity point. The project was really complex and basically unmaintainable. When there was a need for multiple destinations, what did I do? Well, of course I wrote a simple message store-and-forward solution with multiple outputs. Now the original project just forwards the data to my Python forwarder, which then pushes it, as an ordered but asynchronous process, to multiple destinations. Everyone is happy and things work well. Related terms: SOAP, WSDL, Python, WebService, routing, router, fork, splitter.
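    A minimal sketch of such a store-and-forward fan-out in Python (the destinations here are just callables standing in for real web service endpoints; the real thing would also persist the queue):

```python
import queue
import threading

class Forwarder:
    """Accepts messages immediately, delivers them asynchronously
    but in order to every configured destination."""

    def __init__(self, destinations):
        self.destinations = destinations
        self.inbox = queue.Queue()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def submit(self, message):
        self.inbox.put(message)          # store; delivery happens later

    def _run(self):
        while True:
            msg = self.inbox.get()
            for deliver in self.destinations:
                deliver(msg)             # ordered delivery per destination
            self.inbox.task_done()

received_a, received_b = [], []
fw = Forwarder([received_a.append, received_b.append])
for m in ["one", "two", "three"]:
    fw.submit(m)
fw.inbox.join()                          # wait until everything is delivered
print(received_a, received_b)
```

    The original sender only ever talks to one endpoint (the forwarder), which is exactly what the legacy protocol could handle.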
  • Something different: Russian and Chinese aircraft carriers and the radar & weapons systems being used. The story of the Liaoning is especially interesting.
  • Broadcom PhyR - DSL Physical Layer Retransmission, Bit Error rate (BER), Reed Solomon (RS), coding gain (CG), forward error correction (FEC), error correcting code (ECC), Impulsive Noise Protection (INP), Repetitive Impulse Noise (REIN), Interleaving, Error Propagation, IPTV.
  • Chocolatey - APT-GET for Windows? Yes, I've always loved a proper package manager.
  • BoxStarter - Easy Windows environment installations.
  • Played with SqliteAdmin and SqliteBrowser. I think the browser one was better and clearer for my humble needs. The admin one is better if you've got more stuff to do.
  • Authenticated SMTP might be blocked in some cases. Elisa (a Finnish ISP) seems to be blocking even authenticated SMTP going outside their network. So the operator is basically forcing customers to use their SMTP out relay.
  • Lars Knudsen, Accumulated Block Chaining (ABC) mode; Carl Campbell, Infinite Garble Extension (IGE) mode
  • Cryptography is hard. - So you want to crypto?
  • Big data quality issues... Just such common daily stuff. Without constant tracking, monitoring, auditing, trail logs and reporting, it's going to be just garbage.
  • Block, hash, delta, data compression, de-duplication... Duplicati, Duplicity and generic rsync etc. related discussions.
  • Batch, realtime, near realtime, integrations, etc. I thought I would write about this... but I guess this post is already long enough.
  • Just want to remind people that something which is triggered asynchronously after something has happened is NOT realtime. It seems that some people are pretty confused about this.
  • Had interesting discussions with ThinStuff guys. Discussions were related to update automation, security, authentication, notifications, RSS feeds, timing and so on.
  • Keywords: customer's point of view, user experience, usability. Technical sales support tasks. Key customer responsibility. Customer contacts. Service desk experience. Wide-ranging experience scope. Let's-get-this-done attitude. Don't spend days thinking about why we can't do it, if there's no real reason. Self-guided: I'll study whatever I think I need to know to get the job done.

Sorry if there's something unclear. I just dumped this stuff as fast as I could. Now my backlog contains only 385 entries for 2014. ;) I'll be posting more soon.

Samsung S3 4G LTE KitKat 4.4.4 update (Finland)

posted Jan 23, 2015, 10:22 PM by Sami Lehtinen   [ updated Jan 26, 2015, 9:15 PM ]

Interesting update as usual, lol. First of all, it took forever before it was delivered OTA to Finland. The change log, as usual, lacked tons of important things. Which doesn't surprise me; changelogs are usually just big marketing lies, unless it's about some very well managed open source project.

Unnecessary software was added to the phone as usual:
  • Samsung uPic
  • Google Drive
  • Google Play Movies & TV
  • HP Print Service Plugin
  • Samsung Print Service Plugin
Other changes:
  • Alarm clock widget stopped working.
  • Some widgets won't work at all anymore, and some got removed completely and aren't available anymore.
  • UI is slightly different.
  • The status bar icons at the top of the screen don't use colors anymore.
  • My own impression is that the phone seems a bit more responsive in general.
  • Gallery sharing now works. The previous version always crashed immediately when I tried to share any photos. Now it seems to be working again.
  • It remains to be seen if the email client has been fixed. Earlier it got ridiculously sluggish after being used for some time.
  • Samsung keyboard continuous input got really bad compared to the old version.
    Now it places the space at the end of words, or doesn't place it at all. Earlier it inserted the space at the start of words, so it was much easier to delete or change words. Also, all manually inserted punctuation is incorrectly placed, with leading and trailing space . like this , Isn't it great . ifautospacingisdisabledwellthentheendresultislikethis. Auto punctuate does work as intended, because it erases the leading space before inserting the period. As far as I can tell, the learned data for the keyboard got cleared; not a biggie, but still a bit annoying. The new version is also especially bad with the Finnish language. If I first write the word test and then just go back and add ing to make it testing, it learns test and ing separately. Of course I would like it to recognize the whole word testing in the future. So yeah, it's really bad. You can't modify the endings of long words unless you write the whole word again.
    Or maybe this is how they want people to write: you write test and ing separately to make testing. That might work pretty well for some languages, but in Finnish the base part of the word might also change slightly, so editing the end and storing the whole word works better than splitting one word into multiple individual segments.
  • For some reason, not all applications are visible in the Apps menu or in installed applications. You might think this means the applications don't exist. Well, when you have one of those applications linked as the default app to launch with some action, it does still start. But there's no other way to find, launch or remove those apps. (?) Any tips about this?
  • Warning! In the newer Android version (KitKat), the OS prevents 3rd party apps (not preinstalled on the device) from writing to external USB flash or SD card. - So this kind of cripples SD card usability for many purposes.
And the list of official changes:
  • OS upgrade - KitKat OS 4.4.4
  • Music Album art will be displayed on the lock screen while playing music.
  • The new launcher setting menu has been added. - Home Screen, SMS.
  • Camera shortcut is added in the Lock screen.
  • Improved the User Interface and stability.
  • Some changes may vary depending on country or network operator.
  • After updating, you will not be able to downgrade to the old software because of updates to the security policy.

Other things to notice:

  • Update takes more than one hour, can take up to two hours.
  • Installation itself takes about 15 minutes.
  • Wondered why the download was so slow, even though it seemed to be served by CloudFront.

That's it for now. I'll update this list when and if I find meaningful differences. After all, the changes are quite small; the only major drawback is the new Samsung keyboard version if you're using continuous mode. Some discussion about this update at Google+.

kw: Samsung S3 Android, SIII, Samsung S III, Finland, Suomi, Finnish, päivitys, ominaisuudet, muutokset, changes, features, experiences, review, Samsung Galaxy, GT-I9305N, GT-I9305, NEE region.

Telegram, OBMP, AMQP, Projects, Residential Network, Cloud Automation, Google Pub/Sub, Git, FISC

posted Jan 18, 2015, 1:22 AM by Sami Lehtinen   [ updated Jan 18, 2015, 8:06 AM ]

  • Attack on Telegram - Security is hard.
  • The first fail is that users probably won't check the fingerprints at all, so there's no need to find a suitable fingerprint. It's so common to see this fail happening all the time. Another favorite story is that they forgot their password / lost their key, and now you'll need to use a new one. Who bothers to check its authenticity?
  • Checked out OpenBazaar Market Protocol (OBMP) tools
  • Read The problem with Angular, Why you should not use Angular
  • Also read a not-so-interesting article about internal business communication in companies. As we all know, it's bad or worse.
  • Reminded myself about the Advanced Message Queuing Protocol (AMQP)
  • Checked out the new residential building networking specifications (in Finnish) by the Finnish Communications Regulatory Authority (Viestintävirasto)
  • Once again a typical project. This is just re-applying an existing product in a new environment; no changes should be required, just slight reconfiguration. After a while there's a huge list of different new requirements, and of course those should be done immediately, on site, and put straight into production. Business as usual. Then everyone wonders why there are problems. Well, what about really thinking through the requirements? What about doing proper coding (which takes time)? Now everything is just hacked on, the simplest possible dirty execution which might work. No testing before putting it into production, because that would take time, and nobody has time to test anything anyway. Well well. At least there's one thing I won't skip: committing the changes into git at the office after the changes have already been made in production. In some cases the only working copy of the program with all changes actually exists in the customer's production environment. There might be a copy of it elsewhere, but it could be out of date. Horrible, but that's exactly how the customers request things to be done. It might seem cheap and quick, but the bill will come later, with potential bugs and really bad maintainability.
  • Explored more cloud process automation and configuration management, so that systems can be fully automatically deployed and configured into production without any manual intervention. This is how things should work.
  • Wondered how some really inefficient companies can exist. Some companies take several weeks and invoice ridiculous amounts for tasks which should be done in seconds, fully automatically. How can these inefficient companies even exist? I guess the reason is an inefficient market with clueless customers. There are huge differences in service and automation levels between different cloud service providers.
  • Should encryption be illegal? "British Prime Minister David Cameron proposes outlawing communications that the government cannot eavesdrop on." 
    Finland is also discussing whether there should be mass surveillance of everything.
  • Checked out Google Pub/Sub - and Google Cloud Monitoring
  • I've always wondered why the solution to an "unresponsive system" is adding just a few more CPU cores. Is it really true that developers today don't know that processes and threads can have different priorities? I always set heavy tasks below average priority. I've often heard that we need more CPUs to make the system responsive. To me that sounds more like a priority issue than something a few more cores would fix.
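    For illustration, on POSIX systems a heavy batch job can lower its own priority with os.nice before starting work, so interactive processes stay responsive (Windows would need SetPriorityClass instead):

```python
import os

# os.nice(n) adds n to the process's niceness and returns the new
# value; higher niceness = lower CPU priority. nice(0) just reads it.
if hasattr(os, "nice"):                  # POSIX only
    before = os.nice(0)
    os.nice(10)                          # de-prioritize this process
    print("niceness raised from", before, "to", os.nice(0))

# ... the CPU-heavy work runs here at low priority ...
total = sum(i * i for i in range(100_000))
print(total)
```

    Unprivileged processes can only raise their niceness (lower their priority), never lower it, which is fine for this use case.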
  • Thought about some of my projects; the Lean Canvas has also evolved (FTE Canvas)
  • 25 tips for intermediate git users
  • Future, more cloud SaaS, PaaS, IaaS, orchestration, predictive analytics, big data, social media, consumer data, Internet of things, digital transformation, wearable mobile technology, networked economy, seamlessly integrated and mash-up applications. 
  • I've seen lately that most servers getting hacked are hacked by fully automated botnets. Those often take over the server and don't actually touch anything, except to add the server to the botnet and keep scanning the network for more bots.
    Of course this is a very dangerous assumption: the system got hacked, OK, we removed the added processes, let's just continue as things were. A skilled attacker might do exactly that to make administrators think the hack was completely automated by script kiddies, so there's no need to worry about the overall security of the system now that the "malware / parasite" processes have been evicted.
    The only secure way to deal with this would be a full re-installation of the system. Even restoring from (full) backups might not be a good idea, because the exact time when the system was taken over might be unclear.
    On the other hand, it's interesting to see that even when attackers gain access to servers with valuable information, the information might be left completely untouched, because they have so many servers that they can't individually analyze what they even gained access to. Which kind of makes me think how bad the security situation is. They're not like "wow, we hacked a server"; they're more like "OK, one more for our botnet of a million servers, who cares what's on it".
  • Checked out Finnish Information Security Cluster (FISC) - Security Management, Policy Management, Cybersecurity, Security Technology, Enterprise Data Security
  • New Snowden docs indicate scope of NSA preparations for cyber battle. - Doesn't surprise anyone, does it?
  • Pony Foo Cross-tab Communication using HTML5 - I just wish more apps would use something like this, because many web apps totally break when you start to use them in parallel tabs.

Backlog still lingering with 768 entries, ugh! I'll deal with it some day. ;)

Docker, Ori, ThorCon, HFS+, OpenBazaar, Ratchet, Monitoring, IPv6 uWSGI, Remmina RDP

posted Jan 11, 2015, 9:56 AM by Sami Lehtinen   [ updated Jan 11, 2015, 10:32 AM ]

Some fresh stuff for change.

  • Microsoft now offers Docker images in the Azure Marketplace & other Azure related stuff.
  • Ori File System - A secure distributed file system - No personal thoughts about this; these projects seem to come and go non-stop. I did read the basic documentation and as far as I can see it doesn't provide anything special, but it is available for multiple operating systems.
  • Something different: ThorCon Nuclear power plant design. It's based on the Molten Salt Reactor Experiment (MSRE). The most interesting aspect is that the design is walkaway safe: even if people disappeared from the planet right now, the power plant would still make an automatic controlled shutdown. Really interesting stuff, modular safe design. I had to read everything they offered on their site. Also read the Coal and Petroleum articles just to remind myself about this stuff.
  • Checked out the HFS+ article, just for generic file system comparison information. I haven't ever used HFS or HFS+. Support for things like extents and data inlining doesn't look bad at all.
  • Reviewed extensive Security Model documentation for one project, including threat models and assumed adversaries (users, corporations, governments, developers). Reasons for attacks: financial gain, disturbing the functionality of the product and making it unreliable and insecure, breaking trust, weakening the network & connectivity, unmasking users and breaking anonymity, blocking certain objects from the network, sybil attacks, man-in-the-middle attacks, developers pushing code which damages software integrity and functionality, DDoS / denial-of-service attacks. Password policies; mobile, desktop and laptop security settings and policies. Public key encryption: key management, storage, security, protection. Data access policies: never use devices which aren't fully under your control to access any trusted systems. Don't use any unknown hardware, like USB sticks or other bus-connected devices. Password protect the BIOS. Prefer Full Disk Encryption (FDE). Sign & encrypt all important messages. Only use programs from trusted sources. Always verify binary & program integrity. Never install or run arbitrary programs or scripts. Prefer the package manager over web downloads. Don't fall victim to social engineering attacks. Always verify contacts & identity. Using 4096 bit RSA keys is recommended. Users are required to follow security guidelines; compliance checking is done monthly. Revoking access, granting access, checklists. All requests must be signed and separately verified to be authentic. Writing secure code: avoiding XSS, SQL injections and transferring confidential data without encryption. When things require privacy, be very careful about encryption, data expiry policies and logs, as well as with cloud services.
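    For the "always verify binary & program integrity" point, a minimal sketch of checksum verification (Python; the expected checksum would come from an out-of-band source such as signed release notes, and all names here are illustrative):

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 so large binaries aren't read into RAM at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path, expected_hex):
    """Compare against a checksum published out-of-band.

    hmac.compare_digest does a constant-time comparison.
    """
    return hmac.compare_digest(sha256_of(path), expected_hex)
```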
  • OpenBazaar will be at FOSDEM'15.
  • OTR Advanced Ratchet / SCIMP Ratchet Future Secrecy, Axolotl Ratchet - Ways to use temporary short-term public keys, so that the same key is used for only one message (or a small number of messages). This just automates what I said about GnuPG and ephemeral keys: of course you can change keys and generate new ones as often as you want. In this case, keys are renewed basically on every round trip.
  • Wrote a program which checks multiple systems for default credentials. But then the reality check: what's the point of monitoring systems for default credentials? To be sure there aren't any? Well, no, because there are plenty. The whole point is that using default credentials won't break anything, so nobody's interested in doing anything about it. Business as usual: tons of gaping security holes, everything is well and working, so why worry?
  • Finally got my test project to fully work with IPv6. The original problem was that uWSGI only listened for tcp4 connections. After asking around, one guru told me that I need to use the --http [::]:80 parameter to enable listening and serving on both IPv6 + IPv4. That's nice, but there's no documentation whatsoever about this question. This is once again something you just need to know, or guess based on an extensive knowledge base.
  • Found out that Remmina connectivity problems are due to setting RDP connection encryption to high. NLA and TLS work well, but if encryption is set to high, things break. I don't yet know why this happens, but I'll try to find out. Actually I don't even know if that encryption setting is especially meaningful when TLS is used to secure the connection instead of legacy RDP encryption. Kw: remote desktop protocol, linux, os x, mac, ubuntu, remmina, windows server 2008 r2, windows server 2012, remote desktop problems, unable to connect, won't work.

I'll be dumping more stuff from backlog later.

Filezilla @ SourceForge = Malware

posted Jan 7, 2015, 8:00 AM by Sami Lehtinen   [ updated Jan 7, 2015, 8:46 AM ]

A Hacker News discussion and comments about this topic, which I posted earlier today.

I just hate it when good programs and recommended official download sites can't be trusted at all.
 
Filezilla client setup.exe @ VirusTotal

Filezilla server setup.exe @ VirusTotal
Filezilla server setup.exe @ MetaScan
Filezilla server setup.exe @ VirusScan (Jotti's Malware scan)
 
Downloaded the client setup again; it now has different content and a different hash, but it still contains the same malware.

It seems that the malware package is customized separately for each download, because file hashes never match previously downloaded versions.
Many others have noticed the problem, but malware is STILL being delivered.

I would personally prefer that safe-browsing and other similar security toolbars would directly warn that SourceForge is a dangerous malware site which shouldn't be visited at all.

SourceForge is the official source for Filezilla binaries, which makes me really sad. If this were just a random "fake downloader site" it would be a different story. But now the Filezilla project is publicly supporting installation of malware. This is ... speechless ...

Never trust .exe files (or any other files either!), even if they come from reliable and reputable sources. Just build your projects directly from the source code.

This is a HUGE problem for production systems and it made me really mad.

#filezilla #sourceforge #malware

Topic mega dump 2015

posted Jan 6, 2015, 10:44 AM by Sami Lehtinen   [ updated Jan 6, 2015, 10:44 AM ]

Start of 2015 topic mega dump. Unordered random stuff. Just leaving it here.

  • Wrote a tool which checks whether database replication is working as it is supposed to. If there are any differences, detailed reports are generated. I just wonder why nobody has done this before. This is just the usual situation: developers claim that everything is working, operators say it isn't. Nothing is done and the problem goes on for years, even though writing an additional tool to check and automatically document any problems would be trivial. Been there, done that, more times than I can count with a longint. 
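    A minimal sketch of the idea (not the actual tool; sqlite3 and the table names are just for illustration): fingerprint each table on both sides and report the tables that differ.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Row count plus a checksum over the rows in a normalized order.

    A cheap drift detector: a mismatch would trigger a detailed
    row-by-row diff (not shown here).
    """
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    checksum = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), checksum

def replication_diff(primary, replica, tables):
    """Return the tables whose contents differ between the two servers."""
    return [t for t in tables
            if table_fingerprint(primary, t) != table_fingerprint(replica, t)]
```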
  • Had to deal with the CERT guys. A few servers got hacked, and what were the hacked servers used for? Of course for hacking more servers. Then I found a log list containing administrative account credentials for a few hundred Windows servers, which all seem to respond nicely on 3389. But I didn't dare to log in without Tor, so I didn't actually try any. 
  • Experienced a few stability issues with OpenBazaar and its ZeroMQ implementation. The client might crash or hang and requires frequent restarts to work reliably. Can that even be called working reliably, with the constant restarts, duh! It also seems that they have some problems with peer and connection management: the system might make ~100 parallel TCP connections to the same node, as well as constantly keep 100+ "half open" SYN_SENT connections to peers which never respond, etc. I've also heard reports about the DHT routing table and peer information not expiring in reasonable time, causing a situation where "gone" peers are attempted over and over again, even though they're gone. For networks like OpenBazaar, with naturally high node churn, this isn't good behavior. If the network were bigger, this could easily be used as an attack tool against smaller TCP based services.
  • Wondered about administrators who install a RAT (remote access tool) without a password. Lol, yes, it's very handy: anyone anywhere can connect and do whatever. It's also an especially good idea to install the RAT using the System account, so you can easily reset the Administrator account password remotely, in case you happen to forget it. 
  • There was a long discussion about whether data should be stored in a database or as files. Well, if you really think about it, a file system is just a hierarchical key-value store. It's no different from a standard dictionary implementation: you can store dictionaries in dictionaries (called directories), keys are file / directory names, and content is what it is.
    Just as FTP doesn't have to have anything to do with files, nor HTTP with pages. Anything can be relayed / mapped over anything and encoded inside it. Many ERP related FTP servers actually do not handle files at all: they serve data blobs from a database, and as I said, a file system is a database. There's actually no way for the user to know whether the FTP server handles files as the host operating system knows them. It's only a question of when it's smart to use some specific method due to its common availability. Like I said, my integration systems run over multiple protocols: sftp, ftps, ftp, scp, http(s), json, rest, xml, csv, smtp, webservice. It doesn't matter, it really doesn't; technically it's all just bits. 
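    The "file system is a dictionary" point as a toy sketch in Python (the key scheme is made up): directories act as nested dictionaries, file names as keys, file contents as values.

```python
import os

class FsKV:
    """Treat a directory tree as a nested dict: subdirectories are
    sub-dictionaries, file names are keys, file contents are values."""

    def __init__(self, root):
        self.root = root

    def put(self, key, value):
        # "users/42/name" maps to <root>/users/42/name
        path = os.path.join(self.root, *key.split("/"))
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(value)

    def get(self, key):
        with open(os.path.join(self.root, *key.split("/")), "rb") as f:
            return f.read()
```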
  • Donated money to GnuPG project, because it really is essential privacy / security tool.
  • DNSSEC allows DDoS via reflection and amplification attacks. Is DNSSEC bad? What should be done to fix it?
  • In Finland a DDoS attack against OP Bank also caused its ATMs and credit & bank cards to fail. But why? It's clear that system separation isn't done properly. If an attack against a web site brings the whole bank down, there's something wrong with how they have implemented their infrastructure. I guess military guys could tell them why out-of-band is a great idea. Relying on the public internet is a vulnerability waiting to be exploited.
  • Someone just woke up to my earlier comments about "anything over anything". This one is just "yay, we could pass bits over an SSH connection, why aren't we using SSH for everything?".
  • The Hidden Costs That Engineers Ignore - Been there, done that; nothing new in this article. But if you're an engineer who's just doing the work without the big picture, it's well worth considering. KW: Code Complexity, System, Hidden, Product, Organizational, Simplicity, Focus, Modular Structure, Interfaces and APIs, Standardization, Refactor when required, Prune un-used features & code, Themes.
    Complexity isn't always even that hidden. In some cases engineers decide to "make things simpler" by using their very complex data model/structure as the standard integration format. Isn't that great? Why isn't it? Well, because the data format is so darn complex that it's really hard to decode into any reasonably simple format. This only leads to a situation where everyone else doing the integration has to deal with that awful and extremely complex format. Bugs and slow & expensive development are guaranteed, in every integration case. Real win, isn't it?
  • Dark Internet Mail Environment Architecture and Specifications (DIME) - Yet another private email implementation which does allow Mixed mode including Dark and Naked messages, also introduced Dark/Multipurpose Internet Mail Extension (D/MIME) and utilizes Onion (Tor) as one routing option, Dark Mail Transfer Protocol (DMTP).
  • ThunderStrike 31c3 - Hacking Apple EFI. Protecting software and hardware from unauthorized modifications.
  • Bad performance isn't a problem, until it is. - That's well said. In many cases programmers don't mind performance at all, and develop programs which consume ridiculous amounts of resources by doing things in some silly way. I've had multiple interesting discussions with my friends about this topic. How low can you really go? I mean good versus bad performance, if you just bother to think about how things should be done and why.
  • We love surveillance [31c3] - Yeah, that's about it. Well said. Why would you want privacy or encryption; isn't it simply aiding terrorists and criminals?
  • Apple HSTS super cookies - With some browsers, HSTS can be abused to create super cookies which can't be removed. Privacy flaws are just about everywhere.
  • Reminded myself about Remote Desktop / Terminal Server - RDP security settings, Encryption High, NLA, TLS security, etc.
  • Steve Gibson from GRC introduced SQRL (YouTube) - He also covers many of the authentication related topics pretty well on the video.
  • wifiphisher - Automated WiFi / WLAN phishing attack tool.
  • StackExchange performance - How they handle 560 million page views every month. It's all about performance.
  • PostgREST - Automatic RESTful API generation from a PostgreSQL database. Exactly what I said earlier: it's not so hard to map data to an alternate format. This tool fully automatically generates a RESTful API for a whole PostgreSQL database. Yes, it's probably not the optimal way of doing things, but it's still a generic and quick way to get things done, quite simply, if that's required. Very cool stuff after all.
  • How does the SQLite3 work, part 1, part 2
  • I love simple and efficient solutions. Large standards which try to be everything for everyone are just horrible. Trying to do some simple things over those is usually ridiculously hard, because there's so much overhead on too complex implementations.
  • Dangers of public wifi use. Nothing new. Nobody reads the ToS, so it could say anything. It would be interesting to write a ToS which allows me to abuse all of your accounts: when you use my wifi it's OK to MitM you and steal your data, because you agreed to it when you started using my wifi.
  • How Hong Kong protesters are connecting without cell or wifi networks. - Firechat, decentralized, mesh networked messaging application.
  • PostgreSQL outperforms MongoDB. This just shows how great PostgreSQL simply is. It's such a refined and tuned application. A real workhorse for handling data.
  • Not a bash bug - Posting about Shellshock
  • All of my friends are now running their own mail servers and all connections are now encrypted and authenticated using: ECDHE-RSA-AES256-GCM-SHA384
  • Should you use an ORM or just plain SQL? Do you use an ORM extensively, or only an ORM? Do you use SQL at all? I've had my problems with ORMs, and as far as I can see, often the only way to debug ORM problems is a complete understanding of the SQL statements it generates and running those through EXPLAIN. This post made me smile, because when debugging things I've thought exactly the same.
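    A minimal sketch of that debugging step with sqlite3 (the table and index names are made up): feed the ORM-generated statement to EXPLAIN QUERY PLAN and check whether an index is actually used.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# The kind of statement an ORM might generate; EXPLAIN QUERY PLAN
# reveals whether SQLite will scan the whole table or use the index.
sql = "SELECT * FROM orders WHERE customer_id = ?"
plan = conn.execute("EXPLAIN QUERY PLAN " + sql, (42,)).fetchall()
plan_text = " ".join(str(row) for row in plan)
```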
  • Credit card debt, how do they maximize it? - I just received an e-invoice from my bank. The only problem is that it really sucks. The invoice contains tons of rows, and the payment information only gives the minimum payment you can make, so they can charge all kinds of fees from you. But guess what, the amount I should pay to avoid those surcharges? It's nowhere to be found. If you're at all smart, you'll need to copy-paste the invoice into a spreadsheet (Excel / Libre Calc) and calculate the sum of the rows to find the final amount you should pay. Absolutely horrible user experience and usability. So much fail! I really don't know who designed this, but I think they're clearly trying to maximize the amount of debt people carry.
  • Making sure crypto stays insecure. - This is how we're all being seriously misled into trusting the cryptography handed to us. Just use it, it's guaranteed to be safe. Sure.

Topic mega dump 2013 (1 of 2)

posted Jan 6, 2015, 10:36 AM by Sami Lehtinen   [ updated Jan 24, 2015, 5:45 AM ]

Yeah, I know it's already 2015, but this is exactly the reason I'm just doing this quick dump:
  • Multi-tenant, cloud based, on demand capacity scaling. Average server load is under 10%, so what do we need the other 90% for?
  • Intel HTML5 tools
  • SQLite3 parallel access
  • Error detection, handling and self recovery
  • BGP / Anycast
  • Implementing message queues in relational databases, I think I've covered this topic already. Not optimal, but works.
  • Database as queue anti pattern, and again. I even posted longer post about this. But other queue solutions might not be much better, if data is being queued for too long.
  • I have seen way too many directory / file based or database based queues. - Yet those do work as well. My current mail server is actually doing just that. (Postfix, with maildir)
  • So, about using a database as a queue: here are a few problems and a few suggestions for improving it (a post by me). 
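    One of the usual fixes, sketched with sqlite3 (the schema and status values are illustrative, not from the actual post): claim a job by atomically flipping its status, so competing workers never process the same row twice.

```python
import sqlite3

def claim_next_job(conn):
    """Atomically claim the oldest pending job by flipping its status.

    The UPDATE acts as the lock: only one worker's UPDATE matches the
    row; the others see zero affected rows and skip or retry.
    """
    with conn:  # commits (or rolls back) the transaction
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' "
            "ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None  # queue is empty
        cur = conn.execute(
            "UPDATE jobs SET status = 'running' "
            "WHERE id = ? AND status = 'pending'", (row[0],))
        if cur.rowcount == 0:
            return None  # another worker won the race
        return row
```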
  • How to design REST APIs for mobile.
  • Web store, card acquirer, point of sale, bookkeeping, invoicing, ERP, card issuer, card transaction clearing, order number, data structure, ticket sales, reference numbers.
  • Physical Aggregation, Fabric Integration, Subsystem Aggregation. Solution where there are several completely separated modular subsystems like processing, memory and I/O.
  • Whole rack is "single computer" with shared resources. (Rack-Scale Architecture, Open Compute Project, silicon photonics technology, Intel Atom S1200)
  • Slow database server? Yeah. After checking why it's slow, I found a process which repeatedly requested 69421 rows 536 times in 10 minutes, and another query by another process which requested 55512 rows 536 times in 10 minutes. So some of these queries were basically running in a never-ending loop. This raised interesting questions: should the code be fixed, is it designed correctly, why is the data being requested all the time, how much memory should the server have, do the requests even hit indexes, and so on. The engineers thought it would be best to add a lot of memory and CPU resources to the system, because it's slow now. Finding these things out took less than an hour; whether the issue ever gets fixed remains to be seen. I guess the fix will be adding more resources after all.
  • Performance issues, an endless loop without delay. Why are solutions x, y, z being applied when nobody knows what the problem is? I have seen this happen over and over again.
  • Discussed the above matters with independent IT professionals and they all agreed that these problems are really common: code is horrible, there's a lot of lock contention and so on, performance tanks, and even adding resources might not help when things are done so badly. One guy said that they're having huge problems with their ERP system's database. The ERP system alone runs just fine, but some reports written by 3rd party consultants seriously abuse locking, causing lock contention and basically stalling the whole database server.
    It's a generic problem that people do not understand what the problem is, nor are they able to use tools which would reveal it. Instead, they just do something random and hope it fixes the problem.
  • The total home renovation project is still taxing my IT 'hobby'; I've been so busy with it. I'm getting new just about everything: every surface in the apartment is being renewed, as well as plumbing, networking, electric cabling, floors, walls, kitchen, home appliances etc. I hope this means I don't need to worry about these things for the next 20 years or so.
  • Haiku OS - Played with it for a few hours.
  • Tachyon - Fault Tolerant Distributed File System with 300 Times Higher Throughput than HDFS
  • Designing REST JSON APIs - Nothing new at all. But it could be problematic with mobile apps if round-trip times are high. All required data should be received using one request, even if it technically involves multiple resources. So some kind of 'view' resources should also be available, which technically merge multiple lower-level resources.
  • Working with systems which are broken by design is really painful and frustrating. There are multiple scheduled tasks which might or might not run; if any of those tasks fails, data is lost and complex manual recovery is required, etc. I don't want to see that kind of system at all anymore. The manual recovery process is so darn painful, especially when it happens rarely: let's say the crap fails 0.5-2 times / year; nobody even remembers how the system should be recovered. Simple and reliable are the keywords I really like. Sometimes I do prefer a multi-step process where at each step you can easily check the data and state, so you know that everything is good so far. One big black box which takes something in and spits something out can be a lot harder to debug.
  • More great code. Let's assume we have a huge database, and each database row has a flag indicating whether the row has been processed. I would prefer a pointer to a monotonically increasing counter instead of a per-row flag, but this is how someone implemented it. Now they're doing it like this:

    SELECT type, and_so_on WHERE processed = 0

    The data is getting processed:

    if type == do_stuff_type:
        do_stuff()
        update processed = 1


    Well, guess what? All rows which aren't of do_stuff_type are selected over and over again. How about just updating the processed flag outside the if statement? Or maybe skipping those records in the select query (which is still inefficient)? The current implementation is exactly what I didn't want to see. Sigh.
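    A corrected sketch of the loop above (sqlite3 for illustration; the table and type names are hypothetical), with the flag update moved outside the if:

```python
import sqlite3

DO_STUFF_TYPE = 1  # hypothetical type constant

def process_pending(conn, handle):
    """Mark every selected row processed, whether or not it was handled,
    so rows of other types aren't re-selected forever."""
    rows = conn.execute(
        "SELECT id, type FROM events WHERE processed = 0").fetchall()
    for row_id, row_type in rows:
        if row_type == DO_STUFF_TYPE:
            handle(row_id)
        # The flag update sits outside the if: that's the whole fix.
        conn.execute("UPDATE events SET processed = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)
```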

  • Configured mail client to show spamgourmet and trashmail specific headers by default, so I don't need to use show all headers or message source to see a few important fields.
  • PGStorm accelerating PostgreSQL with GPU. 
  • MongoDB hash based sharding
  • Wildfire.py - Self-modifying Python bytecode: w.i.l.d.f.i.r.e
  • Hardware based full disk encryption (FDE)
  • Firefox seems to use SQLite3 in WAL mode; that's a good choice when there are lots of writes.
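    Enabling WAL mode is a single PRAGMA; a minimal sketch (the database path is a throwaway temp file, since WAL doesn't apply to :memory: databases):

```python
import os
import sqlite3
import tempfile

# WAL must be set on a file-backed database.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# In WAL mode, readers proceed concurrently with a single writer,
# which is why it suits write-heavy workloads like a browser profile.
```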
  • What's new in Linux 3.9
  • An experienced developer can give you a huge list of what to do and why, and what not to do and again exactly why. I've seen millions of ways of writing extremely bad and unreliable code, and I can tell exactly why not to write such code. Nothing kills productivity like totally unreliable code.
    Use transactions and locking, handle exceptions, give clear error indication, log possible issues. Use auto-recovery if possible. It's not that hard; it should be really obvious to anyone. Don't write stuff that totally kills performance: use batching, sane SQL queries, indexes, etc.
  • WiDrop - Wireless dead drop. As far as I know there isn't one in Helsinki yet. Maybe I should build one? I'd just have to ask some friends living right in the center of town to run it. Or maybe I could find a local business to host it, which probably won't work out after someone abuses the dead drop with the wrong stuff.
  • When using Windows as a file server, I just wonder why people always also give remote desktop access to the server. It's just like when people share sftp / ftp / ftps / scp accounts: they usually also give shell access. I guess that tells something about the administrators.
  • SaaS company providing superior Web-based security solutions for businesses, institutions, and government agencies to securely encrypt, time stamp, store, transmit, share, and manage digital data of any kind and size across a broad array of operating systems and devices, ranging from smart phones to Supercomputers. These services are further supported by our comprehensive and certifiable audit trail, including irrefutable time stamping. The proprietary methodology at the core of safely locked allows for a wide range of applicability of its software and services, all of which brings to the world market innovative, highly secure, interoperable and cost-effective services and products. - Sounds pretty cool.
  • Inception - Inception is a physical memory manipulation and hacking tool exploiting PCI-based DMA. The tool can attack over FireWire, Thunderbolt, ExpressCard, PC Card and any other PCI/PCIe interfaces. - If there's physical access to your systems, you're so owned.
  • Linode started to use TOTP
  • Flash Cache, Bcache, Bcache Design
  • WiDropChan - anonymous wireless WLAN chan with HTML5 off-line support. Just a thought experiment; could be useful for someone. Providing pseudonymous access with private messaging & attachments would be a nice bonus. The server software would provide basic access control, flooding protection, a captive portal and HTML5 client off-line features. To update data it would be enough to visit the page when you pass by; after that, you could handle messages and files offline.
  • PCBoard BBS system - Sorry, no time to write about it. It would have been a good example of file based locking and a multi-computer shared environment.
  • Babel routing protocol
  • Thinking about games and their closed world and money systems is a great exercise. How do you make the game fair? How do you prevent inflation and deflation, and generally control the supply of money in different economic situations? Making the game fair in the sense that all the money doesn't end up with one user, etc. I spent a lot of time thinking about these concepts, but I never wrote about it, nor did I write any code. 
  • Big-O Cheat Sheet - This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science.
  • When bad data is received, what should be done? Halt, continue, log an error, send an alert? I've often found that the best way is to halt; it forces the problem to be investigated. If processing continues and the error is only logged, it might take months before anyone notices that things aren't as they are supposed to be.
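    A fail-fast sketch of that policy (the record schema is made up): raise on the first bad record instead of logging and continuing.

```python
def import_records(records):
    """Halt on the first bad record: a loud failure forces investigation,
    while log-and-continue can hide corruption for months."""
    imported = []
    for i, rec in enumerate(records):
        if "id" not in rec or not isinstance(rec.get("amount"), (int, float)):
            raise ValueError(f"bad record at index {i}: {rec!r}")
        imported.append(rec)
    return imported
```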
  • Had a long discussion with a good friend about: fluid intelligence, crystallized intelligence (aka wisdom)
  • Read: The Dangers of Surveillance - Neil M. Richards - Washington University School of Law
  • Strangest problem ever. I made a too-large transaction by issuing:
    delete from junktable;
    commit;

    It took a long time, and after the commit log got huge, the database service crashed. After that, when restarting the database server, the crash recovery crashed too. I was forced to delete the transaction log, and after all this the database was totally corrupt. Great going! Luckily I was using my test environment and not a production environment. - Phew!
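    One way to avoid a single huge transaction is to delete in small committed batches, sketched here with sqlite3 (the batch size is arbitrary):

```python
import sqlite3

def delete_in_batches(conn, table, batch_size=10_000):
    """DELETE in small committed batches so the transaction log stays
    bounded, instead of one huge 'delete from junktable' transaction."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} LIMIT {batch_size})")
        conn.commit()  # each batch is its own small transaction
        total += cur.rowcount
        if cur.rowcount < batch_size:
            return total
```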
  • I really dislike programs which malfunction in ways that require operator / admin attention. When things are out of beta, they should just work, work and work; not fail randomly, giving everyone a headache.
  • Bitmessage load spikes on higher-tier streams - forever-continuing retransmissions. The protocol used to synchronize data between nodes in the same group isn't described. Possible traffic timing correlation attacks when using an outbound message relay. Passive mode makes retransmission and proof-of-work even worse.
  • Skimmed The Checklist Manifesto, and watched the Fukushima Two Years After documentary, where they analyzed the failures in the essential processes of cooling the reactors. In very short form: after the power outage, the passive emergency cooling system was off and they didn't even realize it.
  • Just like database managers saying: yeah, there's that commit log, just delete it. Great choice guys; it seems you don't know what the function of the journal after a crash is. It's true that recovery after a crash is faster if the journal is deleted, but it leads to data corruption. Great move, yet it seems to be one of the standard tools in a DBA's toolset.
  • Spanning tree, real-time web applications
  • Tested: Freedcamp, Trello, Asana, Apptivo - For software development teamwork I liked Freedcamp; for very simple task management I loved Trello. But Apptivo seems to be the most complete product of these. Maybe a bit too heavy for simple tasks, but in general it seems to be the best tool of them all.
  • Bitmessage - Only route refresh messages should be flooded. Flooding every message to every stream node is very bad tihng. I just remember too well how much fail original Gnutella protocol was. It did work well with a few nodes, but it was totally unscalable by design. Message types (inventory, getdata, senddata, sendpeers)
    Client allows system to be flooded with stale TCP connections. Because I can complete 3 way handshake using raw sockets, I could pretty easily flood all nodes on the network with countless stale connections. I didn't try what would actually happen if I would flood all network nodes with millions of stale connections. But it seems that there's a real problem there. First of all tons of connections shouldn't be allowed from same it, as well as there should be some limits which would prevent keeping stale connections alive for as shorter time. Now connections remain open for quite a long time (too long?), allowing attack to be effective without any additional handshake or state.
    A) Do not allow tons of connections from same IP, especially if there hasn't been ANY negotiation. This flaw makes attack super trivial and easy. I can disable ability to connect new nodes for whole network from one of my servers with gigabit connection in a few minutes and that's super trivial and easy to do.
    B) Because the client doesn't recover from the initial attack, there's a coding flaw. Normally a client should return to 'normal state' after those connections die for whatever reason, but this doesn't seem to be happening. The client can't form even outbound connections after the attack.
    All this is followed by the classic: socket.error [Errno 24] Too many open files.
    So there are at least two vulnerabilities. The third one is pretty bad too: fake peer information is allowed to propagate throughout the network. This can also be utilized to attack other TCP services by making almost all Bitmessage clients connect to those. There's no check that a peer is valid before propagation.
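Remedy A above (cap connections per IP, and drop connections that never finish negotiating) could be sketched roughly like this. This is a minimal illustration, not Bitmessage's actual code; `MAX_CONNS_PER_IP`, `HANDSHAKE_TIMEOUT` and the `ConnectionGate` class are all hypothetical names and values for the sake of the example:

```python
import time
from collections import defaultdict

# Hypothetical limits for illustration; real values would need tuning.
MAX_CONNS_PER_IP = 8
HANDSHAKE_TIMEOUT = 20.0  # seconds a connection may stay un-negotiated

class ConnectionGate:
    """Tracks open connections per remote IP and evicts stale ones."""
    def __init__(self):
        self.conns = defaultdict(dict)  # ip -> {conn_id: opened_at}

    def try_accept(self, ip, conn_id, now=None):
        """Return True if this IP is still under its connection cap."""
        now = now if now is not None else time.monotonic()
        if len(self.conns[ip]) >= MAX_CONNS_PER_IP:
            return False
        self.conns[ip][conn_id] = now
        return True

    def mark_negotiated(self, ip, conn_id):
        """A completed handshake exempts the connection from the stale sweep."""
        self.conns[ip][conn_id] = None

    def sweep_stale(self, now=None):
        """Drop connections that never finished negotiating in time."""
        now = now if now is not None else time.monotonic()
        dropped = []
        for ip, table in self.conns.items():
            for conn_id, opened in list(table.items()):
                if opened is not None and now - opened > HANDSHAKE_TIMEOUT:
                    del table[conn_id]
                    dropped.append((ip, conn_id))
        return dropped
```

The point is that both defenses are cheap bookkeeping: a raw-socket flood then hits the per-IP cap immediately, and any connection that completed only a TCP handshake gets reaped on the next sweep.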
  • I had nine computers running an Edonkey2000 cluster, which harvested files from ShareReactor, including my own control software which took care of load balancing and allocating files across servers based on disk space, bandwidth demand and peers.
  • Early ed2k servers got overloaded when too many peers connected. The worst part was that a server only sent a list of N peers to every client. So when the swarm of clients downloading the same file grew large enough, all new peers only got information about that limited number of peers connected to the server. The rest of the peers remained unconnected because there was no peer-to-peer gossip protocol with the early servers.
  • I convinced the ED2K guys to select the rarest block rather than a random one when choosing which block to download from a peer that has multiple required blocks available. I also told the GNUnet developers that peers need local block cache eviction. Old versions didn't evict blocks, and when the cache got full they were simply unable to insert new blocks. Of course this isn't a problem as long as there's huge churn among peers which install the client, run it for a while and forget it. But it totally ruins the help provided by peers which are connected all the time and could provide considerable resources to the network if the cache were fully utilized. Note! It's not a cache, it's a data store, which remains persisted on disk when the client isn't running.
    [2015 note] This is exactly what I would also like to see from Tribler.
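The rarest-first selection mentioned above is simple to sketch. This is a toy illustration (not the ED2K code); `pick_block` and its arguments are hypothetical names:

```python
from collections import Counter

def pick_block(needed, peer_blocks):
    """Rarest-first: among blocks we still need, pick the one that the
    fewest peers can serve, so scarce blocks get replicated first.

    needed:      set of block ids we still lack
    peer_blocks: mapping peer -> set of block ids that peer has
    """
    availability = Counter()
    for blocks in peer_blocks.values():
        availability.update(b for b in blocks if b in needed)
    candidates = [b for b in needed if availability[b] > 0]
    if not candidates:
        return None  # nobody currently has anything we need
    return min(candidates, key=lambda b: availability[b])
```

Compared with random selection, this drains the "only one peer has it" blocks first, which is exactly why it reduces the cases where a huge file sits at 99% availability forever.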
  • Read: Introduction to financial economics by Professor Hannu Kahra
  • Managed Wlan systems, and configured one for a customer with five base stations
  • MitMproxy - Yet another tool for hijacking connections and stealing data
  • Finished reading Secrets of Big Data Revolution, by Jason Kolb and Jeremy Kolb
  • Watched Google I/O 2013 Keynote
  • Studied: software defined storage, universal storage platform, virtual storage platform, System Storage SAN Volume Controller, storage hypervisor, virtual software, geofencing, location intelligence, integration architect
  • User interface design, general usability (UX), and viewing product usability from the end user's and customer's roles.
  • General Data Protection Regulation
  • UnQLite - An Embeddable NoSQL Database Engine
  • Refreshed my memory about Freemail (email over Freenet) documentation and how they check if certain keys are being used and which keys should be used to publish new messages
  • Just keywords: Predictive modelling, Intermessage delay, asm.js, Gitian, yappi, pump.io
  • Bitmessage test attack client goals: network-propagated, persistent attack (an invalid message gets propagated by the network, but still crashes the peer when processed by the client). As an example, a packet passes the networking and data store parts, but the user interface displaying a notice about it crashes the client. In some cases this would even cause the client to crash again when restarted; only purging the data store would fix the issue.
  • I like web apps because those work on all platforms: Sailfish, Firefox OS, Tizen, Ubuntu, HTML5, COS.
  • What's the point of using state-of-the-art VPN software when it's configured not to use keys, and the passwords being used with it are, to put it lightly, simply moronic? It's just a matter of time before that state-of-the-art security is broken.
  • SQLite3 3.7.17 release notes
  • SQLite3 Memory-Mapped I/O
  • Bitmessage fail - The number of connected hosts isn't managed properly, which can easily lead to several problems and to running out of TCP sockets. An inventory message flood also allows a memory consumption attack (which crashed remote clients and thereby led me to detect the connection management issue). These are quite good reasons NOT to run P2P software on systems doing anything other than the P2P system itself; always properly isolate P2P systems from all other systems. Bitmessage also failed to exclude 172.16.0.0/12 addresses, so it was possible to make Bitmessage clients connect to local address space by spoofing peer addresses as described above.
    Bitmessage vanity address generation is trivial. All it takes is changing one line in the source code and some time.
    Bitmessage potential timing attack... You can flood or crash connected nodes and see how quickly the message-received confirmation comes. Timing attacks might also reveal who the sender is in the case of distribution lists. Just connect to really many nodes, do not relay the message yourself, and observe the order in which the message is offered to your node by other nodes. Restructure the network, crash a few peers and recheck. Address messages can also be used to map the network structure, because messages are flood-cast: send a message to only one peer and see in which order, and after what time, other peers offer it back to you.
  • Native applications versus HTML5 applications (in Finnish)
  • HTML5 features you need to know
  • Non-blocking transactional atomicity
  • Good reminder: You're dangerously bad at cryptography - Yes, it's as hard as any other security related topic. Getting some basics right is quite easy, but after that getting it absolutely right is nearly impossible.
  • OpenVPN, SSH and Tor port forwarding & tunneling
  • Unhosted applications - Freedom from monopoly - Distributed Standalone Applications using Web Technology
  • Parallelism and concurrency need different tools
  • Checked out Wise.io
  • Is NUMA good or bad? Google finds it in some situations up to 20% slower. - I guess it's all about memory access patterns.
  • In the upcoming Google App Engine 1.8.1 release, the Datastore default auto ID policy in production will switch to scattered IDs to improve performance. This change will take effect for all versions of your app uploaded with the 1.8.1 SDK. - Ok, so they didn't like the original concept where 1000 consecutive keys were allocated at once for a peer. Even that didn't guarantee incremental key allocation, because each thread got its own key space for performance reasons.
  • Studied more Security Information and Event Management (SIEM)
  • I'm one of the StartMail beta testers.
  • Started to use Comodo Personal Email Certificate as optional to my GnuPG / PGP keys.

Link: 2013 mega dump 2 of 2

A few thoughts about OpenBazaar, P2P and decentralization

posted Jan 6, 2015, 7:25 AM by Sami Lehtinen   [ updated Jan 6, 2015, 8:13 AM ]

OpenBazaar is decentralized P2P trading / market platform.

Some history and tech stuff about decentralized networking applications:

As I've said several times, I got really disappointed when PirateBay announced that reverse proxy technology is the latest cloud / peer-to-peer networking stuff. I was expecting them to release a fully distributed version of PirateBay as a portable multi-platform client application. Following the latest trends, they could have used WebRTC-compatible communication, allowing the clients to run in modern web browsers in fully P2P fashion. I know some people don't remember eMule, but it actually provided search & listing functions over Kademlia (Kad) in a fully distributed manner. So when people claim that torrents are so awesome, they're actually stepping back in time. Centralized trackers are something that even eDonkey 2000 (ed2k) and eMule got rid of a long time ago. It would be quite natural for file sharing systems to work fully without any need for old-fashioned web sites.

Flooding-based networks would also work beautifully as mesh networks, even if connectivity is partially or mostly lost. As an example, Bitmessage could work beautifully like FireChat: the same network could be relayed to 'local' peers over ad-hoc WLAN, Bluetooth, etc. So even if mobile networks are disturbed or whatever, as long as even one peer of the local mesh has a connection to the 'rest of the world', it would relay messages and keep everyone connected.

Yet decentralized networks can be used as a tool to attack other networks if there are bugs in the client implementation. At one point ED2K's Kad had implementation problems: peers having the lowest or highest peer ID received an absolutely huge flood of traffic, because every peer on the network was checking for them; the address space didn't properly wrap around into a full circle. I had a 10 megabit connection back then, and I got totally DDoSed out of the network after setting my peer ID to all 00s or all FFs. I'm sad for the users who didn't realize this issue and suffered from it without the ability to manually change their peer's DHT address. I'm also the guy who told the Overnet / ED2K designers that it's better to select the rarest block for download, instead of always randomly selecting any needed available block. This radically reduced the number of cases where just a few missing blocks for a huge file prevented complete availability.
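The wrap-around bug above boils down to the distance metric. A rough sketch of the difference (illustrative only; Kademlia proper uses an XOR metric, which also treats the ID space uniformly, and `ID_BITS` here is an assumed size, not ED2K's actual one):

```python
ID_BITS = 128
ID_SPACE = 1 << ID_BITS

def linear_distance(a, b):
    """Broken metric: the extremes of the ID space look maximally 'far'
    from everyone on the other side, so peers sitting at all-0s or
    all-FFs attract a disproportionate share of lookups."""
    return abs(a - b)

def ring_distance(a, b):
    """Circular metric: the space wraps around, so no ID is
    structurally special and load spreads evenly."""
    d = abs(a - b) % ID_SPACE
    return min(d, ID_SPACE - d)
```

With the linear metric, the all-0s and all-FFs nodes are "closest" or "farthest" for half the network at once; with a properly wrapping (or XOR-based) metric, setting your ID to an extreme value gains nothing.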

Then a few thoughts about OpenBazaar; I checked its source.
  1. OpenBazaar strangely leaks network traffic to the web browser JavaScript client. Or maybe this is on purpose? Why? Well, because it would allow fully WebRTC-based clients to work seamlessly with the old-fashioned client application, utilizing HTML5, WebRTC, IndexedDB, local storage, Web SQL, etc.
  2. The project seems fragmented, using multiple different technologies and making parallel implementations which aren't compatible with each other, and so on. It's hard to track what's happening. I've been part of some open source projects, but those were much more coordinated.
  3. It seems that many JSON messages are a mix of new and old data formats: some parts of the data are in a flat dictionary and some parts are in nested data structures. Everything is evolving all the time; whether that's good or bad remains to be seen.
  4. The client / server (full P2P) client has a SEED MODE which doesn't actually do anything according to the source code. I would see seed node peers working similarly to traditional Gnutella peer caches, which were often implemented in PHP, though it doesn't really matter how it's done. In seed mode I would expect the peer to keep very few connections by default, like 3-5. It accepts new connections, delivers information about alive peers / nodes / markets (whatever these are called) and then almost immediately disconnects, making the seed available for new connections. I personally modified my Bitmessage seed node to work like this: instead of starting the message database sync with new clients, it simply disconnected after delivering fresh network peer information.
  5. The client is currently quite messed up; it can easily form 60+ parallel TCP connections to the SAME peer, which is really bad behavior. I hope this gets fixed pretty soon. I've heard they're working on new rUDP code which would replace the current ZeroMQ implementation.
  6. There should be options for connectivity, the number of connections to be opened, etc. In the case of Bitmessage I'm running my own modified client branch to change these values from the originals. Currently the number of peers is so low that it's possible to remain connected to every peer, but that's of course a scalability show stopper.
  7. What about IPv6 support? I didn't see anything related to IPv6, and as we know, IPv6 is finally coming. (?)
  8. Currently it seems that only one WS connection to the browser is alive at a time. Have there been any plans for multi-tenant configurations where several markets could run on a single host? What about allowing multiple browser WS connections in parallel to one host? Let's say you're running a web shop or something like that. Or maybe in those cases there will just be an integration between the webstore and OB which feeds data from the main system to OB, so there wouldn't be any users using OB directly. If multiple parallel WS connections are created, only the latest connection receives data from the server. I don't exactly understand why that is; probably the current version is designed to be run locally with only one browser tab. But there might be situations where a different configuration would be preferred.
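The seed-mode behavior described in point 4 (hand out fresh peers, then disconnect immediately) can be sketched in a few lines. This is a hypothetical illustration, not OpenBazaar's or Bitmessage's wire format; `seed_response` and the JSON shape are made up for the example:

```python
import json

def seed_response(known_peers, max_peers=50):
    """Build the one-shot payload a seed node sends before
    disconnecting: a batch of live peer addresses and nothing else
    (no message database sync).

    known_peers: list of [host, port] pairs believed to be alive.
    """
    batch = known_peers[:max_peers]
    return json.dumps({"type": "peers", "peers": batch}).encode() + b"\n"
```

A seed-mode server loop would just `sendall()` this payload to each new connection and close the socket right away, so even 3-5 connection slots can bootstrap a steady stream of newcomers.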

I wish good luck and success for projects like OpenBazaar. I've thought about writing my own P2P application(s) for several years, but I haven't yet found a good reason. Maybe it would be time to write a fully mobile, light, secure P2P application without any centralized peers except a bootstrap service. But as far as I know, Wire should be something like that.

As I've written several times, decentralization, mesh and P2P are just bad for mobile platforms, because they consume CPU time and network bandwidth, and generally drain the battery on battery operated devices. On servers and desktops normal P2P networks work without any problems, but on mobile platforms it's a whole different story.

If you want to comment or talk about anything, just mail me. I'll be glad to discuss anything.

Topic mega dump 2014 (1 of 3)

posted Jan 2, 2015, 11:41 PM by Sami Lehtinen   [ updated Jan 21, 2015, 9:40 AM ]

Year 2014 is ending, and it's time to simply dump the topics I've studied and read but haven't had time to blog about.
  • PostgreSQL LOCK modes. Why? Because someone told me that I should acquire write locks on a table to get a consistent read snapshot. Well, at least with PostgreSQL that isn't true, though it might be with some non-MVCC databases. Naturally, locking a whole table to get a simple read snapshot is a really bad idea which easily leads to lock contention.
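The point above is easy to demonstrate without any table lock. Here SQLite in WAL mode stands in for an MVCC-style database (a sketch, not PostgreSQL itself, though PostgreSQL's plain `BEGIN` behaves similarly): a read transaction sees a consistent snapshot while a concurrent writer commits freely.

```python
import os, sqlite3, tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "demo.db")

# Writer connection in autocommit mode; WAL lets readers and the
# writer proceed concurrently without table locks.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t(x)")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")  # plain read transaction, no LOCK TABLE needed
first = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # snapshot starts here

writer.execute("INSERT INTO t VALUES (2)")  # commits while the reader is open

second = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # still the old snapshot
reader.execute("COMMIT")
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # new data now visible
```

`first` and `second` both report 1 row even though the writer committed a second row in between; only after the read transaction ends does `after` see 2. That's the consistent snapshot, with zero write locks acquired.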
  • Studied Google Prediction API, IMVU Engineering Blog Real Time Web REST
  • About the Finnish Service Channel: I've seen it so many times. People claim that it's impossible, it's too complex, etc. Yet they're wrong; I've been doing this kind of thing for ages. If engineers and programmers say it can't be done, then I do it myself. At the same time, they'll bring it up. Even when a better solution is then built and technically working, it doesn't mean it will be used, because everyone is just so tied to the old methods and thinking.
  • Security of Things (SoT) , closely related to Internet of Things (IoT)
  • Funny stuff, Quake using Oscilloscope as display
  • Microservices are not a free lunch! - I completely agree. Many abstraction layers and separation can be nice in some ways, but they also add a lot of overhead, interfaces and stuff like that, which all requires maintenance and complicates things. Yet I know a few systems like this which are in highly demanding, critical production use and work out great. Monitoring is also easy when it's all built in. But as said, it all makes a project more complex, even if it's cool. It's a good question whether customer(s) are willing to pay for that overhead.
  • Hacker News discussion - Great comments about coding styles and project management. I've seen it over and over again: absolutely ridiculous hardware resources are required to complete simple tasks, because coders don't really understand at all how things technically work. They just write code that works, at least in the testing environment. But in production, performance is worse than awful, basically causing a DoS with just a few % of full production traffic. Yeah. Like I said, been there, done that. And it seems to be happening over and over again; developers never learn.
  • WiFi / WLAN / IEEE 802.11af - Faster WLAN is almost here.
  • I got downvoted for this view, but I still stand behind it:
    "The whole password hashing and salting stuff is pointless. What does matter? The fact that passwords aren't reused anyway. Do I really care if you know that my password for service X was: c'EyqXnrq-bCyfF_dK67$j I couldn't care less whether you get it hashed or not, really. If they owned the system and they were after my data, they got it already. Password(s) are just a minute and meaningless detail. I've always wondered about this pointless discussion around passwords. It just doesn't make any sense.
    The first mistake is to let the user select an arbitrary username or password. Just provide them with secure ones, that's what I do. I've also always wondered what the point is of having separate username and password fields in the first place. Username: bjdndgEC2S4rHRZy7c8rdQ Password: 6TWe8EvxfRxxCvcyZTaBM6 Isn't it enough to concatenate username and password? It's just as secure to use one field instead of two. Btw. do many of these services pay enough attention to the potential weaknesses presented by possibly weaker session cookies?"
    The password can be exchanged using a challenge protocol and so on, or hashed with salt, whatever the selected method is. But still, if there is a security requirement, it's just silly to reuse the same passwords. Yes, I know it happens all the time, but it really should not.
  • SQRL is technically exactly what I described: it's just a "random blob" of 256 bits (32 bytes), which is verified using a bidirectional service-specific challenge.
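A minimal challenge-response sketch along the lines of the notes above. This is NOT SQRL's actual protocol (SQRL derives a per-site keypair so the server stores no secret); this shared-secret HMAC variant only illustrates the idea of proving possession of a random 256-bit blob without sending it:

```python
import hashlib
import hmac
import os

def make_challenge():
    """Server side: a fresh random challenge per login attempt,
    so captured answers can't be replayed."""
    return os.urandom(32)

def client_answer(secret, challenge):
    """Client side: prove possession of the 256-bit secret blob
    without transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(stored_secret, challenge, answer):
    """Server side: recompute and compare in constant time."""
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, answer)
```

Because each challenge is fresh, an eavesdropped answer is useless for the next login, which is the property plain password fields lack.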
  • A nice article by Cisco Systems about Wi-Fi (WLAN) interference and avoiding it using channel selection and eliminating interference sources. 
  • Refreshed my memory about Memory Imaging. Nothing new really. Also checked out Computer Network Exploitation (CNE), Computer Network Defense (CND). 
  • "A few thoughts about Tribler - Good comments about the details of the protocol. But I'm wondering why nobody found anything to comment about it on a higher level than the crypto. We all know(?) that Tor and multihop data passing isn't an efficient way to implement 'anonymity' for distributed file sharing.
    For that particular reason I was personally amazed that they selected Tor as the example. Tor wastes a lot of bandwidth and allows easy traffic correlation attacks in the cases where that's generally feasible. I really loved the Freenet and GNUnet designs, because those use really efficient caching, partitioning and routing compared to Tor. At least in theory, anonymous downloads could even be faster than non-anonymous downloads, due to improved efficiency of network resource utilization through distribution and caching. When Tor is used as the base, all these benefits are lost, and in addition there's huge bandwidth overhead causing about a 600% slowdown.
    Does anyone agree with me? I was almost sure that someone would immediately comment on this aspect, but as far as I can see, nobody has noticed these facts(?) yet."
    Another nice analysis about Tribler flaws, but from a totally different point of view. That post is about traditional security flaws.
  • A lot of discussion with friends about the VAT MESS the EU created. This will add substantial management overhead to small businesses.
  • Checked out Azure Pack. Actually I didn't know about it, but I started to look for such a product when I noticed that several service providers besides Microsoft are offering Azure services.
  • Had long discussions about private vs public cloud. I don't currently see any benefits from private cloud. If the service provider is right, they can offer public cloud cheaper than what a private cloud would cost, even if many people are trying to push hybrid cloud forward. It's almost impossible to beat the scale benefits of public cloud; some service providers are hosting well over one million servers, so they must have a clear scale advantage on costs on every mission-related front. Security could be a bit worse, but honestly, if the cloud platform is run by professionals, I wouldn't worry about it. I would worry about your own software and its configuration, which is almost infinitely more vulnerable than the hosting platform.
  • I still hate sites which require login. If I'm just going to read content, why would I need to log in? Sigh. - No thank you!
  • Checked out: LXD and Rocket. It's a more traditional combination to beat Docker. Actually I've been using LXC for years, mostly for security purposes and server process separation & segmentation on a more hardened level than user / process isolation only.
  • Office 365 password change works slowly: I mean if I change a password, logged-in services remain logged in for quite a good while, even if the clients are restarted. This could be a privacy & security issue, since a password change doesn't lock out other sessions immediately.
  • 10 nice Python idioms worth of checking out.
  • Tor uses TCP connections; I2P would be better for some services. Also, Tor clients do not automatically work as hidden services, making connecting back to the nodes of normal non-expert users awkward in cases where it's required. OpenBazaar uses TCP for Tor compatibility, but TCP isn't good for a DHT, which requires quick, short-lived connections to multiple hosts. Using Bitcoin to pay for services like bandwidth and relays, etc. Proof of Stake, Proof of Work. P2P.
  • Peer-to-Peer (P2P) is bad for mobile: it consumes bandwidth, storage space, CPU time and power. The same problems also apply to mesh networking, which is just possibly indirectly routed or flooded Peer-to-Peer networking.
  • Some systems which were fully P2P and beautifully constructed have even dropped the original Peer-to-Peer design. One of the well known examples is Skype.
    I remember that when using the eMule Kad DHT implementation, if I manually changed my client ID to the largest or smallest ID on the network, I basically DDoSed myself out. Why? I guess the clients did look for "nearby nodes" as well as the "highest and lowest" IDs on the network. So the DHT implementation for some reason didn't implement a fully "wrap-around" circular address space, causing the clients with the highest or lowest ID to be seriously overloaded.
  • Talked about WebRTC-based P2P networks which would run in the browser as an HTML5 app, just like any other code on a web page. This would allow CDN networks which scale on demand, and so on, as well as building applications like OpenBazaar without traditional servers or client "software". I'm sure someone is already working on something awesome like this. Actually I've been contacted by a few guys talking about these things, but I really can't tell you any details of their new projects.
  • It's interesting to see if MegaChat can deliver on these promises. Because Mega is already using JS encryption stuff and communicating with servers using an HTML5 client, it's probable that they'll be using WebRTC and related stuff to make a true in-browser chat client. There are also many in-browser video chat apps, so that's nothing new either. Unfortunately the browser support could be spotty at times.
  • Once upon a time, we had a problem. An application crashed repeatedly. Customers were upset. Developers said there was nothing wrong with the app. After investigating the problem, I found out it was some kind of strange UI-related problem: if the application was the ONLY application running with a UI, then at times it would hang the whole system. I fixed this by starting Notepad with do-not-close.txt in the background when the app was launched. This 'fixed' the problem. It took over two years before the same matter came up via someone else. I just told them what I did and that there's "nothing wrong with it", it's just how things are. Lol.
  • HTTPS / SSL for everyone. Let's encrypt - If you don't know what this is, it's better to check it out. 
  • Is the web dying or not? - I personally love the web and hate apps due to many security, platform and performance related problems. So if there are two versions of the same application, I would prefer the one which runs in the browser. - Thanks
  • ISPs are forced to log traffic, but VPN providers aren't? Great, then just push all traffic via VPN?
  • I've found out that most programmers simply do not have any understanding or even a concept of productivity: what things should be done, how they should be done, what the ROI for the project is, whether systems should be self-run or outsourced, how expensive OPEX can be compared to CAPEX, and so on. I just wrote about how they don't even understand how computers work, so how could they understand how business works? No self-guidance / self-directed thinking: what could I do today that would most benefit this organization?
    I personally like giving people large abstract tasks and then leaving them alone. If they're not active in getting it done, they won't get it done and that's it. A task usually includes finding things out and thinking about what should be done and how. But as we know, that's all way too complex for many people. If you don't trust your employees, you shouldn't have hired them in the first place.
  • Facebook Tor Gateway - As well as they use basic stuff like HTTPS/HSTS/PFS etc.
  • Studied the environmental impact of nanotechnology. This is something which remains to be seen in the years to come. http://en.m.wikipedia.org/wiki/Environmental_impact_of_nanotechnology I hope we don't make the same mistakes we did with radioactivity and many chemicals... Except I assume humanity is going to make just those mistakes, unfortunately. The future of nanotechnology http://en.m.wikipedia.org/wiki/Nanotechnology is almost limitless? Is it going to be a bigger or smaller change than genetic engineering? https://en.wikipedia.org/wiki/Genetic_engineering Don't know; in the short term GE/GMO could have a larger impact, but in the long term? Maybe nanotechnology has more possibilities.
  • Quantum Dot Display - This is exactly what we've been waiting for, even if people got scammed by LED TVs which aren't actually LED TVs at all; those are LCD TVs with an LED backlight, and a real LED TV doesn't require an LCD at all. Actually QDDs are just one way to make LEDs, known as QD-LED. Also checked out Quantum Dots.
  • Google isn't what it seems (?) - This is a good question, no comments. But as said, cloud isn't inherently bad, but of course you have to think about what stuff you're directly pushing into some of these higher level "mass surveillance" systems. 
  • Nice post about software testing; I completely agree with them. It's just so common to see that a few of the most used features have been lightly tested and, whoa, all tests passed, even though most of the program hasn't been tested even lightly, not to mention thorough testing. A great example of why automated testing should be automatically checked for coverage.
  • I should write something about cryptography, P2P, distributed systems, etc. What could be done? What would be interesting, what would be beneficial to the whole world? - I don't know right now, but if you have some ideas, just contact me.
  • Reminded me about static program analysis. Unit testing, technology testing, system testing, mission / business testing. I currently do unit testing to test code at a small level. Then I usually have some system level testing. And if problems are suspected, then I do business level full end-to-end testing: auditing the whole data path and results from the original source to the final destination. Often these tests are based on carefully crafted test cases, checking the final end results on the output system. So all automated processing steps are covered by testing, even if it involves multiple different systems and processes, even potential manual processing steps. Often these high level complex tests are done when a system is introduced into production or when major changes are made. Or if there are some reliability or data corruption issues, then additional automated production monitoring can be implemented in parallel: it skips all the internal details of the complex process and only compares what was fed in and what came out. Do the results match what we expected to get? It can be a hard task when multiple systems are tightly integrated and something goes wrong, without any errors, and nobody knows what the problem is. 1000000€ went in daily, and for two days of the year 999999,98€ comes out. These sums can be formed from 100k+ individual transactions that went in. But where's the problem, because all unit tests pass? Uhh, business as usual. Due to multiple different logics, it can be really hard to even find out which subsystem caused this skew. Sometimes I just add debug code to the main project, and at times I'll write a completely separate "check program" which uses a different approach to get the same results. So if I've got some hidden logic problems, those won't affect the check program at all. Of course it's preferable that the check program is written by a different coder using a different programming language and so on.
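The independent "check program" idea above can be boiled down to a tiny reconciliation step. A hypothetical sketch (`reconcile` is a made-up name); amounts are kept in integer cents so float rounding can neither cause nor hide a skew like the 0.02 EUR example:

```python
def reconcile(inputs, outputs, tolerance_cents=0):
    """Independent end-to-end check: compare what went into the pipeline
    against what came out, ignoring every intermediate processing step.

    inputs, outputs: iterables of transaction amounts in integer cents.
    Returns the totals, the skew, and whether it's within tolerance.
    """
    total_in = sum(inputs)
    total_out = sum(outputs)
    skew = total_in - total_out
    return {"in": total_in, "out": total_out, "skew": skew,
            "ok": abs(skew) <= tolerance_cents}
```

Because it knows nothing about the subsystems in between, a shared logic bug in the main pipeline can't silently cancel out in the check, which is exactly why the check program should be written separately.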
  • LXC Linux Container Security - A good post. I agree with them, like I said, I prefer LXC over full virtualization due to a lot lower overhead. 
  • Cloud-based Host Card Emulation (HCE), a more secure NFC alternative? 
  • It's just interesting to see how all this password and credit card mess ties back to the very same principles and fundamentals: password / credit card number reuse and protecting against it, authenticating parties, verifying the payment, etc. All of this could also easily be done using basic GnuPG. I receive your signed token to pay 5€, and I sign it with my key, and now you have my token worth 5€. When you forward it to your (or my) bank, you'll get the money. Really simple, so why do these simple things seem almost impossibly complex at times? We could even get similar results using challenges, without public key crypto.
    In some cases even hashing and encryption are basically the same thing, as in the case of the Threefish cipher. If a hash is good enough, its output can be used directly for encryption, without a separate cipher. Of course hashing and encryption serve different purposes, but at least in theory they are directly interchangeable: input, with a key, produces something from which you can't easily derive the key or the input. Using key+ctr > sha256 xor data might be a terrible cipher in reality, but at least in theory it should be ok. DISCLAIMER: Don't use it, consult professionals, this is just my random thoughts blog.
  • SSLsplit http://www.roe.ch/SSLsplit - A nice tool for transparent SSL interception. Of course there have been devices and software doing this earlier, but this is a nice implementation and free to use.
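The key+ctr > sha256 xor data construction mentioned a couple of notes above looks like this in code. Same disclaimer as in the text applies: a toy sketch for intuition only, with no nonce and no authentication, so never use it for real traffic:

```python
import hashlib

def sha256_ctr_xor(key, data):
    """Toy counter-mode stream 'cipher': keystream block i is
    SHA-256(key || counter_i), XORed with the data. XOR is its own
    inverse, so the same call both encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))
```

This is the sense in which a good hash can "be" a cipher: the hash acts as a pseudorandom function keyed by `key`, and counter mode turns any PRF into a stream cipher. Real designs (e.g. Skein's stream mode built on Threefish) add a nonce, domain separation and authentication on top.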
  • Future of Incident Response by Bruce Schneier OWASP AppSecUSA 2014
  • VeraCrypt - TrueCrypt fork, even if TrueCrypt isn't legally forkable (?)
  • LinkedIn Leaks - This is exactly why I've been asking browser vendors to provide PER TAB security. My request to browser vendors: "Web browsers should allow an optional mode where each tab runs as its own isolated environment. It would efficiently prevent the data leakage which is way too common with all current browsers. Basically data leaking is a basic feature of all current browsers and there's not much you can do about it."
  • Passive Information Leakage: A New Kind of Data Loss
    New kind? No, not really a new kind at all. I've been very aware of this for as long as I've been using the net. Most people who say that email encryption would help, or a secure chat app or something, mostly fail to consider the meaning of metadata. A sociogram of your contacts already tells a lot about you. Communication patterns are also very important, as everybody should know; at least SIGINT people have known it for ages. Who receives messages and when, and whether those messages are forwarded, gives you good visibility into a chain of command, even if you don't know the content of the messages.
  • How POODLE Happened - POODLE SSLv3 stuff, discussion about it etc. It's just interesting to see how long it takes before TLSv1 is found to be leaky too.  
  • It's remarkable how much faster you can make apps when you just decide to tinker with the details; they all count. The new SQLite3 is 50% faster than the previous version
  • Something different, checked out Combat Vehicle 90
  • No one will push you forward unless you do. Stay committed to a better future. Keep learning more, every day. 
  • Checked out Kounta Cloud Point Of Sale - a nice concept. 
  • Refreshed my memory about Kaizen - even if I'm indirectly doing that stuff every day. 
  • My Ubuntu 64-bit / Nvidia + Intel display adapter driver stuff still won't work. The issue has been getting fixed for months. The problem has changed a little during this time, but alas, it still doesn't work. First everything worked, then after one upgrade the xorg.conf file started to get deleted on every boot. Then they fixed that. Now the file content just gets reset on every boot. This nice feature practically prevents using all four screens on the computer and drops me down to a two-screen-only configuration using the Intel driver. So, so annoying. Ahh... Nobody knows when a working fix will get delivered. 
  • Thoroughly studied the HTTP/2.0 FAQ
  • Compared 10 cloud service providers very carefully, created complex comparison Excel (LibreOffice Calc) sheets etc. Price differences are really staggering. How do you know you're getting an overpriced service? Well, if they're willing to discuss the prices, you'll immediately know they're ripping you off. If prices are low and already visible on their pages, yet the services are large scale and reliable, that's what you should choose. Many asked how the expensive companies can even survive when others are 75% cheaper. That's a good question. I guess many people just don't bother to do a proper comparison between cloud services and buy from these heavily overpriced providers, because they're local or offer "exceptionally good service". Afaik, 100% automation is what I want. I don't want anyone who isn't reachable 24/7, or is on vacation, or simply messes things up, to provide me a "great service", duh. But it seems that some companies are much more old-fashioned than others on this spectrum. 

Sorry if there are typos and stuff. I'm just trying to dump my backlog as soon as possible. Now there are "only" 899 items left.