My personal blog is about stuff I do, like and dislike. If you have any questions, feel free to contact me. My views and opinions are naturally my own personal thoughts and do not represent my employer or any other organization.

[ Full list of blog posts ]

BBR, Windows 10 AU Features, U2F, Tribler

posted by Sami Lehtinen   [ updated ]

  • Bottleneck Bandwidth and RTT (BBR) congestion-based congestion control - Nothing new? Just send as much as the receiver can handle, estimate RTT, and don't send too much. These protocols might work well 'alone', but might not work well when someone else is using the bandwidth-hog protocol I mentioned earlier. That's part of the problem: how do you know how much you can send? Some protocols try to minimize buffer bloat and others maximize it; when they meet in shared buffers, the ones minimizing buffer bloat get a very small share compared to the ones maximizing it. Otherwise BBR sounds really nice, as long as you can expect everyone else to be using a well-behaving protocol like BBR too. - The basic probing method is pretty much what they've used: find the minimum RTT, then add more and more traffic until the latency starts to grow, then lower the traffic until latency is back at the stable minimum (see the sketch after this list). - I've always used that approach when probing for 'free' bandwidth. But the point where latency starts to grow isn't necessarily the point where there's no more bandwidth to be robbed from other users by stuffing the buffers. Even if latency grows, bandwidth can grow too, because programs using an algorithm like BBR will back off and you can steal their share. Of course the same problem applies to any shared resource which should be used in 'sane moderation': there will always be people who abuse it. It would be nice to see charts of how these different algorithms mingle together. - Interesting? Also check out CoDel. - I read more about BBR on the net, and almost everyone seems to bring up the same issues I did. That's nice. Also check out TCP congestion control.
  • Microsoft is also bringing these features to Windows 10 / Server 2016 - TCP Fast Open (TFO), Initial Congestion Window 10 (ICW10), TCP Recent Acknowledgement (RACK), Tail Loss Probe (TLP) and TCP Low Extra Delay Background Transport (LEDBAT).
  • I really like the FIDO U2F standard, because now there are plenty of manufacturers on the market and many of those keys are considerably cheaper than the classic RSA SecurID key generators or YubiKeys. That is also the reason why I can't stop loving simple TOTP and HOTP.
  • Windows 10 Anniversary Update contains a lot of interesting stuff. The Skype mess is horrible, but Bash is awesome! I'm going to love that for sure. I've typed bash commands into PowerShell so many times. Haha.
  • Anonymous network, Tribler, quick analysis based on the site & documentation: which factors I like, which I dislike and why. It's a P2P file sharing app, based (on some level) on the BitTorrent protocol. Streaming using magnet links. Content channels and reputation management. No centralized component. This is always a good question: how's bootstrapping done if there's no central component? I've seen networks truly without a central component, but it's very awkward for users, they have to provide bootstrap information manually. Then the technical questions: create a dedicated Tor-like onion routing network. Does that even make sense? Tor is a very inefficient way to deliver data. This is exactly where the efficient packet caching of Freenet & GNUnet steps in. Using three layers of relay proxies is horrible; first of all, the traffic is still 'realtime' and therefore a global adversary can monitor it, and they're wasting a ton of bandwidth for nothing. Things get even worse when the seeder is also hidden, it's just like they would be using Tor: two sets of proxy pipes, one for the downloader and another for the seeder. This means even more wasted bandwidth. So when you download 1 GB, actually 7 GB of data gets transmitted over the Internet. In general, I don't like this kind of design, because it's so inefficient. With Tribler, after that 7 GB is transferred there's only one new seed. With GNUnet & Freenet, using efficient caching, there would actually be 7 new seeds after the same amount of traffic. This makes seeding & network availability go up much faster for new popular stuff. Also reminded myself and discussed with friends about Vuvuzela, Alpenhorn, Aqua, Pond, Dissent, Riffle and Riposte.
  • Something different? Yuan Wang-class tracking ships.
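
Here's a minimal toy sketch of that probing idea, nothing to do with BBR's actual implementation: the path capacity, the crude queueing model and all the numbers below are made up, just to show the 'raise the rate until RTT leaves its floor, then back off' loop.

```python
def probe_rate(path_capacity, base_rtt, step=1.0, tolerance=1.25):
    """Toy model: keep raising the offered rate; once it exceeds the
    bottleneck, queueing delay grows and we back off to the last rate
    that kept RTT near its floor."""
    rate = step
    while True:
        # Simulated RTT: flat at base_rtt until the bottleneck saturates,
        # then it grows with the standing queue (very crude model).
        excess = max(0.0, rate - path_capacity)
        rtt = base_rtt + excess * base_rtt
        if rtt > base_rtt * tolerance:
            return rate - step   # back off below the point where latency grew
        rate += step

# With a 100-unit bottleneck and a 20 ms RTT floor the probe settles near 100.
print(probe_rate(path_capacity=100.0, base_rtt=0.020))
```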

GraphQL, Microservices, MS, Cloud Services, Encryption Sabotage, Proper Passwords

posted by Sami Lehtinen   [ updated ]

  • Checked out GraphQL - Nice and simple query language designed for JSON. Also checked out Graphene for Python (see the Graphene sketch after this list). This is something I could actually use.
  • The EU decided that free (as in anonymous) WiFi is illegal and that users should be registered. What's the point? You can get an anonymous mobile phone with Internet data without registration. - It simply doesn't make any sense. Just when I was happy that shopping centers and many other places are finally offering free WiFi which you can auto-join without jumping through all the endless hoops.
  • Microservices please don't - A pretty nice article about microservices. It pretty much agrees with what one of my friends said. He re-engineered one project to run in dozens of Docker containers, each one taking care of a small part of the whole. It added a lot of overhead and made the system slower and harder to manage and administer. - Yet I've seen cases where microservices have been a perfect fit. They've got a processing pipeline and a graphical administration module which is just awesome. It's just like drawing a flow chart and setting options for the microservices, all in a single management GUI. Yet having this power naturally doesn't technically require that the system be implemented using microservices. - SOA and message passing also fit somewhere here.
  • Microsoft's crappy policies and badly built products drove me batshit crazy once again. I don't hate anything more than data loss. You'll never know what you lost. Probably it wasn't anything important, but it could potentially have been something very important. Thank you Microsoft once again for providing such crap. Gotta migrate out as soon as possible. It has been guaranteed Microsoft (tm) quality so far. Super slow, totally lagged, with high latencies on everything, totally random delivery errors, bad authentication, spam filtration / classification failing miserably, data loss and so on. Just what ICT professionals would expect from Microsoft cloud services. Yes, I have to admit, I was highly skeptical about using Microsoft products and unfortunately they've proven me right. It was a really stupid move. Everything is just as bad as you could have dreamed when thinking how to make your users' life a misery and troll them to the max. I'm still wondering what I was thinking when I chose Microsoft.
  • The statement above also made me think it's interesting to see how different cloud service providers focus on different aspects. Some focus on retaining everything forever and making deletion hard, while others make data retention basically impossible. Yet when I described how f*ked their system is, I'm pretty sure they're still keeping all the mails, they're just not showing those to the end user, to maximize the damage in different potential aspects. To the client they'll say the data is long gone. But someone paying well, and basically unauthorized to access the data, will still get all the records from several years. - That's the way to manage ICT systems. Give users an illusion of privacy while still keeping it all. My little internal BOFH is laughing really hard (evil villain laugh); this is exactly what to do if you want to be a real BOFH. Poor users don't even know what hit them.
  • Setting the level is hard. In encryption sabotage you weaken the encryption on purpose, usually using bad random numbers or weak algorithms, so you can break it with "reasonable resources" when necessary, while it still seems valid and good enough for most parties which don't have advanced cracking capabilities. They had a similar problem at Natanz: how to make enough centrifuges fail, but not so many that they immediately know it's sabotage. The problem needs to be annoying and cause damage, but still not big enough to make it a priority to solve by whatever means necessary.
  • Someone told me that passwords have low entropy? Well, they're just doing it wrong. Here's my quick take on proper passwords: base64.a85encode(os.urandom(16)) - Example result: ';TjPs-b;<+@nd`^%.T[)' or 'L1)85<dr8-qHe)Yr46`*' good luck guessing! Even if you had a really fast password cracker, running even an empty loop like for i in range(pow(2,128)) will take a while. ;) There are still 340282366920938463463374607431768211456 combinations to check. Or if that's not enough, you can go for 256 bits, which leaves us with 115792089237316195423570985008687907853269984665640564039457584007913129639936 combinations and passwords like: '\\S>&=Cs`nXTWfM6MO>f!+-`%h]_ag7kE+HRc:M=@'. Just treat passwords as mentioned earlier, shared blobs of bits. Simple as that, and it's highly unlikely anyone's going to crack those in any reasonable time. All and any passwords are always crackable if unlimited time / resources are allowed. And if we assume the attacker is lucky, there's no reason why they wouldn't guess the password on the first try. You know, there are people who win the lottery too. If it's so confidential, how about not storing it in the first place, encrypted or not? If it's not there, then it's all good. (There's a runnable snippet after this list.)
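
On the GraphQL item above: a minimal, hedged Graphene sketch (assuming the graphene package is installed; the field and resolver names are made up), just to show how little code a queryable schema takes.

```python
import graphene  # assumed installed: pip install graphene

class Query(graphene.ObjectType):
    # One field with one optional argument; Graphene builds the schema from this.
    greeting = graphene.String(who=graphene.String(default_value="world"))

    def resolve_greeting(root, info, who):
        return f"Hello, {who}!"

schema = graphene.Schema(query=Query)
result = schema.execute('{ greeting(who: "GraphQL") }')
print(result.data)   # {'greeting': 'Hello, GraphQL!'}
```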
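
And the password idea above as a complete snippet; this is just the same one-liner wrapped in a function, nothing more.

```python
import base64
import os

def random_password(n_bytes=16):
    """n_bytes of OS randomness, Ascii85-encoded into a printable password.
    16 bytes = 128 bits of entropy, 32 bytes = 256 bits."""
    return base64.a85encode(os.urandom(n_bytes)).decode("ascii")

print(random_password())    # 20 characters, something like ';TjPs-b;<+@nd`^%.T[)'
print(random_password(32))  # the 256-bit variant, 40 characters
```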

Topic Dump 2016 continued

posted Oct 16, 2016, 7:49 AM by Sami Lehtinen   [ updated Oct 16, 2016, 7:50 AM ]

  • I wonder when lights which can fire photons at an exactly designated target through multiple focus points become possible. Aka light field projectors. It would also be awesome if these could be implemented as panels. Using walls covered with these panels and some light smoke in the room, it should be possible to make pretty cool 3D projections. Sure, it still isn't a perfect hologram, but it's pretty near it. It's just like radiation treatment, where you project a small amount of radiation from all over into a single point so that the single point (yes, technically it's not a single point, it's a target point) gets a much higher dosage than the surrounding areas. It would be pretty cool tech. Instead of using just a few projectors, you would technically have millions of projectors to be used for the projection.
  • One guy sent me a ton of very nice digital forensics links. But I like to pick my sources, and one site contained way too many ads, so unfortunately I won't post the links here. This is anyway a topic which can be Googled trivially, at least on a basic level.
  • Read The Paradox of Choice - Reminds me about Subway. The first time I visited Subway, I was really hungry. It was super annoying when they just kept asking so many questions. No, I don't want your questions, I want food now. - Freedom of choice, perfectionism and feeling a need for analysis easily lead to analysis paralysis.
  • Once again endless discussions about system security. Some customers are asking for secure systems. Well well. Any system which can be used to access email or the web with modern standard software isn't even nearly secure. So we can pretty much rule the discussion about 'secure systems' out. It's just more or less secure, but it won't ever be secure. Just deal with it. Hardening, auditing, access management, training people or even certified software won't fix this problem.
  • Checked out SAML federation for integrated authentication and authorization.
  • How do you define a Potentially Unwanted Program (PUP)? Is filtering all programs which aren't widely used the right way? Which would mean that widely used software could never be a PUP in your environment? Etc, all the endless basic questions. It's also quite hard to verify if something has been planted inside a program / source you're already trusting. Nobody actually wants to audit every version properly, because it's just too expensive and time consuming.
  • Reminded myself about: Root key ceremony and Key signing party.
  • When we talk about public keys, the next step is that the signature should be verified. But that won't happen. In several cases the data is read and the 'existence of a signature' is checked, but it's not actually cryptographically verified (see the sketch after this list). Yes, there's an XKCD about this too. One of the example fails was the lack of Machine-readable passport verification in many cases where it's being handled.
  • The Let's Encrypt related discussions are a classic. First people complain that things aren't easy enough, and when they are, they complain that things aren't secure enough. Sigh. - Subject Alternative Name (SAN) is slightly related here - Also Let's Encrypt Overview is a very nice post.
  • Also a nice article about Border Gateway Protocol (BGP) basics. Yet in this case, this light version didn't contain anything new. But I'm pretty sure the actual certification guides I didn't read would.
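
On the signature verification item above, a small sketch with the Python cryptography package (assumed installed; Ed25519 keys are generated on the spot purely for illustration) showing the difference between 'a signature exists' and actually verifying it.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

message = b"machine readable record"
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(message)

# The lazy check: "a signature is present" - proves nothing about the data.
assert len(signature) == 64

# The real check: verify() raises InvalidSignature on any mismatch.
try:
    public_key.verify(signature, b"tampered record")
except InvalidSignature:
    print("signature does not match the data, reject it")
```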

There will be more stuff a bit later during 2016, dumped from the backlog.

Integrity & cleanup code design patterns, Crypto-Gram, DDoS

posted Oct 16, 2016, 7:44 AM by Sami Lehtinen   [ updated Oct 16, 2016, 7:44 AM ]

  • Anything new here? Nope, it's all just very basic data management stuff, which should be familiar to all of us. Bit messy? Ok, this is just random night rambling about the topic, due to some cases where things weren't done properly.
  • Mostly it starts with the question: should we use a fast & leaky method or a logically guaranteed correct method? And whether that's executed bottom up or top down. Of course it's also possible to shard / strictly partition these processes to make quick partial runs.
  • When is such code required? When any storage with complex relations is being verified or cleaned up. As an example, you've got a standard object / blob storage where blobs have a UUID and that's it. How do you know whether all the relations are actually in place, the data isn't corrupted and there aren't loose / extra objects left in the blob storage? Verifying all this might be a quite expensive operation. Of course some of the checks are done at runtime, but it's almost inevitable that it'll leak at some point, so the fast and quick top-down method during runtime isn't actually working perfectly, even if it works very well mostly. If there are additional optimization tables, like quick reference tables which back-link objects to the data structure or can be used to verify the existence of objects in object storage without accessing the actual object storage, these tables / data structures can also get out of sync and need to be verified every now and then.
  • I mostly prefer the guaranteed and foolproof method, aka full logical verification, but it might be too expensive to run even if sharded. In this case all links are verified for the whole set or a strictly defined subset. As an example, back-checking all objects from a single object storage module to the metadata servers, and internally verifying links in the metadata system (a minimal sketch follows this list). This is just like a full check with fsck or chkdsk, or git gc, git prune or git fsck. This method will clean up any mess potentially left in the system, or simply remove unnecessary / loose data.
  • But for the normal operations which happen most of the time, it might be better to implement runtime handling where, when the user deletes an object from object storage, the metadata and actual objects are deleted. Yet, because this all happens in steps and isn't an atomic operation, it's well possible there's a leak due to multiple different reasons. Of course this method can also be journaled and each step rolled backward or forward to improve processing integrity. But that adds a lot of complexity.
  • Best solution? Normal operation uses the fast and leaky "it should work" approach, and it's totally acceptable that there is some leak. Plus full sharded checkups at required intervals, whatever those happen to be in that environment. This is quick and easy to implement and still gives good overall results.
  • The mark and sweep runs can also be implemented top down or bottom up. Do we start from the metadata and check the objects, or do we start from the objects and look for metadata? There's no perfect answer for that. Also the level of verification required can vary. Do we actually verify object / blob hashes, or is it enough that the data seems to be there? Sharding and pruning can be done in levels too. Instead of verifying the whole stack, it can be verified in layers, and the hash checks and other stuff can naturally be optimized. Because the data hash is verified on access, there's no need to verify hashes for objects which have been accessed within the last N time units, etc.
  • It's a bit like a mostly counter-based garbage collector with additional mark and sweep runs.
  • Last words: Just pruned a lot of dangling blobs from git repository. ;)
  • Read the last ~20 Crypto-Gram newsletters from my backlog. Many interesting stories there, like taking down the Internet, Zero-Day NSA exploits, etc.
  • Verisign reports that Application Layer Attacks as a form of DDoS are going up. Well of course. Well-crafted application layer attacks are indistinguishable from getting some extra traffic. Another noteworthy thing is that even a 100 Gbps (Gbit/s) pipe won't be enough to protect you from even simple DDoS flood attacks.
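
A minimal sketch of the full logical verification described above, reduced to set arithmetic over identifiers (the sets and UUIDs here are made up; a real run would stream the IDs shard by shard instead of holding everything in memory).

```python
def find_inconsistencies(metadata_refs, blob_store_ids):
    """Compare both directions in one pass:
    metadata_refs  - blob UUIDs referenced by the metadata system
    blob_store_ids - blob UUIDs actually present in object storage."""
    dangling = metadata_refs - blob_store_ids   # metadata points at missing blobs
    orphans = blob_store_ids - metadata_refs    # blobs nothing references (the leak)
    return dangling, orphans

# Hypothetical tiny shard:
meta = {"a1", "b2", "c3"}
blobs = {"b2", "c3", "d4"}
print(find_inconsistencies(meta, blobs))   # ({'a1'}, {'d4'})
```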

Real-time vs async best effort, Data security, OVH, Computer Giveaway, OPT, Data corruption

posted Oct 15, 2016, 6:55 AM by Sami Lehtinen   [ updated Oct 15, 2016, 6:55 AM ]

  • Once again I'm wondering why some customers ask for real-time solutions in cases where there's no real requirement for it. AFAIK, best-effort asynchronous quasi-real-time is the best solution for most background data transfers. Data usually gets delivered within a few seconds max, but it's still not real-time. A requirement for a real-time solution can be a real show stopper in some cases, because a real-time solution can't continue the process until the real-time component has finished. This just makes the system more vulnerable and less robust than it would be without this silly requirement.
  • For most critical systems, I've configured one reliable DNS server which isn't the ISP's own server. It's so common that 'the Internet is down' and the only reason for it is that the DNS resolvers are failing or are extremely slow.
  • First we place servers in a nuclear-blast-proof, ultra-security, biometric-retina-scanner data center, blah blah blah. And then we allow remote desktop globally and use the credentials Administrator, admin123456. - Now we can feel so secure! Nothing can touch us! - Or if the credentials are more complex, we write everything required to access the server on a paper slip and carry it around in our jacket pocket. - No worries, if I lose the paper, I can print a new one. - The reality of data security.
  • Unfortunately had to deal with OVH rescue mode. I mean unfortunately because Windows blew up, but luckily the OVH rescue mode is pretty darn awesome. Even though I have daily backups in place, it was very nice to recover "all the junk" not being backed up + get the absolutely latest copy of the database, so no data would be lost. -> Hence no need to restore from backup + manually ask for retransmission of lost data. - But then there was bad news: the reason the server wouldn't boot was that the file system had become corrupted. Because the file system was corrupted, the data / files stored in it were corrupted too, and that was the end of the story. - It still required a restore from off-site backup. But phew, at least I had that backup and it was pretty current. - OVH has been having quite a lot of issues compared to other service providers I've been dealing with.
  • Gave away a bunch of solid cast aluminum professional Atom D525 computers with hard drives, touch screens, 4 GB of RAM and so on, fully equipped. It was interesting to notice that dozens of devices were given away, but only one taker bothered to send an email afterwards saying that the devices were awesome and they're really happy with them. - That gave a good impression. Interestingly, these computers are a lot slower when running a 64-bit OS than a 32-bit one. I guess it comes down to the CPU cache sizes; 64-bit code uses more space and therefore caching gets less efficient.
  • More interesting algorithm discussions, about the OPT caching algorithm and its particular behavior with cache size 1. When a new entry is inserted into the cache, can the new entry itself be the one evicted immediately, or does it always displace the old entry? Sure, it doesn't make any sense to run an OPT-like eviction algorithm with such a small cache, but it's still an interesting question (there's a small simulation after this list).
  • I just hate it when people ask to just reboot something. It's not very common, but at times the brown stuff really hits the fan. You'll end up with a corrupted file system, and of course after that you'll also have a corrupted operating system as well as a corrupted database & data. So much joy. - Well, there's something wrong with it, let's just hard boot it. Ouch! It'll work most of the time, but you might end up in a deeper hole than you'd wish with that approach. - Finally the risk materialized and I had a not-so-fun time recovering stuff. Yet DR is something that should be practiced regularly, so maybe it wasn't that bad after all. We found out that the DR procedures worked 'as planned'. Which was nice to notice.
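
A small simulation of that cache-size-1 corner case. This is just Belady's OPT over a known access sequence, with a flag for whether the incoming entry is allowed to be 'evicted' (bypassed) immediately instead of always displacing the resident one; the names and trace are made up.

```python
def opt_hits(accesses, cache_size=1, bypass_allowed=False):
    """Count hits under OPT; with bypass_allowed=True the incoming key
    competes with the cached keys and may be dropped on the spot."""
    cache, hits = set(), 0
    for i, key in enumerate(accesses):
        if key in cache:
            hits += 1
            continue
        if len(cache) >= cache_size:
            def next_use(k):
                # Index of the next reference to k, or infinity if never used again.
                return next((j for j in range(i + 1, len(accesses)) if accesses[j] == k),
                            float("inf"))
            candidates = cache | {key} if bypass_allowed else set(cache)
            victim = max(candidates, key=next_use)
            if victim == key:
                continue            # the new entry itself is 'evicted' immediately
            cache.discard(victim)
        cache.add(key)
    return hits

trace = ["a", "b", "a", "b", "a"]
print(opt_hits(trace), opt_hits(trace, bypass_allowed=True))   # 0 vs 2 hits
```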

.fi domains, Random bug, Right or Wrong, Unique tracking, Networking (4G, Fiber), Cool vs Probable

posted Oct 15, 2016, 6:44 AM by Sami Lehtinen   [ updated Oct 15, 2016, 6:45 AM ]

  • .fi domain registration just got freed from most of the old regulation. Now anyone can freely register .fi domains, just like many other domains. Because there was a huge rush to register .fi domains, it seems that most of the systems weren't properly tested; checking free .fi domains was super slow. Which of course was probably expected due to the huge rush. It seems that some other service providers worked around that by not checking whether domains are free at all, and still allowing people to register and pay for domains. I've got one of those cases open right now. I'm not naming the vendor just yet; I want to get this case finished and down to the bottom. Then we can see what kind of comments I'll put here. OVH also had a bug in their .fi domain validation: they claimed that a two-letter domain would be too short. Well, it isn't. It's a totally fine domain name according to all regulations and policies. They confirmed it was a bug, but they failed with the domain registration, the domain is now registered by another entity, and they refunded everything. Is this satisfaction? No, it was a miserable failure. But it wasn't a scam or intentionally bad service. So I'll let the name-and-shame pass this time.
  • A great example and post about a random bug. Once they dug deep enough, it was totally clear that it wasn't random at all. There was a very clear process and reasons why and how the issue occurs.
  • Read an article titled: "To tell someone they're wrong, first tell them how they're right". That's exactly what I've been saying. Finding both edges of the argument. As well as making clear that you can't say it's functioning incorrectly if you can't tell when and how it's working correctly. - Once again, why, why, why helps. Always when someone gives you an answer, ask why. What's behind that reasoning? - Just got a requirement to change one value from 2M to 40M. I asked why. Let's see what the answer is. - Never got any sane answer. I love things which are just "set it to value 2312". Why? Well, it's good if you set it to value 2312. Even if there's no logical reasoning behind that setting.
  • Yet another tracking payment transactions / goods in inventory discussion. These are often rather ridiculous, because the requirements and capabilities conflict so badly. They want to track every unit / transaction. Ok, does the unit / transaction have a unique identifier? No. Ok, this discussion is totally pointless. Either you have the identifier or you don't, and if you don't, let's stop right now. Phew. Over and over again, in multiple situations and cases. You can't track what you can't track, get it? But if we... No, you can't.
  • Other funny discussions where things are taken out of context. We market this as something, but technically it's not that, it's something totally different. I can't go into the details. But at one point telcos in Finland marketed 3G-DC as 4G, which was just utterly stupid bs marketing. That happens all the time, and I'm sure politics is absolutely full of that stuff. We want to make this generic implementation, but with a twist. We don't use anything from that generic implementation and we provide vendor lock-in as a bonus. But let's market it as this generic global standard, or something like that.
    Another similar chronic lie from operators is that they're selling a fiber optic Internet connection. But when you dig a bit deeper into the details, you're being connected over the TV antenna network or CAT3 using VDSL2. Honestly, wtf. What kind of fiber connection is that? Single-mode fiber could perfectly well run to the apartment and right next to the computer. So why would anyone use EuroDOCSIS 3.0 or VDSL2 crap, which are guaranteed to cause all kinds of archaic problems? But as usual, telcos are bsing stupid customers and marketing with hype lies. - Nothing new. If something cool is on the horizon, always market based on it, even if it's an utter lie. Hype and cool marketing is something I can't really stand. It's so annoying.
  • It's also funny how organizations always tend to accuse external entities before doing a self-check, even when the case is such that it wouldn't make any logical sense for anyone outside to do it. Based on this, I assume it was an illegal covert operation by the CIA to break in (without leaving any sign of a break-in, of course) and steal an apple from my kitchen desk. That's the most probable cause why it's not there. Of course they had the technology and skills to circumvent the home security system. They probably entered through the wall so as not to set off the door detectors, and then used an invisibility cloak which prevented the microwave, visible range and infrared detectors from going off, etc. (lol, really). But it's possible, isn't it? Maybe I should add ultrasonic sensors too? - Or maybe I just moved or ate the apple and didn't remember it? But that doesn't sound nearly as cool, so I don't think so.

Parallel or Sequential, Radio Signals, Python JIT, HL7 FHIR, UUID, WD, HPKP, USBee

posted Oct 8, 2016, 10:27 PM by Sami Lehtinen   [ updated Oct 8, 2016, 10:28 PM ]

  • Sometimes it's really hard to tell beforehand which is the best option. When you need to run copy jobs A -> B and A -> C, is it faster to run them in parallel or sequentially? There are multiple factors affecting that, including caching, disk devices, the ports being used for I/O, etc. There's no simple answer. It's a very simple question, but the answer is complex and basically impossible to give without knowing a lot of specifics. Other processes running on the system affect the situation too. Is there enough memory, can the disk cache be utilized, what are the storage access patterns, and so on (see the small sketch after this list).
  • Configured IPv4 connections to use a shortish keepalive timer to keep NAT sessions alive (a sockopt example follows this list). With IPv6 there's no NAT and therefore no need to use aggressive keepalives for long-term idle TCP connections.
  • Studied a bunch of high speed USB 3.0 flash drives. Bought one. Let's see if it's good or bad. Some of the drives have been optimized for FAT, but as I've written earlier, I really want to use NTFS with flash drives.
  • Radio signals can be used to see? - Not really surprising. There's passive sonar, and a traditional passive invention like the camera, which captures electromagnetic signals. AESA radars and stuff like that. Modern signal processing can give quite awesome capabilities if it's done right. Also, if there's a static sound source in a room, you can clearly hear someone moving in the room, because the way the sound waves reflect in the room changes. When listening to music, just try moving a hand around your head. It creates major changes, even if it's 20 cm away. Just like the TV can work well when your neighbor is in bed, but stops working when someone moves in another room and so on. In the radio world, everything affects everything.
  • Really nice article about Python JITs - So far I haven't had real problems with Python performance. I usually use it as glue between systems in my integrations. But of course if there were an easy to plug in JIT, I would use it. Often when writing just simple random code, the GIL is a bigger problem than the lack of a JIT. Of course a JIT can also improve single-thread (or in Python's case, per-process) performance a lot.
  • Just a link to HL7 FHIR v1.0.2 bundle specification. Btw, their HTTPS cert is failing. Smile. These samples are XML based, but I prefer the JSON option.
  • is failing again. Now their outbound email service is totally swamped. Yet when reviewing their history and track record, this isn't anything to be surprised about.
  • I could write so much about project critical paths and Gantt charts, and people thinking that it doesn't matter if one critical part stretches a lot. They don't figure out that some resources might have a reserved slot, so pushing one step forward by a month can still lead to a four-month extension in completion. That's nothing new, and discussions about classic and extremely traditional project management failures are utterly boring. - Then someone bothers to say it's a surprise. - Surprise, it's not a surprise.
  • It's just like saying we need "a unique identifier" without defining its scope. It's instant karma and fail.
  • Even more bugs in Skype for Android. When sharing a link, the whole Skype application must be terminated between shares. This really sucks because the app is also really slow to cold launch.
  • Way to go Western Digital. While testing a bunch of old computers I found out that only about 15% of the WD disks were fully "working". It seems to be a tradition for WD disks that they get darn slow, even while 'working perfectly' according to SMART data. This just confirms what has been said about storage media manufacturers: they ship crappy products, but the self-diagnostics say everything is ok, so customers won't ask for a warranty replacement. - That's really a bleep bleep bleep strategy. It makes me and many frustrated users just so angry. Say FAILED and that's it, it's clear. Instead they'll keep teasing customers with utter junk and won't even confirm it. - I just can't stop really appreciating their attitude and working methods.
  • Wrote yet another data transport specification document for one project, describing multiple transport modes which are well suited for small and large businesses and their system capabilities.
  • Nice article about HTTP Public Key Pinning (HPKP) - The solution is more problems? Hmm, nothing new here. Actually I didn't know that HPKP requires a backup key. Let's make keys more or less permanent and hard to replace. - Let's make it trivial to get new keys. Sigh, contradictions. Systems should be really secure, but nobody bothers to administer them properly and follow the required procedures. - Business as usual.
  • USBee - They said it can't be done. But technically thinking, it's pretty straightforward. - Nothing new, a data leak over a channel, this time RF using USB as the transmitter. Something new? TEMPEST is old stuff, and different cases of unencrypted data leaking from RF devices are even older than TEMPEST.
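
On the parallel vs sequential copy question above, a tiny sketch of the two shapes (the paths are hypothetical; the only point is that both variants are a few lines, so measuring on the actual hardware is cheaper than guessing).

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

jobs = [("A/data.db", "B/data.db"), ("A/data.db", "C/data.db")]  # hypothetical paths

# Sequential: one reader at a time, friendlier to a single spinning disk.
for src, dst in jobs:
    shutil.copy2(src, dst)

# Parallel: can win when the source is cached or the targets are independent devices.
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(lambda job: shutil.copy2(*job), jobs))
```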
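
And the keepalive item: a sketch of what that 'shortish keepalive timer' looks like per socket on Linux (the values are made up; TCP_KEEPIDLE / TCP_KEEPINTVL / TCP_KEEPCNT are Linux-specific socket options).

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable keepalive
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # first probe after 60 s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 20)   # then a probe every 20 s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # drop after 3 failed probes
# On a native IPv6 connection (no NAT box to keep happy) the defaults are fine.
```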

DNS, Duplicity, Rambling, Edge Discovery, Books, Online Banking, Sigfox Finland

posted Oct 8, 2016, 10:19 PM by Sami Lehtinen   [ updated Oct 8, 2016, 10:19 PM ]

  • I wondered what was wrong with my Internet connection for a while. After a quick analysis it was painfully clear: TeliaSonera's DNS was running slow as (bleep). Ok. That's it. It's nice to have a fast connection, but if DNS queries take several seconds to resolve, that's not great at all. To remedy that (even if it still causes some lag), I'll always add a third DNS server which isn't the ISP's own server. Choose one near you with reasonable latency and resolve times (a quick way to compare resolvers follows this list).
  • Even more problems with Duplicity; it seems that they've broken compatibility with the old include-globbing-filelist option. Well, I've fixed my file list format to meet the new standards and now that's working too. Also added a monthly cleanup to the management scripts. I know it shouldn't be necessary unless there are errors. But as usual, errors are something normal, at least on some level.
  • “The one thing busy people can’t stand most is rambling,” - “Make your point and move on.” - Sure. But without details and careful mapping it's often possible to pass as an expert in anything. The real expert knows the details and the why, instead of just being able to say "make things secure" or "make things a lot cheaper" or something similar. Anyone can do that. An expert knows exactly how and why. But it's also required to figure out what the customer or boss or whoever you're executing the task for actually wants. Often finding that out is anything but easy. We're building this building, we need to make it a lot cheaper. Ok, we've got tents, why not use those? What are the exact requirements for the savings, etc? Building things doesn't differ too much from a normal integration project: the hardest part is to figure out what the clueless customer actually wants, whether they're willing to pay what it takes to build their 'dream', and whether it's even technically possible. Finding the edges of the case is also important. That car is too expensive, we need cheaper mobility. Ok, is a used scooter enough? How about a used bicycle? Ok. Why is this hard? Usually the ones making the requests don't even themselves know what they really want. If they can't specify what they want, then it's your task to figure out what's required. And that's usually not a simple task at all. Sometimes when checking if people are on the map, I'll try to find BOTH edges, the upper and lower edges for whatever is being done. I've found out that this is especially annoying for people who are pushing things from a single point of view. People in politics are very good at this. Our server costs are too high? Ok, I can cut those to zero in an hour by stopping all services immediately. But is that what you really want, or was it something else?
  • The rambling issue also spans many 'professional books'. In most cases the whole book could be condensed into two pages of simple statements. The rest of the book is just explaining the background information and experiences which the statements are formed from. I've seen this in most 'single topic' professional books, whether it's personal finance management, money and happiness, project management, leadership, or data, system and personnel security, etc. All the same: the key findings are really simple, yet they've made a book about it versus a single, quite short blog post. - On the other hand, saying that something is really simple probably means that you just don't know the intricacies related to the problem. It's just so easy to say, secure messaging, no problem: 'Encrypt All The Things!'.
  • One bank just recommended using HTTP (not HTTPS) for accessing their site in an email they sent. Neat. As said, business as usual. Nobody really cares about that. Was it a scam or phishing? No, not this time. I cross checked everything including email headers etc. It's totally legitimate. Of course the actual login form is submitted over HTTPS but that's way too late as we all well know.
  • Connected Finland is extending the Sigfox IoT network around Finland. LoRa / LoRaWAN is also available, from Espotel and Digita. These, also known as Low Power Wide-Area Networks (LPWAN), are mostly used for IoT devices, or M2M as it was called before the IoT boom. Same stuff, new name.
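
On the DNS item above, one quick way to compare resolvers before picking that extra server, using dnspython (assumed installed; the first address below is just a placeholder for the ISP's resolver, and resolve() assumes dnspython >= 2.0).

```python
import time
import dns.resolver  # dnspython, assumed installed

def resolve_time(server, name="example.com"):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    t0 = time.monotonic()
    r.resolve(name, "A")        # older dnspython versions use r.query() instead
    return time.monotonic() - t0

# Compare the ISP resolver against an independent one before adding a fallback.
for server in ["192.0.2.53", "1.1.1.1"]:   # first entry is a placeholder for the ISP resolver
    print(server, round(resolve_time(server) * 1000, 1), "ms")
```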

Minio, SQL Server 2016, Hack, FaaS, Asking Questions, ISP=CA, Data Security, Serial Flow Control

posted Oct 8, 2016, 1:35 AM by Sami Lehtinen   [ updated Oct 8, 2016, 1:36 AM ]

  • Checked out Minio cloud storage, which is Amazon S3 compatible. Nice, yet unfortunately I don't have time to play with it. Just gave a few glances at the documentation.
  • Installed a few MS SQL Server 2016 instances to play and test stuff with. Of course also use SQL Server 2016 Management Studio (SSMS) with it.
  • The Dropbox hack is real - A really nice post by Troy Hunt about The Dropbox password leak / hack.
  • Checked out a couple of Function-as-a-Service (FaaS) services.
  • Julia Evans, Asking Questions - Sure. We've all been there. Strange edge cases, digging deep into why something works the way it works. Why something 'random' isn't as random as you think; you just don't know what the trigger is. As said, tech is such deep stuff nowadays that you'll NEVER know nearly enough. Every day you need to learn more and more stuff. After doing this for decades, often when people claim something is surprising, it actually isn't. It's just the usual case, in maybe a bit different context. Most often it's nothing new at all. Memory leaks, race conditions, some strange way to trick a program into doing something unexpected. All just so normal. Yes, it might be a 'bomb' in a high profile program. But in general, it's just yet another normal design or implementation bug, and therefore nothing particularly interesting. Kernel user access elevation fail via some other bad code. That's actually why it's very important to study the most common fails, because those are the fails you're most likely going to encounter, over and over again. It's quite rare that you'll actually find something which you can call interesting. And for more experienced guys, that's business as usual. You said you found something new? Actually this fail category was documented several decades ago. What's the new part here?
  • I just realized my ISP is also a globally trusted CA root authority. Which means that they can trivially forge any HTTPS certs on the fly and do MitM snooping on their customers. Don't trust HTTPS / SSL / TLS certs, they're just a major scam.
  • Anyway, data security is something nobody actually wants. Most people see it just as a source of so much trouble. And that's something I can agree with. A similar policy of course applies to all other security. Data security isn't a "separate field", it's just a slice of the pie. So many systems are just fundamentally, totally flawed by design that it's almost utterly pointless trying to fight against that.
  • Many seem to prefer plain text over the network, because the HTTPS SSL / TLS stuff is so complex and hard. But I guess this topic is like passwords, shared secrets, authentication, etc. It's all been covered over and over again. Nothing new, all the same stupid discussions over and over again, and yet no solution. Things are just as they are, and maybe it's better just to accept it. The flaws of the system are well known, and when somebody exploits those, it shouldn't come as a surprise to anyone.
  • About "new problems". One guy said that he's experiencing data loss. After a quick check I found out that he was using a high speed serial link without flow control. Well, what would you expect? High speed link, slow devices and no flow control. Isn't it kind of stupid to complain about data loss? It's something everybody should immediately expect when not using flow control at all. Just please enable software flow control (XON/XOFF) or hardware flow control (RTS/CTS, DTR or RTR). Check the configuration on both devices, and make sure that the cable pins are connected correctly. It isn't that hard after all. It's just so common to see even mis-connected cables or mis-configured devices that this is all totally expected behavior unless you've verified all the related details. (A pyserial example follows this list.)
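
A small pyserial sketch of the fix (assuming the pyserial package; the port name and baud rate are made up): just turn flow control on explicitly instead of letting a fast link overrun the slow device.

```python
import serial  # pyserial, assumed installed

port = serial.Serial(
    "/dev/ttyUSB0",     # hypothetical port
    baudrate=115200,
    rtscts=True,        # hardware flow control (RTS/CTS)
    xonxoff=False,      # set True instead for software (XON/XOFF) flow control
    timeout=1,
)
port.write(b"payload the slow device can now pace via CTS\n")
print(port.read(64))
```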

Aqua, Herd, It Ready, Local Database, Cisco AnyConnect, PSD2, Lftp, Duplicity

posted Oct 8, 2016, 1:30 AM by Sami Lehtinen   [ updated Oct 8, 2016, 1:31 AM ]

  • Quickly reminded myself about the Aqua/Herd, Vuvuzela / Alpenhorn, Dissent, Riffle and Riposte anonymizing networks.
  • "We're so disappointed it's not ready". - I'll gladly let you be. If you can't specify what "it" is, then you can wait for it to be ready forever. - Enjoy. I've been doing this stuff so long, I can be honest about these simple things. Once again, they'll throw some random xsd schema file, and expect that to be enough for integration. No, that's not nearly enough. XSD isn't some kind of "magic file" which makes everything work. There are just so many related things that need to be resolved. But as the case usually is, they're so clueless they think the XSD is the ultimate truth. - Fail. It's like saying, we need that JPEG for the project lauch. Ok, what the JPEG should be enough? We can't answer about that, we've sent you the JPEG specification. - Then the question is just do I deliver some random JPEG for them, or do I just simply tell that I won't do you any JPEG for that because you haven't specified what the JPEG is about. Anyway, which ever option I choose, they're going to be unhappy. - But we though that you would deliver us "the JPEG" for the launch, discussion starts. Phew.
  • How to steal any developer's local database - Basic stuff, nothing new. Once again things which work just as designed are seen as a problem. It's a problem if you've configured it to be a problem.
  • So many wonderful problems with Cisco AnyConnect Secure Mobility Client. I just can't stop hating people who require using VPN. These things just add so much cost and overhead to processes.
  • It's always funny when people pop up with things to be 'fixed' after several years. I've used my car for three years now. The color of the car isn't the one I would like it to be now, also the seat material should be changed and a hook added for a trailer. All this naturally free under the warranty, because the contract says I'll need a car. Sigh. - When they start a discussion like that, it's likely not going to go well.
  • In one payment project quickly viewed the PSD2 requirements and related FAQ.
  • Never give up. God damn duplicity and lftp + ftpes just kept sucking. Unfortunately duplicity overrides some settings set in .lftp/rc - But as said, I won't give up easily. I patched duplicity to work correctly, and now it's working beautifully with my own CA. Just fixed some code in '' which was the source of all the trouble I've been experiencing. Some of my friends already said that it'll work, you'll just need to drop encryption. But no, that's not something I'm willing to do. I want to keep my data and transport encrypted, so I patched the malfunctioning program code and got it working. Are you getting annoying TLS / SSL errors with duplicity or lftp? Like: "This server does not allow plain FTP. You have to use FTP over TLS." - Just go and fix it. And you're done! Yes, I'm happy when I get things solved and working. But on the other hand, it's darn annoying that basic stuff doesn't work without hardcore fixing and tuning. In a way that's extremely frustrating. I think about one month passed before I got upset enough to focus enough energy on this problem to finally make it or break it. If programs won't work, you can always make a workaround, change the code, or add other modules to emulate something to get stuff to work. Btw. I also learned about the fish transport, which I hadn't ever used. If you want to know how something works, abandon the documentation and read the source. It's the best documentation you can possibly have. Most normal documentation almost never properly covers ALL options, nor edge cases, nor priorities in case of conflicting configuration, nor exception handling in detail and so on. But that's all in the source. I've seen it happening over and over again. 99% of people, administrators and developers just say oh darn, encryption isn't working, it's a problem. Let's just drop it. Ahh, now everything is working. Done.
