Blog

My personal blog is about stuff I do, like and dislike. If you have any questions, feel free to contact me. My views and opinions are naturally my own personal thoughts and do not represent my employer or any other organization.

[ Full list of blog posts ]

MSSQL, Data Polling, RDP, Mobile Auth, Security, Credentials, Mental Models

posted by Sami Lehtinen   [ updated ]

  • Read a few more articles about MS SQL (Transact-SQL, T-SQL) performance issues and fixes. Most of the tips were of course extremely obvious: fix I/O, limit I/O, write sane queries, reuse queries (for query plan reuse), use indexing (but don't overuse it), separate data & log files, don't use the production database / server for scratch / temp data, avoid too small VLFs, don't over-allocate memory to MS SQL (starving other processes and the operating system of memory), and watch for semi-slow queries which are run repeatedly, like polling. Nothing but obvious stuff. But it's surprising how often these facts are forgotten.
  • One polling query is a good example. It's not actually slow, but it's run 50 million times per day. Even if it didn't return anything, it would still require a lot of resources on the server side, especially if there's data which needs to be filtered and/or sorted server side.
  • More interesting observations about Remote Desktop Protocol / Remote Desktop Connection design failures. It seems that there isn't any kind of activity / networking timeout: addresses banned at the firewall level can linger as established TCP connections indefinitely. I guess this is also one of the reasons RDP is so crappy and extremely easy to DoS. Negotiating a connection with the server up to a certain state and then just disappearing leaves the server with tied-up resources lingering forever. - Great, just great. Some protocols are just (a lot) better than others. I would understand this kind of 'quality' if it were my code for a customer who wanted the 'cheapest possible crappy ad-hoc' implementation: build something which mostly works in a few hours, copy-paste sample code from the net and make an extremely naive, shoddy, experimental piece of code which only works when everything is OK. But when production code from a major corporation is just as bad... well, it is. Nothing more to say. Restarting the Remote Desktop service throws out all active users, and also terminates these lingering connections.
  • So guys, next time you're writing production server code, just copy-paste something like this: Python http.server. Who needs nginx or anything else, when we've got a full-featured, robust and attack-resistant web server we can simply use. Actually I've been planning to do exactly that, but only for a project which handles a small number of requests from trusted sources and IP addresses. (See the first code sketch after this list.)
  • One important mobile user identification application by DNA doesn't allow the user to change the personal PIN code at all. That's just absolutely wonderful. There's no way to change the PIN, except to terminate the contract via customer service and then re-enable it with a new PIN. I'm not talking about lost PIN code recovery. I'm talking about changing a known PIN code.
  • Even after double and triple checking, the situation remains the same: for some reason discard doesn't seem to be working for my SSD with ext4. Funnily enough, it works great with vfat on the same drive. Should I see discard on the mount options row when checking what mount says? I would assume it should show up there. What's the best way to verify that discard is actually active? I found tons of guides with mostly bad hints and incorrect ways of checking it.
  • The Python 3.6.0 standard library hashlib also includes scrypt, BLAKE2, SHAKE and SHA-3 aka Keccak - Awesome - It's very important to have modern and compatible tools for key derivation, password protection & data hashing. A dupe from a previous post, but it doesn't matter; I studied and played quite a lot with that stuff. (See the second code sketch after this list.)
  • Some security / design flaws are just so devastatingly horrible that they can't even be mentioned. - So I'll shut up. - But these are really serious. - Let's hope they get fixed, but I'm highly skeptical.
  • Also found tons of basic stuff, like the use of default credentials. Which basically means that key business data isn't protected at all. But actually, nobody cares, or gives a bleep. And this is the norm at most companies. So no news here. At least there is authentication, even if it only requires the attacker to guess the default credentials.
  • Excellent article: Mental Models I Find Repeatedly Useful - This article covers many, many models which I've been talking about, as well as plenty which I haven't. I especially liked the Deciding section: business case, opportunity cost, intuition, local vs global optimum, decision trees, sunk cost, availability bias, confirmation bias, loss aversion. Naturally all of the listed items were familiar. Virtual team is something I've been talking about for decades, and often been a part of since the early 90s. I like high-context documents: there are many things which are 'obvious' and therefore don't need to be mentioned. Technical debt, such a classic. Unfortunately it's often hard or nearly impossible not to end up collecting (lots of) technical debt; it's a constant struggle. Zawinski's Law, hmm, uncomfortable laugh. Metcalfe's Law aka the network effect. Classics: MVP, product/market fit. "First-mover advantage vs first-mover disadvantage", that's a very good question.
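
A minimal sketch of the kind of "just copy-paste it" server meant above: Python's built-in http.server, limited to a couple of trusted addresses. The addresses and port are made up for the example, and this is only a sketch for a small trusted setup, not a substitute for a hardened front end like nginx.

    # Minimal http.server example, restricted to a small set of trusted clients.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    ALLOWED = {"127.0.0.1", "10.0.0.5"}  # hypothetical trusted addresses

    class RestrictedHandler(SimpleHTTPRequestHandler):
        def handle_one_request(self):
            # Drop anything that does not come from the trusted set.
            if self.client_address[0] not in ALLOWED:
                self.close_connection = True
                return
            super().handle_one_request()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), RestrictedHandler).serve_forever()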
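And a quick tour of the Python 3.6 hashlib additions mentioned above: scrypt for key derivation / password protection, BLAKE2 and SHA-3 for hashing, SHAKE for variable-length digests. The parameters are only illustrative, and scrypt requires Python built against OpenSSL 1.1+.

    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # Key derivation / password protection.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    print("scrypt key:", key.hex())

    # Hashing with the new algorithms.
    print("blake2b :", hashlib.blake2b(b"data", digest_size=32).hexdigest())
    print("sha3_256:", hashlib.sha3_256(b"data").hexdigest())
    print("shake128:", hashlib.shake_128(b"data").hexdigest(32))  # caller picks output length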

Admin, IoT, SMTP, Google+, RS-232C, 5 GHz WLAN, Python Scrypt, Switching Cost

posted by Sami Lehtinen   [ updated ]

  • Secure administration processes, where using default and shared credentials is forbidden. It seems that people are just as smart as you would expect: if the rules say you must not use passwords like 'default' or 'password', and you then check what kind of passwords are actually in use, you'll find that people set passwords like 'default1' and 'password1', which aren't on the forbidden password list. - Thank you for that, smart engineers. Often it feels like people are using all their ingenuity to be stupid. Of course that's one way to do it: instead of trying to do things smartly, you spend considerable effort on implementing something so that it technically meets the requirements, but in the most stupid possible way. - Next time someone complains that their disk is full, I'll just format it. Job done, now there's plenty of free disk space. - Be careful what you ask for. - Remember to make your requests idiot proof. - But I can warn you, it's surprisingly hard.
  • Using IoT devices to plant fabricated evidence. - Interesting discussion with many example cases, which I'm not going to elaborate on.
  • Getting sick and tired of people who just don't get how SMTP works. Yes, everything needs to be right.
  • Enjoyed Windows Task Scheduler unreliability issues once again. "Run on system startup" simply doesn't work. But I assume this isn't news to anyone. The workaround? Make the script run every 5 minutes, make it hang at the end, and set the task not to start again if it's already running. Works, but doesn't make any sense whatsoever: just pause / sleep forever at the end of the script (see the code sketch after this list). The fact that the workaround works perfectly just makes it 100% obvious it's an MS fail. Just can't stop loving things which don't work as they're supposed to, forcing you to create strange workarounds.
  • More elite engineering: Google+ says that the website doesn't work because I'm using the Chrome browser on Android. It's wonderful how they can do stuff like this. It seems they clearly need more top engineers, so they can fail even at the simplest things. Google's own Google Plus doesn't support a web browser made by Google on Google's own platform (Android). I wonder what they'll be supporting? Maybe the latest Edge or Windows Mobile? - Exact quote: "Your browser is not supported by Google+ You may have an outdated browser version or an unsupported browser type."
  • I've designed a fully immersive 3D website, which is the coolest site ever on this planet. But because you're not using SL-WebBrowser-B201 build G9 on SL-B82-Operating-System-C4.322, you're just unable to see it. But it's your loss. My stuff is so cool, you wouldn't get it anyway.
  • Latest tech! Just created an extremely traditional integration for one project using RS-232C at 9600,n,8,1 with CSV data over it. Neat. Btw, this works extremely reliably; no need for über tech. ;) The most unreliable part is the USB RS-232C adapter, which unfortunately isn't very reliable. Those have hardware and software issues, etc., even if we try to use high-quality ones.
  • Quote: 'The government also has the power to force companies to “maintain technical capabilities” that allow data collection through hacking and interception, and requires companies to remove “electronic protection” from data.' - Isn't that totally expected? Of course it is. It would be strange to assume anything else.
  • Radar vs WLAN (WiFi). Some 5 GHz WLAN articles mention that radar uses power levels like 60 W, compared to WiFi's 100 mW. But I just checked the info page about the local weather radar, and it's not 60 W: it's 250 kW pulse power with a 0.005 duty cycle. That's still 1250 W when converted to continuous power (250 kW × 0.005 = 1.25 kW), if I got that right. But without further analysis and information, it's hard to say how badly short strong pulses will disturb networking. When I was serving in the military, local military radars often caused repeated disturbance on radio, because the high power would totally overwhelm the receiver, even when operating on a totally different frequency.
  • Python 3.6.0 hashlib also includes scrypt. Awesome. Also Shake and SHA-3 are included.
  • Switching cost (switching barriers) is something which is often considered when changing service providers. Yet, unless the old service provider is bad enough, it makes sense to keep using the old provider and only use the new provider for new contracts.
  • Something different: Gaia (spacecraft / space telescope) and Alenia Aermacchi M-346 Master
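
A minimal sketch of the Task Scheduler workaround described above: schedule this every 5 minutes with "do not start a new instance if one is already running", do the real work once, then sleep forever so the scheduler's instance check keeps further runs away. The work function is a made-up placeholder.

    import time

    def do_startup_work():
        pass  # whatever should have been launched reliably at system start

    if __name__ == "__main__":
        do_startup_work()
        while True:
            time.sleep(3600)  # never exit; the next 5-minute trigger sees us still running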

IPv6, Cipher, Hash, Dev / Sysadmin / DevOps, Hdparm, Discard, Guerrilla, Iridium

posted Aug 12, 2017, 9:17 PM by Sami Lehtinen   [ updated Aug 12, 2017, 9:17 PM ]

  • The global IPv6 address scanning 33c3 (c3tv) talk "You can -j REJECT but you cannot hide" reminded me of a binary-split protocol I wrote to map out address spaces. It always splits the remaining space(s) in half, looks for neighbors, and repeats. The platform didn't provide a feature to list all entries, so I had to write code which searches for entries efficiently by splitting up the address space until all are found (see the first code sketch after this list). Yes, I've heard rumors that this addressing scheme might get changed, which would force me to re-invent a method to find 'unlisted' items from the address space efficiently. It remains to be seen if and when I'll continue this project. I've got several algorithmic projects open related to multiple things. No rocket science, but basic implementations which I find interesting. At least it's not the basic CRUD stuff which is very common in the ETL work I do daily.
  • I had to rethink what the difference between a cipher and a hash is, and I couldn't actually come up with any reasonable answer. Other than that they could potentially be designed and tested to resist different kinds of attacks. But if the hash or cipher works well, both should actually be as good for both purposes. Maybe I'll have to write a bit more about this. In both cases it's all about taking input and turning it into pseudo-random output. I read many articles giving extremely shallow and hollow reasoning about the differences, but none of them explained the actual technical difference.
  • New Ethernet standards between 1 Gbps and 10 Gbps allow longer distances with old(er) cabling: 2.5GBASE-T and 5GBASE-T. I haven't had very long cable runs, so 10 Gbps has been working just fine over Cat 6 cabling. Of course there's always single-mode fiber, if good old copper seems too slow.
  • Reminded myself about basic stuff: OFDM, IEEE 802.11n, IEEE 802.11ax, chaffing and winnowing, all-or-nothing transform, known-plaintext attack. - I guess I've written about almost all of these topics earlier, so no comments.
  • Software Developers Should Have Sysadmin Experience - Lol, I've written so much about this. "I just make this, I don't use it." Hahah, everyone knows what kind of crap that ends up being. How about supporting your own product, as well as maintaining it: eat your own dog food. Then there's the legendary "it works here, well and fast": first the developer uses the latest high-powered computer and tries that crappy code against a database with 10 rows. But in reality, customers run old junk which still somehow happens to work, and have databases of tens or hundreds of millions of rows. I'm just wondering why developers don't figure out that the code is going to be really slow.
  • Configured a bunch of servers at a new service provider to use IPv6. It's actually extremely easy with static addressing to use IPv6 only, when you're not responsible for the whole network and its configuration.
  • Can't stop loving false claims. Many say that the hdparm.conf issues were fixed years ago. But that's not true: only using hdparm in rc.local worked; changing settings in hdparm.conf didn't work out as promised. It's always necessary to confirm that the configuration actually works. Changing configuration is never enough.
  • Another story which is full of lies is the discard / TRIM story. Many tell you to check whether the device supports TRIM and then add discard to fstab. But you know what, that doesn't mean it's actually working or enabled. Several guides tell "how to check that", but all of them just tell you to check the configuration. None of the guides actually describe a process to see whether TRIM / discard is really being used (see the second code sketch after this list). Phew. I've been witnessing this kind of incident for ages, and I really hate this kind of brain-dead approach.
  • Is it encryption, if two pseudo-random streams are masked together? Technically it's not encryption, it's just a mask. But you have to know the original streams to get the data out, so is it still encryption?
  • Finished reading a book about Guerrilla warfare organization, tactics and strategies.
  • Iridium NEXT - Truly Global IoT / M2M communications.
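
A sketch of the binary-split search idea described above: the platform only answers "is there anything in this range?", so ranges that report a hit are halved until single addresses remain, and empty ranges are pruned. Here any_in_range() is a hypothetical stand-in for the platform's range query.

    def find_entries(lo, hi, any_in_range):
        """Return all occupied addresses in [lo, hi] using range queries only."""
        found = []
        stack = [(lo, hi)]
        while stack:
            a, b = stack.pop()
            if not any_in_range(a, b):
                continue                 # whole range empty, prune it
            if a == b:
                found.append(a)          # narrowed down to a single address
                continue
            mid = (a + b) // 2
            stack.append((a, mid))       # split in half and search both sides
            stack.append((mid + 1, b))
        return sorted(found)

    # Example with a fake backend holding three "unlisted" entries.
    entries = {3, 97, 200}
    print(find_entries(0, 255, lambda a, b: any(a <= e <= b for e in entries)))
    # -> [3, 97, 200]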
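And a sketch of checking discard from the kernel's own view instead of trusting guides, assuming a device name like sda: a non-zero discard_granularity in sysfs means the device advertises TRIM, and /proc/mounts shows whether the filesystem really got mounted with the discard option. Running "fstrim -v" on the mount point remains the practical end-to-end test.

    def device_supports_discard(dev="sda"):
        with open(f"/sys/block/{dev}/queue/discard_granularity") as f:
            return int(f.read().strip()) > 0

    def mounted_with_discard(mountpoint="/"):
        with open("/proc/mounts") as f:
            for line in f:
                _, mnt, _, opts, *_ = line.split()
                if mnt == mountpoint:
                    return "discard" in opts.split(",")
        return False

    if __name__ == "__main__":
        print("device advertises TRIM:", device_supports_discard("sda"))
        print("/ mounted with discard:", mounted_with_discard("/"))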

BLAKE2, Python HashLib, Stream vs Block, Strange Problems

posted Aug 5, 2017, 8:13 PM by Sami Lehtinen   [ updated Aug 5, 2017, 8:16 PM ]

  • Python 3.6 also includes the BLAKE2 hash functions in the hashlib standard library. - Nice. - BLAKE was inspired by the ChaCha stream cipher, so it would also be possible to build a block / stream cipher using BLAKE. I've often wondered what the actual difference is between a "perfect hash" and a "perfect block cipher", since both end up with 'random output'. I mean, a hash should be able to generate a stream of 'pseudo-random bits' which can be XORed with data to produce a block / stream cipher. Hash generation can also be iterated to generate new keys for each subsequent block, so there's no need for CBC on the data level. I guess the primary difference is that a 'cipher' is 'reversible' and hashes aren't. Yet with the right initial values, you can just encrypt twice to produce the same output, as in the case where the hash output is just XORed with the data, like when using OFB mode. I guess the real reason is that hashes and ciphers resist different kinds of attacks. But if they're 'perfect', there shouldn't be any difference? - Maybe? OFB also neatly converts any block cipher (or hash?) into a stream cipher; it's just a kind of trivial stretching. Some say trivial stretching should never be used, but that depends; afaik it shouldn't be a problem with a perfect hash. Of course if the hash is imperfect, then trivial stretching creates a bias problem. CTR: b[n] = F(k, n). - Just random thoughts (see the first code sketch after this list). Luckily I don't need to actually implement anything like that. If the key isn't included in the hash, as in OFB with the hash alone, then a known-plaintext attack can be used. Encryption is really simple: you just mix the data with the output of a safe pseudo-random function, initialized with a perfectly random value. Heheh, but in practice that's not simple at all.
  • Just wondering if this is a generic hash feature (I believe it isn't?) or just the Python implementation. I was assuming that hashes are block hashes for performance. But it seems that some of the hashes behave as stream hashes: hashing 'test!' and hashing ('t', 'e', 's', 't', '!') one byte per call still leads to the same end result (see the second code sketch after this list). Interesting. I would think this would make hashing performance worse than it could be, compared to using blocks. Or maybe it's just Python hashlib making hashing 'more user friendly' by maintaining state for the last non-full block, so digest can be called at any time. I guess too many people haven't been thinking about this, but it's actually a pretty basic question. It's usually claimed that most hashes are block hashes for performance. After a quick check, it was obvious that BLAKE2 is a stream hash / cipher, so this is exactly what should be expected. But that can't be generalized to other hashes, which might be block based.
  • Since Hacker News was full of crazy problem stories, here's one which took quite a long time to figure out. One customer had systems which were crashing repeatedly. After weeks of analysis, it turned out that all the computers were on the same segment of the electrical wiring as the kitchen appliances. No wonder there were occasional crashes when the voltage dropped; once fixed, the issue went away. Another story: PS/2 cables, or any 'digital' interface, can be really tricky to analyze, because the threshold between working and not working can be extremely hard to notice. Some people claim that the PS/2 interface is a standard. No it isn't; it's just like USB. I had 4 similar desktop computers on the desk, plus several cables and additional devices. I made a chart and permuted all possible combinations. It turned out exactly as unexpected: all of the devices had quality variance, so picking the right combination would work extremely well, and picking the wrong combination was guaranteed not to work. It's totally wrong to think that similar devices are actually similar. The third story is like the second: we had tons of similar RAM chips and a huge mess with crashing computers. After all the analysis and wondering, it turned out that part of the 'similar' RAM chips were substantially worse than the rest. The problem only occurred on certain types of motherboards, even with the same settings as the other motherboards. It's totally common that there are combinations which should work but actually won't. The same applies to Ethernet devices; I've proven that several times. Device A and switch B just won't work, whatever you do. Or they might work so that the link drops several times a day, or speed or duplex negotiation fails. Funnily enough, this can often be fixed by adding a different switch between the devices, so now it's A-C, C-B and everything works again. So once again, it's wrong to say that Ethernet devices will always work with other Ethernet devices.
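
A toy illustration of the OFB/CTR-with-a-hash idea above: a keyed BLAKE2 run in counter mode produces a keystream which is XORed with the data. Purely a sketch of the construction being discussed, not something to use instead of a reviewed cipher.

    import hashlib

    def hash_ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        out = bytearray()
        counter = 0
        while len(out) < length:
            # Keystream block i = keyed-BLAKE2b(nonce || counter_i), CTR style.
            block = hashlib.blake2b(nonce + counter.to_bytes(8, "big"), key=key).digest()
            out.extend(block)
            counter += 1
        return bytes(out[:length])

    def xor_mask(key: bytes, nonce: bytes, data: bytes) -> bytes:
        ks = hash_ctr_keystream(key, nonce, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    msg = b"hello stream cipher"
    ct = xor_mask(b"secret key", b"unique nonce", msg)   # encrypt
    pt = xor_mask(b"secret key", b"unique nonce", ct)    # decrypt is the same operation
    assert pt == msg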
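And a quick check of the streaming behaviour mentioned above: hashlib objects keep internal state, so feeding the data byte by byte gives exactly the same digest as hashing it in one call.

    import hashlib

    whole = hashlib.blake2b(b"test!").hexdigest()

    h = hashlib.blake2b()
    for byte in b"test!":
        h.update(bytes([byte]))   # one byte per update() call

    assert h.hexdigest() == whole
    print("incremental == one-shot:", h.hexdigest() == whole)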

Telegram, Karn's Algo, Grumpy, Layered Sec, 5 GHz WiFI/WLAN, Shimming, 0-RTT, Let's Encrypt

posted Jul 29, 2017, 10:09 PM by Sami Lehtinen   [ updated Jul 29, 2017, 10:10 PM ]

  • Telegram account deactivation usability fail. You receive a link to deactivate your account over Telegram. You click the link. They ask for your phone number. Then they send the deactivation code via Telegram. You open the chat to pick up the code... and can't return to the code entry window. Ok, you can re-open the link, and it prompts again for the phone number... Sigh, endless loop... Actually I didn't check whether the deactivation code is static, so you could presumably use the code you received together with the phone number to complete the deactivation. But usability issues like these are just so annoying; users need to figure out how to work around the bad usability to get things done.
  • Telegram login security fail. Either I just hacked Telegram's production servers, or they've got a really stupid bug / configuration fail on their servers. Go figure! Hahah. "We detected a login into your account from a new device on ##/01/2017 at ##:##:## UTC. Device: Web Location: Unknown (IP = 10.96.98.136) - The Telegram Team". If you missed the point, study this: 10.96.98.136 is a private (RFC 1918) address.
  • I just can't understand some managers. They love meetings, even when there's no value whatsoever in them and a lot of money and time is lost on travel. How about using, let's say, those 6 days to get something done, instead of losing them to international travel and 'high level' meetings which don't actually produce anything at all. - I guess everyone has their own style. But I prefer focusing on the things which matter and producing something substantial.
  • Karn's Algorithm - Good old stuff. Here's a great example of how a naive implementation of an algorithm can lead to not-so-fun issues.
  • I would have liked to publish the automatic firewall management scripting package as open source. But unfortunately it's a work project and therefore the employer's intellectual property. - Made lots of small improvements in a few days. Now it's absolutely awesome to use and configure: just set some basic parameters and let it run.
  • Basic stuff: SAML2 integration, OWASP Top 10, the classic 'SAP integration' blanket integration request. Various talks about NFC and smart token identification, as well as AD-based in-application access control. All very generic stuff.
  • Grumpy - Python in Go - Neat! Gotta try that at some point, when I encounter situation where I believe it would be beneficial.
  • Layered security? Saved an encrypted blob of data in an encrypted database in an encrypted archive in an encrypted file in an encrypted file system on an encrypted disk. That's a total of six independent layers of encryption. Is my tin foil hat too tight?
  • Great technology again. Sigh! A 150 Mbit/s 5 GHz WiFi/WLAN link gives 33 Mbit/s of throughput, while a 72 Mbit/s 2.4 GHz WiFi/WLAN link gives 55 Mbit/s. Graah! But why? Someone claimed that 5 GHz would be faster, but the actual throughput is lower. The 2.4 GHz WiFi also provides lower latency than the 5 GHz WiFi, even though 2.4 GHz is using 20 MHz of bandwidth and 5 GHz is using 40 MHz. Also tried the ~5.2 and ~5.7 GHz frequencies, which didn't make any difference. KW: Mbps, MB/s, speed, performance, wireless networking, fail.
  • Shimming was finally a success. I had to buy the right kind of thin stainless steel, round some of the edges, and bend it into a J shape so it's easy to operate. Many plastics aren't hard enough, and aluminium cracked too easily under tension and fatigue from repeated bending. But this new tool worked easily, quickly and on the first try. The aluminium tool also had sharp edges, which were pretty bad for the hands and might leave markings which would give the attempts away.
  • TLS 0-RTT session implementation and security. Early data might be snooped by stealing the session key. Replay attacks are also a concern, the classic kind. There should be replay attack protection on the application level, since TLS doesn't provide it. One way to work around this is to reject 0-RTT data for certain important messages, but it's almost guaranteed that some developers will forget that (see the code sketch after this list). There's also the early-data flood resource consumption attack. Performance optimization vs security trade-offs.
  • Donated money to Let's Encrypt. For better global and free Internet security.
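
A minimal sketch of the application-level replay protection mentioned above: each request carries a unique ID, and the server rejects IDs it has already seen within a time window. The class, names and in-memory store are invented for the illustration; a real deployment would need shared storage and a window matched to the 0-RTT policy.

    import time

    class ReplayGuard:
        def __init__(self, window_seconds=120):
            self.window = window_seconds
            self.seen = {}                      # request_id -> first-seen timestamp

        def accept(self, request_id: str) -> bool:
            now = time.time()
            # Drop entries that have aged out of the replay window.
            self.seen = {rid: ts for rid, ts in self.seen.items() if now - ts < self.window}
            if request_id in self.seen:
                return False                    # replay: reject or force a full handshake
            self.seen[request_id] = now
            return True

    guard = ReplayGuard()
    print(guard.accept("req-123"))  # True, first time
    print(guard.accept("req-123"))  # False, replayed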

Show-Session-Key, Documentation, NFV, Distributed RDP fail2ban, Whitelist vs Blacklist

posted Jul 22, 2017, 10:37 PM by Sami Lehtinen   [ updated Jul 22, 2017, 10:38 PM ]

  • --show-session-key can be used to reveal the session key of a single encrypted OpenPGP message, without giving out the private key. Proving that you've got access to the session key means that you've probably also got access to one of the private keys needed to decrypt that session key.
  • Producing so-called 'documentation' for the sake of documentation. There's too much documentation, it's inaccurate, and it doesn't reflect reality. But it has to exist, because they want documentation. Just the typical case; I've seen this happen over and over again. It's funny how project management is often a completely separate process, without any connection to the underlying reality. Maybe this is because project management professionals run the project as they wish it would go. But the truth is that reality is very different, and maybe that's why there are surprises later. Creating overlapping documentation is a very common way to end up with conflicting documentation, which is extremely annoying. Another classic question is managing documentation scope and detail level: some documents are way too detailed and others lack basically all information. A true classic. Personally I prefer very short, technical and informative documents. Some people love high-level visualizations, but when you start accounting for the details, it's easy to notice that the visualizations are misleading and do not actually reflect reality. Usually the only good thing about this documentation is that nobody's ever going to read it again; if lucky, it gets read once at the project start. It's also extremely easy to say that documentation is bad. But producing good documentation and/or a requirements specification is actually extremely hard and time consuming; that's also why it's so rare. It's a nice question how to keep documentation coherent, simple and clear, and still include all the technical details. Most projects also badly lack the knowledge, time and resources required to produce anything which would make sense. But I guess that's also more the norm than the exception.
  • Doing things just so you can say they're done, even if they won't serve any other purpose. One of my favorite things. I especially loved one requirement from the customer: this documentation isn't enough, we need more documentation. Excellent, just list a few things you need additional documentation for. Guess what, they never delivered any extra requirements. So typical. We need at least 200 pages of documentation. For what? - I don't know. It's nice to have documentation, right? - This is where the clueless 'documentation department' can step in and produce 200 pages of nice-looking documentation. It doesn't matter if it's accurate or not; it just needs to be nice documentation which might be somewhat related to the project. Could you add a few cool slides, plz? And maybe some hype words?
  • It's also a great question what information is essential: what can be safely assumed or left obvious, and which parts have to be documented in very fine detail.
  • Had a long discussion about Network Functions Virtualization (NFV) / Virtualized Network Functions (VNF, VNFI) with one service provider. This is all related to Management and Orchestration (MANO). Afaik, it's better to get a "private cloud from a public cloud" with software isolation than a true hardware private cloud, which I've talked about earlier. Now I've got two service providers which can provide this reliably.
  • It's amazing how persistent the (RDP / RDC / RDS) aka Microsoft Windows Remote Desktop attacks are. The network banning application I wrote has now banned more than 1000 IP addresses and over 40 subnets (/24 for IPv4 and /48 for IPv6). The most interesting observation is that a large number of systems in totally different data centers, networks and countries are being attacked by the same IP addresses. This means that some of the attackers are extremely active. This is also a great reason why centralized banning which protects the whole network works so well: if a server network in New York is attacked from some IP address, it makes perfect sense to ban the same IP address in London, Singapore and Helsinki. At first I thought it would be overkill, but in reality this is highly beneficial. - After some questions I've received: I would define this as a distributed fail2ban implemented on the network level. - Yes, it occasionally produces a few false positives. Not nice, but it's still much better than letting all the attacks through.
  • I've got many questions via social media channels. Here are some answers I wrote: Q: Why use banning, why not use a whitelist alone? A: A whitelist is used for environments where it's possible to get the required IP information for whitelisting. Unfortunately there are tons of users and businesses which seem to prefer consumer-grade Internet connections due to costs, which practically means that they end up using dynamic addresses, and the operator won't even offer a static or permanent dynamic address as an option. Another option would be using IP geolocation to build a blacklist: the whole national IP space would be allowed, foreign IP addresses would be blacklisted, and even national addresses could be banned on request. But managing that national IP list has proven problematic in practice. There are some environments which use this kind of large whitelist: let's say that company X is using cell phones from operator Y; then we just block everything except the IP address space of operator Y's mobile data. Works, but often requires constant updates, especially now that IPv4 space is running low. With IPv6 these kinds of rules are much more manageable. Q: Technology being used? A: I'm using SaltStack to collect failed authorization attempt data (among many other things, of course), and custom cloud orchestration to manage network access globally. Technically this means updating ban rules at three separate service providers, using their own APIs.

Switching network operator and effects on network latency

posted Jul 22, 2017, 10:21 PM by Sami Lehtinen   [ updated Jul 22, 2017, 10:21 PM ]

Just checked some backlog data from monitoring. Some physical servers were just switched from network operator A to network operator B. The automated monitoring and performance analysis gave interesting and mixed results. This is a great example of how hard it is to say whether something is better than something else; it depends on so many different factors. In this particular case, the only thing which changed was the network operator. IP addresses, servers, locations, everything else was kept the same.
 
Helsinki to London (UpCloud) network latency dropped by 4%.
Helsinki to Frankfurt (UpCloud) network latency increased by 35%.
Helsinki to Gravelines (OVH) network latency was decreased by 11%.
Helsinki to Strasbourg (OVH) network latency was increased by 41%.
 
So is the new network provider better or worse? It depends. This data is based on over 10k samples collected during a week, using the old network provider UpCloud and the new network provider Nebula, for a service hosted on Sigmatic.

Unfortunately we don't currently have any servers in Amsterdam. That data would have been interesting to see.

As you can see from the list, there are cases where latency increased very significantly due to the new routing. That's part of the Internet: it's very hard to tell what the latency between A and B will be, because it depends on so many different factors. Sure, you can give a rough guess, but that can still be off by an order of magnitude, especially if bad or very indirect routing happens in between.

Paranoid, RAIN, Google Sites SSL, AMP, Blocking, Batteries, ECC / Brainpool / GnuPG / PGP

posted Jul 16, 2017, 7:22 AM by Sami Lehtinen   [ updated Jul 16, 2017, 7:22 AM ]

  • Sometimes the timing of things just makes you paranoid. How is it possible that after a certain event, your mobile phone starts to reboot several times, while still having plenty of charge left? Those effects are just like a malware exploit attack gone slightly wrong, so the user is more than aware of the attack. This kind of malfunction happening within tens of minutes of the event just makes you highly paranoid. Yes, it could be totally random. But it has never happened before, and will probably never happen again. So why did it happen right in relation to something else? That's just so strange. Often this kind of strange event falls into the right context afterwards, and you kind of know that you noticed it; you just didn't know what it was when it happened.
  • Did that strange email attachment crash your PDF reader or word? Yep... But it doesn't mean anything, or maybe it does. Who knows.
  • RAIN RFID - GS1 UHF RFID Gen2 protocol, ISO/IEC 18000-63 - Identify, Locate, Authenticate, Engage - Internet of Things (IoT) - This is something with nearly limitless use cases, if it's just cheap enough to be widely deployed. RFID would be great; customers are asking for it all the time, but usually the prohibitive factor is the excess cost per unit. This would partially answer all the "can I track it" complaints which I've posted about several times earlier: either you've got a tracking device & ID, or you do not, and if you don't, well, you simply can't do it. Yet there are some cases where a FIFO queue or something similar can be used to track individual items without an individual ID, but that's only possible in very controlled environments.
  • Just one question: when will HTTPS be available for Google Sites with custom domains?
  • Do we really need Accelerated Mobile Pages (AMP)? As far as I can tell, all websites work extremely well on mobile, provided of course that the sites aren't full of absolutely bloated crap, as many are. Take generic news sites as an example: 90% of the content loaded when accessing a news article is irrelevant crap. No wonder the sites are slow. But fixing that doesn't require anything like AMP; it just needs cleaning up the mess and getting rid of the absolutely ridiculous excess weight.
  • Read an interesting article: TRIM dm-crypt problems? About SSDs, TRIM / DISCARD, and disk encryption with dm-crypt.
  • Today I finally wrote a script which processes logs and bans different kinds of DDoS attacks as well as slow-drip attacks. Some of the attacks were so bad and annoying that it just had to be done. I knew I'd have to get some kind of solution for this at some point, but there wasn't any perfectly suitable solution cheaply available, so I wrote one. It also includes handy whitelist management, and the detection engine supports custom rules which allow very efficient banning with a very low number of false positives. IPv4 and IPv6 are supported. When 8 /64s are blocked in the same /48, the /48 gets blocked and the individual /64 blocks are removed, because it's likely that the same attacker is just using different addresses from the same pool. The same happens with IPv4: /32 bans get converted to a /24 ban after 8 hits (see the code sketch after this list). It would also be possible to implement regional or AS-based ban / whitelists, but so far that hasn't been necessary.
  • Brain-dead engineering, once again. I've got plenty of devices which don't have any kind of indication of what kind of batteries they're supposed to use, not even voltage or polarity. What kind of idiot engineers produce this kind of stuff? - Really. I guess they assume that everyone has connectors and a lab power source, which you can use to safely find a good 'working voltage' without going too high. Yes, of course that's doable; I've done it several times. But still, it's just so silly. Is it CR123 or 4LR44? Maybe a 1.5 V button cell, or a 3 V one? Who knows. There are often several batteries which are viable based on dimensions alone. So annoying. I guess that's one of the ways they try to make people buy a new device instead of replacing the battery. Of course making the device nearly impossible to open is another way, as is soldering the battery permanently in place, etc. I've defeated all of those tricks, but it sure does annoy me badly.
  • I'm happy someone asked me this. - Yes, I'm very aware that the ECC key [A4F5 3032 18AA 5665 76B6  90AE FCD4 D06B 02B8 D42A] is a signing / authentication key only. Why? Because now I can send you a case-specific public key and sign it with my key. This is to address people complaining that OpenPGP / GnuPG doesn't provide "ephemeral keys". So if the matter is important, I'll generate a new key for the specific case in a secure environment and send it to you specifically for that discussion. This also makes it possible to get rid of the key as soon as the matter has been handled.
    Since then I've actually added a separate Brainpool 512-bit ECC key (RFC 5639) for encryption.
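
A sketch of the ban aggregation rule described above, using the standard library ipaddress module: once 8 single-address (IPv4 /32) or /64 (IPv6) bans fall inside the same /24 or /48, the whole prefix is banned instead. The thresholds and prefix lengths follow the text; everything else is illustrative.

    import ipaddress
    from collections import defaultdict

    THRESHOLD = 8

    def aggregate(bans):
        """bans: iterable of banned networks (IPv4 /32 or IPv6 /64). Returns the new ban set."""
        groups = defaultdict(set)
        for ban in map(ipaddress.ip_network, bans):
            parent_len = 24 if ban.version == 4 else 48
            groups[ban.supernet(new_prefix=parent_len)].add(ban)

        result = set()
        for parent, members in groups.items():
            if len(members) >= THRESHOLD:
                result.add(parent)          # ban the whole /24 or /48 ...
            else:
                result.update(members)      # ... otherwise keep the individual bans
        return result

    bans = [f"203.0.113.{i}/32" for i in range(8)] + ["198.51.100.7/32"]
    print(sorted(str(n) for n in aggregate(bans)))
    # ['198.51.100.7/32', '203.0.113.0/24']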

FIDO UAF, IoT DoS, Tech Debt, SCTP, RTP, OVH, MS SQL, App Performance, SSH U2F 2FA

posted Jul 9, 2017, 4:45 AM by Sami Lehtinen   [ updated Jul 9, 2017, 4:46 AM ]

  • Checked out: FIDO UAF 1.1 and U2F 1.1 specifications, again.
  • In another case, a customer complained about... all the usual stuff: programs and everything not working, etc. Finally when I got there and analyzed the whole network, it became painfully evident that it was their multimedia TV, which had an Ethernet connection and flooded the network with such an amount of crap that everything else stopped working completely. This is just another example of how much fun we can expect from IoT devices, when any of them can launch a network-crippling DoS at will without any sane reason, thanks to software and hardware bugs and otherwise just the usual low-quality crap hardware & software. We're going to have so much fun in the future. In this case it was wired, and wired is actually much easier to analyze and contain than wireless DoS flood devices. If one were being nasty, it would be just so much fun DoSing different critical radio devices at random locations and times, but at times when you know they're actually needed for something important, and not doing it for so long that you'd actually get caught. Boom, have fun. Don't ever trust anything wireless. (EW, EME, EM) One of the problems of analyzing WiFi networks is that most devices don't provide any kind of even nearly useful and suitable debug information. There are proper WiFi and spectrum analyzer devices, but those aren't usually available when you need them.
  • Excellent article about the human cost of tech debt - All true. Sometimes bad code is good enough; sometimes you'll spend years making perfect code which never gets any actual use. Fixing tech-debt code or developing on top of it is horrible: stuff is collapsing on you all the time, and it's hard to even figure out what's causing the issue. All this leads to deadline problems, because it requires high work estimates. But customers rarely want anything done properly if it's cheaper done shoddily and 'just works well enough'. Team infighting, sure. Don't touch it even if it's barely working, guaranteed. WinForms, no comment. - A truly great post!
  • Read a few good articles about WebRTC and SCTP & RTP. Avoiding all the joys of UDP and NAT traversal using different kind of hole punching methods etc. Same Origin Policy (SOP).
  • OVH is expanding to Frankfurt. From Finland it's hard to decide whether Frankfurt or Amsterdam is the better location for servers, but with the newish fiber, Frankfurt might be better for Finnish customers. It's sad that the best providers don't yet operate in Finland. Warsaw is also a rising star on the data center map as a major Eastern European location; OVH has a Warsaw data center too.
  • Sputnik News is using Telegram Channel to push content to mobile users, instead of yet another annoying mobile app to be installed. Neat.
  • Enjoyed deep discussions about the MS SQL Simple Recovery Model vs the Full Recovery Model. Well, it depends; both options have their own use cases. Lots of discussion about different use cases, space management, backups, performance, etc. A lot of discussion about the transaction log, its physical architecture and Virtual Log File (VLF) segments, sizes, locations and file groups, and how all this affects the file system and storage. Log truncation and the reasons why the log grows, and so on. The circular log buffer which reuses log segments without backup. Had to link this blog post for the guys.
  • Wrote a leaflet about application performance: caching, locking, transactions, internal processing performance, data batching, and generic parallelism & concurrency. How to resolve common performance bottlenecks. (A tiny batching example after this list.)
  • SSH U2F 2FA with Teleport, which already supported TOTP earlier. Wrong, TOTP isn't using a common shared secret; the secret is naturally realm specific. Not true, U2F doesn't protect the user from advanced MITM attacks. U2F security isn't any better than SMS-based authentication, like Mobile ID. U2F doesn't provide built-in protection against MITM either. Mobile ID SMS authentication actually provides better security than U2F, as far as I can see. Hmm, YubiKey doesn't trust them or the connection; that's something I have to check, and it's somehow related to the U2F protocol. That URI & TLS Channel ID binding isn't really strong, afaik; it only prevents TLS-level MITM but doesn't block device-level MITM, which is the problem with most schemes. As long as the data isn't signed / validated, you're just signing a "random blob" without knowing what it really is. Application-specific keys are of course obvious. U2F protocol details.
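
One of the leaflet topics above (data batching) in miniature: batching writes instead of making a round trip per row. Shown with sqlite3 only because it ships with Python; the same executemany pattern applies to other DB-API drivers.

    import sqlite3

    rows = [(i, f"item-{i}") for i in range(10_000)]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

    # Slow pattern: one statement (and often one commit) per row.
    # for r in rows:
    #     conn.execute("INSERT INTO items VALUES (?, ?)", r)
    #     conn.commit()

    # Batched pattern: single transaction, single executemany call.
    with conn:
        conn.executemany("INSERT INTO items VALUES (?, ?)", rows)

    print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])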

Time Management, Signal, ACPI Shutdown, Mobile Pay, REST, WebSockets, Data Lake, Telegram

posted Jul 3, 2017, 1:44 AM by Sami Lehtinen   [ updated Jul 3, 2017, 1:44 AM ]

  • Why time management is ruining our lives - Absolutely excellent questions. Nothing to add. I'm pretty sure everyone has thought about these things over and over and formed their own opinion about all this.
  • Parkinson's law - https://en.wikipedia.org/wiki/Parkinson's_law - Nothing to add about that either. It's clear that things which don't need to be done rarely get done. But that's just optimization, aka lazy evaluation. I like it. There's no point in doing something which isn't required to be done.
  • I'm wondering why a privacy app like Signal requires so many permissions on the phone. Afaik, encrypted messaging shouldn't basically require anything other than networking; everything else should be optional.
  • How to make Windows Server shutdown ACPI-friendly, using Group Policy or the registry.
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system]
    "ShutdownWithoutLogon"=dword:00000001
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows]
    "ShutdownWarningDialogTimeout"=dword:00000001
    Alternatively you can use Group Policy: under Security Options, set "Shutdown: Allow system to be shut down without having to log on" to Enabled.
  • A lot of discussion about different mobile payments: the technologies, how they operate at a detailed level, and how to launch products. Covering all aspects: design, technology, usability, marketing, support, users, processes, advertising, sales, overall politics, production environments, certification processes, customer targeting, maintenance fees, pricing models, value proposition, technical platforms, etc.
  • One client wrote: "You are a valuable worker, I would hire you for the reason how you see the things: You have a free mind, free of ideologies, pure rational thinking."
  • Wondered about one project, and why they've split their functions in two: many things happen over a WebSocket interface, but some functions are called using a traditional REST API. The WebSocket API uses JSON and request IDs, so it's completely asynchronous. I personally don't see major benefits in splitting the interface in two. Of course I've got my own abstraction API layer, so my code doesn't care at all whether it's the WebSocket or the REST API being used to get the data (a rough sketch after this list). But still, I prefer a single method if there isn't a good reason to have multiple alternatives. Some messages over the REST API are so small that it doesn't make sense either. I could understand an approach where WS is used for 'notification' data and the REST API is used to actually push and pull the large data sets. Currently one thread handles the WebSocket, and a thread pool takes care of the REST requests.
  • Yes, I'm a full stack developer / administrator / business guy. Jack of all trades. I can buy, build, set up, configure, develop, run and manage a whole system, its business processes and customers, everything alone from scratch. It requires a lot of studying all the time, but I think it's worth it. The only thing limiting these activities is time.
  • Data Warehouse vs Data Lake, What's the difference? - Nice post.
  • This week seems to be rant-only week: Telegram bugs. I don't get it; some old photos I shared about a week ago in a Telegram secret chat just suddenly appeared at the end of a new discussion stream. I don't really get it. The photos also show a date which is about one week old, but they still sit at the end of a stream which starts today. New posts after those photos didn't jump back to today either, but also appear to be one week old, based on the timestamps. I'm just so aww aww aww about this great technology developed by elite coders. Even more fun: I closed Telegram, restarted it, and the images were gone. I wonder if they can EVER create anything secure, when the very basic things are so hard that they're desperate to get even those to barely work. - Ding, this applies to just so many programs, platforms and environments.
  • I'll be posting more details of the integration work I've done in the future. For the documentation I drew some flow charts and other material, which I can tokenize in a suitable way.
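
A rough sketch of the kind of abstraction layer mentioned in the WebSocket/REST item above: calling code requests data through one interface and doesn't care whether a WebSocket or REST transport serves it. All class and method names here are invented for the illustration.

    from abc import ABC, abstractmethod

    class Transport(ABC):
        @abstractmethod
        def request(self, method: str, payload: dict) -> dict: ...

    class RestTransport(Transport):
        def request(self, method, payload):
            # e.g. POST the payload to the REST endpoint for 'method' and return the JSON body
            raise NotImplementedError

    class WebSocketTransport(Transport):
        def request(self, method, payload):
            # e.g. send {"id": ..., "method": method, "params": payload} over the socket
            # and wait for the response message carrying the same id
            raise NotImplementedError

    class Client:
        def __init__(self, transport: Transport):
            self.transport = transport   # the caller never sees which transport is in use

        def get_orders(self):
            return self.transport.request("orders.list", {})

    # client = Client(WebSocketTransport())   # or Client(RestTransport())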
