Blog

My personal blog is about stuff I do, like and dislike. If you have any questions, feel free to contact me. My views and opinions are naturally my own personal thoughts and do not represent my employer or any other organization.

[ Full list of blog posts ]

Forget Technical Debt - Comments, Phishing trap

posted Sep 25, 2016, 8:32 AM by Sami Lehtinen   [ updated Sep 25, 2016, 8:33 AM ]

  • Forget Technical Debt - Yet another great article on this topic. As with everything else, there's no single 'right' solution which would be better than all other options. I really liked the points about lack of tests and naming issues. Naming issues are legendary. Often features get totally and absolutely misleading names, making it practically impossible to figure out what they do unless you know exactly what it's all about. The culture of messing things up isn't limited to code alone. It's an overall process, as they say. If everything is totally messed up before it even reaches the coders, the situation is already pretty bad. The reasons for things being totally messed up vary, and in some cases it feels like the norm. It's hard to write tests for cases which are actually unknown when the code is being written. Unfortunately it's quite common that stuff is developed without knowing what it should do. Don't ask, it's just that way. This is an awesome point: Write and run unit, acceptance, approval, and integration tests. - It would be so nice if it were possible. But in most cases customers don't appreciate it, and therefore these things don't get done, because nobody's paying for them. - When you rented your small economy car for a day, did they have a professional racing driver check the tire pressures, adjust the shock absorbers and run brake tests on a bench? They didn't? Why did you rent the car from that rental shop? It would be so nice to make awesome stuff, but as they say in the world of supercars, it's a very limited market. It's like acceptance testing: most customers can't produce any kind of clear test cases anyway. Some customers can, and that's an awesome luxury. But in most cases, especially when behind schedule, it's let's just put it into production and fix it later if something breaks down. In that case it's easy to say that something will break down. But that's what you're asking for. Hurtling toward entropy doesn't have anything to do with programming or code. It's a totally normal thing for any process that isn't properly managed. I often find it funny that they're talking about code. It doesn't matter what the process or task is: if it's not properly managed, it's going to descend into entropy. Take something simple like invoicing or stock management. Those will fail just like code if things aren't kept under tight control and checks and process guides aren't followed. As a POS guy, it's really common to see people saying that your stock-keeping code isn't working. When you check the logs, everything works. They'll agree with you. They just don't want to immediately say that their staff is stealing, or that they simply don't bother to input the stock transactions. After that it's totally natural that the stock is messed up. What's the surprise there? Actually it's quite interesting to see how large chains can totally screw up their stock management & logistics. Yet all the tips in this article are good. It's always just very important (and hard) to know which points are worth the extra work. It's like the Efficient Frontier in investing. If a small start-up launches and spends 10 years building its ultimate cloud infrastructure and platform to build its services on, it's probably not going to go very well. Project & team size is an essential question here too. Continuous Delivery is something I think should be obvious, but in many cases companies actually make the release cycle sparser so things 'can be tested properly'.
Which of course means that the automated testing is lacking, as well as the automated build, update and rollback stuff. This just makes all the development iterations way slower and the updates larger. Which means that when you finally release the 'great new version' everyone's been waiting for, it's actually going to be more like running a nuclear bomb test. After that there will be panic to get the most debilitating issues fixed. Which will be fixed hastily, causing yet more trouble and more things to fix. I've got to be honest, I've been laughed at when I said that automatic scaling of computing resources would be smart. It's impossible, you know. Some of the examples in the article about archaic projects feel so painful it's even uncomfortable to read. The Modern Agile principles sounded great. Yet agility actually conflicts with other things mentioned in this article. Adding all the heavy administrative processes reduces agility a lot. It's always about finding the correct balance for the situation. Makers doing maintenance: been there, done that. Actually doing that every day. I build programs to be maintained, not just to 'make it work' and cause havoc later. I've had to explain that at times when something is taking too long or is too expensive, or when I should have gotten something done based on an extremely bad specification. I don't do something just so I can say it's done. I do things to provide value to the customer and end users. If I don't see it producing value, or working properly, there's no point in doing it. Unless the customer specifically acknowledges and agrees that this is something really crappy and ad hoc and we can fix it later. Then it's totally acceptable.
  • Kraken Phishing Warning. Another great example of how easy it is to fall prey to phishing unless you know what you're doing.

Practical Data Security, Data Leaks, Linux Desktop Touchscreen Calibration

posted Sep 25, 2016, 8:22 AM by Sami Lehtinen   [ updated Sep 25, 2016, 8:23 AM ]

  • Laughed at 'practical data security procedures' which are actually being followed. A bunch of laptops bought from an online bankruptcy auction contained everything, absolutely everything, from the previous business they had been used at. I got job applications, employment contract data, so much email I didn't bother to go through it. Sales and purchase orders, salary information. Employee identity data and stuff like that. All the usual 'business data'. I didn't even bother to run forensics tools because all this data was directly available. - So this is business as usual. Everything leaks out just as it is. When someone says that can't happen: it happens all the time, continuously. What's there to wonder about, and why deny it? Because they had already leaked everything to an 'unauthorized' party, I could have just created one big torrent of it and released it for fun. They leaked the data, I just made it a 'bit more available'. Or maybe something I could explain away as a mistake: install a web server which shares the root path publicly, and Google can then come and index everything. It's not malice, it's just incompetence. I wanted to see how a web server works. I wonder what they would have thought about that. I haven't signed any NDA about that data, so it's not protected by any agreement. The online auction purchase didn't say anything about using or protecting any data that might get passed along. Well, I actually wiped the drives and removed the data. But this should work as a reminder that you'll never know what happens if you do stupid things like that. The business was bankrupt anyway, but what about all the employee data, vendor contracts and other 3rd party data which could have been exposed in the process? Usually when I talk about stuff like the above, people just laugh. They think it's some kind of NSA, CIA or secret agent hacker movie stuff. It's so ridiculous to talk about data security; nobody actually cares about it. - Or maybe the audience just isn't the right one? - Try talking about Olympic athlete nutrition in a local bar where alcoholism is a way of life.
  • This is one of the great cultural examples I've talked about earlier. Is the problem that there are constant data leaks, or is the problem the person who raises the alarm about data leaks and says that these should obviously be blocked? Which one is the real problem? In many organizations it seems that the problem is the talk about the problem. The data leak issue can be largely ignored until something really massive blows up. So it doesn't matter. And following proper data security procedures would just add more expensive work and non-productive overhead. It's wonderful how often people believe that 'having a password' or 'deleting files' will secure data in some way. This seems to be a pretty common misunderstanding even among professionals.
  • The stuff above just made me kind of laugh: all that high-tech data security marketing stuff, and then there's the actual reality. Just as I've said about cloud services: you'll never know where your data will end up, nor can you ever delete it. The only way to make the process a bit harder is proper pre-cloud encryption, which basically means that nobody is able to do anything with the data even if it leaks.
  • Got a bunch of old silent Atom computers, which could be used as Linux servers / desktops for small loads / random use. It's interesting to notice that those computers with 4 GB of RAM and an Atom CPU are clearly slower with the 64-bit version than the 32-bit version. I guess that's also a complex question, but in this case it was a very visible difference, not just a few percent but more like tens of percent. Didn't time it, but the lag was very clearly noticeable in the GUI.
  • Ubuntu touchscreen calibration: it just worked beautifully. Really nice! Worked like a charm. Just put the xinput calibration commands in .profile. There was also a secondary trap: the device name contained a Unicode ™ character, which is a bit hard to deliver on the command line, and xinput_calibrator uses device names by default. This was easily fixed by using the device XID instead of its name (see the snippet below). After that everything was working perfectly!
    sudo apt-get install xinput-calibrator
    xinput_calibrator --output-type xinput
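A rough example of the approach; the device id and calibration values below are placeholders only, check your own with xinput list:
    # List input devices and note the touchscreen's numeric id,
    # which avoids having to type the Unicode (tm) in the device name.
    xinput list
    # Run the calibrator against that id and let it print ready-made xinput commands.
    xinput_calibrator --device 11 --output-type xinput
    # Paste the command it prints into ~/.profile; it will look roughly like this:
    xinput set-prop 11 "Evdev Axis Calibration" 75 3945 120 3895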

Electronic Locking & Key Management, Webhooks, Duplicity, lftp, Apple Software

posted Sep 25, 2016, 8:19 AM by Sami Lehtinen   [ updated Sep 25, 2016, 8:20 AM ]

  • This electronic key management and access control is just like firewalls: usually more or less misconfigured, sometimes extremely badly misconfigured for extended periods.
  • Systems require constant maintenance: one webhook just dropped off without any warning. The code had neither an automatic webhook reactivation feature nor a check to confirm that the webhook was still active, so the system broke down. -> Trivial to fix when noticed, but it always causes a temporary service interruption. (A minimal sketch of such a check follows after this list.)
  • The Duplicity + lftp + GnuTLS issue was so wonderful. Somehow the default mode was changed by the Ubuntu distribution upgrade (which upgraded lftp) so that the ftps default changed from explicit to implicit, and this caused the problem. Reconfiguring the settings fixed the issue. (Hopefully; verify.) Well well. It took a while; most of the instructions were totally misleading. They claimed it's required to ignore the certificate. Nope, that's all BS. It has nothing to do with the certificate. What's the real problem? Well, it seems that it has been changed so that ftps now means implicit TLS. If you want to use FTPES, aka explicit FTPS, which was earlier the default option, you'll have to use ftp:// as if for plain text and then set additional configuration file parameters (in ~/.lftp/rc, to be exact) to require starting an encrypted explicit FTPS session after that: set ftp:ssl-force on and set ftp:ssl-protect-data on (config snippet after this list). Sure, I blogged those, so I can Google my own site if I start to swear about this stuff again. Also, I've been so happy about 'software quality' at times. Another interesting thing was that lftp -vvv ftp://mysite, which should be verbose according to the man pages, actually shows the version message which should be shown with --version. I just hate it when documentation is absolutely misleading and incorrect. Thank you for that too. Yet opening lftp and giving the command debug 9 did the trick, and now I can clearly see that ftp://mysite without any kind of TLS request actually does AUTH TLS, the certificate is checked, and PROT P mode is negotiated and used for data transfer, so it's working. - Done. I just wish things wouldn't always be this annoying. But this is business as usual. Go figure, it'll work; you're just doing it wrong if it doesn't. I'm sure the default change is documented somewhere, I just didn't bother to dig it up, now that it works.
  • Can Apple engineers really make such lousy software that you can't sync over a 3G / 4G data connection? They require WiFi (WLAN) for syncing. That really doesn't make any sense at all. Checked several forums and discussions and that seems to be the case. There's an utterly stupid but working workaround: turn the mobile hotspot on, connect the phone to the computer using USB, and then set the computer to sync over the phone's WiFi connection. But that, if anything, is really backwards. It would be so simple to have an option for which method(s) are allowed / disallowed for syncing, but phew, go figure.
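As a rough sketch of the kind of webhook liveness check mentioned above; the API endpoints and field names here are purely hypothetical, every provider has its own:
    # Hypothetical sketch: verify the webhook is still registered, re-register it if not.
    import requests

    API = "https://api.example.com/v1"            # made-up provider API
    HEADERS = {"Authorization": "Bearer ..."}     # credentials omitted on purpose
    CALLBACK = "https://my.service.example/hook"  # our receiving endpoint

    def ensure_webhook():
        hooks = requests.get(API + "/webhooks", headers=HEADERS, timeout=30).json()
        active = any(h.get("url") == CALLBACK and h.get("active") for h in hooks)
        if not active:
            # The webhook silently dropped off -> re-register instead of breaking down.
            r = requests.post(API + "/webhooks", headers=HEADERS, timeout=30,
                              json={"url": CALLBACK, "events": ["*"]})
            r.raise_for_status()

    if __name__ == "__main__":
        ensure_webhook()  # run periodically, e.g. from cron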
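And for reference, the lftp settings mentioned above, which as far as I can tell belong in the per-user config file (~/.lftp/rc):
    # ~/.lftp/rc - force explicit TLS (FTPES) on plain ftp:// URLs
    set ftp:ssl-force on
    set ftp:ssl-protect-data on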

Relays & Gateways, Random Problems, lftp ftps gnutls, HTTP/2

posted Sep 18, 2016, 5:59 AM by Sami Lehtinen   [ updated Sep 18, 2016, 6:01 AM ]

  • I generally really dislike running 'relays' due to responsibility and reliability matters. But it seems inevitable that you've got to run them at times. Therefore I've built a small but well-working high-availability generic data relay which can cross protocol gaps. It allows uploading, downloading, sending and receiving data over HTTP, HTTPS, FTP, FTPS, SFTP, SMTP and SMTPS, with authentication for all protocols naturally. On top there's a small rule engine which processes received data. So you can send email (SMTPS) using SFTP, or receive HTTPS/POST data over FTPS. Why? Well, because not all programs support those easily, some users unfortunately use dynamic IPs, some other organizations do not allow connections from dynamic IPs, etc. That single module can easily bridge so many communication gaps that it's a really vital tool for many users and organizations daily. One major benefit of using this kind of relay is proper logging. This service also allows simple and easy queuing. Some apps are so badly coded that they don't even have proper retry logic. So this HA service is available and will receive the data; if the final destination isn't available, it doesn't matter, and the data gets spooled and queued for final delivery (a tiny sketch of that idea follows after this list). If there's a problem, it's usually pretty easy to check the relay logs and decide which side of the integration is responsible for the problem. Because otherwise they're always claiming that the fault is caused by the other party, ha ha. Then it's just easy to check the logs and say: I would really appreciate it if you would stop lying right now and get your junk fixed.
  • This is strictly a violation of the TCP specification (CloudFlare) - 'Random' problems usually just mean you don't understand what you're doing and how things work. Normal day at the office. As an example, adding comment data to an SQL query stops the server from crashing. My guess is that the timeout was added to deal with crappy code and then created a not-so-obvious secondary issue which arises only in certain cases. Business as usual. A workaround to deal with bad code creates more problems. Then you end up having multiple layers of bad code with complex state in each layer. Business as usual. It works exactly as it's designed to work. Yet that doesn't mean it makes common sense, nor that it works as you think it would. Same stuff every day. You just need to keep digging until you find the problem. This is a good example of that. I'm usually the guy who has to dig deep and find out whatever is causing the problem everyone else is claiming is impossible to solve or 'totally random'.
  • iMessage E2EE encryption seems flaky - Crypto is good as long as nobody takes a look at it.
  • After the Ubuntu distribution upgrade to 16.04, duplicity doesn't work and gives this error with my FTPS server: "Fatal error: gnutls_handshake: An unexpected TLS packet was received." Strange. Duplicity and file transfer protocols have been such a problem earlier; things are broken on multiple levels, and if you change something that's broken, you'll just find out that the alternate method is broken too, but in a different way. That's just so frustrating. But still business as usual, every day unfortunately.
  • Journey to HTTP/2 - Really nice article. Comments: It seems that many people talking about HTTP don't know about Gopher. - Some people also don't know that web sites can just as well be served over FTP as over HTTP. Anonymous FTP works just as well for read-only sites. - Long post, nothing new at all. Just some basics. kw: HPACK, PUSH, TCP, keep-alive, HTTP, H2, H2C, HTTP/2, PUSH_PROMISE, TLS, SPDY.
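A tiny sketch of the spool-and-retry idea from the relay bullet above; this is not the actual implementation, just the core principle, and the paths and deliver() call are placeholders:
    # Minimal spool-and-retry sketch: accept data now, deliver when the destination is up.
    import os, time, uuid

    SPOOL = "/var/spool/relay"  # placeholder spool directory

    def receive(payload: bytes) -> None:
        # Always accept and persist first, so the sender never needs its own retry logic.
        with open(os.path.join(SPOOL, uuid.uuid4().hex), "wb") as f:
            f.write(payload)

    def deliver(payload: bytes) -> None:
        # Placeholder: forward over HTTPS/FTPS/SFTP/SMTPS according to the rule engine.
        raise NotImplementedError

    def delivery_loop() -> None:
        while True:
            for name in sorted(os.listdir(SPOOL)):
                path = os.path.join(SPOOL, name)
                with open(path, "rb") as f:
                    data = f.read()
                try:
                    deliver(data)
                    os.remove(path)   # drop from the queue only after successful delivery
                except Exception:
                    pass              # destination down -> keep it spooled and log it
            time.sleep(30)            # retry interval, also avoids busy-looping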

DiskFiltration, Discussion Summary, Credential Management

posted Sep 18, 2016, 5:56 AM by Sami Lehtinen   [ updated Sep 18, 2016, 5:57 AM ]

  • DiskFiltration: Data Exfiltration over Hard Drive Noise - Yes. Anything over anything, as has been said. If it allows "any form of transfer of data", it can be used to bridge anything over it. Nothing new in the field of data exfiltration. Air-gapping won't protect your systems. Also don't forget the classic TEMPEST leaks. Some people claimed that wired keyboards don't transmit wireless signals, but that's total BS. Most electronic devices do emit wireless signals. Devices COULD be made so that they won't transmit wireless signals, or transmit much weaker ones, but that would be much more expensive. And therefore manufacturers naturally just don't care. Every wire is technically just an antenna. This is also yet another (not new) covert communication channel example. Unsurprisingly they list stuff like electromagnetic, optical, thermal, and acoustic (physical vibration) based communication channels. All of those allow communication in some form or another. If the data center has an Internet-connected temperature monitoring system and racked air-gapped servers, the temperature monitoring system can be used to extract data from the air-gapped servers, etc. But isn't that slow? Sure it is. But stuff like encryption keys doesn't actually require a lot of data: even at a couple of bits per second, a 256-bit key leaks out in a few minutes. Other obvious stuff: like a LED being visible to a security camera, and so on.
  • Short summary of stuff we've been talking about with friends: Web of Trust (WoT), identity management. Whitelisting, blacklisting, different trust models, approval processes and process automation. Signatures, identity verification. Effects of Sybil attacks. How to detect promiscuous users. Trust should be based on context and perspective. Trust scoring. Detection of possible identity & reputation manipulation attacks. Trust and data crowd screening and reporting. Different rating models. Stakeholders, proof of reserves. Liquidity, bidding systems and auctions. Possible timed English auctions. DSA/ECDSA threshold signature schemes. WoT can be screwed up by stupid users, and some people simply aren't able to maintain even basic key management privacy requirements, or they misidentify the trust level of keys in the WoT. Because there are always disputes, it's a great question whether it's about terrorists or freedom fighters. Which view is the 'correct one', and how is truth defined after all? Who says that the majority's opinion is the right one? Pressuring peers by giving negative feedback, as an attack or as a way to coerce them into co-operation. Several problems with revocation messages and the actual distribution of revocation data. What if other peers are supposed to update data which they can't properly sign without having the private key? Of course in this case it could be possible to delegate the signing, using a per-signer secondary public/private key pair to sign the final document, and limit the access granted in the signature made using my own key to the individual document being signed with the secondary key. That would work. So someone else can be authorized by me to partially update a document which I've actually signed, and the updated part is signed using the delegated key (a minimal sketch follows after this list). Just like I described in the OpenPGP / GnuPG ephemeral keys blog post. That's trivial.
  • In one case, a user access token was still good after a year, even though the user had been fired. That's the normal truth of user access control. Even if vendors claim all kinds of stuff, it doesn't matter, because it won't happen anyway.
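A minimal sketch of the delegated-signing idea above, using Ed25519 from the Python cryptography library; key handling and the document structure are stripped down to the bare bones:
    # Sketch: the primary key authorizes a secondary key for one specific document,
    # and that secondary key then signs the updated section.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives import serialization

    primary = Ed25519PrivateKey.generate()    # my long-term key
    delegate = Ed25519PrivateKey.generate()   # per-signer secondary key

    doc_id = b"document-1234"                 # placeholder document identifier
    delegate_pub = delegate.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # Delegation statement: "this secondary key may update this one document", signed by me.
    delegation = doc_id + b"|" + delegate_pub
    delegation_sig = primary.sign(delegation)

    # Later the delegate updates part of the document and signs it with the secondary key.
    updated_part = b"new content for section 3"
    update_sig = delegate.sign(doc_id + b"|" + updated_part)

    # A verifier checks the whole chain: primary key -> delegation -> updated part.
    primary.public_key().verify(delegation_sig, delegation)
    Ed25519PublicKey.from_public_bytes(delegate_pub).verify(
        update_sig, doc_id + b"|" + updated_part)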

Electronic Locking, Brother & Ubuntu, GAE Python 3, Programming, Payments, Outsourcing, Hosting

posted Sep 17, 2016, 1:36 AM by Sami Lehtinen   [ updated Sep 17, 2016, 1:36 AM ]

  • Drawbacks of electronic locking: unfortunately, in some cases it doesn't deliver as promised. In some cases key identifiers might wear off so badly that it's impossible to reliably identify keys. Another problem is that locking management is outsourced in a deeply nested chain. This means that any change that needs to get done, like disabling lost or stolen keys, might take much longer than the vendors advertise. They usually claim stuff like "immediately". What a claim. In truth it might take several days to get the task actually accomplished. It's just like outsourcing firewall management. I've seen so many times how a task which should take about one minute ends up taking weeks or even months in some cases. This unfortunately applies to advanced managed electronic locking and access control solutions too. Right now, disabling a lost key has taken over 24 hours and counting. So much for all those BS advantages the sellers are always talking about.
  • Even more fun configuring a Brother multifunction printer with Ubuntu (Lubuntu to be exact, but so what) and scanning over the network (TCP). Luckily I've blogged all the essential things, aka the hidden dependencies not publicly announced, as well as the commands to activate the printer on the computer.
  • Google App Engine Python 3 support - Finally. Well, maybe that's too late? Yet it's good to have, in case there are some projects which I would prefer to run on App Engine. So far Linux VPS servers have provided a lot better bang for the buck. But of course those solutions do not scale like App Engine does. Yet for limited loads, they are so much cheaper to run.
  • Found more code made by elite coders which loops over data without any sleep, virtually consuming all the resources being thrown at it. So, how much server capacity is enough? Add 100 new machines to the cluster? Not enough, still 100% loaded, should we add more? - Phew, no comments. - As said, it's unfortunately very common that many programmers don't have the slightest clue what their code is actually doing. It was slow so I added threading... Great, but did you ever consider that it's slow because it repeatedly triggers a table scan on a HUGE table which possibly can't even be cached in memory due to its sheer size? How about doing something other than adding threading to alleviate the performance problem (see the sketch after this list)? Just as awesome as acquiring an exclusive lock on a table for minutes. And other cool stuff which happens way too often. But I thought the ORM... Sigh. Fix that with threading and now you've just created a kind of bufferbloat: a queue of exclusive locks waiting to be applied on the table, putting any other access at the end of the queue and guaranteeing that it's going to be darn slow. Adding threading once again just made a bad situation a lot worse. - Thank you for that too.
  • Studied some interesting payment-related market studies. But unfortunately all the related information is proprietary strategic business information, so I can't really say anything more about it. But it's interesting. The only thing I can say is that providers often have small market shares and nobody's dominating the market.
  • More server outsourcing and hosting negotiations. Ahh, it's so nice to see what different strategies different organizations try. They don't get that BS will only make their negotiating position a lot worse, or get them ignored immediately, when the buyer is competent. Facts and prices; no talk about gold levels and other stuff which really doesn't matter at all. I don't care about your helpdesk; I'm actually really happy if I never need to contact your helpdesk. I'm not interested in whether they're really polite and nice, I want them to make the stuff just work.
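For contrast with the busy-looping code above, a trivial sketch of a polling loop that sleeps and backs off instead of burning every core; fetch_work() and process() are placeholders:
    # Poll for work without spinning: sleep between empty polls and back off gradually.
    import time

    def fetch_work():
        # Placeholder: e.g. a cheap indexed query, NOT a repeated full table scan.
        return []

    def process(item):
        # Placeholder for the actual work.
        pass

    def worker_loop():
        delay = 1.0
        while True:
            items = fetch_work()
            if items:
                for item in items:
                    process(item)
                delay = 1.0                   # found work -> reset the backoff
            else:
                time.sleep(delay)             # idle -> don't burn 100% CPU
                delay = min(delay * 2, 60)    # back off up to one minute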

Outlook, DHT, P2P, NAT, WLAN, WiFi, Networking, Development Backlog, Support Tickets

posted Sep 17, 2016, 1:30 AM by Sami Lehtinen   [ updated Sep 18, 2016, 10:55 AM ]

  • Experienced some annoying email delays with both the outlook.com and Outlook 365 services. It seems to be pretty standard life with current cloud services. Performance is tuned so that it's "okish", not too good. Of course this provides optimum utilization for the systems, but in case there's any problem it quickly builds up backlogs which take a long while to clear.
  • Short summary of stuff we've been talking about with friends: DHT technology details, P2P networking, NAT hole punching, IP address hiding solutions, relays, etc. Data transfer windows in a custom UDP-based protocol. PII sanitization. Performance requirements, problems with flooding the DHT table with too many 'connections' even if UDP is stateless. User and identity reputation management, counterparty risk management. Transfer window scaling algorithms and options. Message queues and retransmissions, packet loss. Maintaining pseudonym privacy and keeping identities hidden. Rotating identities and contact information. Out-of-band identity verification, or using a trusted 3rd party for linking a new identity for trusted parties. Keeping true capabilities secret. Risks of acting on partial or incomplete data. Acting probably reveals the targeting to the target, making them very careful and probably paranoid. Pervasive internet monitoring and mitigation practices. 'Killing' old identities so that they can't even technically be re-used. This makes sure that an attacker can't trick me into revealing my old identities by asking for some kind of proof or something. What's gone is gone. Using public key cryptography to maintain the option to provide high-reliability proof that I'm the one, aka confirming linked hidden identities, if it suits me. But the identity can still remain completely anonymous. Some people claim that it's stupid to post signed messages without providing the public key. No, it isn't. I can provide the public key and signed additional proof later, if I want to. Not providing the public key doesn't matter at that point. Technical solutions for anonymous long-term storage. That's challenging, because someone should actually store and provide the bandwidth for that data. It can be done collaboratively, like in the case of Freenet. But of course that doesn't guarantee data durability. Multiple different ways of linking / verifying separate identities. Using PKI, ECC, DH, shared keys, nonces, tokens, etc. Multiple JSON message structure and database schema discussions.
  • Complaints about serious network issues; the reason? They've added a parallel WLAN NAT router box with DHCP enabled to the same network. It doesn't work? Well it does, it works exactly as it should in this case. Which naturally means it doesn't work, especially because they've connected their existing LAN to this box's LAN side, and now they've got two competing DHCP servers, one of which is issuing addresses and a gateway whose WAN side isn't connected to anything at all. What's the problem? I don't get it. Well, it was trivial to fix: just check the addressing and disable the extra DHCP server. But as far as I can see, there was no malfunction of any kind. It's not my fault if people configure and install stuff in a manner in which it hasn't been planned to work, nor should even work at all.
  • The only rule of the fight club is that you don't talk about the fight club. 
  • Just noticed that I've got over 130 entries in the blog backlog and about 2k entries in the development backlog. This is one of the reasons why keeping backlogs often isn't a great idea. I'm pretty sure that many of the entries in the development backlog have 'expired' or become 'obsolete' in some way. Situations change, so either you do it now, or not backlogging it at all is just fine.
  • I wonder whether this would be a beneficial idea for customer services and help desks: all tickets older than 30 days get auto-deleted. No, not closed - then someone could possibly re-open the ticket. It's much better to delete the ticket and force whoever created it in the first place to re-create it from scratch. When they complain about something not being taken care of, you can always ask for proof. They start talking about some ticket that doesn't exist anymore. Well, fail, just retry.

Windows 10 Anniversary, Database is not, Cyber Grand Challenge, ECC security, CPU caches

posted Sep 11, 2016, 2:56 AM by Sami Lehtinen   [ updated Sep 11, 2016, 2:57 AM ]

  • Installed the Windows 10 Anniversary Update on multiple computers, as well as upgraded several desktops from Ubuntu 14.04 LTS to 16.04.1 LTS. Phew, what a job. I'm totally amazed that nothing got horribly broken in the process. Now there's only one task left: trying to get the quad-display configuration to work with this new version, because with the earlier version it seemed to be impossible due to some bugs and incompatibility issues which suddenly appeared when some libraries got minor updates. Which was naturally extremely annoying. - Multiple tries and about three hours later: nope, it's nearly impossible to get NVidia to work. Oh crap. Maybe I'll try again after a year or something, or just go and buy a proper display adapter. I'm 100% sure it works, but it just doesn't work as long as the Intel adapter is enabled. After four hours I concluded that Nvidia sucks hard. (Yeah, I agree with Linus.) A simple multi-monitor setup is impossible; things won't work with either the proprietary or the nouveau drivers. Actually with nouveau the situation is really funny, because the mouse pointer works perfectly but all the other frame buffers are totally corrupted. Thank you again for wasting my time. Just can't stop loving software which essentially just drains the life force out of you.
  • One friend showed me an application which randomly shows the error message "Database is not". - That's awesome. It's one more BOFH error message: go figure what that actually means.
  • Set (again) a few new temp drives to use data=writeback with tune2fs -o journal_data_writeback /dev/XXX and the mount option data=writeback, as well as a longer commit=300 interval for the temp drives (example fstab line after this list). Yes, I know it can and will lead to data corruption in case of sudden power loss or crash if there were ongoing writes to the drive.
  • Watched: DARPA Cyber Grand Challenge (CGC) - Finals Video - Should that actually be called cyber hacking? ;) - No link to the finals video; it actually got removed from YouTube before I got this post out.
  • Why do CPUs have multiple cache levels - A very nice basic CS post, with simple examples similar to the ones I've been using. I often wonder why people don't get the benefits of multi-tier storage systems, even though they probably use them at home. It's just so obvious.
  • Nice post about ECC security - Even if it doesn't contain anything new at all. That seems to be the norm with most articles.
  • More interesting discussions about email deliverability with totally clueless people. It's not that complex after all: you check the involved parties, configurations and logs. It's usually pretty clear what the problem is. Yet of course in many cases it's something 'totally normal'. Like a customer has blocked some IP addresses using SPF and then complains that they're not receiving email from... Well well, I thought that's exactly how it should be. What's the problem? - This is also a perfect example of the case of "robotic programmers, which would make perfect code". The only problem is that the people giving the instructions give instructions that don't make any sense, and after a while start to complain that the program doesn't work. Let's see, what was the problem? A 'broken' program, a 'wrong' configuration, or someone asking for those things to be done in the first place? - The specification has changed, we need a bug fix. - Nope, that's not going to happen, because there's no bug.
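For reference, roughly what the corresponding /etc/fstab entry for such a temp drive looks like; the device and mount point are placeholders:
    # temp/scratch drive: writeback journaling and a 300 s commit interval
    /dev/sdX1  /mnt/temp  ext4  defaults,data=writeback,commit=300  0  2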

2FA/SMS, Email/SPF/SPAM, Fix/Bug/Request, Contract/Termination, Data/Retention, OVH, Rants

posted Sep 11, 2016, 2:47 AM by Sami Lehtinen   [ updated Sep 11, 2016, 2:48 AM ]

  • Just as a reminder: NIST is no longer recommending 2FA using SMS. It's mentioned in the Digital Authentication Guideline, which I've posted about a bit earlier.
  • The following comments are based on my feelings after reading through my backlog of summer emails.
  • Email - is that really so freaking hard for people to use and get? Sick'n'tired of people whining about... not receiving email. Well, it's not my or our fault if... A) You ask to use your own From domain, B) you haven't configured your SPF properly, and C) the recipient checks the SPF information and rejects the messages. Not interested, it's not a technical fault. A) It's your domain, B) they're your SPF rules, C) it's the recipient which refuses to receive your email. - Not my fault, nor a technical failure; again, don't call us. Any pointless and unnecessary discussion about this topic will be charged to the max. - There's nothing to discuss about this topic, and it's business as usual. RTFM. Get used to it, don't whine, fix it if you care. Simple solutions which are more than obvious: A) use our domain as the sender, B) fix your freaking SPF records (example after this list), or C) whitelist our SMTP IP so it's not getting blocked. - Meh
  • Once again, some people are always asking for 'fixes'. No, there's nothing to fix if there isn't a flaw. I order a blue car, then I start whining that the supplier should fix the car to red. - Won't happen. If they order a paint job and pay for it, then it's a totally different story. But starting a discussion about a 'warranty fix' is just absolutely ridiculous. If they want to make it even more ridiculous, they can whine that they've been whining about the warranty repaint job to a different color for months and nothing has happened. Yeah right. That's true. And nothing will ever happen. Except I might invoice for each time they try to contact me about this matter. - How about honestly saying that they want the car to be repainted a different color and are ready to pay for it. This isn't the first time, nor the last, but it seems to be pretty standard in software customization as well as in any ICT project. Customers try to BS about requirements. Requirements are fixed when the final order is done, unless otherwise agreed. The scope also gets fixed at that point. If someone claims the project was a failure, it's not true. If it was a failure on their part, that's a completely different story again - if they didn't properly mention or document the requirements before the project started. I ordered a car without a trailer hook and then I whine that it's a failure because I can't pull my trailer with it. Live with it, that's what you were asking for. - Meh again.
  • First the customer wants to terminate all agreements; after that they start asking for their data. Actually, this is a perfect question, and answering requests like this might contain a trap. A) After agreement termination, we don't have any responsibility to maintain their data. B) Even worse, after agreement termination we don't have any legal right to maintain their data. So basically, when the contract is terminated, we're at least in theory responsible for proper disposal of the customer data. - Often nobody actually cares about the details, but in some cases these steps could be very critical, and there are many related laws. It's actually very easy to 'help the customer' but at the same time technically break several laws. - I think the Yahoo lawsuit was a perfect example of this: Yahoo restored deleted emails which they claimed they didn't have, but they still had them. - This also just confirms the 'cloud service point' I've written about earlier. Nothing you ever upload to the cloud gets deleted, ever. It's the only safe assumption. Whatever service providers claim might not be true, for several reasons which I'm not going to elaborate on here, but which I have covered in several earlier posts.
  • Afaik, there was one funny forum post complaining about OVH in the same style. The customer first refused to renew a server, and after the server expired complained that OVH didn't allow downloading their data without the customer paying for it. Ha, and they complain about the payment? I think they should have been very happy that the service provider even had their data any more. They could just as well have said: it's gone, enjoy the rest of your day. Often there are clearly written guidelines to follow, but exceptions aren't rare, and in many cases nobody follows the guidelines even if they exist and are very clearly written. In some cases people think that their data will be kept indefinitely by someone for free, even if they terminate the contracts. I don't know what they're thinking, because retaining the data is actually against many privacy laws. Yet, as said, many companies still do it for multiple undefined purposes and reasons. - Even if OVH was mentioned in this bullet point, my comments aren't regarding OVH's practices, nor am I ranting about OVH in this case.
  • That's just a few business as usual daily tech rants. ;)
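For the SPF case, roughly what the fix looks like on the sender's DNS side; the domain and IP below are placeholder values only:
    ; example SPF record authorizing the service provider's SMTP IP for the sender domain
    customer-domain.example.  IN  TXT  "v=spf1 mx ip4:192.0.2.10 ~all"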

Privacy Paradox, Riffle, Encryption at Rest, 2FA, Databases

posted Sep 11, 2016, 1:34 AM by Sami Lehtinen   [ updated Sep 11, 2016, 1:34 AM ]

  • Studied Privacy Paradox. Well, yes. It's hard to tell people about something which is private and isn't being told to anyone.
  • Read the Riffle paper [PDF] - A verifiable shuffle technique which is supposed to provide bandwidth- and computation-efficient anonymous communication. Interesting. Let's see. Riffle requires the servers in the Riffle group to have high-bandwidth interconnects; only the client-server communication is 'bandwidth-efficient'. Of course: "variable-length messages must be subdivided into fixed-length blocks and/or padded to prevent privacy leakage through message size". And as expected: "each client must perform PIR every round to remain resistant to traffic analysis attacks even if the client is not interested in any message". And naturally: "the total grows linearly with the number of clients", leading to: "the primary limitation is the server to server bandwidth". Summary: nothing new, just combining old stuff; very nice academic work, tinfoil hat stuff, not practical even in theory. For everyone except cryptography & anonymization theory geeks this isn't interesting at all. No practical use whatsoever.
    kw: Dining-Cryptographer Networks (DC-Nets), verifiable mixnet, cover traffic, delays, mixnets, mixes, deanonymize, anonymize, anonymity, Aqua, anytrust, Riposte, Dissent, private information retrieval (PIR), clients, servers, client server, authenticated and encrypted channels, confidentiality, anonymity, authenticity, end-to-end encryption (E2EE), correctness, honest, adversary, power, security critical information, sensitive, sender, recipient, receiver, publisher, architecture, protocol, protocols, cryptographic, ciphertexts, plaintexts, algorithm, broadcast, trap protocols, trap bits, attack surface, rounds, accusation process, misbehaving server / client, accountable, malicious, secret key, zero-knowledge, plaintext, ciphertext, forgery, tamper, nonce, DeDiS Advanced Crypto library, ElGamal, Curve25519, Neff’s shuffle, Chaum-Pedersen proof, Secretbox implementation, Salsa20 encryption, Poly1305 authentication, Herbivore, Intersection attacks, correlate, networking, network, internet, privacy.
  • Encryption at Rest in Google Cloud Platform - DEK, KMS, HMAC, GCM, CTR, CBC, AES, HDD, SSD, ACL, plaintext, ciphertext, Keyczar, BoringSSL, NIST, KEK, RNG.
  • Microsoft's partial Two-Factor Authentication (2FA) is just partial security and still allows an attacker to do a lot. Microsoft's 2FA fails in that you can still use IMAP to fetch email and such without 2FA, so it's only a partial implementation. Of course it prevents "completely taking over the account", but even if it's enabled you can still do a lot without providing a 2FA code.
  • Yet another great post about PostgreSQL, MySQL, Uber and Database design trade-offs. - All that makes sense, and there wasn't anything new out there.
  • Something really different? Cookiecutter Shark - Is it time to get your optic fibers bitten?
