DNSWL, DDG, Helpdesk & Customer service, Fast or Slow, Browser Cache, Data Security, Back button, Signing

Post date: Feb 3, 2013 10:29:47 AM

  • Got my mail server listed on DNSWL. It seems that the internet is quite hostile nowadays. If you start running your own mail server, even some of the large webmail providers easily classify your mail as spam. Well, now that should be fixed. - s.sami-lehtinen.net @ DNSWL.
  • A few articles:
    • How the US spies on Europe-located data centers (fictive, speculation)
    • Personal Security Images (useful or not?). I have said that the site should authenticate itself to the user; this is one attempt to do that, but it's not foolproof at all.
    • Big Data, Business Intelligence (BI), Data Discovery, Data Analysis, Probability and Statistics, Visualization and Graphing by AppliedDataLabs. I have been playing with QlikView, but Tableau was a new app to me.
    • Reaching 200k events / second. A small optimization article.
    • Tips for Legitimate Email Senders to Avoid False Positives. As simple as the tips are, it seems that many email server admins do not follow them. Like the services I have mentioned here, which lack a reverse DNS name for their outbound mail server.
    • Secure Data Deduplication (PDF). A strange white paper, because as far as I know, this same method has been used for ages. Even when you encrypt data for multiple recipients using PGP/GPG/OpenPGP, the actual payload is encrypted only once, and the encryption key is simply encrypted with each recipient's public key. So each recipient can decrypt the actual encryption key using their own private key. Also Freenet and GNUnet use a similar method, where the data itself is used to form the encryption key for the block. So you can decrypt the block only if you know the content, or you have the key for the block (a small sketch follows after this article list). As said, this is problematic, just like in Mega's case: once you get something you know how to decrypt, you can start hunting down people who have the data, even if they don't have the decryption key and don't even know what the data actually is. Think of the Freenet & GNUnet encrypted disk cache. It would be very interesting to see how courts handle this kind of situation.
    • Microsoft starts supporting Git. Nice!
    • Good and bad OAuth 2.0 implementations. Yet another nice article.
    • DuckDuckGo handles over a million deep queries / day, an article by HighScalability. I have been using DDG for a long time, and it's now clear that
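  • A minimal Python sketch of the convergent encryption idea described in the deduplication note above: the key is derived from the content itself, so identical data always produces identical ciphertext (which is what makes deduplication possible), and anyone who knows the content, or obtains the per-block key, can decrypt the block. This is only my own illustration of the scheme under assumptions (AES-GCM, SHA-256, the function names), not code from the paper, Mega, Freenet or GNUnet.

      import hashlib
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party 'cryptography' package

      def convergent_encrypt(plaintext: bytes):
          # Key is derived from the content, so equal plaintexts -> equal ciphertexts.
          key = hashlib.sha256(plaintext).digest()             # 32-byte AES key
          nonce = hashlib.sha256(key).digest()[:12]            # deterministic nonce
          ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
          block_id = hashlib.sha256(ciphertext).hexdigest()    # what the storage provider indexes
          return block_id, ciphertext, key                     # key is shared per recipient, not stored with the block

      def convergent_decrypt(ciphertext: bytes, key: bytes) -> bytes:
          nonce = hashlib.sha256(key).digest()[:12]
          return AESGCM(key).decrypt(nonce, ciphertext, None)

    This also shows the downside I mentioned: anyone who already knows the content can compute the block ID and go looking for who stores it.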
  • Computers & Robots get smarter. Will there be any work left for the regular guy in the future? Actually this is no news; isn't all cloud & data automation mostly about reducing labor costs? At least in every case where I have talked about integration and automation, that's been the primary goal. If a system looks cool but doesn't reduce the cost of producing the service, then it's a fail.
  • Helpdesk & Customer service parts of any service should be as automated as possible. If possible, 100% self-service is preferable, backed by multi-tiered help functions:
    • A) A clear, simple, self-explanatory user interface with tips, to avoid raising questions about how the application / program should be used in the first place.
    • B) Excellent user FAQ & guide.
    • C) Automatic checks and suggestions for the question the user is about to send. If the user tries to ask something that is already covered by the FAQ & guides, give a straight answer (a minimal matching sketch follows after this list).
    • D) When a real person sees a ticket for the first time, there should be an automated suggestion for a canned response, as well as an option to select from other canned responses.
    • E) The last, expensive option is to answer the question individually. Note: the question should be analyzed, and there must be an option to store this response as a canned response and to add it to the FAQ & guide knowledge base too.
    • F) Way too often answers aren't stored, and the same questions are answered by humans over and over again, which is really expensive.
    • G) There should be regular reviews to update the FAQ & guide based on new canned responses. Not all of the canned responses should be visible in the public FAQ & guide section; there might be several reasons for that.
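  • To illustrate tiers C and D above, a minimal sketch of matching an incoming question against existing FAQ entries / canned responses before a human ever sees it. The FAQ contents, the difflib-based scoring and the threshold are purely my own example assumptions, not any particular helpdesk product's behaviour.

      import difflib

      # Hypothetical knowledge base: question pattern -> canned response.
      FAQ = {
          "how do i reset my password": "Use the 'Forgot password' link on the login page.",
          "where can i download my invoice": "Invoices are under Account > Billing > History.",
          "how do i cancel my subscription": "Go to Account > Subscription and click Cancel.",
      }

      def suggest_answers(question, min_score=0.5, limit=3):
          """Return the best-matching canned responses for a new support question."""
          scored = []
          for pattern, answer in FAQ.items():
              score = difflib.SequenceMatcher(None, question.lower(), pattern).ratio()
              if score >= min_score:
                  scored.append((round(score, 2), pattern, answer))
          return sorted(scored, reverse=True)[:limit]

      # Shown to the user before the ticket is submitted (tier C),
      # and to the support person as a canned-response suggestion (tier D).
      print(suggest_answers("How can I reset the password?"))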
  • My favorite RSS-to-mail service was shut down a month ago. Now I'm using Blogtrottr, which seems to be working very well. I also tried IFTTT, but I didn't like it too much. I have also listed all of my favorite RSS feeds in the Ampparit service, so I can see all interesting feeds together with other news collected from public (mostly Finnish) sources.
  • Other essential tools for webmasters are ChangeDetection and WatchThatPage; both allow monitoring listed web pages for changes. Especially good for sites which do not provide key RSS feeds.
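  • For pages without such a service (or an RSS feed), the same idea is easy to script yourself: fetch the page, hash it, and compare against the previous hash. A minimal sketch; the URLs, the state file name and the "send a notification" step are placeholders, and dynamic pages would of course need filtering before hashing.

      import hashlib, json, urllib.request

      PAGES = ["https://example.com/news", "https://example.com/releases"]   # placeholder URLs
      STATE_FILE = "page_hashes.json"

      def check_pages():
          try:
              state = json.load(open(STATE_FILE))
          except FileNotFoundError:
              state = {}
          for url in PAGES:
              body = urllib.request.urlopen(url, timeout=30).read()
              digest = hashlib.sha256(body).hexdigest()
              if state.get(url) != digest:
                  print("Changed:", url)        # here you would send the notification mail
                  state[url] = digest
          with open(STATE_FILE, "w") as f:
              json.dump(state, f)

      if __name__ == "__main__":
          check_pages()   # run from cron / Task Scheduler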
  • Following the material and white papers released at USENIX is as essential as following Google I/O events.
  • Fast or Slow application? Annoying or good app? The user decides, based on many things:
    • It's all about user perception. Smartly developed programs give the user an impression of speed even if the fundamental task didn't change at all.
    • A perfect example is Outlook vs Evolution, and in this case it's Evolution which absolutely sucks. I write a message and click send. Outlook handles it in less than 100 ms. But with Evolution, it takes forever! Come on, I've got 200 other emails to handle, what is taking so long? Still sending mail, omg! This is ridiculous!
    • So what is the design failure causing all this annoyance? Super bad user interface design! Instead of showing a big slab on screen saying WAIT, sending email... for several seconds, or in some cases even tens of seconds, the app should simply queue the task and get it done in the background, without annoying users so much that they bother to write posts like this (a minimal queue-and-send sketch follows after this list). Of course I can switch to another window and continue processing mails, but then I get many of those "sending message" windows in the background and those bother me. It's also not ideal for the user flow to always have to find the next task from a stack of windows.
    • Just yesterday one guy asked me how a mobile client using a cloud service can be made responsive enough. That's the trick: use whatever data you already have and make the app at least seem fast, even if the real task is handled in the background asynchronously. It makes such a big difference.
    • Let's take a scenario where the user would like to check new mail and there's one mail with a large attachment. Either he can't access any of the mails before all of them have been completely downloaded, or he gets the list of messages and can process the small messages which are already downloaded while the large attachment is fetched in the background, without being annoyed for 5-15 minutes by a "downloading messages, please wait" notification.
    • These are such simple things that I thought everyone would get the point immediately; well, it seems that many engineers and developers simply do not.
    • That's my rant for today about annoying user interfaces, bad design and poor user experience (UX).
    • What's worst of all? If something happens before the message is fully sent, it's often completely lost. Yet another massive win for super bad design. If the message were first queued and then sent, and something happened during the transmission, it could simply be sent again. But in this case the message is totally lost or only partially recovered, because it was not actually saved before the attempt to transmit it.
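  • A minimal sketch of the queue-first approach I'm ranting about: the UI call persists the message and returns immediately, a background worker does the actual sending, and because the message is saved before the send attempt, a failure means a retry instead of a lost mail. This is my own illustration; the file-based outbox and the SMTP stub are assumptions, not how Outlook or Evolution actually work.

      import json, os, queue, threading, time

      outbox = queue.Queue()

      def send_mail_clicked(message):
          """UI handler: returns in milliseconds, the actual send happens later."""
          os.makedirs("outbox", exist_ok=True)
          path = "outbox/%s.json" % message["id"]
          with open(path, "w") as f:            # persist first: a crash won't lose the mail
              json.dump(message, f)
          outbox.put(path)

      def sender_worker():
          while True:
              path = outbox.get()
              msg = json.load(open(path))
              try:
                  smtp_send(msg)                # placeholder for the real SMTP call
                  os.remove(path)               # delete only after a successful send
              except Exception:
                  time.sleep(30)                # transient failure: retry later
                  outbox.put(path)

      def smtp_send(msg):                       # stub so the sketch runs
          print("sending", msg["id"])

      threading.Thread(target=sender_worker, daemon=True).start()
      send_mail_clicked({"id": 1, "to": "someone@example.com", "body": "hello"})
      time.sleep(1)                             # give the worker a moment in this demo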
  • It seems that for many organizations even simple things like renewing an SSL certificate are way too complex to get done. First I laughed hard when the Finnish web forum & quite big webmail provider Suomi24 forgot to renew their certificate. But when Nordea Bank forgot to do it for their web bank, I really did start to wonder. What kind of guys do they really have working there? - Hire me, laugh. I know how to check the date on an SSL cert (a small sketch below), and I'm totally qualified to use a calendar with recurring events. The worst part, which makes me really wonder, is that they often claim this to be a completely surprising thing. I have also heard that even more important money-traffic-related SSL certs have expired in a similar fashion (can't mention exactly which), and yet the same BS arguments were used. In some cases, admins say that the best way to fix this is to disable SSL cert checks completely. Guess what, when the cert is renewed, what percentage of users turn the check back on? 1 or maybe 2%, and I actually think that estimate is way too high this time. For important systems I would still use something better than pure server SSL cert based authentication. Self-signed certs and client certs are better, but even with those I would like to use bi-directional challenge based authentication, so even if the SSL layer completely fails, attackers won't gain authentication credentials. Of course they could still set up a MitM attack and get the data, because the encryption layer has then been peeled off.
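  • Checking a cert's expiry date really is just a few lines; a minimal sketch, where the host list and the 30-day warning threshold are just example values to feed into a scheduled job or monitoring system.

      import socket, ssl, datetime

      def days_until_cert_expiry(host, port=443):
          ctx = ssl.create_default_context()
          with socket.create_connection((host, port), timeout=10) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  cert = tls.getpeercert()
          expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
          return (expires - datetime.datetime.utcnow()).days

      if __name__ == "__main__":
          for host in ["www.example.com"]:          # example host list
              days = days_until_cert_expiry(host)
              if days < 30:                         # example threshold
                  print("WARNING: %s certificate expires in %d days" % (host, days))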
  • One friend of mine suggested that it's a good idea to always disable all browser caching. I personally said that it's not such a good idea, because it adds network traffic and makes the browsing experience slower. Therefore I did some analysis of my own browser cache, and the results were as follows. During two weeks, the browser cache saved one gigabyte of traffic. Even more importantly, the cache served over 55k requests to me immediately. Based on the temporary 3G connection I'm using, it saved me about 110k seconds = 1843 minutes = roughly 30 hours. Well, of course I usually open tabs beforehand while working with other tabs. Even if we completely ignore the round trip time, which is usually quite important when surfing, the pure data transfer at max rate would have taken 5.5 hours. Is caching pointless or not? You tell me. Like I said, my usage is quite light, because I'm now on the broadband connection only about 50% of the time.
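  • Roughly how those savings add up; the figures (55k cached requests, ~1 GB of avoided transfer, and the assumed 3G round-trip time and throughput that reproduce my 30 h / 5.5 h numbers) come from the paragraph above and are rough estimates, not measurements.

      cached_requests = 55_000            # requests served straight from the cache
      cached_bytes    = 1e9               # ~1 GB of transfer avoided
      rtt_s           = 2.0               # assumed per-request 3G round trip + overhead
      throughput_Bps  = 0.4e6 / 8         # assumed ~0.4 Mbit/s effective 3G rate, in bytes/s

      latency_saved_h  = cached_requests * rtt_s / 3600           # ~30 hours
      transfer_saved_h = cached_bytes / throughput_Bps / 3600     # ~5.5 hours
      print("latency saved ~ %.0f h, transfer saved ~ %.1f h" % (latency_saved_h, transfer_saved_h))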
  • One friend of mine was really upset when his boss told him that he can't download all the customers' data to the dev / test environment and use it for testing. I have had similar discussions several times. Usually the argument is that it's faster to troubleshoot if you have all the customers' data available for testing. But I personally think it's a security issue and it shouldn't be done. Therefore my own apps use excellent logging, with tracebacks, but without any (meaningful / private) customer data (a minimal sketch below). If something malfunctions, I first get an automated alert about the problem, and I usually also get everything I need for fixing the issue. In the worst case, all the customers' data is left lying around indefinitely in some hard disk corner and never properly destroyed from the system(s). You never know when you'll need it again, do you? Better to keep it. A very bad policy, AFAIK.
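  • A minimal sketch of the kind of logging I mean: identifiers and a full traceback for debugging, but never the customer payload itself, plus an alert hook. The handle() failure and send_alert() are placeholders for the real work and the real alerting channel.

      import logging

      log = logging.getLogger("myapp")
      logging.basicConfig(filename="myapp.log", level=logging.INFO,
                          format="%(asctime)s %(levelname)s %(name)s %(message)s")

      def process_order(order):
          try:
              handle(order)                                  # placeholder for the real work
          except Exception:
              # Log the order id and the full traceback, but not the customer data itself.
              log.exception("process_order failed, order_id=%s", order.get("id"))
              send_alert("process_order failed")             # placeholder: mail / SMS / monitoring

      def handle(order):
          raise ValueError("demo failure")

      def send_alert(subject):
          print("ALERT:", subject)

      process_order({"id": 42, "customer": "private data that stays out of the log"})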
  • Quickly tested Asana for project management and task sharing with a distributed team, and Freshdesk for helpdesk functions.
  • Quickly reminded myself about the usefulness of Bloom filters (a small sketch below).
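  • As a reminder of the idea, a minimal Bloom filter sketch: k hash positions per item, constant memory, no false negatives but a tunable false-positive rate. The bit-array size and hash count here are arbitrary example values.

      import hashlib

      class BloomFilter:
          def __init__(self, size_bits=10_000, hashes=4):
              self.size, self.k = size_bits, hashes
              self.bits = bytearray(size_bits // 8 + 1)

          def _positions(self, item):
              for i in range(self.k):
                  h = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.size

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item):
              return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

      bf = BloomFilter()
      bf.add("user@example.com")
      print("user@example.com" in bf)    # True (never a false negative)
      print("other@example.com" in bf)   # almost certainly False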
  • Duplicati backup system comments: I'm not running the duplicati.exe process all the time on all of the servers. I start it using the Windows scheduler, or from a larger backup batch, for creating off-line backups when required. Currently I'm using the parameters: --run-backup="Backup" --trayless. I would just need an option to tell the Duplicati process to exit when it's done. Currently there's no option to do that. I have defined the process to be killed after a certain time has passed, but as we all know, this is a very dangerous and incorrect method. I would love to be able to add an --exit-when-done parameter. That's all: when the backup job(s) are done, exit. - Thanks. You might ask: oh, why? There's a command-line version of the backup program. Yes, I know. I use it with Linux systems. But some Windows admins don't seem to understand the command-line version's parameters at all. They're not used to command-line tools.
  • I'm so happy with 7-Zip. It's just so superior compared to the other alternatives. Nobody should use any other general archiver. Zip can be used for compatibility reasons when working with legacy stuff, but otherwise modern systems are able to run 7-Zip, and it's simply great compared to zip: it compresses much faster and with an even better compression ratio.
  • Reminded me about the benefits of memory-hard problems, like scrypt, which require memory instead of pure math / CPU processing power. Scrypt is also on its way to becoming a standard (a small usage sketch below).
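  • For reference, a minimal password-hashing sketch using the scrypt that ships in Python's hashlib (OpenSSL 1.1+ required). The cost parameters n, r, p below are common example values, not a recommendation.

      import hashlib, hmac, os

      def hash_password(password, salt=None):
          salt = salt or os.urandom(16)
          digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
          return salt, digest

      def verify_password(password, salt, expected):
          digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
          return hmac.compare_digest(digest, expected)

      salt, stored = hash_password("correct horse battery staple")
      print(verify_password("correct horse battery staple", salt, stored))   # True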
  • Google Cloud Storage is offering Durability Reduced Buckets / Durability Reduced Storage, which offer a cheaper way to store data. But this isn't anything compared to Amazon Glacier.
  • Google is also offering PageSpeed as cloud service, quite a nice idea.
  • Checked out and tried HelloSign and SignOm online agreement / document signing services. HelloSign seems to be really nice to use. The most important part of a signing service is that it has to be a legally accepted way to make a contract. In Finland the banks mostly provide authentication and signing services, because they are the only trusted online authorities which also have a wide user base. I just wonder when Facebook will be able to provide legal agreements. Doesn't it have a wide user base and hopefully strong authentication? How strong does authentication have to be to count as strong enough? Using this method would also make sure that they (should) know whether a user is using his real identity with the service. Many strong authentication service providers do not actually verify a strong real-world identity; that's the weakness of many services. As mentioned earlier, I might like to have a strong pseudonymous identity, so they know it's "me", even if they don't know who I actually am. Very useful in some situations.
  • It was really hard to get one web designer to understand the difference between the browser back button and going up one hierarchy level in a web store. If a user lands from Google on a single article and wants to see the other articles in the same group, that should be possible. She thought it's a good idea that when the user wants to go up one level, it means going back to Google. No, going back / up in the article hierarchy doesn't mean returning to where you came from. It would be very annoying to have to form another Google query and hope that you land back on the right page in the web store, just because the web store's own navigation is crippled.
  • The Android back button, is it broken or not? Kennu wrote that having a global back button is a really bad idea. I personally say it's not a bad idea at all; I really like the global back button. He also claimed that it's confusing to have several back buttons. No it isn't. One back button is the Android back button, which goes back one step, whatever it was. The other back button is the application-level back button, which goes back or up one level, just like in the web store case I wrote about earlier. Most important for the user experience (UX) is simply defining clearly which step the back button returns to. As we know, this has not been simple so far. Currently browser back buttons are totally broken: whenever you press the back button, you really do not know where you're going to land. The same goes for a back button on a web site: you don't know where you're going to end up when clicking it. But this is not a technical flaw, it's a flaw introduced by the application developers; it's not a systemic flaw. When you press cancel or undo, it's the same question: it goes back to some stored point, but without trying, you really do not know how far back you'll end up with that click. So, a global back button is cool, as long as apps work correctly with it. I open a link from email, read the page, press back, and I get back to the email. What is the problem with that? It's just like with any HTML5 app: you might end up leaving the whole application, or going back a smaller step within the app. You never know. And just like with the browser, you'll learn not to use the back button with broken apps; but is the source of the problem the badly behaving app or the broken(?) button concept?