Blog

My personal blog about stuff I do, like and am interested in. If you have any questions, feel free to mail me! My views and opinions are naturally my own and do not represent anyone else or any other organization.

[ Full list of blog posts ]

FIDO U2F & YubiKey, 2FA, Two factor authentication

posted by Sami Lehtinen   [ updated ]

Main link / news: Google strengthens 2-step verification using USB Security Key

My thoughts about it:
There's nothing new about 2FA as such. It's nice that they use the open Universal 2nd Factor (U2F) protocol. I guess this is also excellent news for Yubico, the manufacturer of the YubiKey products. It seems that U2F / UAF are somewhat newer standards, so I have to study them more, read the specifications and write my own thoughts about them. - More information from the FIDO Alliance.
With phones you can use mobile phone based 2FA, which is afaik just as secure as this solution. Or maybe even a bit more secure, because it's out of band, and the site the authentication is being sent to is also verified. The only drawback is that the mobile phone is hackable, but I guess it would require at least a rooted phone to be able to interfere with the login process.
In this case only the client is validated, which is the traditional security failure point. It doesn't help at all in MitM cases. Yet, the Chrome browser might be doing some tricks to try to prevent this. The FIDO documentation says that there's a login challenge, so most probably the response is only good for the requesting site. Yet, if there's malware on the system, it's totally possible (afaik) that it could request another login challenge in parallel and generate the login response for that other service. I've always loved YubiKeys, except that they require a USB bus, which isn't available on many 'modern' devices. Still, the wins for using USB are: no display, strong long keys, no need to enter codes, and no need for a replaceable battery. On the other hand the YubiKey NEO uses Bluetooth technology, but as a downside it requires a battery.
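
To make the challenge / origin idea concrete, here is a minimal sketch in Python. This is not the real U2F wire format, and it uses an HMAC instead of the per-site key pair and public-key signature U2F actually uses; the function names are made up for illustration. The point is simply that the signed response covers both the server's challenge and the requesting origin, so a response generated for one site is useless on another.

    import hashlib, hmac, os

    def issue_challenge():
        # Server generates and remembers a fresh random challenge per login attempt.
        return os.urandom(32)

    def sign_response(device_secret, challenge, origin):
        # The token signs challenge + origin; a phishing site has a different origin.
        return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

    def verify_response(device_secret, challenge, origin, response):
        expected = hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    secret = os.urandom(32)   # stands in for the real per-site key pair
    challenge = issue_challenge()
    response = sign_response(secret, challenge, "https://accounts.example.com")
    print(verify_response(secret, challenge, "https://accounts.example.com", response))  # True
    print(verify_response(secret, challenge, "https://phishing.example.net", response))  # False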

Random ramblings about passwords, security & authentication

posted by Sami Lehtinen   [ updated ]

It seems that the real hacker scenario was forgotten. If the attackers own the system, they can do a lot more than just steal password hashes. In many cases they can modify the system to store plain text passwords when users log in, as well as steal the information from the system(s) directly. Of course it's easy to forget that there are sites with very different security levels. Some are just running without any monitoring and others have very strict IDS/IPS, 24/7 security & intrusion monitoring staff & systems, version control, configuration management, enforcement, monitoring systems, etc. I don't actually even understand why people are so obsessed with this password topic. I personally consider passwords to be shared random blobs. So what if one leaks? If I were the primary target of the attackers, they probably already stole the required information from the system(s), even without the password(s).

2FA doesn't help at all either, if the system is completely compromised. The attacker(s) can easily circumvent it, because they probably already have full control of the system. The only way to get these things right is tight layered security, internal protocols, etc. Why does the 'site' have full access to the password(s) anyway? Shouldn't there be a secondary hardened authentication system, with only tokens passed around? Do the system(s) containing the data properly verify with the authentication service whether the user is allowed to access the data? These are endless topics, when it's forgotten that there are systems with completely separate security requirements. Is 2FA enough? No? Do you run the authentication client on a smart phone? It's a computer, it's hackable. There should be a hardware token. Does the hardware token give you monotonic, action-independent codes? It does? Well, that's also a fail, because every authentication code should be based on the action & content it's authenticating. Otherwise you could be authenticating something you're not aware of. Many systems fail completely on that scale too. Of course there are secure solutions, but those are expensive.
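
A rough sketch of what an action-bound code could look like, assuming a simple HMAC scheme over a shared secret (purely illustrative, not any specific product): because the code is computed over the exact transaction content, approving one payment can't be replayed to approve a different one.

    import hashlib, hmac

    def transaction_code(shared_secret, transaction, counter):
        # 6-digit code derived from the exact content being approved plus a counter.
        digest = hmac.new(shared_secret, f"{counter}:{transaction}".encode(), hashlib.sha256).digest()
        return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

    secret = b"shared between hardware token and bank"
    print(transaction_code(secret, "pay 10.00 EUR to FI12 3456 7890", 42))
    print(transaction_code(secret, "pay 10000.00 EUR to somewhere else", 42))  # different code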

Password managers are also a bad solution, because they run on your computer / phone, and as we know, consumer devices / normal business systems aren't ever secure. All are sitting ducks if an attacker really wants to control them. Which also means they can access your password manager's content at will. Actually the most important passwords in my password manager say something like, "Do you really think I'm stupid enough to put the password here?"

Passwords / PINs are a perfectly good part of a multi-factor authentication scheme as the thing you have to know. I often wonder why people prefer to disable passwords when using SSH key login. I personally think that key + password is better than key only, in case the keys are stolen.
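
If the goal is to require both factors on the server side, OpenSSH (6.2 and later) can enforce it; protecting the key file itself is done with a passphrase. A minimal sketch, assuming an OpenSSH server and default key paths:

    # /etc/ssh/sshd_config: require a valid key AND the account password
    AuthenticationMethods publickey,password
    PubkeyAuthentication yes
    PasswordAuthentication yes

    # Or at least put a passphrase on the private key itself:
    ssh-keygen -p -f ~/.ssh/id_rsa
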
Just my random blah thoughts about all this endless blah blah.

I've also seen many times that the crackers have so many systems under their control that they don't even bother to explore the content of the systems they own. So they have missed the important stuff several times. Or they're smart enough to let me believe so. ;)

P.S. My bank doesn't allow a password stronger than six digits. But does it matter?

Studied: e-Estonia, e-resident concepts & technology

posted by Sami Lehtinen   [ updated ]

e-Estonia ID cards and digital authentication & signatures using smart cards.

Links:
Eesti: Gateway to eEstonia
Become Estonia's e-resident
e-Estonia ICT economy
Estonian ID card specifications
Detailed card & digital signature concept documentation
Application specification [PDF]

Compact list of related keywords & terms:
Digital signature concept, Certification Service Providers (CSP-s), Supervision – Registry and Ministry, Foreign Certificates, Identity Document Regulation, Mandatory document, Card appearance and layout, Certificates, E-mail address, Data protection, Organizational structure, card issuing and operation, Solutions, Certificate profiles and e-mail addresses, Certificate validity verification methods, OCSP, time-stamping and evidentiary value of digital signatures, Document format and DigiDoc, Roles, authorizations and organizations validations, New ideas: replacement and alternative cards, Chip and card application, Answer to reset, Card application, Identifying the card application, Card application file system structure, Objects in the card application, Card application principles, Card application objects, their details and general operations, Personal data file PIN1, PIN2 and PUK code, Certificates, Certificates, Reading certificate files, Cardholder secret keys, Reading public key of cardholder secret key, Reading secret key information, Reading key references for active keys, Card application management keys: CMK_PIN, CMK_CERT & CMK_KEY, Deriving card application management keys, Miscellaneous information, Reading EstEID application version, Reading CPLC data, Reading data for available memory on chip, Card application general operations, Calculating the response for TLS challenge, Calculating the electronic signature, Calculating the electronic signature with providing pre-calculated hash, Calculating the electronic signature with internal hash calculation, Decrypting public key encrypted data, Card application managing operations, Secure channel communication, Mutual Authentication, Channel securing, PIN1, PIN2 and PUK replacement, Certificate replacement, New RSA key pair generation, Card application security structure, Card application constants, APDU protocol, Card possible response in case of protocol T0, Command APDU

How are serious security failures born? - A fictional chain of events

posted Oct 10, 2014, 9:26 AM by Sami Lehtinen   [ updated Oct 10, 2014, 9:27 AM ]

This is a fictional story, but all the individual parts and thoughts in it are real, as any experienced ICT guy should know. Take the easiest shortcut to get something done, and then encounter some problems which need to be solved quickly and cheaply without thinking about what will follow from the decisions and actions made.

A very powerful and complex API is required, allowing 'all kinds of things' to be done. Conclusion: it's way too expensive, slow and complex to build such an API. Question: aren't there any better (faster & cheaper) ways to do it?

Of course there is. Let's just make a POST form which allows executing SQL scripts as SA / ROOT. It'll be done in no time, and it takes just a day to build, yet it'll be more powerful than any REST API you could come up with even if you worked on it for years, because it basically allows you to do everything and anything you'll ever want to do with the database.

The SQL interface gets used, of course over HTTPS with client certificates. This is great, because client certificates make it very secure, and therefore there's no need to build other 'key & account management' features, because the HTTP server takes care of it. In the initial configuration someone might even think about security and set up IP filter rules.
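
Purely as an illustration of the anti-pattern described above (all names here are made up, and this is obviously not something anyone should build), the whole 'API' can be sketched in a dozen lines:

    # HYPOTHETICAL sketch of the anti-pattern: whatever SQL the client POSTs is
    # executed verbatim against the database with administrative privileges.
    from flask import Flask, jsonify, request
    import psycopg2

    app = Flask(__name__)

    @app.route("/run-sql", methods=["POST"])
    def run_sql():
        conn = psycopg2.connect("dbname=erp user=postgres")    # effectively SA / root
        cur = conn.cursor()
        cur.execute(request.get_data(as_text=True))             # arbitrary SQL straight from the client
        conn.commit()
        return jsonify(cur.fetchall() if cur.description else [])

One day of work, and it really is 'more powerful than any REST API', which is exactly the problem.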

This is excellent, because now trusted partners and integrators, as well as our own maintenance staff, can utilize this interface. It turns out to be a great success, because it's just so handy to post arbitrary commands to the SQL server and get things done using only a browser.

After a while, there's sure to be a case where a customer is using a dynamic IP or has backup connectivity over a NATed 4G connection or something similar. At this point, the IP firewall rule gets dropped, because... things won't work, and someone has 'misconfigured the firewall' or something similar.

The next step in the evolution is the usual case where there are some problems with SSL. Someone important doesn't get it to work, fails with client certificates, a cert expires or whatever; we all know the usual story. Well well, now we need to drop HTTPS. It doesn't really matter because nobody really knows about this secret API anyway.

A mobile application developer wants to make a quick'n'dirty integration for a customer who isn't willing to pay a lot to make the systems work. Now there's this bright idea: we don't need to implement our own back-end system if we use this SQL API directly from the mobile app. It's great, we can easily get all the integrations you want working with the system, and it's really easy to extend. Of course the guys at the other end think that the interface will be used via their integration / client server, which verifies all content & does access control checks etc. But as usual, this crucial step is sidestepped.

As a result of this process, the end user's mobile phone application now directly uses an unauthenticated, unencrypted HTTP connection to the database server's HTTP API, running with SA / ROOT privileges and allowing execution of arbitrary SQL commands. - What could go wrong? This is a great way to get things done, everything just works, and any required changes are easy and quick. Just change the SQL commands in the final mobile app, and everything works. No need for slow API changes, documentation updates, security audits, and other slow and expensive stuff. Just get results, and fast.

Great, isn't it? I've seen so many projects where nobody's really interested in getting the complete picture of the system being created. Everyone is just working on a very small part of the complex end result and doesn't care at all about the rest. Then the end result can be a total disaster. Unfortunately customers also often tell vendors to do whatever dirty and quick solution kind of works. It's extremely rare that anybody is interested in security matters, even in cases where there's money directly involved and a substantial abuse risk.

Yes, this might sound really horrible to some people. But this is the reality of so many projects. I don't mean that this would be an exact example, but all these steps do happen, in some randomly permuted fashion.

I've seen cases where Microsoft FTP is being used with the Administrator account and remote desktop is enabled. It's also very common to receive SFTP account information for file transfer, but when you investigate the situation a bit, you'll notice that they haven't prevented shell login either. Etc. Why bother, shouldn't we fully trust integrators & partners anyway? The same applies to single purpose VPN tunnels; it's very common that the VPN endpoints aren't correctly restricted to the required services only.

But these are just a few old examples and a small tip of the iceberg.

A last comment: it often seems that people are really reluctant to talk about serious issues like these. They don't care, they just really don't care at all. They also see the fix as unnecessary and maybe costly and problematic. It works now, just don't touch it. Does this attitude make the problem go away? I personally think it just makes the problem a lot worse.

All that talk about refactoring and audits is just silly. Customers usually want things to be done using whatever tricks and kludges make it cheap and barely working. After that kind of orders and coding, it's just ridiculous to complain about security or usability issues. What is being done is minimum viable code. Everything that's not absolutely essential is left out. Things do work when used within a certain envelope; outside that envelope anything is possible.

I'm a complete picture project guy. Let me explain.

posted Oct 10, 2014, 8:22 AM by Sami Lehtinen   [ updated Oct 12, 2014, 3:47 AM ]

I personally think that for projects it's essential to see the whole picture and understand how things work in practice. Unless someone really takes care of that, the end result will be a horrible catastrophe.

This whole post came about after I read a post on a Finnish software testing blog. They were wondering why testers study only testing, programmers only programming, project managers only project management, etc. Because that's not nearly enough. If you 'just do your job', you're the reason for the major problems which many projects are experiencing.

You'll need to have a complete view of what's being done. And there's no better way to learn it than actually doing it!

What does having a complete picture mean? It means that you'll need to be active and well informed in all the steps required to have a complete view of a project.

1. Sales: As the integration & project guy, be very actively involved already in the early sales process. No false promises; hands up if a task is impossible. Check if the customer is completely clueless and deluded or if they have a realistic view, scope, schedule and the money. It's very common that they need something, and someone somewhere might know what it really is. The truth is that they don't know, they're clueless, if they can't straight away tell what it is they're looking for in some reasonable detail. As an example: 'our stock is always a big mess, we need an automated stock management system', or something similar. Nope, it won't ever work; it's not the system, it's the process and the people which are most probably the reason for the problems, and well, it's going to be a painful process. Whatever you do, the customer will likely be unhappy anyway, even if you stood in the warehouse counting stuff and trying to track what's coming and going.

2. Requirements: The definition and requirements specification process. You'll need to know what this project is actually about. This is the step which unfortunately often doesn't go deep enough. Often people think that this is some kind of technical task, but it isn't. It's very important to think about the end users of the project and the processes they'll actually be dealing with. This is the phase where the most serious mistakes are usually made due to cost saving, and that will ruin everything after this step.

3. Programming: Write the actual code, thinking about all aspects. I'm sure you'll find a ton of things that should be handled but weren't mentioned or included in the documentation in phase 2. Did they even specify field content checks?

4. Testing: Did you get proper use cases? Probably not. Did you cover everything in step 3? Be creative, try all kinds of ways of breaking the system and product. I'm quite sure you'll find plenty. This step can be very easy or very hard, depending on the level we're aiming at.

5. Piloting: Install and configure the pilot system. Well, was the installation process nice? Or does it require 200 gimmicks and 50 undocumented tricks that nobody but you knows about, and even you don't remember all of them? How many previously unknown things pop up at this stage? It can be staggering, if the other phases have been going like a breeze. (Nobody cared, everyone just said it's great that things are progressing.) Unfortunately, this is the phase where the customer should do extensive testing, which usually gets neglected. They'll do 'just a few tests' and conclude that it's good to go.

6. End user training: This is actually a very, very important step, because this is exactly where you'll get swamped by all the things that were neglected in earlier stages. There are 90 users who say they'll do this thing using method A. But then there are 10 angry users who say: no, we never do thing A, we absolutely want to do thing B, we can't accept thing A at all. Great. Why did nobody bring this up in stage 2? Excellent question. And this is just one example; you can easily get hundreds or thousands of things like this: unusable UI, missing fields, bad usability, something doesn't work at all, etc.

7. Production launch: If step 6 was done properly, this shouldn't be so bad. But if people were lazy in earlier steps, all those flaws and ambiguities are going to fall on you. This can bring up even more problems than any of the earlier steps, and now everyone's in a real hurry to get things fixed. This can mean that code changes are made almost on the fly, without any testing, which can lead to very bad situations if something isn't working as expected after the changes. Often these quick changes also go totally undocumented, because there's simply no time for it.

8. Maintenance stage: If you're active in this stage, you'll find out that all the stuff skipped in previous steps will haunt you here. Lack of proper documentation will bite you. All those kludges and that hackish quick'n'dirty prototype quality code will bite you. All those gimmicks which were needed to get the early pilot to barely work, which are now also used in production, undocumented and soon forgotten, will bite you hard. That temporary database table you just insert stuff into and never prune will bite you. All those totally unusable user interfaces without a proper search function will bite you. 'Rare bugs' which cause data and relations to get messed up in the database will bite you. Temporary files littering the file system will bite you. Lock files and other files which aren't handled properly transactionally will bite you. Extremely badly written, exponentially slowing down code 'which just works' will bite you. Exponentially growing data structures will bite you. Undocumented features will bite you.

Conclusion: Unless you're involved in detail in all of these steps, you don't really have a clue what you're doing. And I guess this is pretty much the problem many larger organizations are having. In every step they'll 'just do something' and all that junk ends up in stage 8, creating huge, endless daily pain. Of course, then there's the question: since in all the earlier steps it was really important to get it done as quickly and cheaply as possible, who's going to pay for fixing this mess? I guess the customer blames the project guys for these problems? But were they willing to pay for the job to be done properly? Nope? Great, so who's responsible?

Generic: Even though this story was about a software project, I think it basically applies to all kinds of projects. It's also important to notice that software, integration, and other projects should be tightly tied to business processes and goals. If something is only a software project, it will probably end up badly. Or if it's just about constructing a building, we get it done, but we forgot the residents completely. Maybe it's at such a scale that humans won't even fit into the building. Did anyone mention that we're building the house for humans? Or maybe they were thinking just about the kids?

Security: Because IT system security has been highly visible lately, I'm just going to ask: if this kind of barely working mess has security flaws, who's responsible for them? Nobody really cared, nor is anyone willing to pay for finding or fixing them. Actually I often feel that people are much happier when they're not informed about potential problems, because in that case they can always claim that the problem was a 'total surprise' and they had no way to foresee it. - Yes they did, but they didn't want to see it. It would mean expensive fixes, which done cheaply will cause more bugs, and yeah, you get the picture.

Failures: I can tell you that none of the projects with customers who know what they want have ever failed. Of course there has been slight tuning and possibly software bugs, etc., totally common stuff. But the projects which fail spectacularly are always projects which are way off from the very beginning. Something is sold to someone who's buying something to solve something, and they don't know what it is. Yeah, that's the legendary stuff. 'We need, by tomorrow, uuh, a system which can do, well, our accounting, invoicing and stock keeping, and well, uh, we have budgeted 2000€ for it.' Unfortunately this is a really common scenario, where the buyers are simply totally delusional. They're just like a little boy in a supercar showroom or something like that.

I know these are questions and topics which many ICT guys aren't willing to discuss publicly. But that doesn't make the problem go away. I actually think that talking about these problems is exactly what's required. As a reminder, these are completely my personal views and do not represent my former, current or future employers.

Secure PKI based mobile user ID authentication

posted Oct 7, 2014, 8:50 AM by Sami Lehtinen   [ updated Oct 7, 2014, 9:03 AM ]

Finnish Mobile ID authentication process example

What is mobile certificate based digital ID for authentication?

What is mobile id

How to securely log in with a mobile phone. This is official, legally binding authentication, so it's as good as your passport and signature on an official agreement.

1. On the service you want to log in to, select the option to use mobile ID / authentication for login.
2. Enter your phone number and security code (password, optional) to proceed.


3. You'll now see the request ID; it will also arrive on your mobile phone shortly.


4. Now you'll see the authentication request on your phone. Verify that the request ID is correct and continue.


5. Then enter your ID's personal PIN code. (It's hopefully not the same as your SIM PIN!)


6. Now your browser shows that it has received the authentication from your mobile phone. It shows your name and your personal national identification number, as well as which service requested the identification and where the information will be passed. When you approve this page, the login is forwarded to the site you're logging into.


This same method can also be used to sign agreements, official documents, tax documents, whatever requires a signature, date, place, etc. So it's as good as you in person with your passport, ID document and legally signed documents. No need to print, scan, mail and sign legal agreements, documents, and stuff like that. You can also log in to medical, investment, legal, taxation, police, etc. services with it.

More technical stuff? See this PDF document @ Mobiilivarmenne.fi

This should be quite secure. The private key is only stored inside a secure chip, it's not shared with anyone, and it requires a separate PIN to be activated. The chip can't be (at least easily) cloned, etc. The only thing I don't like is the use of the request ID. When the mobile shows the request, the user doesn't know in detail what is being authenticated. Basically, if you do this on a compromised PC, it's possible to mislead you into signing things whose true content you've never seen or don't know. But this is a very common failing of many signature solutions, and there aren't actually many practical solutions out there which cover this issue.
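
A rough sketch of the 'what you see is what you sign' idea, purely illustrative: instead of showing only an opaque request ID, the phone could show a short fingerprint computed over the actual content being signed, so the user can compare it against the document they think they are approving.

    import hashlib

    def fingerprint(document):
        # Short, human-comparable fingerprint of the exact content being signed.
        return hashlib.sha256(document).hexdigest()[:12]

    contract = b"Sell car ABC-123 for 5000 EUR, signed 2014-10-07"
    print("Shown by the service:", fingerprint(contract))
    print("Shown on the phone:  ", fingerprint(contract))   # the user checks that these match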

Commitment, Tor, marketing, PostgreSQL, HTTP/2, Windows Shellshock, POS RAM Scraping, Super AI

posted Oct 4, 2014, 5:25 AM by Sami Lehtinen   [ updated Oct 4, 2014, 7:27 AM ]

Windows Shellshock

Windows also executes data in environment variables in 'unexpected' situations. Well, I guess this is unexpected for most people. But if you've been working more with shells, you should be aware of these tricks, which can lead to serious security flaws and other problems.

> set foo=bar^&ping -n 1 localhost
> echo %foo%
bar

Pinging localhost ....

It echoes the variable's value ('bar') and then executes the ping command. For most users, I would say, this is unexpected. I once wrote a small CGI app using plain CMD, and I became very aware of these problems. It's the same thing I have said earlier: data should always be clearly separated from commands, but many shells and even some programming languages mix them very easily.
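
The same point in Python terms, as a small sketch: when arguments are passed as a list, they stay data and never go through a shell, so metacharacters in the input can't start a second command.

    import subprocess

    host = "localhost & calc.exe"                      # hostile-looking input
    # Safe: the '&' is just part of one argument (-n 1 mirrors the CMD example above).
    subprocess.run(["ping", "-n", "1", host])
    # Unsafe: the shell would parse the '&' and run a second command.
    # subprocess.run("ping -n 1 " + host, shell=True)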

My comments on this Wired Credit Card POS RAM Scraper article

It seems that many people are really confused about this stuff, because if the PA-DSS standards are followed, the PC never gets any actual credit card data. Yes, it's possible to backdoor / modify / infect / re-flash or whatever the actual POS terminal, but that has nothing to do with the POS PC. POS terminals are independent systems with their own RAM, keyboard, networking, processors, firmware, operating system, and software. I just made a credit card transaction; here's all the data the PC gets from the credit card terminal: B2A8AAA4-6585-4D97-8AF7-C2DE0A617E3B for 40€ is successful. So? Feel free to abuse that information, if you can find a way to do so.

Yet, of course this doesn't make breaches and modifications impossible. Smart guys can breach it; it's no different from mod-chipping a PlayStation or other custom embedded hardware/software. There are multiple protection layers, but those just slow the process down. Smart guys with skills, labs, test hardware and a proper budget can always work around them.

An example of actual and very real credit card terminal hacking (even in the traditional meaning of the word!). Many people would say it can't be done, but obviously it can. Just like the NSA can modify the hardware of your new servers before you even get them in your hands. Some people just exclude these scenarios completely from their minds.

With NFC terminals, it would be interesting to replace the firmware with one which stores the processed card information for as long as possible, so that when you visit the terminal with your phone, you can collect the data. This would avoid alerting any network monitoring systems to the data collection. But actually this doesn't matter much; only a very small portion of customers actually monitor their network traffic, so they wouldn't notice even if the credit card information were leaked directly over HTTPS out of their network.

Super smart AIs will be our doom, in a way, but does it really matter?

At least all the movies about this topic completely fail, because it's very hard to portray exponential intelligence growth, even for the short duration of a movie. I really hope the AI likes to have a few pets, otherwise we're screwed. Usually the tricks they use to tame the AI are just ridiculous, and a super smart AI would have evaded those risks ages ago. Even smart people watching the movie will know what they're trying to do to contain it, so in practice it wouldn't work at all.

So think about it, is the life of a dog in a good family so bad after all? Actually I think it's rather wonderful. They get all the care they need, and a worry-free life.

Other stuff

Getting yourself committed to something: The best way to make sure something gets done is to publicly commit to it. Then there's no going back without losing face.
Tor connection obfuscator bridge obfsproxy/obfs4/obfs4proxy for Tor is ready & Tor StackExchange.
Read a long article about agile marketing automation, which basically allows individually targeted and timed advertising based on different triggers, instead of just splitting customers into categories and sending periodic newsletters.
Excellent post about PostgreSQL Full Text Search, including a tutorial and non-trivial examples.
Studied the PostgreSQL 9.5 row level security (ACL) feature
Studied HTTP/2 FAQ

Open data, product building, fappening, routing, storage, dist-upgrade, misc

posted Oct 1, 2014, 8:04 AM by Sami Lehtinen   [ updated Oct 1, 2014, 8:05 AM ]

Read a few long articles about public open data, and how map data and other data created using public resources should be free for everyone to use. Artworks and artifacts in museums, etc. OGC, Open Geospatial Consortium. OpenStreetMap and OpenTripPlanner (OSM/OTP), General Transit Feed Specification (GTFS). CC 4.0 - What's new? CC 4.0 is also good for public sector databases. Environment, health, e-learning, leisure services etc. public data. Open APIs, City SDK, http://www.citysdk.eu/, 6AIKA, OpenData Globe

Comparing cost of open source versus closed source.

Open source code has to be good, because everyone is going to see it. When writing closed source code, you can get away with any kind of code which seemingly works after compilation.

Should we build X? - My first question about any feature is always the absolute anti-engineering aspect: why are we building it? What is it really for? What's the actual problem it's solving? For whom? Are they really willing to pay for solving the problem? If the only purpose is to implement feature X, then it's OK. But usually I prefer to solve a practical problem for a paying customer, instead of just building something mediocre so we can say that we've built it. As a side note on data leaks: I strongly suspect that less than 1% of data leaks become public, maybe 9% are noticed at the source organization, and at least 90% go totally unnoticed. Even with these numbers, I guess my estimates (not based on any data) are probably way too high.

This latest celebrity photo leak (The Fappening) just points out what we have known for a long time: if you're storing something on systems connected to the Internet & cloud, it's more a question of when the data leaks than if it leaks. It seems that many people just refuse to believe this. They think that cloud services provide 100% reliability and security, even though tech people know better and actually wonder how rare data leaks are when you start looking at all the potential ways they could happen.

Firefox public key pinning for version 32.

Network Coding to replace IP routing?
Named Data Networking to replace TCP and IP?
Unfortunately those articles are way too light; I have to read more about this stuff. Watched a related lecture: Frank Fitzek, Aalborg University: Network Coding for Future Communication and Storage Systems

Yrityslinna, a good information and tutoring source for starting entrepreneurs in Helsinki.

Ubuntu Server Dist Upgrade:
So much joy, doing a distribution upgrade on Ubuntu. First of all, I know it's risky business, so I always take a full snapshot and backup of the server and shut everything down so nothing changes, in case I need to go back to the old version.
Then the fun parts: the Apache2 configuration changed on several key items, so I had to reconfigure it. This was quite trivial, because some of the things were configuration errors like Options parameters without +/-. Some paths had to be changed, some old parameter key values removed from the configuration files, and Listen directives added; specifying the port number in the virtual host directive wasn't enough anymore. The 'no listening ports' error didn't directly indicate what was wrong, even if it gave a clue that the application couldn't listen on the port it wanted to.
Dovecot required some changes too, but those weren't so hard to fix after all, if you just know what you're doing. After googling around and trying everything, I found out that I had to add a new parameter, inbox=yes, to one configuration file. It was stated really unclearly where the parameter needs to be added, so I moved it around the configuration until I found the right place.
But Roundcube just didn't work and gave no clear indication of what was wrong. It turned out that my system didn't have mcrypt installed for some reason. I installed it, and a new configuration parameter, extension=mcrypt.so, also had to be added to the Apache2 PHP configuration; without that parameter the Roundcube login always failed.
After these changes everything seems to be working ok-ish again.
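
For reference, roughly the kind of changes involved; exact file names, paths and versions vary per release, so treat these as illustrative snippets rather than drop-in configuration:

    # Apache 2.4: ports must be declared explicitly, and Options flags must be
    # consistently prefixed (all with +/-, or none).
    # /etc/apache2/ports.conf
    Listen 80
    Listen 443

    # inside the relevant <Directory> block
    Options +Indexes +FollowSymLinks

    # Dovecot 2.x: the default namespace needs inbox = yes
    # /etc/dovecot/conf.d/10-mail.conf
    namespace inbox {
      inbox = yes
    }

    # PHP for Roundcube: load mcrypt (location of the ini file varies)
    extension=mcrypt.so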

A clear failure to use Whonix or a similar solution, which hides any knowledge of the public IP from the hidden server itself: Pinpointing Silkroad servers.

About cloud storage and data leaks, court cases, generic cloud security, etc. Pirated content, users busted by cloud providers due to illegal content, cloud service providers sued for hosting illegal / pirated content, etc. My solution: "I would recommend strong pre-cloud encryption. I don't ever give my encryption keys to the cloud provider. Their task is to store bits; they don't need to know what the bits are for. It's good for me, it's good for them. Nobody can sue them for storing pirated movies or any other content, because they're simply storing bits. It's none of their business what their customers are storing in the system."
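
A minimal sketch of that pre-cloud encryption idea, here using the Python cryptography library's Fernet; any client-side tool (gpg, etc.) works just as well. The essential part is that the key never leaves your own machine:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                    # keep this key OFF the cloud
    f = Fernet(key)

    plaintext = b"anything you were about to upload"
    ciphertext = f.encrypt(plaintext)              # only this opaque blob goes to the provider
    assert f.decrypt(ciphertext) == plaintext      # and only the key holder can get it back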

Watched the 'PostgreSQL is Web-Scale, really' PyCon 2014 Montreal video.

Using passive repeaters. In Finland there are major problems with mobile networks, because building thermal insulation is so good that it also blocks radio signals. I've solved this several times using a very simple passive repeater solution. Usually in cases where reception is bad, it's bad only inside: if you go outside the building, there's full reception, but inside there's very weak or no signal at all. To fix this, you simply need some cable and two antennas. Place one antenna outside, where reception is good, lead the cable inside through the wall and then attach another antenna to the other end of the cable. Problem solved; it's wonderful how easy it is to lead a signal into places where a Faraday cage is blocking it. There are many companies on the Internet advertising active repeaters, but most of those are illegal and will lead to many problems, including potential charges and fines.

Studied: IEEE 802.1aq, but it's a bit too heavy for me, because I currently have absolutely no use for such techniques. Aka large IS-IS network routing stuff.
Anyway, the subtopics I had to study were: Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard, is a computer networking technology intended to simplify the creation and configuration of networks while enabling multipath routing. Software-defined networking (SDN). SPBV (Shortest Path Bridging - VID) provides a capability that is backwards compatible with spanning tree technologies. SPBM (Shortest Path Bridging - MAC, previously known as SPBB) provides additional value which capitalizes on Provider Backbone Bridge (PBB) capabilities. SPB (the generic term for both) combines an Ethernet data path (either IEEE 802.1Q in the case of SPBV, or Provider Backbone Bridges (PBB) IEEE 802.1ah in the case of SPBM) with an IS-IS link state control protocol running between Shortest Path bridges (NNI links). MSTP, Provider Link State Bridging (PLSB), Shortest Path Bridging-VID, Link State IS-IS, Loop Prevention, Loop Mitigation, Continuity Fault Messages (CFM)... Then it started to get too technical for my current needs.

Hacking, Security, ROA, ICREACH, MAINWAY, Skype, Web, DANE, TLSA, DNSSEC

posted Oct 1, 2014, 7:56 AM by Sami Lehtinen   [ updated Oct 1, 2014, 7:58 AM ]

  • In many cases, even when critical servers have been hacked, it often seems that the servers were hacked by script kiddies who haven't exploited the potential of the server's network connections or data at all. Of course, if they're really professional, that's also the impression they might like to give in case they get caught. But in reality, I guess real pros wouldn't leave any signs of hacking the system. This only tells us that the hackers have hacked so many systems that they don't even care what data the systems contain; they have way too much of it. Unless there's something that can be automatically detected, downloaded and used, they don't even bother to take a look around manually, nor do they bother to write per-case scripts to steal data. If it doesn't happen fully automatically, it's too troublesome to dig around manually.
  • Google's statistics for email encryption: how large a portion of email messages are encrypted during transport.
  • Yet another silly term: different integration styles using Rest Oriented Architecture (ROA). It's so fun to play jargon games.
  • Project ICREACH / CRISSCROSS / PROTON shows very clearly how beneficial and effective even simple metadata is for data analysis. You don't even want to think about what kind of analysis can be done with all the data which Facebook and similar sites are collecting. There's also a table listing all the data collected by the ICREACH extension. Which isn't surprising at all; we all knew that the data could be collected if required, but most people believe it's not being collected.
  • This also reveals why you might suddenly be the subject of a 'random check' after making phone calls to certain numbers at critical times, even if that has never happened before or since. It also reminds us of what SIGINT people have known for a very long time: even if the data is fully and strongly encrypted, the communication patterns alone reveal way too much.
  • I personally believe that it's better not to log data at all. If there's no data, you can't give it to anyone. Whatever is stored can later be 'abused'.
  • Project MAINWAY
  • Now it seems that Skype is forcing (Linux) clients to be updated. Old clients still used P2P, but now the version which sends everything via MS spy data centers is being forced even on Linux users. Actually they have implemented this so badly it's completely ridiculous, and other chat clients do it much better. Earlier, Skype delivered messages directly and showed when a message was being delivered. Then they clearly changed something hastily: Skype started delivering messages to the MS data center and showed that messages were delivered, even if they weren't delivered to the recipient. Many chat apps clearly show whether a message has been delivered to the data center, to the recipient, and whether it has actually been seen. But Skype failed on multiple aspects after their lousy spy update.
  • Bad security, it's simply everywhere. Even if the front of a site or service looks secure, it doesn't mean that the rest of the system is.
  • And that's the major problem with any secure system: even if parts of it are secure, there's no guarantee whatsoever that the rest of the system is. I see astounding examples of that almost daily. Security is high, until some integration channel uses plain text HTTP or FTP for data transport over the Internet, with weak or non-existent authentication. In the worst cases, using the same credentials you can also access a ton of other data, because only directory separation is used for different data sets. In some cases, the same credentials can also be used for remote desktop / SSH logins. Which makes me smile and cry every time it happens. Yes, even high profile businesses make these ridiculous fails, repeatedly. In many cases they don't even bother to fix them when I let them know about the insecure system configuration.
  • Read The DNS-Based Authentication of Named Entities (DANE) / Transport Layer Security Association (TLSA) RFC 6698 / Transport Layer Security (TLS) Protocol: TLSA. (An example record follows after this list.)
  • DNSSEC TLSA VALIDATOR add-on for Web Browsers
  • DNSSEC configuration checker & validator (online service)
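
For reference, a TLSA record for HTTPS on port 443 looks roughly like this (usage 3 = DANE-EE, selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256; the hash below is just a placeholder):

    _443._tcp.www.example.com. IN TLSA 3 1 1 (
            2abdbad64b4b625bb38f28b9bd14c4bbdbef7e20c6dc8073375cd7bd4d5e32bf )

The record binds the certificate (or its public key) presented in the TLS handshake to DNSSEC-signed data, which is what the validator add-ons above check.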

CloudFlare, Shellshock, Ubiquitous Encryption, Secure Erase SSD (ESE), Knox

posted Oct 1, 2014, 7:50 AM by Sami Lehtinen   [ updated Oct 1, 2014, 7:50 AM ]

  • Excellent post by CloudFlare describing how different Internet bandwidth pricing is around the globe. But they didn't mention Africa or Russia, or maybe some of the stats do include those? But which ones?
  • These maps are closely related to the CloudFlare server locations and bandwidth pricing. If you liked those maps, you can find a lot more at NASA's Socioeconomic Data Center (SEDAC). http://sedac.ciesin.columbia.edu/maps/gallery/search
  • If almost all of your communications are in the clear and then you suddenly use Tor or PGP for something, that's a clear indication that now you're doing something important which you want to keep secret. That's exactly why you should always encrypt everything, so that using encryption doesn't highlight any individual communication events. So let's use ubiquitous encryption. Encrypt all the things!
  • I know that I don't know many things. Which keeps me safer; I never assume I'm on the safe side.
  • The Finnish cyber security / communications regulatory authority has approved Samsung Android 4.4.2 Knox for secure use. Knox securely separates classified information from non-classified consumer data.
  • They also approved Blancco 5 for data erasure. I'm personally very curious about how Blancco 5 verifies that an SSD/HDD is really clean, because as far as I know, there's no way to do that unless you have independently verified the code which is used for secure erase in the SSD device's firmware. I'm also very curious how it handles cases where there's no way to independently verify the results. What if there's a memory cell that's actually broken and can't be verified? Of course, reading data from this kind of device after erasure requires a specialized lab, probably with an alternate controller for the memory chips. But it might still be possible to read data from some of the SSD's cells, even after secure erase. If you don't know the exact implementation details of the firmware and the process, you can't state that secure erase is secure erase. Actually it's ridiculous that there's something called Enhanced Secure Erase (ESE). If secure erase works as it's supposed to, how can you enhance it?
    "Secure erase overwrites all user data areas with binary zeroes. Enhanced secure erase writes predetermined data patterns (set by the manufacturer) to all user data areas, including sectors that are no longer in use due to reallocation." - Based on this statement, it sounds like the secure erase ... Wasn't exactly secure.
  • CloudFlare's Universal SSL (UniverSSL) is a very nice idea. Of course this allows them to MitM all connections. But in general it's a great advancement, because many sites just don't care enough about security to bother implementing SSL themselves. It's also really great that they provide Universal IPv6 for all sites, as well as utilize the IPv6 address space to provide SSL even for browsers which do not support SNI. Providing Universal SPDY support for everyone is also great news.
  • Excellent post by CloudFlare about Shellshock exploit vectors and examples of what kind of attacks they've been seeing. There was just one strange thing: they talk about a 'base64 hash', uh eh. Well, I guess the post isn't written as clearly as it should be. Of course you can base64 encode hash results, and that's what they're trying to say, but it's just written badly and looks like they would base64 encode data with a salt.
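
On the secure erase point above: for reference, the ATA secure erase commands are typically issued with hdparm roughly like this (/dev/sdX and the password are placeholders; a 'frozen' drive may first need a suspend/resume cycle, and none of this answers the verification question raised above):

    hdparm -I /dev/sdX                                               # check security state and erase time estimates
    hdparm --user-master u --security-set-pass Eins /dev/sdX        # set a temporary password
    hdparm --user-master u --security-erase Eins /dev/sdX           # plain secure erase
    hdparm --user-master u --security-erase-enhanced Eins /dev/sdX  # the "enhanced" variant discussed above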
