Blog

My personal blog is about stuff I do, like and dislike. If you have any questions, feel free to contact me. My views and opinions are naturally my own and do not represent my employer or any other organization.

[ Full list of blog posts ]

VDSL2, Filter Bubble, Nvidia, GitLab, BlackNurse, Support, Surveillance, Made in China

posted Feb 25, 2017, 11:20 PM by Sami Lehtinen   [ updated Feb 25, 2017, 11:20 PM ]

  • I've got an idea how to improve VDSL2 speeds a lot. The service provider says that only VDSL2 connectivity is available, and we all know that VDSL2 speed sucks, especially over long or poor-quality wires. There are two ways VDSL2 speed can be radically improved:
    1) Use CAT6 cabling for the VDSL2 run.
    2) Connect the VDSL2 modem directly to the VDSL DSLAM with a short (< 2 m) CAT6 cable and then use Ethernet signaling from that point to the end destination.
    After short testing I found that both solutions work well and provide clearly superior speeds compared to CAT3 cabling. Especially configuration option 2 should provide a guaranteed, stable, full-speed VDSL2 connection.
  • Google+ has such a strong filter bubble on the stuff being shown that it's annoying. It makes the content you see very, very narrow; even stuff you were interested in a year ago is totally forgotten, unless you keep showing interest in it all the time (weekly). It's a good idea to remind yourself of this problem. The TED "filter bubbles" talk is from 2011, but it's more relevant than ever.
  • Even more rage with Nvidia "quality display adapters". They are horrible pieces of ****. Absolutely. I can agree with Linus that Nvidia is a total disaster. I'll probably buy the cheapest Intel display adapter, and it'll work and perform much better than the expensive Nvidia junk. Coders have been fixing this issue for several years now, and it's still the same fail. Amazing piece of junk. - The working time spent trying to get Nvidia to work costs considerably more than their products. What a waste! I totally agree about Intel GPUs being clearly superior. Nvidia also nicely floods my dmesg log:
    [12496.161542] nouveau 0000:01:00.0: Xorg[1385]: fail set_domain
    [12496.161544] nouveau 0000:01:00.0: Xorg[1385]: validating bo list
    [12496.161546] nouveau 0000:01:00.0: Xorg[1385]: validate: -22
    I ended up selling my Nvidia adapter; it was just that bad.
  • GitLab: How We Knew It Was Time to Leave the Cloud - The CephFS latency chart seems extremely, painfully familiar. It's just incredibly slow at times, and there are hot spots causing total extended lockups. As expected, everything they wrote was very true. Been there, done that. Nothing new.
  • BlackNurse attack - Didn't I just complain a while ago about inefficient, bad code? It seems that many companies are able to produce it, including high-profile ones. Yes, this problem is caused by simply bad, inefficient code. There's a generic name for these attacks: resource consumption attacks. Btw, it seems that the ZyWall USG 50 requires less than 10 Mbit/s of traffic to basically go brain dead due to CPU load. Works like a charm.
  • "I have a Toyota Corolla" - Lol. Been there, done that. I've also supported people with all kinds of strange issues, just because they managed to find my email or phone number in some related file. It's amazing how little people appreciate open source and free projects, and how they don't get that the developer might not run a 24/7 generic ICT expert helpdesk just for them.
  • Britain's new surveillance law. Well? That was quite expected. Because the data exists, they want access to it. That's the norm. IoT is going to lead to interesting issues in the future. And government surveillance isn't the one you really should worry about.
  • Read the United Nations report "Robots and Industrialization in Developing Countries". An interesting and a bit scary thing to read. But I guess it's nothing new to those of us who work with technology daily and follow global politics, business and the global economy. Automation is going to replace jobs and there's nothing that can be done to prevent that. Only efficient business works; if things are inefficient, it doesn't work out. Anything done to stop development is guaranteed to backfire really badly. This issue has been covered in history over and over again: try to prevent progress and, even if it might seem like a good idea for a while, it's guaranteed to cause major long-term damage. Made in China 2025 strategy, industrial robots.

33C3 notes & keywords part 2

posted Feb 25, 2017, 11:15 PM by Sami Lehtinen   [ updated Feb 25, 2017, 11:16 PM ]

Watched more talks...

So much bad 'secure' code. You shouldn't expect much from normal programs, hehe.
Bootstrapping a slightly more secure laptop - Amazing talk about how deep exploits and malware go in the hardware. As expected, current systems are littered with different serious attack vectors.
Law Enforcement Are Hacking the Planet - Yep, the USA can 'legally' hack any computer, anywhere. Let's watch World Police again! They can also run decoy sites while committing actual serious crimes in the process. It also makes it very clear that there's no "cyber virtual world"; it's actually very real and physical.
Shut Up and Take My Money! - This pretty much proves the point I've been raising repeatedly: as long as user authentication sucks, there's no way to make things secure. Almost all 2FA schemes I've seen are more or less bad; good ones are extremely rare. It's not OK to hand out a generic authentication token. That's just as stupid as using a static password. The token should naturally be specific to a 'command / action' at a specific time, i.e. a cryptographic signature for that particular transaction, now. Otherwise the user can 'authorize' anything at all without even knowing it. Most 2FA schemes are just like signing a blank contract: fill in whatever you want later. - Real-time transaction manipulation and misleading of users / automation systems is very real and works great. Awesome talk: no hammering protection, trivial brute-force attacks in minutes, etc. Totally, laughably fun talk, I mean in terms of security fails. But the truth is that security is usually amazingly bad. "Banking by design", laughable security. Hahaha. I'm clapping too, great! N26 security. The only amazing thing is that when the issues were reported, they seemed to understand that there is a problem. Often they don't, which makes it even more fun. This was an absolutely great talk.
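To make the transaction-specific token idea concrete, here's a minimal sketch (my own illustration, nothing from the talk; the secret and field names are made up). The second factor signs the actual transaction contents, so the token authorizes that transaction and nothing else:

import hashlib
import hmac
import json
import time

# Hypothetical per-device secret, provisioned once over a secure channel.
DEVICE_SECRET = b'example-secret-provisioned-at-enrollment'

def sign_transaction(recipient, amount_cents):
    # Return a token bound to this exact transaction at this time.
    message = json.dumps({
        'recipient': recipient,
        'amount_cents': amount_cents,
        'timestamp': int(time.time()),  # limits the replay window
    }, sort_keys=True).encode()
    return hmac.new(DEVICE_SECRET, message, hashlib.sha256).hexdigest()

# The server recomputes the same HMAC over the transaction it is about to
# execute; if an attacker swaps in another transaction, the token won't
# match. A generic OTP would have 'authorized' the swapped one too.
token = sign_transaction('FI12 3456 7800 0078 90', 4999)

The whole point: the signature covers exactly what the user thinks they're approving.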
Pegasus internals - Neat espionage software payload, vulnerabilities and exploits. Kernel exploit on each boot.
A Data Point Walks Into a Bar - Wonderful talk about data visualizations. Data driven design.
Make the Internet Neutral Again - EU net neutrality rules and laws. Hmm, I don't know if zero rating is a real problem; I can see many benefits to it too. These are complex questions. European Commission regulation.
Untrusting the CPU - Secure Access Module (SAM).
What's It Doing Now? - The Role of Automation Dependency in Aviation Accidents - Interesting talk, how systems can confuse, disinform and mislead users.
Dieselgate – A year later - Interesting talk about Volkswagen and the court cases & differences between the American (US) and European (EU, Germany) justice systems. Europe lacks class-action lawsuits.
Make Wi-Fi fast again - Nice talk, 802.11n comparison included. Beamforming, QAM, BPSK, QPSK. Multi-User MIMO (MU-MIMO). Phased array antennas. Multiple data streams. Measuring the radio channel. Limited WiFi / WLAN bandwidth. 80 & 160 MHz channel widths basically unavailable.
Lockpicking in the IoT - Bluetooth Low Energy (BLE / BTLE) might be dangerous. - The security hardware & software is ridiculous, so full of absolutely laughable fails. I love talks which really make you laugh, because security is just so laughable. Nothing new. But the BTLE button pusher made me laugh: it's an "IoT" kit that makes any device with button(s) IoT and Internet compatible. Decompiling applications. Downloading firmware. Modifying firmware, hacking locks. Totally awesome talk. Hard-coded fixed encryption keys. Eight months to fix simple issues, typical. Laugh! And the final magnet part, omfg and lulz rotfl. Great, that's just great. Really loved this talk. The NDA comments were really true too. Why do they want to ship shitty vulnerable products? How about fixing those instead of worrying about someone spilling the secret sauce?

There will be more stuff later, in subsequent posts.

SHA1 (SHA-1) is broken - How to configure GnuPG and Mozilla Thunderbird / Enigmail

posted Feb 24, 2017, 7:40 AM by Sami Lehtinen   [ updated Feb 24, 2017, 7:48 AM ]

Generic background information for this post and SHA-1 being broken can be found at Shattered.io.
 
Simply put: well, they've done it. A SHA1 collision generated on purpose.
 
SHA1 has been on the way out for a decade. But now it's finally time to retire it in cases where security matters. It can still be used as a hash algorithm, as long as you remember it isn't a secure one. I often use some extremely simple algorithms like adler32 or crc32 to generate 'hashes'. The point is just to generate a short version of the data, which is highly likely to produce a different outcome if the data is changed.
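A minimal sketch of what I mean (change detection only, not security):

import zlib

def quick_digest(data):
    # crc32 / adler32 only catch accidental changes; both are trivial to
    # forge on purpose, so never use them where security matters.
    return zlib.crc32(data)

print(quick_digest(b'some record'))  # cheap change-detection tag
print(zlib.adler32(b'some record'))  # adler32 is even faster and weaker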
 
As happened with MD5, it's probable that a massive increase in attack strength is to be expected in the near future. So if it's considered broken now, soon it will be much more broken.

GnuPG configuration

In gpg.conf set the following settings:
 
personal-digest-preferences SHA256 SHA512
digest-algo SHA256
 

Enigmail for Mozilla Thunderbird configuration

In Thunderbird settings, just set:
extensions.enigmail.mimeHashAlgorithm = 3

The value 3 stands for SHA256. Note that the = above just indicates key and value separation; the equal sign itself shouldn't be entered.

Other issues and tests

Even if digest-algo and personal-digest-preferences are set and the recipient doesn't set any hash preference, Enigmail still signs with SHA1. I don't know why.
 
These tests were made from the command line / shell.
 
-----BEGIN PGP SIGNED MESSAGE-----
 
Hash: SHA1
 
Test
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
iEYEARECAAYFAlivncgACgkQrgJ3hCdO9iaVUgCdFGkBNiUHQ69fmlt6ai6j+9Ab
lkgAmQHC7uPnWeTVlhMlDzjvjpXym1x6
=YKBM
-----END PGP SIGNATURE-----
 
Only when the digest-algo SHA256 option is enabled will the output use SHA256.
 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
 
Test
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
 
iEYEAREIAAYFAlivnhUACgkQrgJ3hCdO9ib07QCfUip2uFfgPWzn9ndGImaqqcUF
foYAnj0V3k2A5G7yE1BS1wFJFseztpfI
=G7X1
-----END PGP SIGNATURE-----
 
As you can see, I've tested everything using both GnuPG v1 and GnuPG v2.
 
Now the command-line --clearsign produces the right output. Yet interestingly, that doesn't affect Enigmail.
 
Also, my own default key sets SHA256 as a preference, but that clearly doesn't affect signing with that key by default. Which would have been nice.
 
Just to make sure that the recipient's preferences don't affect the outcome, I disabled the digest-algo option and tried using -r when clear signing.
 
GnuPG just warns that -r without -e doesn't encrypt the message. Still, the digest preferences of the -r user won't affect the digest algorithm. Hmph.
 
So, just go and add digest-algo SHA256 to your gpg.conf if it isn't there already.
 
But how do I specify the hash algorithm for Enigmail?
 
Quote from Enigmail Wiki documentation:
 
"Enigmail relies by default on GnuPG for selecting the hash (digest) algorithm. From GnuPG, the hash algorithm can be specified in the file gpg.conf using the parameter digest-algo hash_algorithm."
 
Yet for some interesting reason, the digest-algo setting didn't actually affect Enigmail. 
 
Other values for mimeHashAlgorithm with Enigmail:
0: Automatic selection, let GnuPG choose (default, recommended)
1: SHA1
2: RIPEMD160
3: SHA256
4: SHA384
5: SHA512
 
After changing the settings, I sent an email to myself and verified that the setting actually affects the messages being sent out:
 
Content-Type: multipart/signed; micalg=pgp-sha256;
    protocol="application/pgp-signature";
    boundary="N/A"
 
If the settings aren't correct it'll say:

Content-Type: multipart/signed; micalg=pgp-sha1;

 
And when using S/MIME:

Content-Type: multipart/signed;
    boundary="N/A";
    protocol="application/pkcs7-signature"; micalg=sha1
 

Other remarks:

Almost all messages discussing SHA1 being broken were themselves hashed with SHA1, with the hash then signed using public key cryptography. That was pretty funny.

kw: Mozilla Thunderbird Enigmail, GnuPG, PGP, GPG, SHA1, SHA256, SMIME, S/MIME, hash, digest, signature, signatures, configuration, settings, set, configure, algorithm, preference, preferred, security, privacy, email encryption, signing, data, email, configuring enigmail to use sha256, configuring GnuPG to use SHA256.

ZeroNet - Just my personal random thoughts about it

posted Feb 24, 2017, 7:26 AM by Sami Lehtinen   [ updated Feb 24, 2017, 7:27 AM ]

Link: ZeroNet.io
 
First I played a bit with ZeroNet and yes, it seems to be practically working. In some cases the response times are even surprisingly low; it's much more responsive than you would expect from such a system. So it works nicely after the initial synchronization.

Some of the opening statements sound really dubious from a practical point of view. But let's not condemn them yet.

The distributed approach, caching, censorship resistance, incremental updates and Tor support are really nice concepts. Of course the synchronization can be done in a very efficient or inefficient way. That remains to be seen.

Easy zero configuration setup is wonderful. I personally dislike programs which require complex setup.

I'm really curious about the SQL database support, but I hope to find out more about that a bit later. Data updates have generally been exactly the problem with distributed systems. It wasn't documented in detail; I guess they're at least trying to do it reasonably efficiently. There are probably spam issues and so on, of course, but those are concerns with any site which allows user content.
 
"No torrent-like, file splitting for big file support" - This is bit confusing. I thought it was using BitTorrent? So why multi peer parallel downloads wouldn't work?  Or do they mean downloading 'individual files' from larger bundle? Don't know.
 
They also mentioned that they're not trying to compete with Freenet and I2P, and that they're not going to replace the current client-server model. Which are just joyfully sane conclusions.

So nothing much; I hope a bright future for ZeroNet. I've been a huge fan of peer-to-peer (P2P) technologies, as well as anonymous & freedom-of-speech platforms. So the future will tell how ZeroNet is going to do. Based on history, it's not going to do well; P2P technology history is pretty bleak. Just like with web sites, only an extremely, really extremely small portion will have any success. The others will be forgotten, so that nobody remembers or cares about them after a few years.

IPv6 addressing is hard, but is it too hard, even for major network operators?

posted Feb 20, 2017, 10:31 AM by Sami Lehtinen   [ updated Feb 23, 2017, 10:11 AM ]

It seems that OVH doesn't know how IPv6 address zero compression works.

I recommend that they read RFC 5952 section 2.1 ("Leading Zeros in a 16-Bit Field") and RFC 4291 section 2.2 ("Text Representation of Addresses").

Zero compression allows compressing one or more full zero blocks to :: .

Leading zero compression allows dropping leading zeros within an address block.

Example of full address:
2001:0db8:0401:0310:0000:0000:0000:270d

Only addresses written out as the full 16 bytes, i.e. 128 bits, are full addresses.

Zero compression:
2001:0db8:0401:0310::270d

Full blocks of zeros have been replaced with :: .

Zero compression can only be used once per address. So for an address like 2001:0db8:0000:0000:1:0000:0000:1, only one group of zeros can be compressed: either 2001:db8:0000:0000:1::1 or 2001:db8::1:0000:0000:1.
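Python's standard library follows RFC 5952 here, so it's an easy way to check the rule; when two zero runs are equally long, the first one gets compressed:

>>> import ipaddress
>>> a = ipaddress.IPv6Address('2001:db8:0000:0000:1::1')
>>> b = ipaddress.IPv6Address('2001:db8::1:0000:0000:1')
>>> a == b # both notations name the same address
True
>>> a.compressed # only one zero run becomes ::
'2001:db8::1:0:0:1'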

Lead zero compression:
2001:db8:401:310:0:0:0:270d

Leading zeros within each block have been omitted.

Both methods combined, most compact representation:
2001:db8:401:310::270d

The examples above are all structurally correct.

OVH's fail:
The compressed address shown above seems to be OK? Yet it isn't. Why? Because it seems that the guys at OVH do the zero compression incorrectly.

The address is shown as above, even though the actual address, uncompressed, is:
2001:0db8:0401:3100:0000:0000:0000:270d

And correctly compressed:
2001:db8:401:3100::270d

That's right, for some unknown reason they're taking one trailing zero from that 3100 block and swallowing it into the :: structure (that lost zero was highlighted in red in the original post). I suspect they've got buggy code dealing with the address compaction. The mistake seems to be systematic in their management console portal, in different views. I've gotten rid of all IPv6 issues so far by adding that one missing zero to the configurations manually. The same mistake applies to the host IP address as well as the default gateway address. Earlier OVH used uncompressed addresses in the management console, but then they implemented the address compression incorrectly. - Lulz.
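The correct compressed form is easy to verify with Python's standard library (using the made-up example address from above):

>>> import ipaddress
>>> ipaddress.IPv6Address('2001:0db8:0401:3100:0000:0000:0000:270d').compressed
'2001:db8:401:3100::270d'
>>> ipaddress.IPv6Address('2001:db8:401:310::270d') == ipaddress.IPv6Address('2001:db8:401:3100::270d')
False

So OVH's displayed form isn't a different spelling of the same address; it's a different address entirely.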

This interestingly shows the 'standard IT security & reliability concepts': nobody gives a bleep whether it's working or whether the information is correct. Unfortunately I see this happening daily, in all kinds of situations. Disinformation is extremely widespread and nobody cares that the facts are completely wrong.

Other funny remarks: I also found several sites advertising IPv6 address expansion to the full 128-bit notation. Yet even those worked incorrectly. It seems that they only expand the zero compression and do not correctly expand the leading zero compression. Python, in contrast, does all of this correctly.

OVH - Thank you for providing disinformation to your customers and wasting my time on pointless troubleshooting and blog posts. You know, it took me a while to figure out WTF is wrong. And then you complain that the helpdesk is overloaded? Well, it is. If you disinform and mislead customers, it's quite natural that most customers don't get what's wrong.

Edit: Continued three days after noticing the issue. They haven't even bothered to fix it yet. OVH's JSON API does give correct (uncompressed / exploded / expanded) address information, but the management console still compresses addresses incorrectly. Other people have confirmed the issue on Twitter, and I've confirmed it using several different OVH accounts. It's just as I said, and nobody even bothers to care. - Also added color highlighting to the examples.

Made up IPv4 example to clarify things

It also seems that the IPv6 example is hard to grasp for several people. Let's make an IPv4 example, as it might be more familiar to everyone. This is a made-up example, but it shows the concept.

The JSON API would show the full address:
010.123.200.001

And the management console would show the compressed address:
10.123.20.1

Python & IPv6

This is also the reason why using Python's standard library ipaddress and similar functions is very nice; they get the job done correctly.

>>> import base64, ipaddress
>>> ipaddress.IPv6Address('2001:0db8:0::1').compressed
'2001:db8::1' # Compressed to the max
>>> ipaddress.IPv6Address('2001:0db8:0::1').exploded
'2001:0db8:0000:0000:0000:0000:0000:0001' # Fully exploded / expanded address
>>> ipaddress.IPv6Address('2001:0db8:0::1').packed
b' \x01\r\xb8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01' # The one and only binary representation
>>> base64.a85encode(ipaddress.IPv6Address('2001:0db8:0::1').packed) # Even more compact Ascii85-encoded binary representation
b'+9;q]zz!!!!"'

kw: OVH, IPv6, fail, addressing, zero compression, won't work, no Internet, issue, problem, fix, connectivity, can't ping gateway, no route to host, default gateway unreachable, Internet address addressing, python, representation, human readable, spacing, colon, encoding, ASCII, 85, networking, issues.

SSD, SQLite3 Vacuum, Inefficient, Neural Networks, Storage Tiers, VDSL2, Sound Canceling, EmDrive

posted Feb 19, 2017, 12:14 AM by Sami Lehtinen   [ updated Feb 19, 2017, 12:15 AM ]

  • Laughed at the post telling that Spotify writes a lot of crap to SSDs - only because they seem to run SQLite3 VACUUM way too often. That's absolutely stupid. It's nice to see that even large companies write utter crap'o'ware software. Vacuuming the database monthly should be more than enough for Spotify; there's no reason whatsoever to do it several times per minute. - Horrible, bad, insane, silly, stupid, I-really-don't-care code is the norm. This should be totally obvious to everyone except the most ignorant coders. It works, so it's great code! Or maybe they've written it as intentional sabotage: let's put something really crazy here and see if anyone notices. It's nice to do that kind of check every now and then, to see if the peer reviews and such work at all. In this case it's also funny that fixing such a fail would take them half a year; any competent guy should be able to fix it in minutes. - Most of my projects vacuum SQLite3 databases monthly (see the sketch after this list). If a database is basically insert-only, I often use a yearly vacuum. This is to reduce index fragmentation; the data itself doesn't get fragmented, because there are no deletes / updates which would create fragmented records or leave empty space in the database.
  • I've seen projects which consume gigabytes of memory and almost all available CPU time while doing nothing. Yes, absolutely nothing. The program is just written very inefficiently: it loops over the same data over and over again without any optimizations and doesn't have any sane sleep or trigger mechanisms in it. - But hey, it's not bad code. It works. It's just horribly inefficient. If someone complains about the system running out of resources or being slow, you can always add memory and CPU cores to keep running these utterly unnecessary brain-dead tasks.
  • Had long, long discussions about evolutionary algorithms and machine learning, and how much of it is just statistics and 'brute force' finding of the right values. In many simpler cases, that's just one way to do it. Here's the Neural Network Zoo, which visualizes some of the network arrangements. A very nice article; even though it's long, I couldn't stop before I had read it completely.
  • Played a little with Windows 10 and Tiered Storage, just to learn more about the services provided natively. It's a real joy to get this configuration done using PowerShell. Even though Windows is all about the GUI, these features can't be configured without PowerShell. For more information see Storage Tiers Performance on TechNet.
  • Stupid or smart solution? Who can judge. Party A says they can only deliver VDSL2; that's great. Then we can connect a VDSL2-Ethernet media converter with a 20 cm CAT3 cable to the VDSL2 DSLAM and use an Ethernet connection from that point to the end site. All these last-mile discussions are always so ridiculous. But this is just the 'software engineer's' solution: code A is crappy and buggy, so let's write code B which works as a proxy and fixes the fails of A. Been there, done that, countless times. Thanks, WebService / JSON / REST world. It seems that the network layer is similar. There's optical fiber to the end, but one party only agrees to deliver Internet over VDSL2. So you can use just a few centimeters of VDSL2 and then switch back to single-mode optical fiber or CAT6 Ethernet. Makes perfect sense, right? - If you think these solutions are stupid: have you used a VGA cable lately? If so, welcome to the club. It's totally stupid to connect a digital output to a digital display with an analog connection. But it works, so maybe it's smart? Who knows.
  • Sound-canceling devices - The sample videos on YouTube are amazing, but I'm pretty sure these are fakes / prototype (fantasy) promotion. If those devices actually worked, I'm sure someone would be selling them with free shipping and returns, because I would definitely buy several instantly. And I've got plenty of friends who would do the same. - Try before buy should be a given; if they trust their product, no BS, it shouldn't be any kind of problem.
  • EmDrive - confirmed to work. That's major news. I'm still extremely curious how it actually works. Waiting for the first in-orbit tests.
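Here's the monthly vacuum approach mentioned above as a minimal sketch (the bookkeeping table is my own made-up convention; adapt as needed):

import sqlite3
import time

VACUUM_INTERVAL = 30 * 24 * 3600  # roughly monthly is plenty for most apps

def vacuum_if_due(path):
    # Autocommit mode: VACUUM can't run inside a transaction.
    con = sqlite3.connect(path, isolation_level=None)
    try:
        # Keep the last-vacuum timestamp inside the database itself.
        con.execute("CREATE TABLE IF NOT EXISTS maintenance"
                    " (key TEXT PRIMARY KEY, value REAL)")
        row = con.execute("SELECT value FROM maintenance"
                          " WHERE key = 'last_vacuum'").fetchone()
        if row is None or time.time() - row[0] > VACUUM_INTERVAL:
            con.execute("VACUUM")  # rewrites the file, defragmenting pages
            con.execute("REPLACE INTO maintenance (key, value)"
                        " VALUES ('last_vacuum', ?)", (time.time(),))
    finally:
        con.close()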

33C3 notes & keywords part 1

posted Feb 19, 2017, 12:08 AM by Sami Lehtinen   [ updated Feb 25, 2017, 11:11 PM ]

What could possibly go wrong with <insert x86 instruction here>? - ASLR, CPU cache layers, cache lines, cache attacks, covert channels, flush+reload attack, prime+probe attack, SSH connection over a CPU cache covert channel. Awesome. Crypto side-channel attack using cache timings. T-Tables, Bouncy Castle, Android. Flush+Flush attack. Timing leakage. Pre-fetch, address translation, page directory. Kernel Address Space Layout Randomization (KASLR). Virtual address space -> physical memory direct-physical map. Translation-level oracle. Rowhammer / row hammer - attacks on privileged addresses.
The Global Assassination Grid: espionage, killing, national security. It sounded like the Drone Operations Team is actually very well informed, compared to a situation where you just turn up with a gun at some random site and have to make decisions without all that background & analytical information. Doesn't sound that bad at all. Of course mishaps happen at times, but in general it's much better than what it could be. DDS. Sometimes the media makes it seem like they'd just be picking random targets from a video feed, but that's not true; there's much more background intelligence behind it: where, what, why, etc. Intelligence, Surveillance & Reconnaissance (ISR) network. Target identification & acquisition. Wideband Global SATCOM (WGS). Autonomous systems. Counter-surveillance, transparency. Loitering weapon systems. Weaponized drones. - A documentary movie about this topic is coming out, and I'm definitely going to watch it when it's released.
Reverse Engineering Outernet - Outernet @ Wikipedia. That's pretty much legacy tech, but an interesting project still.
Everything you always wanted to know about Certificate Transparency - HTTPS, Certificate Authority (CA), Online Certificate Status Protocol (OCSP), Vulnerabilities, ENISA, Implementation issues, Deployment issues, HTTPS / TLS / Drown Attack, CipherSuite mess, Signed Certificate Timestamp (SCT). Hash tree with signed tree head (STH). History proves that Certificate Authorities can't be trusted! /ct/v1/get-sth - crt.sh - Cert Watch
The Fight for Encryption in 2016 - Crypto fight in the wake of Apple v. FBI. - The encryption debate. - Defend Encryption / EFF - Privacy vs. security vs. security - This is actually an extremely interesting talk. This is what we've (at least I've) been waiting for. - Hacking & cracking mobile phones with several different attack vectors and exploits. The mythical "Secure Golden Key". - UK Snooper's Charter. - European Court of Justice. - Gag orders. - Investigatory Powers Act. - Everyone should use strong encryption all the time.
Predicting and Abusing WPA2/802.11 Group Keys - Tornado attack, WPA-TKIP session key recovery - Broadcast group frames encrypted using group keys. - Flawed RNG - Weakening encryption by using a MITM to force RC4 encryption during the handshake - Hidden terminal problem - Group Temporal Key (GTK) - Following a bad standard is bad practice; here it's better to implement your own than to follow the standard. - "Random enough", that's well said. - Group Master Key (GMK) - Nice, the demo worked too. - Hole 196 check - Classic ARP poisoning - RC4 NOMORE - Don't put extremely bad example code in a specification. - The AP should ignore group-addressed frames.
My own comment about the previous talk: 802.11n prevents the use of TKIP, probably to prevent just this attack.
The DROWN Attack - Additional information at DrownAttack.com - Funny, people are still using and loving SSLv2. Well, not a real surprise at all; nobody cares. - PreMasterSecret (PMS) - TLS RSA handshake - Bleichenbacher's attack - A shared key among protocols / ports, that's really bad; generating new keys isn't that expensive after all. Ciphersuite selection bug. It's also obvious that crappy code is just everywhere, even in the most critical code sections. Special DROWN. Lol, ridiculously bad bugs. Generic DROWN 2^40. Special DROWN requires only 15 probe connections and on average 15*128=1920 trial encryptions. That's just like awww, and it works with older versions of OpenSSL. Ancient SSLv2 breaks current TLS. Totally amazing talk.

I'll be posting 33C3 notes until I've finished watching all the interesting talks. These will probably be posted before the normal weekly post.

Digitalization, Rescue Mode, Data Erasure, BOM, Async, DDoS CRE, Data Studio

posted Feb 11, 2017, 11:00 PM by Sami Lehtinen   [ updated Feb 11, 2017, 11:01 PM ]

  • Digitalization is here: we can reduce the number of different systems they're running, provide raw data for their data lake approach, integrate with most of the other key systems quite easily, and improve and simplify their management, supplier and order processes in general. Improved control and visibility will probably also lead to large cost savings. Without proper tracking, it's just so common in retail that there's a surprising amount of stock loss. But as I've written for a decade, all this is totally normal and nothing new.
  • Watched a documentary called Building Artificial Human. Very interesting aspects of robotics, AI, etc. But we're still very far from real androids.
  • Wow, some servers at OVH needed to be booted in Rescue Mode. But interestingly, booting the rescue mode Linux took several hours. That's just crazy; they probably had some kind of serious platform-related issue there. Booting to rescue mode was also clearly a mistake, because there wasn't any real problem with the server, only with the platform.
  • Gave a lecture about proper and practical data erasure and security procedures and provided written documentation which can be followed to ensure that confidential data is properly and practically erased. Without going to ultimate paranoid tinfoil hat lengths.
  • One article says that software development is going to double in just three years. That's quite a growth rate for a business sector. I agree; almost every project contains more and more integration and automation, etc. Mobile apps, web shops, CRM, ERP, BI and so on.
  • Reminded myself about the Unicode byte order mark (BOM), because there's one project where I need it, even though I usually don't use it.
  • Studied a few more existing e-Receipt APIs. Sorry, can't name those projects. But based on earlier experiences, I can say there's nothing new. They've got rocket-science-like credit card tokenization using hashing, wow.
  • Another interesting article about Python's post-async/await world. So much blah blah. Btw, as far as I've seen, none of the articles properly integrate multiprocessing with this stuff. I've seen so many programs suffer from the GIL. Async IO won't help; your stuff becomes unusable when the GIL hits you, and that's it. If this were something worthwhile, it would trivially integrate proper multiprocessing in the standard implementation (a sketch of what I mean follows after this list). I've seen so many projects fail with this pattern, and it's usually quite hard to fix. Those buffering issues mentioned in the article are fun; been there, done that. Many single-to-many data piping systems can be easily crashed just by sending way too much data. - I liked this article: many important business-as-usual fail examples. Unique IDs in logs should be obvious. - Once again, you can add something cool, create a big mess and have horrible problems. Or you can write extremely boring code which works reliably and delivers. Yep, not cool or exciting. But I really do love boring when it comes to programming & project management & sales.
  • How to avoid a self-inflicted DDoS attack - CRE life lessons - Interestingly, there's nothing new in the article. I've implemented all the mentioned tricks in my integrations and software implementations for a long time, because all of them are totally obvious. (Backing off, jitter, priority, queue length limits.)
  • Started to study Google Data Studio. Sigh, first I needed to use a proxy to gain full access: "Google Data Studio isn't available in your country." What an insult; isn't the Interwebs global? Anyway, this is very basic, but that's what data visualization often is. I still much prefer Tableau over this, though of course it's on a totally different level. Basic visualizations and reports are quite easy to do with Google's Data Studio, even easier than with OpenOffice Calc or MS Excel, yet those two are seriously capacity limited. Also wrote an internal memo about this; I'll be doing some more testing a bit later. Just for fun I wrote a Python script which takes data from the current database and uploads new data / changes to a Firebase database for analytics and visualizations. It also contains a transformation / filter layer, so it's possible to select what's uploaded as well as consolidate data if required.
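About async and the GIL: here's a minimal sketch of the kind of integration I mean, pushing CPU-bound work into worker processes via run_in_executor so the event loop stays responsive:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # CPU-bound work; run on the event loop it would stall every coroutine.
    return sum(i * i for i in range(n))

async def main(loop):
    with ProcessPoolExecutor() as pool:
        # Each call runs in a separate process, sidestepping the GIL,
        # while the loop remains free for actual async I/O.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, crunch, n) for n in (10**6, 10**7)))
    print(results)

if __name__ == '__main__':  # the guard matters: workers re-import this module
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))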

Quick thoughts & personal review comments EasyCrypt - easycrypt.co - concept

posted Feb 7, 2017, 9:30 AM by Sami Lehtinen   [ updated Feb 7, 2017, 9:31 AM ]

Some personal random thoughts about easycrypt.co .

How it works

At first it sounds really good: easy, secure communication using pseudonyms and 'encrypted' metadata. Let's find out what the weak spots are, because usually things aren't as good as they seem at first glance.

It can be used as an app or as webmail; that's nice. Yet this might mean the legendary 'JavaScript cryptography' problem. It's not about JavaScript as such, but generally about browsers and web tech being 'too complex' and therefore an insecure platform, with way too many soft spots.
If you're not using the EasyCrypt app / webmail, then you (of course) only see the encrypted message content. That's natural and totally expected.

FAQ

Transparent encryption is always nice, but is strong identity management included? Encryption without strong identity is practically useless: the message was encrypted to someone's public key, but you're not sure to whom exactly. Not even in a pseudonymous sense, beyond the public key fingerprint, and even that might not be visible. This is a common weak spot in many public key encryption solutions.
Limited to a specific email service provider? - They say you can use any provider, but that's probably not true. I assume they'll be using IMAP and other protocols, which aren't actually provided by "any" provider. Of course you can use the copy-paste methodology; it works and it's universal, but it's annoying.

OK, PGP / OpenPGP support; that's clear and I was kind of expecting to see it. Because they said 'external' users, it sounded like they would be using some common technology like S/MIME or OpenPGP 'internally' too.

Now they've said it: IMAP required, yep. I guessed that too. So the service can't practically be used with email providers which don't provide IMAP / SMTP support.

An EasyCrypt plugin for popular email clients. OK, now it's interesting: how is that different from the standard PGP features already integrated into many existing email apps?

Tor / .onion support, nice.

The secret key is stored by them, protected with a passphrase. Meaning they've got access (if required) to everything needed to decrypt the data. That's naturally a very weak spot. Even if they didn't store the data, they'd have it temporarily, and therefore also have access to it plus the encryption information, if required.

You can import your private key into a PGP client. That's clear; so basically EasyCrypt is just an 'alternate user interface' for 'cloud-integrated PGP'. Clear, check.

'Disclose my keys', well. Technically, you're not unable to. You can modify the code to store unencrypted data, or to store the data required for decryption, or to encrypt data for multiple recipients, etc. This is exactly why any single service provider claiming to make secure applications is usually a bad idea. There's no way for normal users to know what happens when they give their passphrase to an app; whether it's native or a webapp doesn't really make any difference. Both can be covertly modified by a powerful attacker, or by the authors, if and when required.

The previous points also mean that the metadata claim isn't totally solid. Because the message is transmitted over the normal SMTP / email delivery platform, the metadata is available, at least partially.

The time when emails are sent and their size, ahem. Maybe you forgot to mention source and destination. Those aren't directly available, but can probably be statistically correlated later by a powerful adversary if required.

The next bullet is interesting. They claim that no email addresses, IP addresses or other data / metadata will be leaked. This will probably be answered in the Under the Hood section. At this point my guess is that the messages are sent to some kind of gateway taking care of the processing, so there is metadata, but it's pretty useless. I'm curious to find out the rest. - I'll keep reading.

The OpenPGP statement sounds correct. That's just what I was kind of expecting: a 'full' metadata leak when using OpenPGP alone.

Using a pseudonym with an existing address. This is probably related to the gateway concept. Let's see.

The anonymity statement is interesting. Waiting to see the Under the Hood section. - When they say 'identity' will be hidden, it's a great question what's defined as 'identity'.

They talk about SSL; how about talking about TLS instead? PFS is good, but probably not enough alone. Also, mentioning the CA doesn't really matter, because the problem isn't that CA alone but the whole mess of certificate authorities. This is widely covered and known, and I've written about it several times before.

Some of my questions weren't addressed, but they made me really interested to read the next section.

Under the Hood

First, though, the Under the Hood section is a lot shorter than I was expecting. After reading tons of academic research white papers about secure communication, this is a real stub compared to those. Yet that's not necessarily a bad thing; I personally do like very compact documents.

Ok, Tor is also a fundamental part of how EasyCrypt works.

Encryption: '4096-bit encryption', ehh... It doesn't mean anything. OK, OK, of course I know in this case it means 4096-bit RSA (because of OpenPGP), but in general I don't like ambiguous statements like that. I've blogged about that earlier too.

Key management

Private keys are stored on EasyCrypt servers, encrypted with a password... - This is one of the primary weak links so far.

Yes, keys can be decrypted on any device, as long as you know the password or are able to guess or brute-force it. Also, as said, there's no way for the user to know whether the data is properly encrypted or not, or whether the encryption key / password is being leaked.

Yet I like their option for external public key registration; I could do that right now, and there's nothing wrong with it. But for anything I would really need OpenPGP for, I probably wouldn't use EasyCrypt. That's a totally personal, paranoid tinfoil-hat opinion. Nor would I use any of my regular workstations or phones.

Metadata and anonymization

This should be the interesting part.

Wow, the first sentence / paragraph is cool and confusing at the same time.

Minimization of metadata is always good.

Several terms are a bit ambiguous: 'email pseudonym', 'anonymous cryptographic proof'. More details, please. I of course understand the concepts, but the details would be very interesting to see.

Sending data over the Tor network reveals participation in the Tor network as well as the amount of data transferred and its timing, which is metadata. Yet of course it's not as bad as the usual email metadata.

Then again, being anonymous might make blocking different kinds of content, abuse & attacks a lot harder.

Appending recipient information to the encrypted message leaves it open whether the recipient information is encrypted; from the text it seems that it isn't. Yet Tor does encrypt data for transport. Are additional layers necessary, or being used? I don't know. I personally might wrap that data with EasyCrypt's public key and an additional layer of OpenPGP encryption before transferring it over Tor. Probably that's just overkill, but it wouldn't hurt too much.

As we're talking about metadata: the amount of data and the timing information are obviously also known to EasyCrypt.

Message transport to the recipient

Filling headers with fake metadata is probably a bad idea for multiple reasons: spam filtration, misdirecting abuse reports, etc... Major services like Gmail would probably reject the messages almost immediately. Email doesn't require any extra headers; just leaving those out would be a better option, AFAIK. That probably still leads to a high spam score.

Message decryption by the recipient

That's all obvious.

Communication with external PGP users

That's all pretty standard.

Secure Webmail with single password login

A browser-based webmail / HTML5 client, OK. That's also pretty standard. This topic has been discussed deeply elsewhere, and there are inherent problems with this approach if high security is required. The Hushmail case, with its Java applet approach, could be a good example.

bcrypt isn't recommended here. scrypt is also pretty new, yet generally it seems that scrypt should be preferred over bcrypt.
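For reference, Python's standard library has scrypt too (hashlib.scrypt, available in Python 3.6+ when built against OpenSSL 1.1+); a minimal key-derivation sketch with illustrative parameters:

import hashlib
import os

password = b'correct horse battery staple'
salt = os.urandom(16)  # unique per user, stored alongside the derived key

# n = CPU/memory cost, r = block size, p = parallelization factor.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())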

Here's also the weak link: yes, the password is processed in the browser. It can be stolen from the browser, or it can be sent without being properly protected before transmission, plus there are countless other issues exploitable by real hackers, like end-device security etc. Yet those apply to all encryption applications. These were just the extremely simple, obvious attack vectors.

It's not defined exactly how the private key is encrypted / protected, except that it uses the user's password. I would assume it's the standard OpenPGP protection, which should be OK.

To sum it up, the user's password is the weak link on multiple levels. If and when required, it's trivial to break the system's security totally transparently, so that nobody will likely notice anything.

Random thoughts, questions and comments

These are things I don't know the answers to. How do you defend against DDoS on Tor? Maybe there's an obvious answer, but I don't know it. Initially I would think it's pretty hard due to the double-blind anonymization.

Omitting information is sometimes a blessing; sometimes it makes things hard to follow. I often like very compact documents. The questions a reader asks also usually clearly indicate how well they've understood the (purposefully) omitted obvious parts.

Mixing a practical solution with nerd cypherpunk tech, free and anonymous, is usually a hard mix to get right. - Problems are almost inherently guaranteed. - No, this doesn't mean it won't work. But it could be very hard or practically impossible to get it right and to manage the system and its abuse.

Just as a remark, I haven't ever received PGP / OpenPGP / GnuPG / GPG encrypted spam, even though my public key(s) are widely and publicly available.

Without seeing the user interface, it's hard to know if the key management provides enough visibility to the end user about the keys being used. It might be really good, or bad. Who knows.

Sometimes I wonder if the whole concept of services like this is to gather 'confidential' data and then cash out by getting intelligence services and other 'trusted wealthy entities' to pay for access to the data, or by directly blackmailing users. I just wish that were only my personal sick thought.

Conclusions

A pretty basic implementation utilizing (mostly) well-known standard technologies. Yet it's only as secure as they want it to be, and that security can be broken without the end user knowing anything about it.

This write-up is based on their web site's information as of 2017-02-06. I'm not a cryptography professional; these are just my nerd-hobby random ramblings about the topic. - Sorry about that.

IoT, Standards, DNS, USB, Build vs Buy, Minoca, Elementary, Secure Email

posted Feb 4, 2017, 9:49 PM by Sami Lehtinen   [ updated Feb 4, 2017, 9:50 PM ]

  • Mirai and IoT security. It's just as expected. And as we all know, it's going to get much worse soon.
  • Another long discussion about standards; this is a great topic. Here are some random comments about it: when something 'temporary' seems to be working well, it becomes de facto permanent. When it then breaks down, it has to be repaired in a hurry, so it's likely that it gets fixed with a similar solution. - A totally normal loop. It could then be fixed properly, but why bother when it now works OK again? This is totally normal logic, and you'll see it everywhere you go. It has nothing to do with the Internet, ICT or computing in general. Because we also live in a changing world, over-engineering something is silly. Then you'll end up with 'large corporation' style solutions, where they rent a building, upgrade it with all the latest gear, and then for some reason it's demolished 6 months later. It's the same discussion as with cell phone durability: what if you got a NASA-space-engineered cellphone? First, it would cost 6k, and it would actually last 20 years. But why bother, when you'll replace it every 1-2 years anyway. Or the team which spent 3 years optimizing something which never launched or got used for anything at all. Based on this, I assume all of you got fiber sockets installed in all the rooms of your apartments? - You might not need them now, but it would be a good start for high standards. - I do have them, and I feel kind of silly. It remains to be seen if they're ever needed as long as I live here.
  • You're probably doing DNS wrong - Why make DNS a critical point? Why not have a fallback / backup solution at the application level? - Not all DNS entries are used for web sites only. If DNS entries are used by applications, implementing fallback / caching inside the application is a totally viable way: use these DNS entries, and if they fail, use these IP addresses (a minimal sketch follows after this list). This is something which should be done if the connectivity is important. Some of the backup IPs shouldn't be listed in DNS, so if attackers don't know the platform they're attacking well enough, they'll probably miss those services, and the DDoS won't cause disruption whether it targets the DNS or the servers listed in the DNS entries.
  • The USB standard & Apple - Yay. I've been annoyed by USB standards, connectors and related stuff several times. But this is even worse than what I knew. What a horrible mess.
  • Some Hand Held Terminal (HHT) experiments.
  • Build vs buy, generic vs specific features, etc. That's a really hard decision to make at times. I guess there's no perfect solution, and it needs very careful consideration in every situation.
  • Just a short list of stuff I've done:
    • Consulting, business analysis, implementation, data set-up, migration, training, support, integration, project management, product management, all-rounder tech guy.
    • All round experience: planning, installation, configuration, help desk, software development, reporting, project management, user experience, system reliability, technical sales
    • System integration, dozens of integrations from early planning & sales support to extended term production support, 15+ years
    • Software tailoring, functional specification requirements as well as exact technical requirement understanding
    • Cloud platforms, Server infrastructure, POS hardware, Networking, Database Administration
  • Quickly tested Minoca OS and Elementary OS. Maybe more comments a bit later.
  • A few very long discussions about 'secure email providers'. My personal recommendation is that you should never trust any secure mail provider. It's much better to use PGP / GPG or some other proven cryptography and a non-SMTP transport. If hiding connectivity / metadata / identity isn't required, only communication security, then it's easy, e.g. for business communications.
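Here's the application-level DNS fallback idea from above as a minimal sketch (the hostname and addresses are made up):

import socket

FALLBACK_IPS = ['192.0.2.10', '192.0.2.11']  # intentionally never published in DNS

def resolve_with_fallback(host, port):
    # Prefer live DNS, but keep working if resolution fails.
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return [info[4][0] for info in infos]
    except socket.gaierror:
        # DNS is down or under attack: fall back to addresses attackers
        # can't discover from the DNS entries they're targeting.
        return FALLBACK_IPS

print(resolve_with_fallback('api.example.com', 443))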
