32C3, GDPR, OpSec, Data Compression, CLTV, Identifi, Brotli, NTFS-3g, fsck

Post date: Jan 30, 2016 10:15:09 AM

  • Listened to several 32C3 (CCC) presentations. Great questions, like the value of anonymous communication: aren't 100% of anonymous users just jerks and trolls? Lots of content about the underground... Forums, Security, Privacy, Tor, Wikipedia, Trolling, Doxing, Threats, Vandalism, De-anonymization, Surveillance, Loss of Privacy, Harassment, Intimidation, Safety, Reputation, Contextual Cues. No surprises on that list; all of those are real threats and actually cause chilling effects and reduced participation. Which only means that threats and intimidation do work. One of the most interesting points was something I've brought up several times: most of the de-anonymization was done using contextual cues. Which means that even if the technology were untraceable and perfectly anonymous, the content will still reveal your identity, unless exceptionally good OpSec hygiene is maintained. "What does Big Brother see, while he is watching?" was also pretty nice, yet nothing surprising there. The Thunderstrike video made the old facts clear: there's no such thing as a secure system, there are just tons of different ways in, even if the system is air-gapped.
  • Checked out the new EU General Data Protection Regulation (GDPR).
  • Read a few quite long OpSec & Tradecraft articles. "Cryptography is only as good as its user in many cases." Most people can be easily tricked into failing even if the crypto itself is solid. Here's an excellent basic checklist about secure communications. As said, in many cases cryptography only solves the first issue on the list, indirectly solving point 2. But most people completely forget points 3 and 4. Btw, this isn't some new Internet hype stuff. This is the basic stuff spies and agents have had to deal with for centuries. Yep, nothing new.
  • Some people claim that zipping a zip doesn't improve compression. Well well, that depends. Actually, zipping a zip is beneficial in certain circumstances. Standard zip does not use solid compression, so a compression dictionary is created separately for each file. Which means that if you're compressing a large number of files which have quite similar content and are somewhat small, using a double-zip construction can save a lot of space. Source code, HTML, JSON and other quite similar and often pretty small text files are a great example of that. Also, filenames and similar control data (metadata) are stored uncompressed, so they get compressed on the second pass. It's so easy to demonstrate that I had to do it just for fun. 10000 x 4096 byte files compressed using zip into one zip file results in 1637802 bytes. Re-compressing that zip again with zip leads to a zip file of only 79992 bytes: a 95% size reduction on the second iteration. I've observed this so many times when compressing web sites, logs, source code, mail dirs, file based queues and almost anything which comes in multiple smallish files. This is also one of the reasons why .tar.gz might end up being compressed much better than a .zip file. With the sample case above, .tar.gz results in 171578 bytes using only one compression round. (See the small Python sketch after this list.)
  • Talked with friends about Bitcoin, what kind of benefits CLTV brings, whether it's any better than PoB, and in what kind of circumstances.
  • Listened: Tim Pastoor: Rethinking Identity As A Decentralized Web Of Trust With Identifi.
  • Played a little with Brotli compression. Browsers are going to support it soon, and some already do (Firefox), so I know where and when to use it when required. Brotli offers better compression than zlib and faster decoding, but uses more CPU time when compressing and potentially a lot more memory, up to 16 megabytes unless limited otherwise. Here's also a comparison of Brotli against other alternatives. (A quick Python comparison sketch follows after this list.)
  • Something different: reread an article about Falcon Heavy.
  • Had an NTFS corruption issue on Linux when using NTFS-3g. I used ntfsfix and after that some files got cross-linked content. Ouch! I thought that only happens with FAT volumes. A very serious issue; now I can't trust the volume content anymore. This is especially annoying if you're not just transporting data but have the master copy on a system which is partially corrupted. It's so easy to miss hidden corruption, it might take quite a while to notice it, and by then it's often way too late. I also got some of my git repos messed up in that same mess. Luckily it was just a matter of deleting the stuff and cloning it back. git fsck also showed that stuff was seriously messed up on that volume. Somebody dared to ask if I ran ntfsfix on a mounted volume; well, of course I didn't. But it's a good thing to check, eh. Now I'm using ext4 instead of NTFS to avoid future incidents.
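A minimal Python sketch of the zip-of-zip experiment above. The file names, sizes and contents below are made-up placeholders (not my original test data), so the exact byte counts will differ, but the effect is the same: the second pass squeezes out the redundancy left between the individually deflated entries and the uncompressed metadata.

import io
import zipfile

def make_file(i):
    # 4 KiB of mostly identical text with a small varying header.
    header = ("record %05d\n" % i).encode()
    body = b"a fairly repetitive log-like line of text goes here\n" * 90
    return (header + body)[:4096]

# First pass: standard (non-solid) zip, each file gets its own deflate stream.
inner = io.BytesIO()
with zipfile.ZipFile(inner, "w", zipfile.ZIP_DEFLATED) as zf:
    for i in range(10000):
        zf.writestr("file_%05d.txt" % i, make_file(i))

# Second pass: zip the zip. The per-file compressed streams and the metadata
# (local headers, central directory, file names) are similar to each other,
# so they compress very well as one continuous stream.
outer = io.BytesIO()
with zipfile.ZipFile(outer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("inner.zip", inner.getvalue())

print("single zip:", len(inner.getvalue()), "bytes")
print("zip of zip:", len(outer.getvalue()), "bytes")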
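And a quick sketch comparing Brotli and zlib ratios on the same input, assuming the third-party Python "brotli" bindings (pip install brotli); the quality and lgwin parameter names come from those bindings, and the sample data is just synthetic repetitive markup.

import zlib
import brotli  # third-party bindings: pip install brotli

# Synthetic HTML-ish sample; real pages, JSON or JS behave similarly.
data = b"".join(b"<div class='item'>some repeated markup here</div>\n"
                for _ in range(5000))

deflated = zlib.compress(data, 9)  # zlib at maximum level
# quality 0..11; lgwin sets the sliding window (2**lgwin bytes, up to 16 MiB
# at 24), which is where the extra memory use during compression comes from.
brotlied = brotli.compress(data, quality=11, lgwin=22)

print("original:", len(data), "bytes")
print("zlib -9: ", len(deflated), "bytes")
print("brotli:  ", len(brotlied), "bytes")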

Here's a dump from chkdsk, finally run under Windows:

The type of the file system is NTFS.
Volume label is ...

Stage 1: Examining basic file system structure ...
7824 file records processed.
File verification completed.
45 large file records processed.
0 bad file records processed.

Stage 2: Examining file name linkage ...
Deleting index entry defff51aa0c0f7e3a639b30983192a8a1d4fd2 in index $I30 of file 460.
11358 index entries processed.
Index verification completed.
CHKDSK is scanning unindexed files for reconnect to their original directory.
Recovering orphaned file 4ee2f09061fe8e15783d656d706d8a6ef96026 (461) into directory file 460.
Recovering orphaned file bb8e9f0d2ed843254f2c1f6e214652622bf97c (463) into directory file 460.
3 unindexed files scanned.
Recovering orphaned file 840d840e78b4225ebac85e6a2db04b80831318 (4161) into directory file 460.
3 unindexed files recovered to original directory.
0 unindexed files recovered to lost and found.

Stage 3: Examining security descriptors ...
Security descriptor verification completed.
1767 data files processed.
Correcting errors in the Volume Bitmap.
Windows has made corrections to the file system.
No further action is required.

7823056 KB total disk space.
836460 KB in 5114 files.
2252 KB in 1769 indexes.
0 KB in bad sectors.
22144 KB in use by the system.
13376 KB occupied by the log file.
6962200 KB available on disk.
4096 bytes in each allocation unit.
1955764 total allocation units on disk.
1740550 allocation units available on disk.