Opsec, InfoSec, Whonix, G.fast, Data Storage

  • Listened to just one opsec / infosec show, and they say there are two ends of the spectrum: security versus usability. Haha. Today I spent around 12 hours working hard, updating passwords, closing accounts, updating emails, verifying recovery codes, recovery emails and phone numbers, rotating my recovery prepaid and doing some other basic account management stuff. The only major win is that I also closed a lot of unnecessary accounts instead of just abandoning them. Yet it makes scamming harder and security better: if this email address and phone number are used with only one bank, and on a separate dedicated laptop, it becomes much harder to gain access to vital information or to deceive the user. Sorry, wrong context for this phone number / email.

  • Spent one day fine-tuning, testing and configuring a Whonix setup with LUKS and related stuff... This should be a tinfoil hat class setup. Anonymous prepaid, "clean hardware", no radio devices other than the adapter connected to the Whonix Gateway, and the Workstation connected to it is completely devoid of any radio devices. Should be pretty OK. Everything with strong Full Disk Encryption, with nested hidden encrypted volumes which can be mounted as the situation requires, leaving the other volumes still locked. Safe? Nope, but better than the usual "I'm browsing the Internet" setup. - No, not using the virtualized version. The Gateway is a system with two physical Ethernet ports and the Workstation is its own secure system as well. The encrypted containers can also be easily disconnected, so the different session identities can't be connected to each other, even if someone could gain access to the Workstation computer via the network. Yet the Workstation is run in a VM container, to hide the host hardware information. The Workstation is amnesic if required, because the image file it uses can be trivially replaced with the original or a pre-configured one (a sketch of that below). It's of course also possible to have multiple separate Workstation configurations to be used based on context / operation.
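
    A minimal sketch of the amnesic reset mentioned above, assuming the Workstation VM boots from a plain image file. Paths, file names and the context naming are made up for illustration, not the actual layout:

```python
# Throw away the working Workstation image and restore it from a pristine
# (or context-specific) master copy before starting a session.
# All paths and names here are hypothetical placeholders.
import shutil
from pathlib import Path

MASTERS = Path("/srv/whonix/masters")                 # read-only master images, one per context
WORK_IMAGE = Path("/srv/whonix/run/workstation.img")  # image the VM actually boots from

def reset_workstation(context: str = "default") -> None:
    """Replace the working image with the master image for the given context."""
    master = MASTERS / f"workstation-{context}.img"
    if not master.exists():
        raise FileNotFoundError(f"no master image for context {context!r}")
    WORK_IMAGE.unlink(missing_ok=True)   # drop whatever the previous session left behind
    shutil.copy2(master, WORK_IMAGE)     # next boot starts from a known clean state

if __name__ == "__main__":
    reset_workstation("default")
```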

  • Studied G.fast @ Wikipedia, plus G.mgfast, MGFAST, XG-FAST and TDSL. Copper speeds are getting faster and faster, now allowing 5 and 10 Gigabit/s speeds, so installing fiber can be delayed further at older sites. Full-duplex operation using echo cancellation (toy illustration below).
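
    A toy illustration of the full-duplex echo cancellation idea, nothing to do with the actual G.fast / G.mgfast DSP: the modem knows its own transmitted signal, so it can subtract an estimate of its echo from what the receiver sees and recover the far-end signal. Here the echo is a single known attenuation, which is of course a huge simplification; real systems adapt the echo estimate, e.g. with LMS filters:

```python
# Toy full-duplex echo cancellation: received = far-end signal + echo of our
# own transmit; subtracting the known echo estimate recovers the far end.
import random

random.seed(1)
N = 10
near_tx = [random.choice((-1.0, 1.0)) for _ in range(N)]  # symbols we transmit
far_tx = [random.choice((-1.0, 1.0)) for _ in range(N)]   # far-end symbols we want

ECHO_GAIN = 0.6  # assumed fraction of our own signal leaking back into our receiver

received = [f + ECHO_GAIN * n for f, n in zip(far_tx, near_tx)]     # what we measure
recovered = [r - ECHO_GAIN * n for r, n in zip(received, near_tx)]  # echo cancelled

print("far-end  :", far_tx)
print("received :", [round(x, 2) for x in received])
print("recovered:", [round(x, 2) for x in recovered])
```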

  • Discovering Hard Disk Physical Geometry Through Microbenchmarking - A fantastic article, true nerd science. I really loved it. Track skew was earlier called the sector interleave value with MFM drives. There's something in the article that was new to me: sector and track slipping. Don't forget page 2 with the results, simply beautiful.
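
    This isn't the article's tooling, just the simplest version of the measurement idea: time small direct reads at increasing offsets on a raw disk and look for latency jumps, which hint at track boundaries and skew. Assumes Linux, root, and a spinning disk you can afford to poke; the device path is a placeholder:

```python
# Time 4 KiB reads at increasing offsets, bypassing the page cache with
# O_DIRECT so the measured latency comes from the disk, not from RAM.
import mmap
import os
import time

DEVICE = "/dev/sdX"    # placeholder, point this at a real (non-critical) disk
BLOCK = 4096           # read size in bytes (must stay sector aligned for O_DIRECT)
STEP = 512 * 1024      # offset increment between samples
SAMPLES = 256          # number of timed reads

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)   # anonymous mmap gives a page-aligned buffer for O_DIRECT

try:
    for i in range(SAMPLES):
        offset = i * STEP
        t0 = time.perf_counter()
        os.preadv(fd, [buf], offset)   # positioned read into the aligned buffer
        dt_us = (time.perf_counter() - t0) * 1e6
        print(f"{offset:>12d}  {dt_us:8.1f} us")
finally:
    os.close(fd)
```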

  • Everything I know about SSDs - It seems like a strange design that MLC would store multiple pages in parallel in the same cells. Why not just make the pages larger, or the number of cells per page smaller? Maybe there's a good reason for this, but it wasn't mentioned in the article. Deterministic Zero Read after TRIM (DZAT), Zero Return after TRIM (ZRAT) or Read Zero after TRIM (RZAT)? It seems that different sources use different terms. I would have been curious about wear leveling, especially how it's implemented at a low level, because I've briefly thought about it while walking and haven't figured out any great way to do it (a toy sketch below). In general a nice article, and it also didn't blindly repeat some of the misconceptions widely spread around the web.
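
    Since I haven't got anything better, here's a toy sketch of the wear-leveling concept: keep an erase counter per physical block and always place rewritten data on the least-worn free block, remapping the logical-to-physical table. Real FTLs are far more involved (erase blocks vs pages, garbage collection, hot/cold data separation, over-provisioning), so this only shows the mapping idea:

```python
# Toy logical->physical remapping with erase counters: rewriting the same
# logical blocks over and over still spreads erases across all physical blocks.
NUM_PHYSICAL = 8                        # tiny imaginary flash device
erase_count = [0] * NUM_PHYSICAL        # wear per physical block
free_blocks = set(range(NUM_PHYSICAL))  # physical blocks holding no valid data
mapping = {}                            # logical block -> physical block

def write_logical(lba: int, data: str) -> None:
    # Pick the least-worn free block so erases spread evenly over the device.
    target = min(free_blocks, key=lambda b: erase_count[b])
    free_blocks.remove(target)
    if lba in mapping:                  # old copy becomes invalid: erase and free it
        old = mapping[lba]
        erase_count[old] += 1
        free_blocks.add(old)
    mapping[lba] = target
    print(f"LBA {lba} -> physical {target} ({data!r}), erase counts {erase_count}")

for i in range(20):                     # hammer just two logical blocks
    write_logical(i % 2, f"rev{i}")
```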

  • Relocated /tmp, /var/log and some other volatile content on a few servers to flash drives. The servers mostly store passive data, so this allows the primary data disks to spin up just a few times per day, when the massive ETL batch processing is being completed. All the usual constant logging and temp processing no longer ends up spinning up the main storage drives. Yes, it's a good question why this server doesn't use flash as the system drive, but that'll probably get sorted out years later, most likely when the primary system drive fails. kw: Ubuntu, Server, /var/log, /tmp, relocating mount points to flash media. Due to disk caching, basically everything being read from the disks repeatedly ends up being cached, so the only thing spinning up the disks is log / process tmp writes (a quick check for this below). Yet I used ext4 with a journal, so if the FTL is bad, it might kill the flash drive. It remains to be seen.
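
    A quick way to verify the relocation actually took effect: paths on the same file system share the same backing device numbers, so /tmp and /var/log should report a different device than /. A minimal sketch, assuming Linux:

```python
# Print the (major, minor) device numbers backing a few paths; if /tmp and
# /var/log differ from /, the volatile writes land on the flash drive and the
# main disks can stay spun down between ETL batches.
import os

def backing_device(path):
    st = os.stat(path)
    return os.major(st.st_dev), os.minor(st.st_dev)

for path in ("/", "/tmp", "/var/log"):
    major, minor = backing_device(path)
    print(f"{path:10s} -> device {major}:{minor}")
```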

2020-12-27