Slow Flash, Slow USB, Random Write, Tox, Disk I/O, Logic, Xz Compression, ETag

Post date: Dec 11, 2016 5:51:08 PM

  • One friend claimed that 9 KB/s is a slow write speed for an SD card. Is it? I tested one SD card with 1-byte random writes, and the actual write speed turned out to be extremely slow. The reason is garbage collection and flash erase cycles: even if the sequential write speed is high, random write speed is terrible, because the card internally rewrites a large block of data for every single byte that changes. What was the final write speed? Well, lower than you'd expect, using NTFS without closing the file between writes. - This just shows how important batching and generally thinking about data access / write patterns is. You claim the SD card I used was slow? No, actually it wasn't; this card writes sequential data at 40 MB/s, which is quite a nice speed, and reads at over 120 MB/s. And the random write speed? 0.48 bytes per second. That's incredibly slow. Compared to that, 9 KB/s isn't bad at all. - If fsync is dropped, the run is fast, but the data hasn't actually been written to the SD card yet, so cleanly unmounting it will take a surprisingly long time. Even if it's slightly faster that way, the real data rate still isn't high. There's a rough sketch of the benchmark after this list.
  • I also updated a large number of small files (a web image gallery) on a USB 3.0 flash stick. The actual write rate was around 64 KB/s due to write amplification, 'random' metadata update writes, etc. -> USB flash storage can be extremely slow at this kind of work. As said earlier, it's just better to compress everything up and write one big blob instead of lots of small fragmented files, which absolutely and totally kills performance. (A sketch of the one-blob approach is after this list.)
  • Still looking for the perfect Skype replacement... Maybe Tox could be it? Maybe not? WhatsApp lacks video calls. Fix: lacked, when I put this post into the post queue; WhatsApp has since gotten working video calls. Telegram still doesn't have video calls. Google's Duo is a nice video app, but why have yet another app just for video calls, etc.
  • Just wondering how bad disk I/O problems lead to Windows Server 2012 R2 dying with a Black Screen of Death? It seems that some storage timeouts are quite long. From some posts I gathered that the default timeout is 60 seconds with 8 retries before things fail, which would mean roughly 8 minutes (8 × 60 s = 480 s) to deal with bad I/O. - But I don't have actual experience or knowledge in this topic. Could someone confirm or deny my uneducated guess? (There's a small registry-check sketch after this list.) - This is related to Ceph performance issues.
  • A complete lack of logic is a really interesting feature in many people. It isn't that hard. I'm not a very logical / smart guy, but still some people drive me totally crazy by lacking any kind of logic whatsoever.
  • Interesting post about the Xz compression format's shortcomings, yet I personally feel most of the arguments aren't so great in practical terms. Point 2.5 is true, but so what? I usually consider an archive to be a total loss anyway if anything in it is corrupted. It's just safer to assume so: either it's good, or it isn't, and if it isn't, then it's better to get a non-corrupted version of the data. It's just like with databases. I hate people trying to "fix databases" or data; it's going to be just extended pain. It's better to get to a point where integrity is good and then replay the data, rather than arbitrarily patching a partially corrupted database or archive. It's just like running AV on an infected or hacked server: "Now it's clean." Right, you'll never know if it's good or not, and assuming it's fixed is quite dangerous. (A sketch of this all-or-nothing integrity check is after this list.) Point 2.7: everyone knew that LZMA2 is what it is; it's supposed to make compression work in parallel. It's not better compression, it's a speed optimization. Point 2.9, trailing data? Why would you want to add trailing data to a complex container? Sounds crazy to begin with. Some of the error detection math goes over my head, but it's an interesting claim that CRC32 would be more accurate than SHA-256 for this. Detecting errors in parts which don't matter is of course silly, and that brings up the issue of false positives, which they do mention in the article. Yet the point of compression is to store data as efficiently as possible, so why there are bits of data which aren't even required is a good question.
  • Changed one project to use CRC32-based ETags instead of SHA-256; SHA-256 is real overkill for such a simple purpose. (A minimal ETag sketch closes this post below.)
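
Here's roughly what the SD card test above looked like, as a minimal Python sketch; the path, file size, and write count are placeholders, not the exact values I used:

```python
import os
import random
import time

PATH = "/mnt/sdcard/randwrite.bin"   # placeholder mount point
FILE_SIZE = 64 * 1024 * 1024         # pre-allocated 64 MB test area
WRITES = 100                         # number of 1-byte random writes

# Pre-allocate the test file so every write lands inside existing blocks.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

fd = os.open(PATH, os.O_RDWR)
start = time.time()
for _ in range(WRITES):
    os.lseek(fd, random.randrange(FILE_SIZE), os.SEEK_SET)
    os.write(fd, b"x")   # one byte...
    os.fsync(fd)         # ...forced to the card; this is what hurts
os.close(fd)

elapsed = time.time() - start
print(f"{WRITES / elapsed:.2f} bytes/s ({WRITES} one-byte writes in {elapsed:.1f} s)")
```

Drop the os.fsync line and the loop finishes almost instantly, but the data is just sitting in the page cache; the unmount pays the bill instead.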
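
And the one-big-blob approach for the gallery case; the source directory and mount point are placeholders:

```python
import tarfile
from pathlib import Path

SRC = Path("gallery")                       # placeholder: directory of many small images
DST = Path("/mnt/usbstick/gallery.tar.gz")  # placeholder USB mount point

# One big compressed sequential write instead of thousands of small file
# creations; the flash controller sees a streaming workload, not a storm
# of random metadata updates.
with tarfile.open(DST, "w:gz") as tar:
    tar.add(SRC, arcname=SRC.name)
```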
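
For the Windows timeout question, this is how you'd check the disk timeout on a box via the standard Disk\TimeoutValue registry setting (Windows only; and the 60 s / 8 retries figures are what I read, not something I've verified):

```python
import winreg

# Disk I/O timeout in seconds; if the value is absent, the driver's
# built-in default (reportedly 60 s) applies.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\Disk",
)
try:
    timeout, _ = winreg.QueryValueEx(key, "TimeoutValue")
    print(f"Disk TimeoutValue: {timeout} s")
except FileNotFoundError:
    print("TimeoutValue not set; driver default applies")
finally:
    winreg.CloseKey(key)
```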
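
The all-or-nothing integrity check from the Xz bullet, as a sketch (archive_is_intact is my name for it, not any standard API): decompress the whole .xz file once, and if anything fails anywhere, the archive is a write-off and you fetch a clean copy instead of trying to salvage it:

```python
import lzma

def archive_is_intact(path: str) -> bool:
    """Decompress the whole .xz file and discard the output; any error
    anywhere means the archive is treated as a total loss."""
    try:
        with lzma.open(path, "rb") as f:
            while f.read(1 << 20):   # stream through in 1 MB chunks
                pass
        return True
    except (lzma.LZMAError, EOFError):
        return False
```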
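
And the CRC32 ETag change, roughly (make_etag is a name I'm using here, not the project's): an ETag only needs to change when the content changes, it doesn't need to resist deliberate collisions, so CRC32 does the job at a fraction of the cost of SHA-256:

```python
import zlib

def make_etag(body: bytes) -> str:
    # Quoted per the HTTP spec; 8 hex digits of CRC32 are plenty for
    # cache validation, where a collision only costs a stale response.
    return f'"{zlib.crc32(body):08x}"'

# Usage: response.headers["ETag"] = make_etag(payload)
```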