
Thoughts about Seagate 8TB HDD drive review

posted Mar 7, 2015, 8:39 PM by Sami Lehtinen   [ updated Mar 7, 2015, 10:26 PM ]
Seagate 8TB HDD drive review

I didn't like the review. No, I didn't say I didn't like the drive. It seems that the tests they ran weren't designed for this kind of usage, and some very interesting key data was completely missing. They should notice that, as the article itself says, the SMR is drive managed. So the drive can't be tested like traditional drives are tested. The complex internal state of the drive seriously affects the results. And at this capacity the internal garbage collection (GC), data compaction, re-ordering, processing, releasing space for writes and so on can take hours, maybe even days if there's heavy load on the drive.

So if they ran the tests as those are usually done, in one quite short batch, that's one of the mistakes. You should run a test, and then run the next test tomorrow. Also the total amount of data written, and especially modified, on the drive affects its state a lot. An incredibly fast random 4 KB write just means that the data needs to be arranged on disk later. I would be very interested to see even a simple chart of measured random 4 KB writes, as well as a latency & MB/s chart for different block sizes over a steadily growing write data set. Because I assume it changes drastically at some point. This test should be run up to 24 TB of writes.
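The kind of test I mean could be sketched roughly like this. This is my own illustrative sketch, not anything from the review; the file name, sizes and batch counts are placeholders, and a real test would run against a raw device, far longer and up to many terabytes:

```python
# Sketch: measure random 4 KB write throughput as the total volume written
# grows, to spot the point where a drive-managed SMR disk slows down.
# Illustrative only; paths and sizes are placeholder assumptions.
import os
import random
import time

BLOCK = 4096                 # 4 KB block size
BATCH = 256                  # writes per measurement sample
FILE_BLOCKS = 4096           # test file size in blocks (tiny, for illustration)

def random_write_profile(path="testfile.bin", batches=8):
    """Return (total_bytes_written, MB/s) samples for random 4 KB writes."""
    samples = []
    payload = os.urandom(BLOCK)
    with open(path, "wb") as f:              # pre-allocate the test file
        f.truncate(FILE_BLOCKS * BLOCK)
    total = 0
    with open(path, "r+b", buffering=0) as f:
        for _ in range(batches):
            t0 = time.perf_counter()
            for _ in range(BATCH):
                f.seek(random.randrange(FILE_BLOCKS) * BLOCK)
                f.write(payload)
            os.fsync(f.fileno())             # force the writes to the device
            dt = time.perf_counter() - t0
            total += BATCH * BLOCK
            samples.append((total, BATCH * BLOCK / dt / 1e6))
    os.remove(path)
    return samples
```

Plotting the second column against the first would give exactly the "growing write data set" chart I'd like to see; on a drive with complex internal state the curve should drop at some point instead of staying flat.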

My guess is that there's a quite limited amount which is written extremely quickly. Then there's some amount that's written with higher performance into some kind of write-ahead log (WAL), and at some point before the 8 TB limit the performance drops even further. I would also like to see the same test with larger blocks, and finally with a linear write where the drive is just written over and over in order. But they didn't test those characteristics; they tested it as if it were a traditional disk, which it isn't.
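My guess above can be written down as a toy model. To be clear, this is my assumption about drive-managed SMR behavior, not Seagate's documented design, and the latencies and cache size are made-up numbers:

```python
# Toy model of my guess: random writes land in a fast persistent cache / WAL
# region until it fills, after which writes pay the read-modify-write cost
# of rewriting SMR bands. All constants are assumptions, not measurements.
FAST_US = 100      # assumed per-write latency while the cache has room
SLOW_US = 20000    # assumed latency once writes hit read-modify-write

def write_latency_us(dirty_gb, cache_gb=20):
    """Modeled latency of one random write after `dirty_gb` GB of
    un-compacted writes have accumulated."""
    return FAST_US if dirty_gb < cache_gb else SLOW_US
```

A proper review would effectively be measuring where that step actually is and how steep it is, instead of assuming the drive behaves uniformly.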

This kind of drive could also benefit from a discard / TRIM feature, which would tell it that the data in some existing blocks can be ignored during garbage collection, so those blocks can be written over directly, avoiding the read-modify-write cycle.
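Why TRIM would help can be shown with a tiny model of band compaction. This is an illustration of the general principle, not the drive's actual firmware logic:

```python
# Illustrative model: when compacting an SMR band, blocks the host has
# discarded via TRIM don't need to be copied, so the read-modify-write
# cost shrinks. Not based on the actual drive firmware.
def blocks_to_copy(band_blocks, trimmed):
    """Blocks GC must rewrite when compacting a band, given the set of
    blocks the host has marked discardable via TRIM."""
    return [b for b in band_blocks if b not in trimmed]

# Without TRIM the drive must treat every block as live and copy the whole
# band; with half the band trimmed, only the live half gets rewritten.
```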

Also the 70% read / 30% write test was a bit strange. They didn't mention how it was done. Because they got so much higher read IOPS compared to write IOPS, it makes me think the test was run using, for example, a setup with seven threads reading and three threads writing. If the drive and OS prioritized reads, this is what I would expect to see. It's really easy to forget how big a part the operating system plays in drive tests, unless the tests are run on a raw drive interface. In that light the results they got are possible. But we could run a similar test using a slightly different setup: just run 10 threads which each read seven times and write three times per cycle, or something similar. In that case performance could also be greatly affected by whether or not the three writes hit some of the blocks in the seven reads. If I had one of these Seagate drives, I would run a quite different test set than they did, instead of this kind of synthetic tests which do not reflect reality in any way.
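The difference between the two setups is easy to see if you just write out the operation streams each one produces. Both setups are my guesses at how such a benchmark might be built; the review didn't say which, if either, it used:

```python
# Two ways to produce a "70% read / 30% write" mix. Both are guesses at
# possible benchmark setups, not what the review actually ran.

def op_stream_split(n_ops_per_thread):
    """Setup A: 7 dedicated reader threads plus 3 dedicated writer threads.
    Reads and writes form independent streams the OS can freely reorder
    and prioritize, which could inflate read IOPS."""
    return ["R"] * (7 * n_ops_per_thread) + ["W"] * (3 * n_ops_per_thread)

def op_stream_mixed(n_threads, cycles):
    """Setup B: 10 identical threads, each doing 7 reads then 3 writes per
    cycle. Every thread interleaves both operation types, so reads can't
    simply run ahead of the writes."""
    return [op for _ in range(n_threads * cycles)
            for op in ["R"] * 7 + ["W"] * 3]
```

Both streams are 70/30 on paper, yet a drive and OS that reorder and prioritize could give very different read-versus-write IOPS for each, which is exactly why the review should have said which setup it used.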

This is a good example of how people mischaracterized SSHD hybrid drives using random reads, and some consumer SSD drives using prolonged exhaustive write tests that caused the drive to jam with slow block erasure and GC, which won't happen in normal usage. In the case of the hybrid drives, random reads didn't really give the right picture of daily-usage performance, and in the case of those SSDs, sustained high-speed writes didn't give the right picture either. Synthetic tests like these can give a bad impression, because they don't reflect reality in any way. They are only good for measuring the performance of disks without complex internal state.

Of course during these tests it's also easy to forget that, at least in desktop usage, write performance isn't that important. Why? Because the OS can buffer writes just like the Seagate Archive HDD does. I can write 1 GB even to a slow USB stick instantly, and the OS then writes the data to the stick in the background. And since they say it's an archival drive, the write process to the disk is buffered using alternate storage anyway. Nobody's actually waiting for it to complete; it just trickles in the background, probably freeing space on the alternate storage like SSD or SAS drives when completed.
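That buffering is easy to demonstrate: a plain write() returns as soon as the data is in the OS page cache, and only an explicit fsync() waits for the device. This is generic OS behavior, nothing specific to the Seagate drive; the file name is just a placeholder:

```python
# Demonstration of OS write buffering: write() completes into the page
# cache; fsync() is what actually waits for the device. Generic behavior,
# not specific to any particular drive.
import os
import time

def timed_write(path, data, sync):
    """Time writing `data` to `path`, optionally forcing it to the device."""
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        if sync:
            os.fsync(f.fileno())   # block until the device has the data
    return time.perf_counter() - t0

# On a slow device, timed_write(path, data, sync=False) typically returns
# far faster than the same call with sync=True.
```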