Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. The reason SSDs do not deliver consistent IO latency is that all controllers inevitably have to perform some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
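As a rough sketch of how such a per-second IOPS log can be summarized (the function name and the steady-state cutoff are our own illustration choices, not AnandTech's published tooling):

```python
from statistics import mean, stdev

def steady_state_stats(iops_per_second, cutoff=1400):
    """Summarize per-second IOPS samples once the drive has
    reached steady state (samples after `cutoff` seconds)."""
    steady = iops_per_second[cutoff:]
    avg = mean(steady)
    return {
        "min": min(steady),
        "avg": avg,
        "stdev": stdev(steady),
        # Worst second relative to the average: closer to 1.0 = more consistent.
        "consistency": min(steady) / avg,
    }

# Synthetic example: a fresh-drive burst followed by a noisy steady state.
samples = [70000] * 200 + [20000, 24000, 22000, 23000] * 400
print(steady_state_stats(samples, cutoff=200))
```

A drive whose worst-case seconds stay close to its average (a "consistency" ratio near 1.0) will feel smoother in practice than one with the same average but deep periodic dips.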

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
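The arithmetic behind added over-provisioning is straightforward. A sketch, assuming 512-byte LBAs and raw NAND capacity in binary gibibytes (the exact figures used for the tests are not published, so treat the numbers as illustrative):

```python
def overprovisioning(physical_bytes, usable_bytes):
    """Spare area as a fraction of the user-visible capacity."""
    return (physical_bytes - usable_bytes) / usable_bytes

def lba_count_for_op(physical_bytes, target_op, sector_bytes=512):
    """How many LBAs to expose so the spare area equals `target_op`."""
    usable = physical_bytes / (1.0 + target_op)
    return int(usable // sector_bytes)

raw = 256 * 2**30    # 256 GiB of raw NAND
user = 256 * 10**9   # 256 GB advertised (decimal)

print(f"inherent OP: {overprovisioning(raw, user):.1%}")  # ~7.4%
print(f"LBAs to expose for 25% OP: {lba_count_for_op(raw, 0.25)}")
```

The ~7% inherent spare area comes purely from the GiB-vs-GB difference; restricting the tested LBA range is how the additional 25% figure is reached.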

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a logarithmic scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Use the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graph, full test duration, log scale: Transcend SSD370 256GB (Default / 25% Over-Provisioning)]

Despite the custom Transcend firmware, performance consistency is an exact match for ADATA's SP610. I suspect the reason for the low steady-state performance is the hardware, because the SM2246EN is a single-core design. Most controller designs today are multi-core because modern NAND requires a lot of management; with multiple cores, NAND management can be dedicated to one or more cores, leaving the rest available for host IO processing. In Silicon Motion's case, the one core has to handle everything from host IOs to NAND management, which translates to lower overall performance because the controller can't keep up with everything that needs to be done.
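As a toy illustration of that argument (the numbers are invented for illustration, not SM2246EN specifications): if NAND management consumes a fixed share of controller compute, a single core loses a much larger fraction of its host-IO budget than a multi-core design that can dedicate a core to housekeeping.

```python
def host_iops(cores, iops_per_core, mgmt_core_equivalents):
    """Host IOPS left over after NAND management consumes
    `mgmt_core_equivalents` worth of controller compute."""
    free_cores = max(cores - mgmt_core_equivalents, 0.0)
    return free_cores * iops_per_core

# Hypothetical controller: each core could drive 60K IOPS on its own,
# and steady-state garbage collection eats half a core's worth of work.
print(host_iops(cores=1, iops_per_core=60_000, mgmt_core_equivalents=0.5))  # 30000.0
print(host_iops(cores=2, iops_per_core=60_000, mgmt_core_equivalents=0.5))  # 90000.0
```

The same fixed housekeeping cost cuts the single-core controller's host throughput in half while costing the dual-core design only a quarter, which is consistent with the steady-state gap seen here.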

[IO consistency graph, steady-state zoom, log scale: Transcend SSD370 256GB (Default / 25% Over-Provisioning)]

[IO consistency graph, steady-state zoom, linear scale: Transcend SSD370 256GB (Default / 25% Over-Provisioning)]


TRIM Validation

To test TRIM, I filled a 128GB SSD370 with sequential 128KB data, then ran a 30-minute random 4KB write (QD32) workload to put the drive into steady state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.
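A Linux-flavored sketch of the same sequence, with fio handling the fill and steady-state phases and blkdiscard standing in for the TRIM issued by a Windows quick format (the device path is a placeholder, and these commands destroy all data on the target):

```python
def trim_test_cmds(dev):
    """Command lines for the fill / steady-state / TRIM sequence.
    Destructive: only run against a disposable test device."""
    fill = ["fio", "--name=fill", f"--filename={dev}", "--direct=1",
            "--ioengine=libaio", "--rw=write", "--bs=128k"]
    steady = ["fio", "--name=steady", f"--filename={dev}", "--direct=1",
              "--ioengine=libaio", "--rw=randwrite", "--bs=4k",
              "--iodepth=32", "--time_based", "--runtime=1800"]
    trim = ["blkdiscard", dev]  # whole-device TRIM/discard
    return [fill, steady, trim]

for cmd in trim_test_cmds("/dev/sdX"):
    print(" ".join(cmd))
```

A sequential read pass afterwards (the role HD Tach plays here) should return near-fresh speeds across the whole drive if TRIM worked.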

And TRIM works as expected.


44 Comments


  • danwat1234 - Saturday, January 21, 2017 - link

    Might not have been the flash degradation, perhaps some other failure. A couple hundred TB before real failure probably, at least 100 I would say. Google this thread "SSD Write Endurance 25nm Vs 34nm" - it has extreme testing to failure. But yeah, your SSDs were probably sub-25nm.
  • nathanddrews - Tuesday, January 27, 2015 - link

    Has AT ever done anything beyond testing TRIM and provisioning? Are you talking about prolonged write endurance? I think the manufacturer states that. Or are you thinking of this?
    http://techreport.com/review/27062/the-ssd-enduran...
  • Solid State Brain - Tuesday, January 27, 2015 - link

    The quoted numbers are what one would normally expect from honest SSD manufacturers who take into account actual 2x nm MLC NAND endurance with random workloads, based on a 3000 P/E cycles threshold. It's really nice that Transcend doesn't just settle for "40 GB/day" or "80 GB/day" or similar figures just because most consumers won't ever write that much daily.
  • Dr0id - Tuesday, January 27, 2015 - link

    Do you plan on reviewing the Mushkin Enhanced Reactor series? The 1 TB model seems to be the least expensive model on Newegg for that capacity.
  • Kristian Vättö - Tuesday, January 27, 2015 - link

    That's the next drive in the queue, so check back next week :)
  • hojnikb - Tuesday, January 27, 2015 - link

    Give some love to the newly released BX100 (based on the same controller). Looks like a nice budget offering from Crucial that happens to have very high random IO for that controller.
  • Kristian Vättö - Tuesday, January 27, 2015 - link

    I don't have samples yet.
  • romrunning - Tuesday, January 27, 2015 - link

    In most of the tests, the Crucial MX100 beats the Transcend SSD370 at the same capacity. The Crucial drives are also cheaper by a few bucks. If that's the case, then why is it said that the Transcend drives are undercutting their competitors? Also, how can you draw the conclusion that the Transcend is the best value drive - better than the MX100?
  • hojnikb - Tuesday, January 27, 2015 - link

    Because it kills every Crucial offering in mixed workloads (The Destroyer).
    Sequential speeds mean very little with SSDs.
  • Don Tonino - Tuesday, January 27, 2015 - link

    How do mixed workloads correlate with the random write/read results? I've seen the same behaviour in other reviews, where the aggregate results of the SSD370 are shown to be much better than the MX100's, notwithstanding both sequential and random results being much better on the latter.
    As I'm debating which SSD to buy to use as storage for my Steam library, I'd be interested in better understanding how to tell which one of the two is better suited.
