Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. The reason IO latency is not consistent with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance translates into application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
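For readers who want to approximate this at home, the sketch below is a rough, hypothetical Python stand-in for the torture phase: it issues 4KB random writes of incompressible data and logs instantaneous IOPS once per second. The target path, 1GiB test size, and synchronous (effectively QD1) IO are simplifications on my part; the real test runs against the raw drive's full LBA range at QD32 with an asynchronous IO engine.

```python
# Rough sketch of the consistency torture workload (illustrative only):
# 4KB random writes of incompressible data, logging IOPS every second.
# Simplifications: writes go to a small test file rather than the raw drive,
# and synchronous pwrite() is effectively QD1 rather than the QD32 used here.
import os
import random
import time

TARGET = "testfile.bin"      # placeholder target; the real test hits all LBAs
SIZE = 1 * 1024**3           # 1GiB test area for illustration
BLOCK = 4096                 # 4KB transfer size
DURATION = 30 * 60           # roughly the half-hour test length

fd = os.open(TARGET, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)

test_end = time.time() + DURATION
while time.time() < test_end:
    second_end = time.time() + 1.0
    iops = 0
    while time.time() < second_end:
        offset = random.randrange(SIZE // BLOCK) * BLOCK
        os.pwrite(fd, os.urandom(BLOCK), offset)   # incompressible data
        iops += 1
    print(iops)              # instantaneous IOPS for this one-second window

os.close(fd)
```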

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
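As a purely illustrative aside (this arithmetic is mine, not part of our test tooling), restricting the LBA range for 25% extra over-provisioning simply means never writing beyond the first 75% of the user-accessible sectors:

```python
# Hypothetical helper: find the highest LBA the workload may touch so that
# the remaining 25% of user capacity sits idle as extra spare area.
def last_usable_lba(total_lbas: int, extra_op: float = 0.25) -> int:
    return int(total_lbas * (1.0 - extra_op)) - 1

# Example with a nominal 1TB drive and 512-byte sectors (assumed figures)
total_lbas = 1_000_000_000_000 // 512
print(last_usable_lba(total_lbas))   # random writes stay at or below this LBA
```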

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graphs: Mushkin Reactor 1TB, Default and 25% over-provisioning, with dropdowns below each graph to switch the source drive (e.g. Transcend SSD370 256GB)]

Despite the use of newer and slightly lower-performance 16nm NAND, the Reactor's performance consistency is actually marginally better than that of the other SM2246EN based SSDs we have tested. It's still worse than most other drives, but at least the increase in capacity didn't negatively impact consistency, which happens with some drives.

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.
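HD Tach essentially plots read speed across the whole LBA range, so performance recovering to a flat, high line after the format indicates the TRIMed blocks were cleaned up. A crude, hypothetical stand-in for that sweep (the device path and sample count below are placeholders of my own, not the actual tool) might look like this:

```python
# Crude HD Tach-style sweep (illustrative only): sample sequential read
# throughput at evenly spaced offsets across the device. Needs admin rights
# when pointed at a raw drive; DEVICE is a placeholder path.
import os
import time

DEVICE = "/dev/sdX"          # placeholder: the drive under test
CHUNK = 8 * 1024 * 1024      # 8MB read per sample point
SAMPLES = 100                # number of evenly spaced sample points

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

for i in range(SAMPLES):
    offset = (size // SAMPLES) * i
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.time()
    os.read(fd, CHUNK)
    elapsed = max(time.time() - start, 1e-6)
    print(f"{offset // 1024**2} MiB: {CHUNK / 1024**2 / elapsed:.0f} MB/s")

os.close(fd)
```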

And TRIM works as expected.

Comments

  • Hulk - Monday, February 9, 2015 - link

    When it's expressed as 131GB of writes per day for 3 years it seems like more than enough.
    But a cell endurance of 144 writes seems really, really low.
  • hojnikb - Monday, February 9, 2015 - link

    It doesn't work like that. 144TB on a 1TB drive doesn't translate to 144 P/E cycle flash. You have to factor in write amplification, which can be more than 1 on a controller like this.
    Also, a conservative rating is nothing new with budget drives.
  • cm2187 - Friday, February 13, 2015 - link

    Stupid question: does anyone have any experience with SSD reliability over time? I.e. is it reliable for storing static data for 3-5+ years? Or does the 3-year (or 5-year) guarantee also mean the data should be migrated out after that period even if the number of writes has been low?
  • hojnikb - Monday, February 9, 2015 - link

    It's just a conservative rating for warranty purposes. Besides, other value drives are no better at this (the EVO is only "good" for 150TB).

    In reality, drives typically last many times the rated endurance.
  • DanNeely - Monday, February 9, 2015 - link

    More importantly, it's set low to scare off enterprise customers who'd subject the drive to an order of magnitude more IO.
  • toyotabedzrock - Monday, February 9, 2015 - link

    So if the endurance is 144TB on a 1TB drive, they are predicting the NAND can only take 144 writes?

    That is a bit scary for even home use. I wouldn't trust my data to that.
  • zepi - Monday, February 9, 2015 - link

    There is not a single medium or drive out there that you should trust your data to. The only thing you can trust is redundancy.
  • hojnikb - Monday, February 9, 2015 - link

    No. Read my post above ^^
  • TheinsanegamerN - Monday, February 9, 2015 - link

    The 512GB Crucial MX100's endurance rating is only 72TB, yet people don't seem to be complaining. Besides, as typical day-to-day use only accounts for maybe ~10GB of writes per day (3.65TB a year), the 144TB endurance will last far longer than the machine it is put in.
  • Murloc - Monday, February 9, 2015 - link

    Yeah, the problem is that until it's broken, I'll keep moving it on to the next machine.
    But I write about 1GB/week to the SSD so I should be safe, unless Windows does a lot of writing in the background. I don't know, I just deactivated all the bad stuff I read about.
