Kingston may not be a name that rolls off the tongue when you're talking about datacenter hardware vendors, but the company has built a major presence in datacenters through its DRAM modules. A lucrative and high-volume market in its own right, datacenter DRAM has unsurprisingly served as Kingston's springboard for pivoting into other datacenter products, but they've met only limited success thus far. Their other product lines – in particular enterprise/datacenter SSDs – have been serviceable, but haven't been able to crack the market as a whole.

Still intent on carving out a larger portion of the datacenter SSD market, Kingston has decided to raise their profile by introducing SSDs built around the needs of their existing DRAM customers. That means the company's new DC500 family of SSDs is intended for second-tier cloud service providers and system integrators, rather than top hyperscalers like Google, Microsoft, and Amazon. It also means the new drives are SATA SSDs, because this market segment – which relies more heavily on commodity components and platforms than on Open Compute Project-style thorough customization – still has significant demand for SATA SSDs.

Using NVMe SSDs adds to platform costs in several ways: PCIe switches and backplanes are expensive, the drives themselves cost more than SATA drives of the same capacity, and power efficiency is often better for SATA than for NVMe. PCIe SSDs make it possible to cram a lot of storage performance into a smaller number of drives and servers, but where the emphasis is on capacity and cost effectiveness, SATA still has a place.

The SATA interface itself is stuck at 6Gbps, but the technology that goes into SATA SSDs continues to evolve with new generations of NAND flash memory and new SSD controllers. Kingston's new DC500 family of enterprise SATA SSDs is our first look at Phison's new S12 SSD controller (specifically, the S12DC variant), the replacement for the S10 that has been on the market for over five years. (The S11 is Phison's current DRAMless SATA controller.) While consumer SATA SSD controllers have mostly dropped down to just four NAND channels, the S12DC retains eight channels, more for the sake of supporting high capacities than for improving performance. The S12DC officially supports up to 8TB, though Kingston isn't pushing things that far yet. The controller is fabbed on a 28nm process and brings major improvements to error correction, including Phison's third-generation LDPC engine.

The DC500 family uses Intel's 64-layer TLC NAND flash memory, a break from Kingston's usual preference for Toshiba NAND. 96/92-layer TLC has started to show up in the client/consumer SSD market, but it's still a bit early to be seeing it in this part of the enterprise storage market.

The DC500 family includes two tiers: the DC500R for read-heavy workloads (endurance rating of 0.5 DWPD) and the DC500M for more mixed read/write workloads (endurance rating of 1.3 DWPD). Kingston says the Intel NAND they are using is rated for about 5000 program/erase cycles, so with a warranty for a bit less than 1000 total drive writes on the DC500R they're clearly allowing for quite a bit of write amplification.

NVMe SSDs have mostly killed off the market for very high endurance SATA drives, because applications that need to support several drive writes per day tend to need higher performance than SATA can support (and as drive capacities increase, there's no longer enough time in a day to complete more than a few drive writes at ~0.5GB/s). Micron still offers a 5 DWPD SATA model (5200 MAX) but most other brands now top out around 3 DWPD for SATA drives. Those 3 DWPD and higher drives only account for about 20% of the market, so Kingston isn't missing out on too many sales by only going up to 1.3 DWPD with the DC500 family. The introduction of QLC NAND has helped lower the entry-level of this market down to around 0.1 DWPD, but Kingston doesn't have anything to offer at that level yet.
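The endurance figures in play here are simple arithmetic on capacity, DWPD, and warranty length, and the "not enough time in a day" point can be checked the same way. A quick sketch in Python (the ~520 MB/s sustained write rate is taken from the spec table below; the function name is our own):

```python
def tbw_terabytes(capacity_gb, dwpd, warranty_years):
    """Total warranted writes in (decimal) TB: capacity x DWPD x warranty days."""
    return capacity_gb / 1000 * dwpd * warranty_years * 365

# DC500R 480 GB at 0.5 DWPD over the 5-year warranty:
print(round(tbw_terabytes(480, 0.5, 5)))     # matches Kingston's 438 TB rating

# Why SATA caps practical DWPD: one full write of a 3.84 TB drive at
# ~520 MB/s takes about two hours, so only ~11 full drive writes even
# fit in a day at maximum sustained speed.
seconds_per_drive_write = 3.84e12 / 520e6
print(seconds_per_drive_write / 3600)        # ~2.05 hours per drive write
```

The same formula reproduces every TBW figure in the spec table from its DWPD rating, which confirms the ratings are quoted against the 5-year warranty period.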

Kingston DC500 Series Specifications
Capacity                  480 GB            960 GB            1920 GB           3840 GB
Form Factor               2.5" 7mm SATA
Controller                Phison PS3112-S12DC
NAND Flash                Intel 64-layer 3D TLC
DRAM                      Micron DDR4-2666
Sequential Read           555 MB/s
Sequential Write  DC500R  500 MB/s          525 MB/s          525 MB/s          520 MB/s
                  DC500M  520 MB/s          520 MB/s          520 MB/s          520 MB/s
Random Read               98k IOPS
Random Write      DC500R  12k IOPS          20k IOPS          24k IOPS          28k IOPS
                  DC500M  58k IOPS          70k IOPS          75k IOPS          75k IOPS
Power                     Read: 1.8 W       Write: 4.86 W     Idle: 1.56 W
Warranty                  5 years
Write Endurance   DC500R  438 TB            876 TB            1752 TB           3504 TB
                          (0.5 DWPD)        (0.5 DWPD)        (0.5 DWPD)        (0.5 DWPD)
                  DC500M  1139 TB           2278 TB           4555 TB           9110 TB
                          (1.3 DWPD)        (1.3 DWPD)        (1.3 DWPD)        (1.3 DWPD)
Retail Price      DC500R  $104.99 (22¢/GB)  $192.99 (20¢/GB)  $364.99 (19¢/GB)  $733.99 (19¢/GB)
(CDW)             DC500M  $125.99 (26¢/GB)  $262.99 (27¢/GB)  $406.99 (21¢/GB)  $822.99 (21¢/GB)

The DC500R and DC500M are available in the same set of usable capacities ranging from 480GB to 3840GB, but they differ in the amount of spare area included, which is what allows the -M to have higher write endurance and higher sustained write performance. For sequential IO, the -R and -M versions are rated to deliver essentially the same performance, bottlenecked by the SATA link. The same is true for random reads, but steady-state random write performance is limited by the flash itself and varies with drive capacity and spare area. Every DC500M model is rated for higher random write performance than any of the DC500R models.

Power consumption is rated at a modest 1.8 W for reads and a fairly typical 4.86 W for writes. Low-power idle states are usually not included on enterprise drives, so the DC500s are rated to idle at 1.56 W.

Left: DC500R 3.84 TB, Right: DC500M 3.84 TB

The DC500R and DC500M both use the same plain metal case, but the PCBs inside have some minor layout changes due to the differences in overprovisioning. Our 3.84TB samples feature raw capacities of 4096GB for the DC500R and 5120GB for the DC500M, so the -R versions have comparable overprovisioning to consumer SSDs while the -M versions have about three times as much spare area. The extra flash on the DC500M also requires it to have more DRAM: 6GB instead of the 4GB found on the DC500R 3.84TB.
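Those raw-vs-usable figures pin down the spare-area difference. A minimal sketch of the arithmetic (the raw flash capacities are in binary GiB while the usable capacity is decimal GB, which is why the ratio lands near three rather than five):

```python
GIB, GB = 2**30, 10**9

def spare_bytes(raw_gib, usable_gb):
    """Spare area = raw flash (binary GiB) minus usable capacity (decimal GB)."""
    return raw_gib * GIB - usable_gb * GB

spare_r = spare_bytes(4096, 3840)   # DC500R 3.84 TB: ~558 GB of spare area
spare_m = spare_bytes(5120, 3840)   # DC500M 3.84 TB: ~1658 GB of spare area
print(spare_m / spare_r)            # ~2.97: about three times the spare area
```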

Physically, the memory is laid out differently between the two drives. The 3.84TB DC500R has a total of 16 packages with 256GB of NAND each, while the 3.84TB DC500M uses 10 packages of 512GB each rather than mixing packages of different capacities. In both cases this is Intel NAND packaged by Kingston. Since the -M has fewer NAND packages, it also gets away with fewer of the small TI multiplexer chips that sit next to the controller. The -M also has two fewer tantalum capacitors for power loss protection, despite having more total NAND and DRAM.

The Competition

There are plenty of competing enterprise SATA SSDs based on 64-layer 3D TLC, but many of them have been on the market for quite a while; Kingston's a bit late to market for this generation. Samsung's SATA SSDs launched last fall are the only current-generation drives we have to compare against the Kingston DC500s, and all of our older enterprise SATA SSDs are far too outdated to be relevant.

The Samsung 883 DCT falls somewhere in between the DC500R and DC500M, with a write endurance of 0.8 DWPD (compared to 0.5 and 1.3 for the Kingston drives). The Samsung 860 DCT is a bit of an oddball since it lacks one of the defining features of enterprise SSDs: power loss protection capacitors. It also has quite a low endurance rating of just 0.2 DWPD, which is almost in QLC territory. Despite these handicaps, it still uses Samsung's excellent controller and firmware, and is tuned to offer much better performance and QoS on server workloads than can be expected from the client and consumer SSDs it superficially resembles.

To give a sense of scale, we've also included results for Samsung's entry-level datacenter NVMe drive, the 983 DCT, specifically the 960GB M.2 model. Some relevant SATA competitors that we have not tested include the Intel D3-S4510 and Micron 5200 ECO, both using the same 64L TLC as the Kingston drives but with different controllers.

Test System

Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.

Enterprise SSD Test System
System Model Intel Server R2208WFTZS
CPU 2x Intel Xeon Gold 6154 (18C, 3.0GHz)
Motherboard Intel S2600WFT
Chipset Intel C624
Memory 192GB total, Micron DDR4-2666 16GB modules
Software Linux kernel 4.19.8
fio version 3.12
Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.

The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, and some Silverstone FQ141 case fans have been installed to help exhaust hot air from the top of the cabinet.

The test system is running a Linux kernel from the most recent long-term support branch. This brings in about a year's work on Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic benchmarks, with most of the IO workloads generated using FIO. Server workloads are too widely varied for it to be practical to implement a comprehensive suite of application-level benchmarks, so we instead try to analyze performance on a broad variety of IO patterns.

Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of benchmark runs doesn't have much effect on the score, so long as the drive was thoroughly preconditioned. Except where otherwise specified, for our tests that include random writes the drives were prepared with at least two full drive writes of 4kB random writes. For all the other tests, the drives were prepared with at least two full sequential write passes.
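As a concrete illustration of that preconditioning, here is a hedged sketch of how such a pass might be issued with fio. The flags used are standard fio options, but the specific job parameters (queue depth, block sizes) and the device path are placeholder choices for illustration, not the actual job files used in this review:

```python
import shlex

def precondition_cmd(device, passes=2, sequential=False):
    """Build a fio command line for a full-device preconditioning pass:
    random 4kB writes (or large sequential writes) over the raw device.
    Only run something like this against a disposable test drive."""
    args = [
        "fio",
        "--name=precondition",
        f"--filename={device}",            # e.g. /dev/sdX (placeholder)
        "--rw=write" if sequential else "--rw=randwrite",
        "--bs=128k" if sequential else "--bs=4k",
        "--ioengine=libaio",
        "--iodepth=32",                    # our assumption, not the review's setting
        "--direct=1",                      # bypass the page cache
        f"--loops={passes}",               # at least two full drive writes
    ]
    return shlex.join(args)

print(precondition_cmd("/dev/sdX"))
```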

Our drive power measurements are conducted with a Quarch HD Programmable Power Module. This device supplies power to drives and logs both current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high resolution view into drive power consumption. For most of our automated benchmarks, we are only interested in averages over time spans on the order of at least a minute, so we configure the power module to average together its measurements and only provide about eight samples per second, but internally it is still measuring at 4µs intervals so it doesn't miss out on short-term power spikes.
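The relationship between the module's internal sampling and its logged output is straightforward arithmetic; a small sketch (the eight-reports-per-second figure is our configuration described above, and the rest follows from it):

```python
sample_rate_hz = 250_000                   # Quarch module's internal sampling rate
interval_us = 1_000_000 / sample_rate_hz   # time between raw samples
reports_per_sec = 8                        # averaged values actually logged
samples_per_report = sample_rate_hz // reports_per_sec

print(interval_us)          # 4.0 microseconds between raw measurements
print(samples_per_report)   # 31250 raw samples averaged into each logged value
```

Because each logged value averages over thirty thousand raw samples, short power spikes still contribute to the reported figure even though they are not individually visible in the log.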

Performance at Queue Depth 1
Comments

  • Umer - Tuesday, June 25, 2019 - link

    I know it may not be a huge deal to many, but Kingston, as a brand, left a really sour taste in my mouth after the V300 fiasco, since I bought those SSDs in bulk for a new build back then.
  • Death666Angel - Tuesday, June 25, 2019 - link

    Let's put it this way: they have to be quite a bit cheaper than the nearest, known competitor (Crucial, Corsair, Adata, Samsung, Intel...) to be considered as a purchase by me.
  • mharris127 - Thursday, July 25, 2019 - link

    I don't expect Kingston to be any less expensive than ADATA, as Kingston serves the mid-price market and ADATA the low-priced one. Samsung is a supposed premium product with a premium price tag to match. I haven't used a Kingston SSD, but I have some of their other products and haven't had a problem with any of them that wasn't caused by me. As far as picking my next SSD: I have had one ADATA SSD fail; they replaced it once I filled out some paperwork and sent the defective drive back under RMA. The second one, and a third one I bought a couple of months ago, are working fine so far. My Crucial SSDs work fine. I have a Team Group SSD that works wonderfully after a year of service. I think my money is on either Crucial or Team Group the next time I buy that product.
  • Notmyusualid - Tuesday, June 25, 2019 - link

    ...wasn't it OCZ that released the 'worst known' SSD?

    I had almost forgotten about those days.

    I believe the only customers that got any value out of them were those on PERC and other known RAID controllers, which were not writing < 128kB blocks - and I wasn't one of them. I RMA'd & insta-sold the return, and bought an X25-M.

    What a 'mare that was.
  • Dragonstongue - Tuesday, June 25, 2019 - link

    Sandforce controller was ahead of its time, not in the most positive ways all the time either...

    I had an Agility 3 60gb, used it for just over 2 years in my system, and my mom has now used it for over 2.5 more years; however, it was either starting to have issues, or the way mom was using it caused it to "forget" things now and then.

    I fixed it with a Crucial MX100 or 200 (forget which LOL) that still has over 90% life either way; the Agility 3 was "warning" though it still showed over 75% life left (christmas '18-19) .. definite massive speed up by swapping to something more modern, as well as doing some cleaning for it..

    SSDs have come A LONG way in a short amount of time; sadly the producers of the memory/controllers/flash are the problem: bad drivers, poor performance when there shouldn't be, not working in every system when they should, etc.
  • thomasg - Tuesday, June 25, 2019 - link

    Interestingly, I still have one of the OCZs with the first pre-production SandForce, the Vertex Limited 100 GB, which has been running for many years at high throughput and many many Terabytes of Writes.
    Still works perfectly.
    I'm not sure I remember correctly, but I think the major issues started showing up for the production SandForce model that was used later on.
  • Chloiber - Tuesday, June 25, 2019 - link

    I still have a Supertalent UltraDrive MX 32GB - not working properly anymore, but I don't even remember how many firmware updates I put that one through :)
    They just had really bad, buggy firmwares throughout.
  • leexgx - Wednesday, June 26, 2019 - link

    main problem with sandforce is the compressed layer and the nand layer were never managed correctly, which made trim ineffective, so GC had to kick in on new writes, resulting in high access times and half-speed writes after one full drive of writes. Note the drive did not have to be filled: if it was a 240gb ssd, all you had to do to slow it down permanently was write 240-300gb of data over time (due to compression, to get an actual full drive write). The only way to reset it was a secure erase (unsure if that was ever fixed on the SandForce SF3000, the seagate enterprise SSDs and ironwolf nas SSD)

    the other issue, which was more or less fixed, was rare BSODs (2 systems i managed did not like them), or the drive eating itself and becoming a 0MB drive (extremely rare, but it did happen). the 0MB bug i think was fixed if you owned an intel drive, but the BSOD fix had limited success
  • Gunbuster - Tuesday, June 25, 2019 - link

    Indeed. Not going to support a company with a track record of shady practices.
  • kpxgq - Tuesday, June 25, 2019 - link

    The V300 fiasco is nothing compared to the Crucial V4 fiasco... quite possibly the worst SSD drives ever made, right along with the early OCZ Vertex drives. Over half the ones I bought for a project just completely stopped working a month in. I bought them trusting the Crucial brand name alone.
