Today Marvell is announcing the first NVMe SSD controllers to support PCIe 5.0, and a new branding strategy for Marvell's storage controllers. The new SSD controllers are the first under the umbrella of Marvell's Bravera brand, which will also encompass HDD controllers and other storage accelerator products. The Bravera SC5 family of PCIe 5.0 SSD controllers will consist of two controller models: the 8-channel MV-SS1331 and the 16-channel MV-SS1333.

Marvell Bravera SC5 SSD Controllers

                       MV-SS1331                   MV-SS1333
Host Interface         PCIe 5.0 x4 (dual-port x2+x2 capable)
NAND Interface         8ch, 1600 MT/s              16ch, 1600 MT/s
DRAM                   DDR4-3200, LPDDR4x-4266 with ECC
Sequential Read        14 GB/s
Sequential Write       9 GB/s
Random Read            2M IOPS
Random Write           1M IOPS
Max Controller Power   8.7 W                       9.8 W
Virtualization         16 Physical Functions, 32 Virtual Functions

These new SSD controllers roughly double the performance available from PCIe 4.0 SSDs: sequential read throughput hits 14 GB/s, and random read performance reaches around 2M IOPS. To reach this level of performance while staying within the power and thermal limits of common enterprise SSD form factors, Marvell has had to improve power efficiency by 40% over their previous generation of SSD controllers. That goes beyond what can be gained simply from smaller fab process nodes, so Marvell has had to significantly alter the architecture of their controllers. The Bravera SC5 controllers still include a mix of Arm cores (Cortex-R8, Cortex-M7 and a Cortex-M3), but now include much more fixed-function hardware to handle the basic tasks of the controller with high throughput and consistently low latency.
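A quick back-of-the-envelope check shows how the rated figures line up with the raw interfaces. This is illustrative arithmetic only, using the numbers from Marvell's spec table plus standard PCIe 5.0 link parameters; the function names are just for this sketch.

```python
def nand_bw_gbs(channels, mt_per_s, bus_bytes=1):
    """Aggregate raw NAND-interface bandwidth in GB/s.

    Assumes an 8-bit (1-byte) bus per channel, the usual width
    for ONFI/Toggle NAND interfaces.
    """
    return channels * mt_per_s * bus_bytes / 1000

def pcie_bw_gbs(lanes, gt_per_s=32, encoding=128 / 130):
    """Usable PCIe link bandwidth in GB/s.

    PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding.
    """
    return lanes * gt_per_s * encoding / 8

nand = nand_bw_gbs(16, 1600)   # 16-channel MV-SS1333: 25.6 GB/s raw
link = pcie_bw_gbs(4)          # PCIe 5.0 x4: ~15.75 GB/s
print(f"NAND: {nand:.1f} GB/s, link: {link:.2f} GB/s")
```

The rated 14 GB/s sequential read works out to roughly 89% of the raw x4 link bandwidth, comfortably below the 25.6 GB/s that sixteen 1600 MT/s NAND channels can supply, so the PCIe link, not the flash, is the bottleneck.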

Such an architectural shift often means sacrificing flexibility, but Marvell doesn't expect that to be a problem thanks in large part to the Open Compute Project's Cloud SSD specifications. Those standards go beyond the NVMe spec and define which optional features should be implemented, plus target performance and power levels for different form factors. The Cloud SSD specs were initially a collaboration between Microsoft and Facebook but have caught on in the broader market and even have the support of traditional enterprise server vendors like Dell and HP. This allows controller vendors like Marvell and SSD manufacturers to more narrowly focus their product development efforts, and to target a wider range of customers with a single hardware and firmware platform. In spite of the shift toward more fixed hardware functionality, the Bravera SC5 controllers still support a wide range of features including NVMe Zoned Namespaces (ZNS), Open Channel SSDs and Kioxia's Software-Enabled Flash model.
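The ZNS support mentioned above centers on one core rule: each zone maintains a write pointer, and the host may only write sequentially at that pointer. The following is a minimal sketch of that rule, not real NVMe driver code; the `Zone` class and error messages are hypothetical.

```python
class Zone:
    """Toy model of an NVMe Zoned Namespaces zone (sketch only)."""

    def __init__(self, start_lba, capacity):
        self.start = start_lba
        self.capacity = capacity
        self.write_pointer = start_lba  # next LBA the zone will accept

    def write(self, lba, nblocks):
        # ZNS zones are sequential-write-only: a write anywhere other
        # than the current write pointer is rejected by the device.
        if lba != self.write_pointer:
            raise ValueError("invalid write: not at write pointer")
        if self.write_pointer + nblocks > self.start + self.capacity:
            raise ValueError("zone boundary error")
        self.write_pointer += nblocks

zone = Zone(start_lba=0, capacity=1024)
zone.write(0, 8)   # accepted: at the write pointer
zone.write(8, 8)   # accepted: sequential
# zone.write(0, 8) would raise, since LBA 0 is behind the write pointer
```

Because the host is forced to write sequentially, the controller no longer needs a large page-granularity mapping table for those zones, which is one reason fixed-function hardware can cover the common paths.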

In addition to being the first available PCIe 5.0 SSD controllers, the Bravera SC5 family includes the first 16-channel controller designed to fit on the EDSFF E1.S form factor, using a controller package size of 20x20 mm with peak controller power of 9.8 W. The new controllers are currently sampling to select customers, with the option of using Marvell's firmware or developing custom firmware.

Comments

  • AdrianBc - Thursday, May 27, 2021 - link

    A simple implementation of in-band ECC would require too many extra write cycles to have an acceptable performance.

    Complex implementations, like the in-band ECC controllers that are enabled in some embedded SKUs of Intel Tiger Lake U and Elkhart Lake, use clever caching algorithms to eliminate most of the extra writes.

    However, caching is never foolproof; there will always be some applications where most of the writes cannot be cached and have to go to memory, dramatically lowering performance.

    In my opinion, in-band ECC is a bad solution. It is an ugly workaround for the fact that the market of users who value reliability is much smaller than the market of clueless users, so LPDDR has not been made available in the wider width required for normal ECC.

    In-band ECC is better than no ECC, but much worse than traditional ECC.
  • mode_13h - Saturday, May 29, 2021 - link

    > In-band ECC is better than no ECC, but much worse than traditional ECC.

    Thanks for sharing your thoughts. I agree, but it does seem attractive for certain embedded applications, especially where the number of DRAM chips might be too low to support conventional ECC.
  • theno1patrick - Friday, July 2, 2021 - link

    If you want some reliability on your SSD/NVMe, check out the new Plotripper by Sabrent. Insane PBW!
  • saratoga4 - Thursday, May 27, 2021 - link

    It is first and foremost a cost savings measure to enable vendors to sell DRAM cells that would otherwise have an error rate too high to be usable.
  • KarlKastor - Thursday, May 27, 2021 - link

    Makes sense. Thank you for the explanation.
  • theno1patrick - Friday, July 2, 2021 - link

    Typical SSDs, maybe, but spaceinvaderone on YouTube has a video where he's building an unRAID server at his friend's business using 100TB SSDs; it's absolutely ridiculous. Per his video, each one cost roughly $40k.
  • Samus - Sunday, May 30, 2021 - link

    I'm surprised it's even DDR4. Most SSDs were still using DDR3L until a year or two ago.
  • mode_13h - Sunday, May 30, 2021 - link

    Look at the data rates! And it's only using a 64-bit datapath to DRAM, so DDR3L was out of the question.
  • DigitalFreak - Thursday, May 27, 2021 - link

    14GB/sec... So for consumer SSDs with QLC to hit that number you'd need 64 channels and 16TB of flash. LOL
  • KarlKastor - Thursday, May 27, 2021 - link

    Why would that be? That's the read speed, and it's not much lower with QLC.
