Micron Launches 9400 NVMe Series: U.3 SSDs for Data Center Workloads
by Ganesh T S on January 9, 2023 9:20 AM EST
Micron is taking the wraps off their latest data center SSD offering today. The 9400 NVMe Series builds upon Micron's success with their third-generation 9300 series introduced back in Q2 2019. The 9300 series had adopted the U.2 form-factor with a PCIe 3.0 x4 interface and utilized their 64L 3D TLC NAND. With a maximum capacity of 15.36 TB, the drive matched the highest-capacity HDDs of its time in raw storage (obviously with much higher performance numbers). In the past couple of years, data centers have moved towards PCIe 4.0 and U.3 in a bid to keep up with performance requirements and unify NVMe, SAS, and SATA support. Keeping these in mind, Micron is releasing the 9400 NVMe series of U.3 SSDs with a PCIe 4.0 x4 interface using their now-mature 176L 3D TLC NAND. Increased capacity per die now enables Micron to offer 2.5″ U.3 drives with capacities up to 30.72 TB, effectively doubling capacity per rack over the previous generation.
Similar to the 9300 NVMe series, the 9400 NVMe series is also optimized for data-intensive workloads and comes in two versions – the 9400 PRO and 9400 MAX. The Micron 9400 PRO is optimized for read-intensive workloads (1 DWPD), while the Micron 9400 MAX is meant for mixed use (3 DWPD). The maximum capacity points are 30.72 TB and 25.60 TB respectively. The specifications of the two drive families are summarized in the table below.
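A DWPD (drive writes per day) rating translates directly into a total write-endurance budget over the warranty period. As a quick illustrative sketch (assuming the customary 5-year enterprise warranty, which is not stated in the article):

```python
def endurance_pb(capacity_tb: float, dwpd: float, warranty_years: float = 5.0) -> float:
    """Total bytes-written endurance (in PB) implied by a DWPD rating."""
    return capacity_tb * dwpd * 365 * warranty_years / 1000

# 9400 PRO, read-intensive (1 DWPD) at its 30.72 TB top capacity
pro_pb = endurance_pb(30.72, 1)   # ~56 PB total writes

# 9400 MAX, mixed-use (3 DWPD) at its 25.6 TB top capacity
max_pb = endurance_pb(25.6, 3)    # ~140 PB total writes
```

Note how the MAX's higher DWPD rating more than offsets its lower capacity in total endurance terms.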
Micron 9400 NVMe Enterprise SSDs

|                     | 9400 PRO (1 DWPD)                                    | 9400 MAX (3 DWPD)                                  |
|---------------------|------------------------------------------------------|----------------------------------------------------|
| Form Factor         | U.3 2.5″ 15mm                                        | U.3 2.5″ 15mm                                      |
| Interface           | PCIe 4.0 x4, NVMe 1.4                                | PCIe 4.0 x4, NVMe 1.4                              |
| NAND                | Micron 176L 3D TLC                                   | Micron 176L 3D TLC                                 |
| Capacities          | 7.68 TB / 15.36 TB / 30.72 TB                        | 6.4 TB / 12.8 TB / 25.6 TB                         |
| Random Read (4 KB)  | 1.6M IOPS (7.68 TB, 15.36 TB); 1.5M IOPS (30.72 TB)  | 1.6M IOPS (6.4 TB, 12.8 TB); 1.5M IOPS (25.6 TB)   |
| Random Write (4 KB) | –                                                    | 600K IOPS (6.4 TB, 12.8 TB); 550K IOPS (25.6 TB)   |
| Power               | 14-21 W (7.68 TB); 17-25 W (30.72 TB)                | 14-21 W (6.4 TB)                                   |
The 9400 NVMe SSD series is already in volume production for AI/ML and other HPC workloads. The move to a faster interface, combined with higher-performance NAND, enables a 77% improvement in random IOPS per watt over the previous generation. Micron is also claiming better all-round performance across a variety of workloads compared to competing enterprise SSDs.
The Micron 9400 PRO goes up against the Solidigm D7-5520, Samsung PM1733, and the Kioxia CM6-R. The Solidigm D7-5520 is handicapped by lower capacity points (due to its use of 144L TLC), resulting in lower performance against the 9400 PRO in all but sequential reads. The Samsung PM1733 also tops out at 15.36 TB, with performance numbers similar to those of the Solidigm model. The Kioxia CM6-R is the only other U.3 SSD with capacities up to 30.72 TB; however, its performance numbers across the board lag well behind the 9400 PRO's.
The Micron 9400 MAX has competition from the Solidigm D7-P5620, Samsung PM1735, and the Kioxia CM6-V. Except for sequential reads, the Solidigm D7-P5620 lags the 9400 MAX in both performance and capacity points. The PM1735 is only available in an HHHL AIC form-factor and uses a PCIe 4.0 x8 interface, so despite its 8 GB/s sequential read performance, it can't be deployed in a manner similar to the 9400 MAX. The Kioxia CM6-V tops out at 12.8 TB and has lower performance numbers compared to the 9400 MAX.
Despite not being the first to launch 32TB-class SSDs into the data center market, Micron has ensured that their eventual offering provides top-tier performance across a variety of workloads compared to the competition. We hope to present some hands-on performance numbers for the SSD in the coming weeks.
Comments
I suggest the Micron 9400 PRO and MAX go up against the Kioxia CM7-R and CM7-V, whose product briefs haven't been released yet. The performance of the Kioxia CM6-R and CM6-V is worse than the Micron 9400 series'. Moreover, they run very hot. And predictably, Memblaze will introduce their new product soon after Micron's.
I was partway through the article before I realized these were just flash drives, not crosspoint. It was the capacity figures that triggered the realization. No way someone's going to put 30.7 TB of that in a drive. You can put stuff in space for less than that would cost. And who would want a tiny U.2 PCIe 4.0 x4 'straw' to drink it through?
This is for the datacenter market. Very different requirements. The key driver here is the IOPS. Think of a server supporting 50 different engineers doing a project compile. Lots of small files being read simultaneously by different users, and all of them can be served from this one single disk. Sequential bandwidth is only one part of the story. Random IOPS is key in a lot of other scenarios – databases, ML training, OLTP, etc.
Wow, what was your hint? Was it the U.3 profile? Maybe the title of the article?
What I'm referring to is the ratio between drive size and both total daily data written and write speed. In other terms, how long, at full speed, does it take to exceed the DWPD or to fill the drive? The larger the drive and the slower the interface, the worse the ratio.
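The ratio the comment above describes can be sketched with back-of-envelope arithmetic. The sustained write speed below is a hypothetical placeholder, not a figure from the article:

```python
# Time to write one full drive capacity (one "drive write") at full speed
CAPACITY_TB = 30.72      # Micron 9400 PRO top capacity
WRITE_GB_S = 7.0         # hypothetical sustained sequential write speed (assumed, not published here)

seconds_to_fill = CAPACITY_TB * 1e12 / (WRITE_GB_S * 1e9)
hours_to_fill = seconds_to_fill / 3600   # roughly 1.2 hours for one complete drive write
```

In other words, even a 30.72 TB drive could, in principle, burn through its 1 DWPD budget in a little over an hour of flat-out sequential writing; real workloads rarely sustain anything close to that.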
Not every workload is the same in the datacenter, you know. You need to make sure you're using hardware that's appropriate for your needs. So, understanding where these drives fit in that arena is important. Hence my "large drive, small interface" comment. That tells us where in the spectrum of performance these drives sit.
Don't forget that these will be used in a SAN, so the writes will be spread across more drives. According to VMware's vSAN documentation, even a lowly 1 DWPD drive could be a write caching drive if the capacity is high enough. I can tell you that at 7.68 TB this would fall into their Performance Class F (2nd highest; the highest level needs 350K+ random IOPS) and Endurance Class D (the highest level, for 7.3 PB+ write endurance). Basically it would qualify as a write cache drive for the highest-performance vSANs. Also, you will run into storage network bottlenecks before PCIe bus bottlenecks in an array: with 24x PCIe 4.0 SSDs you would need quad 400 Gbps connections to carry all the possible storage bandwidth.
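The aggregate-bandwidth claim in the comment above checks out with quick arithmetic, assuming roughly 8 GB/s of usable bandwidth per PCIe 4.0 x4 drive (an approximation, not a published figure):

```python
drives = 24
per_drive_gb_s = 8                          # ~8 GB/s per PCIe 4.0 x4 SSD (approximate)
total_gbps = drives * per_drive_gb_s * 8    # bytes/s -> bits/s: 1536 Gbps aggregate
links_400g = -(-total_gbps // 400)          # ceiling division: 400 Gbps links needed -> 4
```

So a fully populated 24-bay array of these drives could, in theory, saturate four 400 Gbps network links, which is exactly why the storage network, not the PCIe bus, becomes the bottleneck.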
The pricing is not unreasonable for what you get. As for bandwidth, in the enterprise, you’re also likely to put the drives into a raid array, and your bandwidth is aggregated across all the drives in the array for sufficiently busy workloads.
You will be limited by your network speed before the bus speed.
Assuming it crosses the network. For those with a monster server those numbers would be impressive.
In this case it would be your storage network. 99.9% of all applications are run either on a VM or in a container. It is getting rarer by the day that a company runs even their largest DBs on a physical appliance. Doing it in a VM is the better solution.