Comparing a 12 TB HDD to a 30 TB SSD (for Object Storage)
Yesterday, Samsung announced a 30TB 2.5″ SSD. This is the highest capacity yet achieved for a single flash drive in production, and, even if the price is prohibitive for now, the TCO of a scale-out solution based on this device may surprise you.
Because of its form factor (2.5″) and capacity, this device is 5 times denser than the largest HDD available today (12TB). This is a huge difference that directly impacts datacenter footprint, the number of nodes necessary to build a configuration, and more.
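To make that footprint figure concrete, here is a back-of-the-envelope sketch. The 24-bay 2.5″ and 12-bay 3.5″ chassis counts below are my own assumptions for a typical 2U server, not figures from the announcement:

```python
# Hypothetical 2U chassis: 24 x 2.5" bays (SSD) vs 12 x 3.5" bays (HDD).
# These slot counts are assumptions typical of 2U servers, not vendor specs.
ssd_tb_per_chassis = 24 * 30   # 720 TB of flash per 2U
hdd_tb_per_chassis = 12 * 12   # 144 TB of disk per 2U

print(ssd_tb_per_chassis / hdd_tb_per_chassis)  # 5.0 -> "5 times denser"
```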
Overall system resiliency
With this type of density, it is possible to think about smaller nodes too. The throughput of these flash drives is very high, so you do not need to fit 90+ devices in a single node, as you would with HDDs of similar total capacity, to keep up with the node's CPU and network performance. Smaller nodes have several advantages, such as a smaller failure domain, while allowing for better data distribution and faster data rebuilds!
I couldn’t find the specs for this drive, but I suppose they will be similar to those of other SSDs (also because it has to comply with SATA specifications), meaning a power consumption in the 4–7W range (probably 3.5–5W).
If so, this SSD consumes half the power of an HDD (7–14W). This number alone doesn’t say much but, again, if I start thinking in terms of W/TB, the difference is huge: it represents a ten-fold improvement.
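A quick sanity check on that W/TB figure, using the (assumed) power ranges quoted above; the ten-fold number comes from comparing the best SSD case against the worst HDD case:

```python
# W/TB using the power ranges quoted above (assumed, not official specs).
ssd_w_per_tb = (3.5 / 30, 7 / 30)   # 30TB SSD, best/worst case
hdd_w_per_tb = (7 / 12, 14 / 12)    # 12TB HDD, best/worst case

print(f"SSD: {ssd_w_per_tb[0]:.2f}-{ssd_w_per_tb[1]:.2f} W/TB")  # SSD: 0.12-0.23 W/TB
print(f"HDD: {hdd_w_per_tb[0]:.2f}-{hdd_w_per_tb[1]:.2f} W/TB")  # HDD: 0.58-1.17 W/TB
print(hdd_w_per_tb[1] / ssd_w_per_tb[0])  # ~10x, the ten-fold improvement
```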
QED. There is no debate here.
Endurance and failure management
Mechanical HDDs are more susceptible to failures than SSDs. SSD failures are also more predictable, allowing you to avoid data re-balancing due to unexpected failures. This means it is easier to plan system maintenance, and there is less overall work involved in managing the system.
Object stores like OpenIO SDS can do storage tiering, and use flash memory to store metadata. By using only one very efficient medium, the overall cluster design is simplified, and there is no need to configure RAM-based caching mechanisms to improve performance.
Data optimization and compute
One of the limitations of HDDs is the low number of IOPS available. Flash removes this limit, and you can offload many tasks to the storage system itself. This is part of OpenIO’s vision: integrating a serverless computing framework (like OpenIO’s Grid for Apps) into the object store makes it possible to run code directly next to the data and perform complex compute tasks without leaving the system.
There’s no doubt that $/GB still favors the HDD but, when you look at the overall picture, the real cost of flash is lower than expected, even for large scale-out systems. It also opens new markets for object storage and enables solutions like our Grid for Apps for applications that were unthinkable just a few years ago.
Have we reached a tipping point? Not yet. But last year we had a 15TB SSD, and there is now a 30TB option. If you look at HDD capacity, it moved only from 10 to 12TB. Unfortunately, I do not yet have the price of the 30TB drive, but the 12TB HDD costs roughly what the 10TB did last year ($500). The 15TB SSD was in the range of $10,000, and I’m quite sure that this one won’t cost twice that.
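For what it’s worth, here is the raw $/GB gap implied by last year’s figures; both prices are the rough numbers quoted above, not current list prices:

```python
# Rough $/GB from the figures above: 12TB HDD at ~$500, 15TB SSD at ~$10,000.
hdd_dollars_per_gb = 500 / (12 * 1000)       # ~$0.04/GB
ssd_dollars_per_gb = 10_000 / (15 * 1000)    # ~$0.67/GB

print(f"HDD: ${hdd_dollars_per_gb:.3f}/GB")  # HDD: $0.042/GB
print(f"SSD: ${ssd_dollars_per_gb:.3f}/GB")  # SSD: $0.667/GB
print(ssd_dollars_per_gb / hdd_dollars_per_gb)  # ~16x raw price gap
```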