
Why throughput matters in object storage and how we exceeded 1 Tbps on commodity servers.

To demonstrate the performance and scalability of OpenIO Object Storage, we recently deployed our solution on a cluster of more than 350 physical servers (thanks to Criteo Labs) and crossed the symbolic threshold of writing 1 terabit of data per second. We achieved this under real production conditions. This is the #TbpsChallenge!


The performance of storage systems in the era of data processing

In recent years, due to the exponential growth in the volume of data collected by companies and organizations, their choice of storage technology has often been guided by two criteria: the price per gigabyte, and scalability – the ability to easily increase the capacity of a storage platform to build a “data lake” without creating data silos.

The extraction of value from this data is becoming one of the main growth drivers for many companies. The concern today is less about optimizing the cost of archiving data, and more about making that data available for processing by voracious Machine Learning and Deep Learning algorithms.

When we talk about the performance of a storage system, there are three dimensions to consider: storage capacity, read/write bandwidth, and data access time, otherwise known as latency. For unstructured data, which now represents the bulk of the data held by companies, capacity and bandwidth are paramount.

It is not only necessary to “scale” one’s platform, but also to be able to write and consume data at an optimal rate; otherwise the computations that run on these data sets will be slowed down or interrupted, and the time spent loading data between computation tasks will limit how effectively the data can be exploited. Given the cost of a supercomputer (or, more commonly, a Hadoop computing cluster) – often billed by the minute – the throughput offered by a storage system can no longer be a secondary concern.

“Since the creation of OpenIO, we have placed the issue of performance front and center. Once the first challenge – large-scale storage – was resolved, it became clear that the data would be used more intensively than in the past. This is why we designed an efficient solution, capable of being used as primary storage for video streaming, or to serve increasingly large datasets for AI algorithms on Big Data / HPC clusters.” — Laurent Denel, CEO and co-founder of OpenIO


Setting a record on Criteo’s infrastructure

Criteo handles large volumes of data and uses advanced machine learning technology to provide effective advertising across all channels. To achieve this, Criteo built a unique Big Data platform with several thousand nodes. Pleased to support a French company that develops open source technology, Criteo Labs’ team of engineers graciously provided more than 350 machines that had just been racked in one of their data centers (Amsterdam) but not yet put into production. This was an invaluable opportunity, because this cluster of standard compute-class servers (2 Intel® Xeon® Gold 6140 CPUs, 384 GB of RAM, 1 SSD for the system, and 15 × 8 TB SATA disks per node) allowed us to reach an order of magnitude that is truly “hyper-scale”.
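
For a sense of scale, here is a quick back-of-the-envelope capacity calculation based on the hardware described above and the 14+4 erasure coding used for the benchmark. This is an illustrative sketch only, not an official capacity figure for the cluster:

```python
# Illustrative capacity estimate for the benchmark cluster.
# Node count, disks per node and disk size are taken from the article;
# the usable figure assumes the 14+4 erasure coding described below.

NODES = 352            # physical servers deployed for the benchmark
DISKS_PER_NODE = 15    # SATA data disks per server
DISK_TB = 8            # capacity of each data disk, in TB

EC_DATA, EC_PARITY = 14, 4   # 14+4 erasure coding

raw_tb = NODES * DISKS_PER_NODE * DISK_TB
usable_tb = raw_tb * EC_DATA / (EC_DATA + EC_PARITY)

print(f"Raw capacity:    {raw_tb / 1000:.1f} PB")     # ~42.2 PB
print(f"Usable capacity: {usable_tb / 1000:.1f} PB")  # ~32.9 PB
```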

How we did it

In concrete terms, we deployed OpenIO on 352 physical machines. Deployment on this scale is a major challenge in itself. Our goal was to copy data from one of Criteo’s data lakes, composed of 2,500 servers, to an OpenIO cluster hosted on the same infrastructure, with the machines interconnected by a mesh network. Criteo’s core network allowed us to saturate every machine’s network link (1 × 10 Gbps), which suggested that we could exceed the terabit mark. We first ran a unit test (against a single machine) to validate the configuration and confirm that its link saturated at 10 Gbps. We then triggered the load and added machines in batches of 50, demonstrating the perfectly linear scaling of the OpenIO cluster, until we reached a throughput of 1.372 Tbps. This means that OpenIO wrote 171 GB of data per second, a figure achieved with data protected by 14+4 erasure coding (a scheme that allows up to 4 servers in the cluster to be lost without any data loss).
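
The arithmetic behind these figures is straightforward. The sketch below is illustrative only: the peak throughput, node count and erasure coding scheme come from the benchmark described above, while the per-node and raw-write numbers are simple derivations, not separate measurements:

```python
# Illustrative arithmetic behind the headline numbers.
# Published figures: 1.372 Tbps aggregate, 352 nodes, 14+4 erasure coding.
# Derived values are back-of-the-envelope estimates.

PEAK_TBPS = 1.372            # aggregate write throughput reached
NODES = 352                  # servers in the OpenIO cluster
EC_DATA, EC_PARITY = 14, 4   # 14+4 erasure coding

payload_gb_s = PEAK_TBPS * 1000 / 8          # Tbps -> GB/s of payload data
per_node_gbps = PEAK_TBPS * 1000 / NODES     # average payload rate per node
raw_gb_s = payload_gb_s * (EC_DATA + EC_PARITY) / EC_DATA  # incl. parity chunks

print(f"Payload written:     {payload_gb_s:.0f} GB/s")    # ~171 GB/s
print(f"Per-node average:    {per_node_gbps:.1f} Gbps")   # ~3.9 Gbps on 10 Gbps links
print(f"Written with parity: {raw_gb_s:.0f} GB/s")        # ~220 GB/s on disk
```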

“Criteo is pleased to have been a part of this benchmark, which gave OpenIO the opportunity to demonstrate that their technology can maintain consistent high performance while scaling massively, without loss of performance when adding new nodes. They were able to achieve a write rate close to the theoretical limits of the hardware we made available.” — Stuart Pook, Senior Site Reliability Engineer at Criteo

#TbpsChallenge

The performance that OpenIO achieved with Criteo is a record in the sense that no other object storage technology has, to date, claimed and demonstrated such throughput under production conditions and on standard hardware (commodity servers). Sectors such as medical research and the automotive industry (for the development of autonomous cars) already need data rates like these to train their algorithms and build models on datasets exceeding 100 petabytes.

“First of all, we want to solve the problem: ‘How do you compute an AI model when the dataset size exceeds 100 PB?’ It’s a must for the driverless industry and for pharma / DNA / genomics.” — Octave Klaba, Founder & Chairman of OVH (@olesovhcom), Sep 1, 2019
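
To put such throughput in perspective, here is a rough, purely illustrative calculation: streaming a 100 PB dataset end to end at the 1.372 Tbps measured during the benchmark would take on the order of a week. This assumes reads could be sustained at the same rate as the measured writes, which the benchmark itself does not claim:

```python
# Purely illustrative: time to stream a 100 PB dataset at the benchmark's
# aggregate throughput. Assumes sustained reads at the measured write rate.

DATASET_PB = 100
THROUGHPUT_TBPS = 1.372

dataset_bits = DATASET_PB * 1e15 * 8               # PB -> bits (decimal units)
seconds = dataset_bits / (THROUGHPUT_TBPS * 1e12)  # Tbps -> bits per second

print(f"{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")  # ~162 h, ~6.7 days
```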


So, with this record, OpenIO would like to launch the #TbpsChallenge and invite other market players to put their technology to the test! This gives companies a way to compare vendors on facts rather than marketing promises, in a sector where comparisons are particularly difficult, given the prohibitive cost of a test infrastructure powerful enough to run such benchmarks.

Get the Benchmark Report