OpenIO, a leading provider of hyper-scalable, open source object storage, today announced a record performance achievement with Criteo, the advertising platform for the open Internet. OpenIO demonstrated the performance and scalability of its object storage solution by deploying its technology on a cluster of more than 350 physical servers provided by Criteo. The benchmark crossed the symbolic threshold of writing one terabit of data per second, reaching a sustained useful throughput of 1.372 Tbps. This is the equivalent of digitally transferring the 22 million books of the world’s largest library, the U.S. Library of Congress, in under one minute! OpenIO’s performance is a record: no other object storage technology has so far claimed and demonstrated such high throughput at such a large scale.
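The comparison can be checked with simple arithmetic. Note that the ~10 TB figure used below for the digitized text of the Library of Congress book collection is an illustrative assumption on our part, not a number from the announcement:

```python
# Back-of-the-envelope check of the "22 million books in under a minute" claim.
# ASSUMPTION: ~10 TB of text across the 22 million books (a commonly cited
# rough estimate, not a figure stated in the announcement).

THROUGHPUT_TBPS = 1.372                      # observed useful throughput, terabits/s
throughput_Bps = THROUGHPUT_TBPS * 1e12 / 8  # convert to bytes/s: ~171.5 GB/s

LIBRARY_BYTES = 10e12                        # assumed ~10 TB for the book collection

seconds = LIBRARY_BYTES / throughput_Bps
print(f"{seconds:.1f} seconds")              # ≈ 58.3 seconds: under one minute
```

Under that assumption, the cluster would indeed move the whole collection in a little under a minute.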
“Since the creation of OpenIO, we have placed the issue of performance at the very center of what we do,” explains Laurent Denel, CEO and co-founder of OpenIO. “Once we solved the problem of hyper-scalability, it became clear that data would be manipulated more intensively than in the past. This is why we designed an efficient solution, capable of being used as primary storage for video streaming (as is the case with our customer Dailymotion), or to serve increasingly large datasets for Big Data use cases.”
In recent years, driven by the exponential growth in the volume of data collected by companies, choices of storage technologies have often been guided by two criteria: the price per gigabyte, and scalability, that is, the ability to easily increase the capacity of a storage platform to build a "data lake" without creating data silos.
It is now possible to extract value from this data, and this extraction is becoming one of the main growth drivers for many companies. The concern today is thus less to optimize the cost of archiving data than to make it available for processing by voracious Machine Learning and Deep Learning algorithms. After the era of Data Archiving and Data Sharing, we are entering the era of Data Processing.
The high performance that OpenIO achieved on Criteo machines under production conditions confirms that OpenIO's object storage software is designed for new data uses, in particular the massive exploitation of data by AI algorithms on Big Data / HPC clusters. Inaugurating the #TbpsChallenge with this record, OpenIO invites other market players to put their technology to the test (on commodity servers and tuned for production!).
“Criteo is pleased to have been a part of this benchmark, which gave OpenIO the opportunity to demonstrate how their technology can maintain consistently high performance while adding new nodes to scale massively,” said Stuart Pook, Senior Site Reliability Engineer at Criteo. “They were able to achieve a write rate close to the theoretical limits of the hardware we made available.”
Criteo handles large volumes of data and has built a unique Big Data platform with several thousand nodes. Criteo's team of engineers provided more than 350 machines from its infrastructure, recently racked in one of its datacenters but not yet put into production. This was an invaluable opportunity for OpenIO, because this cluster of commodity storage servers allowed them to operate at a scale that is truly ‘hyper-scale’.
Criteo (NASDAQ: CRTO) is the advertising platform for the open Internet, an ecosystem that favors neutrality, transparency and inclusiveness. Close to 2,900 Criteo team members partner with close to 20,000 customers and thousands of publishers around the globe to deliver effective advertising across all channels, by applying advanced machine learning to unparalleled data sets. Criteo empowers companies of all sizes with the technology they need to better know and serve their customers.
For more information, please visit www.criteo.com