Object Storage for Big Data
The fastest Object Storage
OpenIO delivers terabit-per-second performance: 171 GB/s (roughly 1.4 Tbps) of write throughput on standard hardware. It is the world’s fastest software-defined object store when running under real-life production conditions with erasure-coding data protection.
It is an ideal first-tier storage solution for a diverse set of workloads and a replacement for HDFS, and it can also work seamlessly alongside existing infrastructure as a warm archive tier.
A better replacement for HDFS
Object-based storage systems compatible with the S3 API have emerged to complement HDFS and overcome its limitations. Among them, few can sustain the high performance Hadoop requires of an HDFS replacement, and none can match OpenIO’s hyper-scalability: it scales instantly and easily, without practical limits and without slowing down.
With OpenIO you never face the pitfalls of rebalancing data (being forced to sacrifice either performance or data availability), nor do you have to plan capacity in advance (and absorb the wasted cost of unused capacity).
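In practice, moving a Hadoop workload from HDFS onto an S3-compatible object store mostly means swapping the filesystem URI scheme and pointing Hadoop’s s3a connector at the store’s endpoint. A minimal sketch of the path rewrite, where the bucket name and endpoint are hypothetical placeholders:

```python
# Sketch: rewriting hdfs:// paths as s3a:// paths for an S3-compatible
# object store. Bucket name and endpoint below are hypothetical examples.
from urllib.parse import urlparse

def hdfs_to_s3a(hdfs_uri: str, bucket: str) -> str:
    """Map an hdfs:// URI onto an s3a:// URI inside the given bucket."""
    parsed = urlparse(hdfs_uri)
    if parsed.scheme != "hdfs":
        raise ValueError(f"expected an hdfs:// URI, got {hdfs_uri}")
    return f"s3a://{bucket}{parsed.path}"

# Hadoop's s3a connector is then pointed at the object store,
# e.g. in core-site.xml (values are placeholders):
#   fs.s3a.endpoint          = http://s3.example.local:6007
#   fs.s3a.path.style.access = true

print(hdfs_to_s3a("hdfs://namenode:8020/warehouse/events", "datalake"))
# → s3a://datalake/warehouse/events
```

Because the S3 API is the only contract between Hadoop and the store, the same job definitions run unchanged once the URIs and connector settings are updated.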
More efficient than public clouds
OpenIO is hardware-agnostic, open-core software that can be deployed on-premises, or cloud-hosted for maximum flexibility. It is the ideal choice for repatriating data from public clouds and escaping the crippling costs of their “pay as you go” pricing models, which charge per request and per gigabyte of bandwidth.
With OpenIO you can optimize your storage architecture, stay free of vendor lock-in, and keep total control over your data.
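To see where “pay as you go” bills come from, here is a back-of-the-envelope sketch of the three usual meters: stored capacity, egress bandwidth, and request count. All rates are hypothetical placeholders, not any provider’s actual price list:

```python
# Back-of-the-envelope monthly cost under a pay-as-you-go pricing model.
# Every rate below is a hypothetical placeholder, not a real price list.
def monthly_cloud_cost(stored_gb, egress_gb, requests,
                       storage_rate=0.023,       # $/GB-month (hypothetical)
                       egress_rate=0.09,         # $/GB egressed (hypothetical)
                       request_rate=0.0000004):  # $/request (hypothetical)
    return (stored_gb * storage_rate
            + egress_gb * egress_rate
            + requests * request_rate)

# Example month: 100 TB stored, 20 TB read back out, 50 M requests.
cost = monthly_cloud_cost(100_000, 20_000, 50_000_000)
print(f"${cost:,.0f}/month")
# → $4,120/month
```

Note that egress and requests scale with how actively the data is used, which is exactly why read-heavy big-data workloads are often cheaper to run against an on-premises object store with a flat hardware cost.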
OpenIO's architecture is also unique in the way it scales as you add more machines: if bandwidth starts to run short, we are able to simply keep adding spinning disks.
We have improved our performance and reduced costs by 40% compared to the public cloud, while staying agile: the web host adds nodes when necessary to increase the platform’s capabilities.