Object Storage? Sure, but How Is OpenIO Different?
The object storage market is very crowded now, and the first question people ask me, whether they are customers, analysts, partners, or the press, is always the same: “What is OpenIO's differentiator?”
I couldn’t agree more with them, since most object storage systems look the same; they all seem to have a similar feature set:
- Scale-out? Check.
- Erasure coding? Check.
- Remote replication? Check.
- S3 and Swift support? Check.
- Software solution running on commodity hardware? Check.
- WORM, Encryption, hybrid tiering, file interfaces, and so on… All check!
It’s boring but, as I said, this is common now. If the object storage product you are evaluating doesn’t have all of these features… Well, you should look at something else.
While we are among the latest object storage companies to come to market, SDS is a very mature product, born more than 10 years ago. This is why it has all the features you would expect from an object storage system, and more.
Standing out from the crowd is even more important for us than for others. I think there are three main elements that set OpenIO SDS apart and make our solution future-proof.
1) Lightweight backend design
We aren’t fans of ARM processors and Raspberry Pis just because they’re cool. Well, they are cool, but there is much more to it than that.
A complete SDS installation can run on a single-core ARM CPU with 512 MB RAM. But it’s better to look at it this way: an SDS cluster can run on any type of hardware, from the smallest of devices up to the largest datacenter server. This is not only a huge differentiator, but it also brings incredible benefits for infrastructure design and IT strategy.
OpenIO SDS’s unique architecture can be at the core of any cloud, edge, or IIOT infrastructure, freeing IT architects from compatibility issues and complexities. The same identical products can be installed in a datacenter and in a small, remote infrastructure on embedded devices. Data availability and resiliency are never at risk, and, by leveraging the same technology, it is possible to build a comprehensive and seamless data plane which can cover any sort of data storage needs.
2) Flexibility

SDS isn’t based on a classic distributed hash table, like the vast majority of object stores. That technique makes it possible to build very scalable systems, but it imposes many constraints when it comes to changing the cluster configuration or balancing load. The performance impact of cluster expansions or node replacements can be massive, and it’s hard to get optimal load balancing from nodes with different capacities and CPU resources (which is not unusual after multiple cluster expansions). There are workarounds, of course, but at the expense of overall simplicity and ease of management.
OpenIO SDS is based on a different mechanism: a persistent, three-level, massively distributed directory, which has already proven to be very scalable (with production clusters counting more than 650 nodes and 20,000 servers), and without the limits and rigidity shown by others.
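To see why hash-based placement constrains cluster changes, here is a toy sketch (not OpenIO's actual implementation) contrasting the two approaches: with hash-based placement the location of an object is a pure function of its key, so adding a node silently remaps existing objects, while a directory simply records where each object was written.

```python
import hashlib

def hash_placement(key, nodes):
    """Hash-based placement: the node is computed from the key,
    so any change to the node list remaps existing objects."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

class Directory:
    """Directory-based placement: each object's location is recorded
    at write time, so the cluster layout can change freely."""
    def __init__(self):
        self.locations = {}
    def put(self, key, node):
        self.locations[key] = node
    def locate(self, key):
        return self.locations[key]

nodes = ["node-%d" % i for i in range(10)]
keys = ["object-%d" % i for i in range(10_000)]

# How many objects end up on a different node after adding one node?
before = {k: hash_placement(k, nodes) for k in keys}
after = {k: hash_placement(k, nodes + ["node-10"]) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved / len(keys):.0%} of objects remapped by adding one node")
```

With this naive modulo scheme nearly all objects remap; consistent hashing reduces the fraction to roughly 1/N, but data still has to move, and placement remains a function of the key rather than of the cluster's current state.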
Thanks to this approach, it has been possible to implement what we call Conscience, a set of algorithms that constantly monitor every single resource of the cluster to manage dynamic load balancing. The result is that the cluster is highly efficient at any scale, reacting immediately to configuration changes (or failures) and taking advantage of all available resources, without visible impact on performance, while simplifying overall management.
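The idea behind score-driven placement can be sketched in a few lines. This is an illustrative simplification, not Conscience itself: assume each node carries a health score (0–100) derived from live CPU, I/O, and capacity metrics, and new writes are steered toward nodes in proportion to their score.

```python
import random

random.seed(7)  # deterministic demo

# Hypothetical per-node scores (0-100) such as a Conscience-style
# agent could derive from live CPU, I/O, and capacity metrics.
scores = {"node-a": 90, "node-b": 40, "node-c": 0}  # node-c: failed or saturated

def pick_node(scores):
    """Weighted selection: zero-scored nodes are never picked, and
    healthier nodes attract proportionally more writes."""
    healthy = {n: s for n, s in scores.items() if s > 0}
    return random.choices(list(healthy), weights=list(healthy.values()), k=1)[0]

picks = [pick_node(scores) for _ in range(10_000)]
for node in scores:
    print(node, picks.count(node))
```

Because scores are refreshed continuously, a node that fails or fills up simply stops receiving writes, and a newly added node starts attracting load right away, with no explicit rebalancing step.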
3) Grid for Apps
The first two differentiators lead to the third. The extreme efficiency introduced by the lightweight design and Conscience technology allowed us to take advantage of unused resources and run applications, triggered by events, directly on the storage infrastructure.
Every action performed on an object (PUT, GET, DELETE, etc.) creates an event that is intercepted by Grid for Apps and used to trigger a specific task (or App, or function) on the object itself (its data and/or metadata). There are many use cases for this, and efficiency is boosted to unprecedented levels.
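A minimal sketch of this event-driven pattern is shown below. The event shape and handler registry here are illustrative assumptions, not OpenIO's actual API: a function is registered for an event type, and the storage layer dispatches every object event to the matching handlers.

```python
# Registry mapping event types (e.g. "object.put") to handler functions.
HANDLERS = {}

def on(event_type):
    """Register a function to run when a storage event of this type fires."""
    def register(fn):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("object.put")
def index_metadata(event):
    # Example task: push the new object's metadata into a search index.
    return f"indexed {event['object']}"

def dispatch(event):
    """Called by the storage layer for every object event."""
    return [fn(event) for fn in HANDLERS.get(event["type"], [])]

results = dispatch({"type": "object.put", "object": "photos/cat.jpg"})
print(results)
```

The key point is that the function runs on the storage node that already holds the data, so the object does not have to be shipped to a separate compute tier first.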
This computing framework doesn’t need any additional hypervisors, containers, orchestrators, or whatever. Just applications and data, simplifying the stack, making the most of available resources and allowing a different approach when it comes to next-generation applications like machine learning and Industrial IoT.
To sum up, our differentiators are a lightweight backend, flexibility, and Grid for Apps. But calling them differentiators is reductive; they are actually enablers.
- OpenIO SDS’s lightweight design allows our technology to be deployed in any sort of environment (from a large datacenter to a remote set of small devices). And it connects all these infrastructures, seamlessly.
- OpenIO SDS’s flexibility makes it easy to manage all these infrastructures while taking advantage of all available resources efficiently.
- OpenIO Grid for Apps makes it possible to run applications directly on the storage system, simplifying application development and deployment for edge computing, and optimizing data access and reducing complexity in large storage-driven and big data applications.