If you look at the object storage landscape today, you'll find that most available solutions are able to do what you expect from an object store. Scalability, data protection, replication, S3 compatibility, and many other basic features are taken for granted now. While some systems implement a specific feature better than others, the differences are minimal, and various object storage systems are adopting the same feature set across the board.
The problem is exactly this: they all look alike. Based on similar concepts, and with a similar rigidity, they have the same limitations when it comes to deploying, managing, and scaling an infrastructure, and the results are nearly always comparable.
First, the overall TCO is higher than expected, and the constraints impose infrastructure design choices that are not always aligned with the evolution of a specific business. Second, in the real world, scalability is much more of an issue than you may think. It's not that these systems don't scale, but every time you introduce new resources into the system, everything has to be thoroughly planned in advance, and each expansion has an impact on performance.
Overcome the limitations of traditional object stores
OpenIO has some unique characteristics that set our solution apart from others on the market. It overcomes the limitations of traditional object stores while providing the same, or better, functionality.
The lightweight design of OpenIO SDS, capable of running on a single CPU core and 512MB of RAM, coupled with the flexibility provided by Conscience technology, makes it easy to manage cluster resources very efficiently.
Flexibility, a synonym of freedom of choice in this case, allows our customers to build all-flash, all-disk, and hybrid configurations, starting with 3 nodes and growing to hundreds, while retaining the ability to add a single hard disk to a node if necessary. Users can mix any combination of ARM and x86 nodes in the same cluster, and performance is usually better than the competition's, because a small footprint means the system wastes fewer resources on its own overhead. The flexibility that comes from supporting heterogeneous hardware is made possible by Conscience technology and the dynamic load balancing mechanism it provides.
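To make the dynamic load balancing idea concrete, here is a minimal sketch of score-based node selection: each node advertises a health score, and requests are routed with probability proportional to that score, so a slower or fuller node naturally attracts less traffic. This is an illustration of the general technique, not the actual Conscience implementation; the function name, the score range, and the example node names are assumptions.

```python
import random

def pick_node(nodes):
    """Pick a storage node with probability proportional to its score.

    `nodes` maps node id -> health score (here assumed 0-100); higher
    scores attract more traffic, so a loaded node is picked less often.
    """
    total = sum(nodes.values())
    if total == 0:
        raise RuntimeError("no healthy node available")
    r = random.uniform(0, total)
    for node, score in nodes.items():
        r -= score
        if r <= 0:
            return node
    return node  # fallback for floating-point rounding at the boundary

# Hypothetical cluster mixing x86 and ARM nodes: the busy ARM node
# advertises a lower score, so it receives proportionally fewer requests.
cluster = {"x86-node-1": 90, "x86-node-2": 85, "arm-node-1": 30}
counts = {n: 0 for n in cluster}
for _ in range(10_000):
    counts[pick_node(cluster)] += 1
```

Because selection follows the advertised scores rather than a static placement plan, adding a single disk or a whole node simply changes the score table; no cluster-wide rebalancing has to be scheduled up front.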
It's a totally new way of thinking about an object storage cluster and how it works. Services and features are quite similar to what you can expect from the competition but, again, it's the simplicity, efficiency, performance, ease of use, and overall flexibility that sets us apart.
OpenIO SDS is an open source software solution that runs on commodity hardware. If you have a modern datacenter server, and your storage software needs only one core and 512MB of RAM to run, what do you do with the rest of the available resources? Why not use them to offload compute tasks from the rest of the infrastructure? This is exactly why we use the word "serverless".
Grid for Apps is an event-driven compute framework that works on top of OpenIO SDS. It intercepts all the events that happen at the storage layer, and it can trigger specific applications or scripts to act on the data (and metadata) stored in the object store. By consolidating data and applications on the storage infrastructure, you save on external servers and have fewer components to manage.
There were a number of words that could describe this, but serverless is the most appropriate. Grid for Apps is also very similar to what you get from AWS S3+Lambda in the public cloud.
Our customers are already using Grid for Apps for applications such as metadata enrichment, data indexing and search, pattern recognition, machine learning, data filtering, video transcoding, and so on. Grid for Apps simplifies many operations and workflows, and is very easy to adopt. So serverless could mean that you need fewer servers to do the same job; or, to put it another way, you don't need external servers to run a lot of tasks.
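The event-driven model above can be sketched as a handler that receives a storage-layer event and returns an action, here a metadata-enrichment tag for newly created images. This is a hypothetical illustration of the pattern: the function name, the event dictionary layout, and the `x-category` key are assumptions, not the actual Grid for Apps API.

```python
def on_event(event):
    """React to a storage-layer event (hypothetical event shape).

    `event` is assumed to be a dict such as
    {"type": "object.created", "object": "photo.jpg"}.
    Returns a metadata update to apply, or None when no action is needed.
    """
    if event.get("type") != "object.created":
        return None
    name = event["object"]
    # Example enrichment: tag image objects so a later indexing
    # job can find them without scanning the whole namespace.
    if name.lower().endswith((".jpg", ".png")):
        return {"object": name, "metadata": {"x-category": "image"}}
    return None

# Usage: a created image triggers a tag; a deletion triggers nothing.
print(on_event({"type": "object.created", "object": "photo.JPG"}))
print(on_event({"type": "object.deleted", "object": "photo.JPG"}))
```

The point of the pattern is that the handler runs on the spare cores of the storage nodes themselves, next to the data it touches, instead of on a separate compute tier.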
The first goal of any IT professional is to avoid complexity. OpenIO SDS simplifies the life of developers and system administrators while contributing to lower TCO. By adapting OpenIO SDS to our customers' requirements, we provide flexibility and efficiency. And this is also why we have a great success rate when we do PoCs with potential customers. We are different at the core.
The same goes for serverless computing. The small resource footprint needed to run OpenIO SDS allowed us to build more around our core and take advantage of all the resources available in the cluster. This improves efficiency (less data moving back and forth) and saves money (no external resources needed to run compute tasks that can be offloaded to the storage infrastructure). Once again, we reduced overall infrastructure complexity while doing more with less.