The market is changing quickly, and many analysts and journalists agree that data storage is getting commoditized. Many vendors have changed their approach and become cloud gateways rather than proper storage systems. I understand why, and there are several valid reasons for the change, but OpenIO has a different story to tell.
Storage is a commodity
Not everybody needs scale-out storage. File systems are still fine in 2017; we have a file system option too, and it is in high demand among smaller customers. And if you need to store hundreds of terabytes of data, it is highly likely that a scale-up NAS will still do the trick. It is also true that this depends on the use case, because if your 300TB is made up of 10 billion files, object storage is probably what you really need, especially if these files must be accessed over the internet. But if you don’t have the right technology, building a scale-out cluster for 300TB of data could be very expensive. It's all about constraints and limitations. You’ll eventually end up looking at the cloud because it’s just easier (and cost effective). Fortunately for us, we don’t have the limitations of traditional object stores, which is why many customers have migrated to OpenIO after trying the public cloud.
My point is that data storage is a commodity, and people look for the best $/GB. They like to get a good price for what they buy (TCA), and only after a while do they really understand what the real cost is (TCO). Most traditional object stores have failed because they can't provide good TCO over time. They are complicated, they are rigid, and they do not scale as easily as promised.
But even a good TCO is no longer enough. Storing data safely is commonplace now. Doing it at scale is a bit more complicated, but still, $/GB is the most important metric. We worked hard to make our solution affordable while keeping an eye on long-term TCO, and we know that this is just the first step toward increasing our market share.
Data is not a commodity
One of the main limits of traditional storage solutions is that they just store data. They don't do anything with it, and that’s a shame.
When I was an independent analyst, I usually referred to "Flash & Trash" when talking about primary and secondary storage. Flash systems stored primary data, and all the rest ended up in large scale-out repositories. This model still exists, but people now understand that by piling up huge amounts of trash you end up with a landfill. Do you like landfills? Probably not; they don't produce any value, and they damage the entire ecosystem.
We have a different approach. Some time ago, thanks to the intrinsic characteristics of OpenIO, the team started designing a basic serverless computing feature. The goal was to take advantage of unused cluster resources, made available by the small CPU and RAM footprint of our object store (let me remind you that OpenIO can run on a single ARM CPU core and 400MB of RAM). This feature was later improved and became a full-fledged product called GridForApps.
Going back to the "Flash & Trash" paradigm, you can think about GridForApps as a tool to convert trash into value.
The mechanism is very simple, and easy to adopt at any level for developers as well as sysadmins. OpenIO generates events for everything that happens in the system (PUT, GET, UPDATE, DELETE, etc.), and these events are intercepted by GridForApps, which can trigger functions accordingly.
A function is a relatively small bit of code associated with some configuration information, such as its name, description, and resource requirements. The code must be written in a stateless style, assuming no affinity with the underlying compute infrastructure; any persistent state should be stored in an object store or an external DB service.
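To make the idea concrete, here is a minimal sketch of such a stateless, event-driven function in Python. The event schema, field names, and handler signature are all illustrative assumptions, not the actual GridForApps API; the point is only that everything the function needs arrives in the event payload, and anything persistent goes back to the object store or an external service.

```python
# Hypothetical sketch of an event-triggered function.
# The event layout ("type", "container", "object") is an assumption for
# illustration, not the real GridForApps event schema.
import json


def handle_event(event: dict) -> dict:
    """Stateless handler: all context comes from the event payload;
    no local state survives between invocations."""
    event_type = event["type"]  # e.g. "PUT", "GET", "UPDATE", "DELETE"
    obj = event["object"]

    if event_type == "PUT":
        # React to new data, e.g. extract metadata or enqueue processing.
        return {"action": "indexed", "object": obj}
    if event_type == "DELETE":
        # Clean up derived artifacts tied to the removed object.
        return {"action": "cleanup", "object": obj}
    # Other event types are simply ignored by this function.
    return {"action": "ignored", "object": obj}


# Example event as the notification pipeline might deliver it.
sample = {"type": "PUT", "container": "videos", "object": "cat.mp4"}
print(json.dumps(handle_event(sample)))
```

Because the handler is a pure function of its input event, the platform is free to run it on whichever node has spare CPU and RAM at that moment.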
We don't just store data; we can understand it and create value from it.
This is a powerful mechanism, and it can be used in a variety of ways: not just the use cases I have mentioned many times in other articles, but also processes such as intelligent data tiering. The system can inspect any file stored in it, and if it finds no useful information, that file will probably never be accessed again. Why not compress such files and push them to the cloud to take advantage of the good $/GB of a service like Glacier? It can be done with a few lines of code in a simple function!
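A tiering function along those lines could look like the sketch below. Everything here is an assumption for illustration: the 90-day age threshold, the `tier_if_cold` name, and the `upload_to_archive` placeholder standing in for a real cold-storage client such as a Glacier uploader.

```python
# Illustrative data-tiering function: compress objects that have gone cold
# and push them to a cheap archive tier. The threshold and the archive
# client are hypothetical stand-ins, not real OpenIO or AWS APIs.
import gzip
import time

ARCHIVE_AGE_SECONDS = 90 * 24 * 3600  # e.g. untouched for 90 days


def upload_to_archive(name: str, payload: bytes) -> None:
    # Placeholder for the real cold-tier client (e.g. a Glacier upload).
    print(f"archived {name} ({len(payload)} bytes)")


def tier_if_cold(name: str, data: bytes, last_access: float, now=None) -> bool:
    """Compress and archive an object that has not been read recently.
    Returns True if the object was pushed to the archive tier."""
    now = time.time() if now is None else now
    if now - last_access < ARCHIVE_AGE_SECONDS:
        return False  # still warm: leave it in the main cluster
    upload_to_archive(name + ".gz", gzip.compress(data))
    return True
```

Wired to the DELETE-free read events above, such a function could sweep the cluster continuously, turning idle CPU cycles into lower storage bills.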
This is just one example showing that OpenIO is thinking beyond $/GB. We started talking about increasing $/Data when we discovered and improved the value stored in our systems.
Data storage is not dead, but traditional storage certainly is. If you stick with $/GB or $/IOPS without building value on top of it, you'll go belly up! This is history repeating itself; does Violin Memory ring a bell?
If you look at a traditional object store or a flash array without data services, they are just the same: one is fast and the other is capacious, but at the end of the day they are both commodities.
One more thing… We have seen vendors try to virtualize storage many times in the past, SAN first and NAS later. To me, all these multi-cloud storage controllers look very much like object storage virtualization. Why should it work this time?
OpenIO is different. We believe that the best $/GB is the starting point, but creating real value for customers (or more $/data) is the key.