Blocks, Files and Object Storage Compared
Hammers and nails
Part of my role here at OpenIO is to educate and inform users about our solutions, as well as object storage in general. But, even today, I find the saying, “if all you have is a hammer, everything looks like a nail,” to be very appropriate.
It’s not uncommon to find end users adopting the wrong technology for new projects just because they don’t know there are better alternatives. And even though I could agree that, in some cases, it is less expensive to stick with what you already know instead of changing applications or investing in new skills, a new mindset can sometimes lead to better efficiency and greater savings very quickly.
I think the table below sums up the differences between the storage technologies you can find in the market today.
The most important part of the table is not scalability (object storage is usually considered the most scalable of the storage systems) or performance, but the supported protocols and the data/access ratio.
The right tool for the right job
If data is accessed only locally, in one data center, on a single server, object storage doesn't make much sense. But if data is accessed from everywhere, by many devices at the same time, over HTTP, then delivering it through a web server running in a VM backed by an all-flash array makes no sense at all. Does it?
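The contrast above can be sketched in a few lines of Python. The "object store" here is simulated by a throwaway local HTTP server, just to show the difference in access models; in a real deployment the URL would point at an S3-compatible gateway, and the bucket and object names below are purely illustrative.

```python
# Sketch: file access requires sharing a filesystem with the data;
# object access only requires that the client speaks HTTP.
import http.server
import os
import tempfile
import threading
import urllib.request

# --- file access: client and data must share a filesystem ---
path = os.path.join(tempfile.gettempdir(), "demo-object.txt")
with open(path, "w") as f:
    f.write("hello from a file")
with open(path) as f:
    local_data = f.read()

# --- object access: any HTTP client, anywhere, can fetch the data ---
class ObjectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from an object"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ObjectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/bucket/demo-object"
with urllib.request.urlopen(url) as resp:
    remote_data = resp.read().decode()
server.shutdown()

print(local_data)
print(remote_data)
```

The point is not the code itself but the topology: the second half works identically whether the client runs next to the server or on a phone on another continent.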
This is an important concept for us at OpenIO. Not all storage infrastructures are at the petabyte scale, but more and more data needs to be accessed from everywhere. And this is why OpenIO SDS can start out very small, with a minimal, cost-effective, 3-node cluster, and grow from there.
Scalability and multi-tenancy come next. Let me share a second slide here, which highlights the fact that object storage is the best platform for consolidating all data and non-latency-sensitive workloads. By doing so, your organization can get the best $/GB on the market, and save money by offloading secondary data from other storage systems, improving overall efficiency and TCO.
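A back-of-the-envelope calculation shows how this consolidation translates into savings. Every figure below is a hypothetical assumption for illustration, not OpenIO pricing or a benchmark.

```python
# Hypothetical tiering math: what happens to storage spend when the
# non-latency-sensitive share of data moves to an object tier.
primary_cost_per_gb = 0.50   # assumed all-flash $/GB (illustrative)
object_cost_per_gb = 0.03    # assumed object-store $/GB (illustrative)
total_gb = 100_000
secondary_fraction = 0.70    # assumed share of data that is not latency-sensitive

all_primary = total_gb * primary_cost_per_gb
tiered = (total_gb * (1 - secondary_fraction) * primary_cost_per_gb
          + total_gb * secondary_fraction * object_cost_per_gb)

print(f"all-primary: ${all_primary:,.0f}")
print(f"tiered:      ${tiered:,.0f}")
print(f"savings:     {100 * (1 - tiered / all_primary):.0f}%")
```

With these assumed prices, moving 70% of a 100 TB estate to an object tier cuts the bill by roughly two thirds; the exact ratio obviously depends on the real $/GB figures and on how much of your data is truly secondary.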
Once again, we think OpenIO SDS stands out from the crowd because of its flexibility, allowing end users to expand the cluster very quickly with heterogeneous hardware and without the need for resource rebalancing, thanks to our Conscience technology.
In the past, object storage was a technology available only to end users with multi-PB installations (at least from the majority of our competitors). Things have changed, and while object storage still remains the best option for large infrastructures, it is also worth considering for small deployments aimed at serving all non-latency-sensitive applications, and for data that needs to be accessed from everywhere.
The size of the infrastructure is no longer a deciding factor, and that's why we think "It's time for object storage!"
Want to know more?
OpenIO SDS is available for testing.