Enrico Signoretti

Comparing blocks, files and objects

On Thursday, April 6, we will be hosting our first webinar, “It’s time for object storage!” I don’t want to give out too many spoilers, but there’s a slide that I want to share with you in advance.

Hammers and nails

Part of my role here at OpenIO is to educate and inform users about our solutions, as well as object storage in general. But, even today, I find the saying, “if all you have is a hammer, everything looks like a nail,” to be very appropriate.

It’s not uncommon to find end users adopting the wrong technology for new projects just because they don’t know there are better alternatives. And even though, in some cases, it is less expensive to stick with what you already know than to change applications or invest in new skills, a new mindset can sometimes lead to better efficiency and greater savings very quickly.

I think this slide sums up the differences between the storage technologies you can find on the market today. The most important part of the table in the slide is not scalability (object storage is usually considered the most scalable of the storage systems) or performance, but protocols and the data/access ratio.

The right tool for the right job

If you are accessing data only locally, in a data center, on a single server, object storage doesn’t make much sense. But if data is accessed from everywhere, by many devices at the same time, and via HTTP protocols, delivering it through a web server installed in a VM accessing an all-flash array doesn’t make sense at all. Does it?
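
To make the HTTP access pattern concrete, here is a minimal sketch of what "data accessed from everywhere" looks like in practice, assuming an S3-compatible endpoint such as the gateway OpenIO SDS can expose. The endpoint URL, credentials, bucket, and object names below are hypothetical placeholders, not values from the product or the webinar.

```python
# Minimal sketch: read and write an object over plain HTTP through an
# S3-compatible endpoint. Endpoint, credentials, bucket and key names
# are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://object-gateway.example.com:6007",  # hypothetical gateway address
    aws_access_key_id="demo-access-key",
    aws_secret_access_key="demo-secret-key",
)

# Any client, anywhere, can PUT an object with an HTTP request...
with open("intro.mp4", "rb") as f:
    s3.put_object(Bucket="media", Key="videos/intro.mp4", Body=f)

# ...and any other device can GET it back over HTTP, with no shared
# filesystem mount or LUN mapping involved.
obj = s3.get_object(Bucket="media", Key="videos/intro.mp4")
data = obj["Body"].read()
```

The point of the sketch is the access model, not the library: the storage itself speaks HTTP, so there is no need for a web server in a VM sitting in front of an all-flash array just to hand data to remote clients.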

This is an important concept for us at OpenIO. Not all storage infrastructures are at the petabyte scale, but more and more data needs to be accessed from everywhere. And this is why OpenIO SDS can start out very small, with a minimal, cost-effective, 3-node cluster, and grow from there.

Scalability and multi-tenancy come next. Let me share a second slide here, which highlights the fact that object storage is the best platform to consolidate all data and non latency-sensitive workloads. By doing so, your organization can get the best $/GB on the market and save money by offloading secondary data from other storage systems, improving overall efficiency and TCO.

Once again, we think OpenIO SDS stands out from the crowd because of its flexibility, allowing end users to expand the cluster very quickly with heterogeneous hardware and without the need for resource rebalancing, thanks to our Conscience technology.

Takeaways

In the past, object storage was a technology available only to end users with multi-PB installations (at least for the majority of our competitors). Things have changed, and while object storage still remains the best option for large infrastructures, it’s also worth considering for small deployments aimed at serving all non latency-sensitive applications, and for data that needs to be accessed from everywhere.

The size of the infrastructure is no longer a barrier; for example, our base license starts at 100TB (usable), and the hardware configuration depends only on user needs.

That is also why we think “It’s time for object storage!” Join us for the webinar to learn more.

Want to know more?

OpenIO SDS is available for testing in three different flavors: a Docker image, a simple ready-to-go virtualized 3-node cluster, and a version for the Raspberry Pi.

Stay in touch with us and our community through Twitter, our Slack community channel, GitHub, and our web forum to receive the latest info, get support, and chat with other users.

And remember, on April 6 we are hosting our first monthly webinar. Use this link to register for “It’s time for object storage!”.