Object Storage Is for Everyone; It’s Time to Adopt It
Why should you use object storage? How difficult is it to get started with it? Where can you begin? And how can you limit your initial investment and get the best ROI? The last question is the most important one, and you’ll find the answer at the end of this article.
Why object storage?
I’m a big fan of object storage. I evangelized about it for years, and I ended up joining an object storage startup because I believe it is the future of data storage. So it’s obvious to me why and when you should adopt it: object storage is for everyone.
Only a small part of our data needs to reside on low-latency arrays (ideally all-flash). This type of storage system gives you consistent performance (not just low latency), and your active and structured data are much happier there, as are your users.
But the rest of the world is made up of unstructured data, and workloads less sensitive to latency, such as archives and backups. In addition, a lot of data is now accessed remotely from a variety of devices connected over the internet.
What works well for a server or a local file system accessed by a few PCs is inefficient when you have to work remotely over HTTP, with low bandwidth and intermittent connections: you end up with subpar services and a poor user experience.
Here’s an example: think about an organization with remote offices. All of them need local file services, and some users want to sync and share data too. The IT manager wants to maintain control over access and data while granting backup and DR, as well as keep costs down. This is a common scenario today, one that is perfect for object storage. With object storage in your datacenter (perhaps with data replicated in the cloud or at a secondary datacenter), it is possible to easily deploy sync & share and remote NAS gateways. And this is just the beginning.
Once the object storage system is in place, it becomes easy to consolidate secondary workloads and additional data; and you can then start to think about next-gen applications.
On prem or in the cloud?
Not all object stores are identical. Cloud-based object storage, for example, is good and inexpensive at the beginning, because of the pay-as-you-go model and the fact that you don’t need to have your own infrastructure. On the other hand, you pay for bandwidth, and it is probable that the data is stored in a remote datacenter. (Even if that doesn’t bother you, local laws and regulations could require that you store it locally).
You need to plan carefully before choosing the technology that fits you best. If you plan to store lots of data, on-premises object stores are better. Migrating after you have stored hundreds of terabytes takes a lot of time and bandwidth, and can be costly.
How to choose the right object store?
Does object storage need large capacity to start? No. Today you can start with three nodes and just a few TB of capacity, distributed across two or three datacenters for optimal resiliency and availability. You can easily grow from there to multi-petabyte installations by adding nodes. There are clearly limits and constraints, but some solutions are much more flexible than others. Protocols are no longer an issue, since most object storage products on the market offer both S3 and Swift APIs.
You need to find a solution that is flexible and easily adaptable to future needs; one that allows you to sustain growth in capacity and throughput. This may be easier than you think.
How to spend little money and get the best ROI with OpenIO?
From my point of view, OpenIO SDS is the most flexible solution on the market. There are several technical reasons for this, but this isn’t the post where I’ll go deep into the technology (you can read how we do our magic in this document). My goal right now is to show how easy it is to install our product, and how little it costs to get it up and running and see results.
Flexibility is the key. Assuming you want a highly available and resilient configuration, we will begin with a 3-node cluster. These three nodes can be virtual (no additional hardware required), physical, or a mix of both.
For physical nodes, I always suggest that small configurations like this start with decommissioned servers (for example, you could reuse old VMware servers). The servers don’t need to be identical (this is one of the advantages of our architecture design), and their CPU and RAM will usually be sufficient, given what the servers were doing before. Adding capacity to these nodes shouldn’t be a challenge either.
Key point: with OpenIO SDS, the initial hardware investment is minimal or even zero.
Once the servers are installed with one of the major Linux distributions (Ubuntu, Debian, CentOS), you have two options: you can opt for the open source solution (all the basic features are included), or you can subscribe to a support plan.
Key point: the cost for a simple OpenIO SDS cluster, with an easy-to-use UI and 24×7 support, is in the range of $0.05/GB/year.
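To put that figure in perspective, here is a quick back-of-the-envelope calculation based on the ~$0.05/GB/year number above (the capacities are arbitrary examples, and 1 TB is counted as 1,000 GB):

```python
# Rough annual-cost estimate at the quoted ~$0.05/GB/year rate.
COST_PER_GB_YEAR = 0.05  # figure quoted in the article

def annual_cost_usd(capacity_tb: float) -> float:
    """Annual cost for a given usable capacity, counting 1 TB = 1000 GB."""
    return capacity_tb * 1000 * COST_PER_GB_YEAR

for tb in (10, 100, 500):
    print(f"{tb:>4} TB -> ${annual_cost_usd(tb):,.0f}/year")
# e.g. 100 TB works out to about $5,000/year at this rate
```

So even a 100 TB cluster with full support lands around $5,000 per year, before hardware, which on reused servers can be close to zero.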
Have you seen how easy it is to install our solution? It just takes minutes.