July is a very hot month, and temperatures outside are around 30°C. When it is that hot, engineers come up with the craziest ideas. The other day, some of our engineers launched an interesting side project. They wanted to demonstrate how far we could go with our ability to reuse old hardware to run OpenIO.
Bringing old hardware to new life
Here at OpenIO we have a corner in our storeroom where all old hardware ends up. There are old PCs and accessories, as well as hard disks and other components that are no longer good enough for our developers or for the lab. It can be problematic to get rid of it, since it is a special type of waste and there are strict rules for recycling it. The best way to solve the problem is to store everything in one place and wait as long as you can before going to the local recycling center. But this time another idea came to mind.
Following the idea of zero-waste initiatives that are popping up all around Europe, our engineers wanted to try to recycle hardware internally by reusing it. We promote the fact that we can run on heterogeneous hardware, on the smallest of the computing devices, and we highlight how our solution is lightweight and efficient. So why not prove this once again with an internal challenge?
Two teams, one working with x86 PCs and the other with ARM single-board computers (SBCs), have started building two separate clusters out of recycled hardware, and the competition is heating up!
In the coming weeks I'll update you on the achievements and benchmark results. But this also gives me the opportunity to talk about the advantages of this approach in real-world environments.
Software-defined data protection and availability
When you start to think about the way modern software-defined storage works, it is clear that data protection has moved up in the stack from hardware to software. You don't care that much about hardware reliability, because data integrity and protection are managed by software, which can handle multiple, frequent system failures and improve overall system availability. Distributed erasure coding, for example, makes it possible to sustain a large series of system failures including disk, network, node, or even entire data centers!
With correct cluster design and implementation, hardware resiliency becomes almost irrelevant, and you can easily compensate for quality with quantity. This also means it is possible to build an entire storage infrastructure with limited support services: instead of four- or eight-hour support contracts, all you need is part replacement. The savings can be huge, and maintenance visits to the datacenter can be scheduled just once or twice per month to swap out failed parts. This is the model now commonly adopted by most hyperscalers: failures happen, but the system is resilient enough to wait days or weeks before human intervention is needed to replace a part. Taking this approach to its extreme, you could even let a node lose components over time without replacing them, and switch it off only when its remaining resources are too limited to justify its existence.
OpenIO has a unique feature called ConsciousGrid technology that allows it to perform dynamic data placement and load balancing, making it possible to use heterogeneous hardware seamlessly and without any drawbacks.
ConsciousGrid is a distributed service that computes a quality score for each cluster resource in real time and decides where to put data accordingly. By doing so, an OpenIO cluster is continuously optimized, and each write or read operation is always performed on the most available resource in accordance with the protection policies in place. In other words, each node in the cluster contributes up to 100% of its available resources to overall cluster performance, no matter its size, capacity, CPU, or Ethernet connectivity. This brings some important benefits:
- OpenIO allows end users to pick the hardware configuration that best fits their business needs, when they need it.
- New and old hardware can be mixed together in the same cluster without affecting performance.
- The customer can deploy recycled hardware and expand the cluster later with new hardware.
- By building a cluster with recycled hardware it is possible to save on hardware and support; one can start quickly, get excellent performance, and test the efficiency of object storage with a minimal investment and the best $/GB.
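Score-based placement of this kind can be sketched in a few lines. The node metrics, weights, and the `quality_score`/`pick_targets` helpers below are illustrative assumptions for the sake of the example; OpenIO does not publish ConsciousGrid's actual scoring formula, and this is not its API.

```python
# Illustrative sketch of score-based data placement across heterogeneous
# nodes. Weights and metric names are assumptions, not ConsciousGrid's
# real formula.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_space: float   # fraction of capacity still free, 0..1
    cpu_idle: float     # fraction of CPU headroom, 0..1
    io_idle: float      # fraction of disk/network bandwidth headroom, 0..1

def quality_score(node):
    """Blend capacity and load headroom into a single 0..100 score."""
    return 100 * (0.5 * node.free_space
                  + 0.25 * node.cpu_idle
                  + 0.25 * node.io_idle)

def pick_targets(nodes, copies=3):
    """Place each chunk on the highest-scoring nodes allowed by the policy."""
    ranked = sorted(nodes, key=quality_score, reverse=True)
    return [n.name for n in ranked[:copies]]

# A mixed cluster: old PCs, an ARM SBC, and one newer x86 server.
cluster = [
    Node("old-pc-1",  free_space=0.9, cpu_idle=0.6, io_idle=0.7),
    Node("arm-sbc-1", free_space=0.4, cpu_idle=0.8, io_idle=0.5),
    Node("new-x86-1", free_space=0.7, cpu_idle=0.9, io_idle=0.9),
    Node("old-pc-2",  free_space=0.2, cpu_idle=0.3, io_idle=0.4),
]
print(pick_targets(cluster, copies=3))
# → ['new-x86-1', 'old-pc-1', 'arm-sbc-1']
```

Because the scores are recomputed continuously, a loaded or nearly full node simply receives less new data, which is what lets new and recycled hardware coexist in the same cluster without the slow machines dragging down the fast ones.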
Bringing old hardware back to life without spending money on support contracts, while getting the best performance, availability, resiliency, and capacity out of it, is the dream of anyone who works in a datacenter. With OpenIO this is possible.
The open source version of OpenIO is available for free download, and this is the same version we support with our subscription plans. This means that building an object store could come with no initial investment and with the same features and reliability you would expect from other, much more expensive, solutions available on the market.
OpenIO is the most flexible object store on the market, and supports clusters built with any type of hardware. It provides high and consistent performance thanks to ConsciousGrid technology and its seamless data placement and load balancing mechanisms, making it possible to use different types of hardware in the same cluster.
By adopting OpenIO it is possible to start with a minimal investment and grow from there depending on business needs, performance, and capacity requirements. And by moving performance, data protection, and availability to the software layer, without thinking too much about hardware reliability, costs can be easily slashed even further.
In the coming weeks we will keep you updated here on this blog and on social media about what is happening with our #zerowastecluster side project (read the Zero-Waste OpenIO cluster articles part 1 and part 2 to go further). It is just a simple demonstration of what OpenIO is capable of, but you could replicate it and put it in production and we will support it!