We were mentioned by Chris Mellor in his "Igneous ARM CPUs: What if they tossed the blindfold?" article, and, while we can't speak for others, we'd like to clarify why adopting ARM CPUs could be much more effective than Chris seems to think.
Nano-nodes and massive scale
Object storage is not very CPU hungry, and even when erasure coding is enabled there is plenty of CPU left over. That's why we started to implement a serverless, event-triggered compute framework called GridForApps™. Thanks to it, our customers can run applications directly on the storage platform. It has several use cases (email scanning, real-time encoding, and so on) and it vastly improves storage infrastructure efficiency.
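GridForApps' actual API isn't shown here, but the general pattern of event-triggered compute on a storage platform can be sketched as follows. Everything below, including the `on_event` registration and the event fields, is a hypothetical illustration, not OpenIO's real interface:

```python
# Minimal sketch of event-triggered processing next to the data.
# All names here (StorageEvent, EventBus, on_event) are assumptions
# for illustration; GridForApps' real API may differ entirely.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class StorageEvent:
    event_type: str    # e.g. "object.created"
    container: str
    object_name: str


class EventBus:
    """Registers handlers and dispatches storage events to them."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[StorageEvent], None]]] = {}

    def on_event(self, event_type: str):
        """Decorator: register a function for a given event type."""
        def register(fn):
            self._handlers.setdefault(event_type, []).append(fn)
            return fn
        return register

    def emit(self, event: StorageEvent) -> int:
        """Call every handler registered for this event type."""
        handlers = self._handlers.get(event.event_type, [])
        for fn in handlers:
            fn(event)
        return len(handlers)


bus = EventBus()
processed = []


@bus.on_event("object.created")
def scan_new_object(event: StorageEvent) -> None:
    # e.g. virus-scan an incoming email, right where it is stored
    processed.append(event.object_name)


bus.emit(StorageEvent("object.created", "mailbox", "msg-0001.eml"))
print(processed)
```

The point of the pattern is that the function runs on the node that already holds the object, so no data has to cross the network before it is processed.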
On the other hand, most of our competitors have nothing similar to GridForApps™ and take a different approach to improving infrastructure efficiency. Usually, they build "fat nodes" packed with disks, which introduces other issues. You can get a very good $/GB because the chassis, CPU, RAM, and networking costs are split across 80 or more disks, but at the same time the failure domain is huge and performance is very poor. We could do the same, and for some of our customers it would be "good enough"… but we knew that with our technology we could do better than that, much better. Here at OpenIO, we think customers prefer getting more than capacity alone, even when capacity is their primary goal. It gives them options, at least!
In the last couple of years, we worked a lot with Kinetic drives (HDDs with an Ethernet interface and some CPU power), but they weren't sufficient. The idea is good, but in practice they are hardly a complete solution: you can only offload parts of the backend logic to them, while most of the intelligence remains on an external CPU, which, again, leads to fat x86 nodes. So we went back to the drawing board and found what we consider to be the best solution: the nano-node.
A nano-node has all the components you would expect: CPU, RAM, a small flash memory for booting and storing local data, and high-speed connectivity. You can think of it as a Raspberry Pi on steroids. The nano-node is a small board with a SATA interface supporting HDDs and SSDs, the size of the front of a 3.5" hard disk (and that's exactly where it's installed). At the end of the day, each single disk has its own CPU, RAM, and connectivity, without the end user having to worry about connecting hundreds or thousands of nodes together to get the cluster working. The nano-nodes are installed in a 4U chassis which also provides all the links and two 6-port 40Gb/s switches for front-end connectivity and back-to-back expansion. By doing so we get 96 CPUs (192 cores), 40Gbit/s networking and more than 1PB in 4U, with a failure domain equivalent to one disk!
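The density figures above are easy to check with some back-of-the-envelope arithmetic. The per-disk capacity (12 TB) is an assumption chosen so the chassis crosses the 1 PB mark; the node and core counts come straight from the text:

```python
# Density check for the 4U nano-node chassis described above.
# 12 TB per disk is an assumed figure; 96 nodes with 2 cores each
# are the numbers given in the text.

nodes_per_chassis = 96
cores_per_node = 2
disk_tb = 12  # assumed per-disk capacity

total_cores = nodes_per_chassis * cores_per_node       # 192 cores
raw_capacity_pb = nodes_per_chassis * disk_tb / 1000   # ~1.15 PB in 4U

print(total_cores, raw_capacity_pb)
```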
The failure domain is important for two reasons. First, losing a single disk is nothing like losing an entire 80-disk server. On top of that, most of our competitors use distributed hash tables, which can be very painful to rebuild (a problem that also occurs when you expand the cluster!) and can easily lead to performance consistency issues. We overcome that issue by doing things differently, not by using distributed hash tables. Second, and as a consequence, the fat node is not for everyone. If what you need is a relatively small infrastructure of inexpensive storage, building it out of fat nodes means trading performance for capacity, and it becomes harder to justify object storage for smaller installations (or to start small and grow over time). This is especially important in the enterprise space, where 1PB is still a lot; in many cases the first installations start at less than 200TB and grow over time by consolidating several different workloads. Again, this is why you always need capacity, scalability, and performance without compromises. We can start as small as a 3-node cluster (no matter the CPU architecture) and grow up to hundreds of petabytes, while mixing different node types in terms of capacity and CPU as well!
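The failure-domain argument comes down to simple arithmetic: when a failure domain dies, everything inside it must be re-protected at once. A small sketch of that comparison, assuming 12 TB disks (the disk size is an assumption, not a figure from the text):

```python
# How much data is put at risk, and must be rebuilt, when one failure
# domain is lost. Compares a fat 80-disk node to a nano-node, where the
# failure domain is a single disk. 12 TB per disk is assumed.

disk_tb = 12


def data_at_risk_tb(disks_per_failure_domain: int) -> int:
    """Capacity that must be re-protected if this domain fails."""
    return disks_per_failure_domain * disk_tb


fat_node_tb = data_at_risk_tb(80)   # an entire 80-disk server
nano_node_tb = data_at_risk_tb(1)   # one disk with its own board

print(fat_node_tb, nano_node_tb)
```

The 80x difference in rebuild volume is what makes the recovery window, and the performance impact during it, so much smaller with nano-nodes.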
Back to Chris' article
Making our case for ARM and nano-nodes has taken longer than we expected, but it was necessary to explain why ARM is great for object storage. That said, our software is identical on ARM and x86! The only thing that changes is the order of magnitude of the number of nodes in a cluster of similar size (but we are good at scaling, so that's actually not an issue).
The software does all the magic, and OpenIO is designed to have an efficient lightweight back-end with a smart way to balance the load among the available nodes (we call it ConsciousGrid™) and, as mentioned earlier, it has helped us to develop GridForApps™. Now - and this is the important part - all we can do on fat x86 nodes can also be done on ARM-based nano-nodes… and maybe more.
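ConsciousGrid's real algorithm is not public, so the following is only a minimal sketch of the general idea it stands in contrast to distributed hash tables: instead of hashing data to fixed positions, pick placement targets based on the current load of the available nodes. All names and the load metric here are illustrative assumptions:

```python
# Hypothetical sketch of load-aware placement (the general idea behind
# balancing load among available nodes, NOT ConsciousGrid's actual
# algorithm, which is not public).

from typing import Dict, List


def pick_nodes(load_by_node: Dict[str, float], copies: int) -> List[str]:
    """Return the `copies` least-loaded nodes for a new data chunk."""
    ranked = sorted(load_by_node, key=load_by_node.get)
    return ranked[:copies]


# Example: current load scores per node (lower is less loaded).
loads = {"node-a": 0.72, "node-b": 0.31, "node-c": 0.55, "node-d": 0.18}
print(pick_nodes(loads, 2))  # the two least-loaded nodes
```

Because placement is a decision made at write time rather than a function of a hash ring, adding or removing nodes does not force a large-scale rebalance, which is the DHT pain point mentioned above.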
Adopting ARM means losing no visibility over data or metadata, and we can run all of our software on it too. If it can be done on x86, it can be done on ARM.
The next steps
Running Big Data analytics, deep learning, AI applications, and more, directly inside the storage system? Why not? Isn't that the current industry trend, after all? Yesterday it was all about hypervisors, then containers, and now everyone is excited about serverless computing. Applications running directly inside the storage system: isn't that the most hyperconverged infrastructure ever?