What is Erasure Coding?
Originally designed for unreliable long-distance communications (e.g. satellites and spacecraft), erasure coding became popular in the storage industry because of the risk posed by the deployment of large hard drives. In fact, even RAID 6 became unsafe once the rebuild time for a failed multi-terabyte hard drive started to be measured in days instead of hours.
Now, EC is a very common protection mechanism, widely adopted by modern storage systems; you can think of it as RAID on steroids. It is a bit more complicated than that, and there is a lot of math behind the scenes, but the result is simple. Starting from the original data, a larger piece of information is created and then segmented into fragments. To retrieve the original information, you only need a subset of these fragments. For example, an EC scheme of 7+5 means that you need only 7 fragments out of 12 to get the information back. 7+5 also means that you can sustain up to 5 concurrent failures before losing any data.
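The k-of-n principle behind a scheme like 7+5 can be sketched with a toy erasure code built on polynomial interpolation over a prime field: k data symbols define a polynomial of degree below k, and each fragment is one evaluation of that polynomial, so any k of the n = k + m fragments recover the data. This is only an illustration of the principle; production systems use Reed-Solomon codes over GF(2^8), not this construction.

```python
# Toy k-of-n erasure code via polynomial interpolation mod a prime.
# Illustrative only -- not OpenIO's actual codec.

P = 257  # small prime field; data symbols must be < P

def _lagrange_eval(points, x):
    """Evaluate at x the unique polynomial passing through `points`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(data, m):
    """Turn k data symbols into k + m fragments (x, y).
    The first k fragments carry the data itself (a 'systematic' code)."""
    k = len(data)
    source = list(enumerate(data, start=1))  # the polynomial f with f(i) = data[i-1]
    return [(x, _lagrange_eval(source, x)) for x in range(1, k + m + 1)]

def decode(fragments, k):
    """Rebuild the original k symbols from any k surviving fragments."""
    pts = fragments[:k]
    return [_lagrange_eval(pts, x) for x in range(1, k + 1)]

# 7+5: 12 fragments in total, any 7 of them are enough
data = [104, 101, 108, 108, 111, 33, 7]
fragments = encode(data, m=5)
surviving = fragments[:2] + fragments[7:]  # 5 fragments lost, 7 remain
print(decode(surviving, 7) == data)  # True: data recovered from 7 of 12
```

Losing any 6th fragment would leave only 6 evaluation points, which is not enough to pin down a degree-6 polynomial, which is exactly why 7+5 tolerates at most 5 concurrent failures.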
EC can have a huge impact on CPU usage, but most modern CPUs have specific instruction sets to alleviate this issue.
How EC Works in OpenIO
OpenIO is a distributed scale-out storage system, and data chunks are distributed across all the available nodes in the cluster. As with any other IO operation, the ConsciousGrid™ selects, in real time, the best nodes on which to place the chunks.
It is important to note that OpenIO can be configured to be node, rack, and datacenter aware, so that chunks can be distributed accordingly. When distributing chunks across wide-area networks, latency and bandwidth could become an issue, but we support up to 10ms of latency between datacenters. Chunk geo-distribution allows users to save a lot of capacity compared to traditional remote replication, and keeps costs at bay without compromising infrastructure reliability. For example, with three datacenters of four nodes each, we could select EC 7+5. The minimum number of fragments needed to retrieve the information is 7, meaning that you could lose one entire datacenter (4 nodes), suffer an additional failure in one of the others, and still be able to recover your object!
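The arithmetic of that example can be checked with a short sketch: 12 fragments, one per node, spread over three datacenters of four nodes each. The datacenter and node names are made up for illustration; this is not OpenIO's actual placement logic.

```python
# Back-of-the-envelope check of the 7+5 geo-distribution example:
# 3 datacenters x 4 nodes, one fragment per node, k = 7 needed to decode.
from itertools import combinations

K, M = 7, 5
datacenters = {f"dc{d}": [f"dc{d}-node{n}" for n in range(4)] for d in range(3)}
all_nodes = [node for nodes in datacenters.values() for node in nodes]  # 12 nodes

def survives(failed_nodes):
    """True if enough fragments remain to reconstruct the object."""
    return len(all_nodes) - len(failed_nodes) >= K

# Losing a whole datacenter plus one extra node elsewhere: 5 failures, 7 left
lost = datacenters["dc0"] + [datacenters["dc1"][0]]
print(survives(lost))  # True: 7 fragments remain, enough to decode

# Any combination of up to 5 failures is survivable; any 6 is not
assert all(survives(f) for f in combinations(all_nodes, 5))
assert not any(survives(f) for f in combinations(all_nodes, 6))
```

Note that the margin is exact: a sixth concurrent failure anywhere in the cluster would drop the surviving fragment count below k and make the object unrecoverable.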
OpenIO's distributed erasure coding also has the ability to save chunks in other datacenters while one of the sites is offline, and then move them as soon as it is possible to comply with the policies in place. This saves CPU power and limits risk in case of multiple network or datacenter failures.
Even though distributed erasure coding trades some performance for capacity optimization, because of the added network latency, the ConsciousGrid™ technology helps get the best performance out of the available resources in this case as well. The first chunks are always read from the most available nodes, which, in this case, will usually be the closest ones, with lower latency and higher bandwidth.
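The read-path idea described above can be sketched as follows: request the first k fragments from the fastest responders. The node names and latency figures are hypothetical, and this is a simplification, not the ConsciousGrid™ implementation.

```python
# Sketch of latency-aware read scheduling for a 7+5 object:
# read the K = 7 fragments needed for decoding from the lowest-latency nodes.
K = 7

# hypothetical per-node round-trip latencies, in milliseconds
latencies = {
    "local-1": 0.4, "local-2": 0.5, "local-3": 0.3, "local-4": 0.6,
    "remote-a1": 8.0, "remote-a2": 9.5, "remote-a3": 8.7, "remote-a4": 9.1,
    "remote-b1": 9.8, "remote-b2": 8.4, "remote-b3": 9.9, "remote-b4": 8.9,
}

# sort nodes by measured latency and fetch from the fastest K first
read_order = sorted(latencies, key=latencies.get)
first_reads = read_order[:K]
print(first_reads)  # all four local nodes come before any remote one
```

With a systematic code, reads get even cheaper: if the k data-carrying fragments happen to live on the closest nodes, no decoding is required at all.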
OpenIO once again demonstrates its flexibility, allowing users to select the best data protection method for their needs. Erasure coding is the best option for several use cases, especially when $/GB is a key factor, and you do not have to trade away data availability or durability to get it.
Geo-distributed erasure coding takes this a step further, and it is available at no additional cost: it is included in our standard subscription, for our customers' peace of mind.