Enrico Signoretti

Cloud, Edge, and IoT: the Future of Object Storage and Serverless

Last week I took a trip to the US, attending VMworld first and then a series of meetings in Silicon Valley. I spent most of my time meeting with partners, influencers, and end users. One of the topics I discussed most frequently was IoT, followed by edge computing, with cloud computing coming in third.
 
Yes, in a world that thinks of cloud computing as the panacea for all ills, it is interesting to see growing interest in complementary approaches and technologies, from organizations of all sizes and kinds. This is because, in many cases, data has to be processed and consumed where it is created.
 
One might think that this is not a topic for an object storage company, but, from my point of view, it is precisely the future of next-generation object storage and serverless computing.

Object storage in the cloud

This is easy, isn’t it? 
Today, object storage is usually deployed in large infrastructures with capacities measured in PBs. It is the perfect companion for many cloud applications, especially those that manage unstructured data and are heavily distributed (mobile apps, for example).
 
This is not the only application for object storage. Some of our customers are already combining the advantages of object storage (SDS) with our serverless computing framework (Grid for Apps), using this technology to enrich metadata or run applications closer to the data. The range of applications is growing by the day: metadata enrichment and indexing/searching came first, but now we see strong interest in machine learning and in applications that can understand and act on content. Real-time transcoding and other tasks that can easily be offloaded to the storage infrastructure when an event occurs are also popular. Grid for Apps is perfect for these tasks: it can intercept any event that occurs in the storage layer and trigger applications or scripts accordingly (very similar to what happens with Amazon S3 and Lambda).
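To make the pattern concrete, here is a minimal sketch of an event-driven metadata-enrichment function. The event shape, the handler name, and the content-type helper are assumptions made for illustration, not the actual Grid for Apps API; the same idea applies to an S3 event feeding a Lambda function.

    import json

    def guess_category(object_name):
        """Very rough content categorization based on the file extension."""
        name = object_name.lower()
        if name.endswith((".jpg", ".jpeg", ".png")):
            return "image"
        if name.endswith((".mp4", ".mov")):
            return "video"
        return "other"

    def handle_object_created(event):
        """Hypothetical handler called when a new object lands in the store.

        Returns extra metadata to attach to the object so it can be indexed
        and searched later without re-reading the data itself.
        """
        extra_metadata = {
            "x-category": guess_category(event["object"]),
            "x-source-container": event["container"],
        }
        # In a real deployment this would be written back through the
        # storage API; here we simply return it.
        return extra_metadata

    if __name__ == "__main__":
        sample_event = {"container": "photos", "object": "holiday/IMG_0042.jpg"}
        print(json.dumps(handle_object_created(sample_event), indent=2))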

Object storage at the edge

The same goes for edge computing.
Wikipedia defines edge computing as “a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of the data. This reduces the communications bandwidth needed between sensors and the central datacenter by performing analytics and knowledge generation at or near the source of the data.” It is clear here that the combination of object storage and serverless is fundamental to deploying simple infrastructures capable of storing and processing data locally.
 
Benefits in this scenario are tangible. Edge infrastructures are usually unattended and should be simple, without complex layers to manage. This is why having storage and compute integrated at this level is much more efficient and reduces the management effort. I agree that object storage is not primary storage, but more and more applications are moving to memory for performance and using object stores for checkpoints and consistency.
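As a rough illustration of that checkpoint pattern, the sketch below periodically serializes in-memory state to an S3-compatible object store through boto3. The endpoint, bucket, credentials, and key layout are placeholders assumed for the example; OpenIO SDS exposes an S3-compatible API that could sit behind the same calls.

    import json
    import time
    import boto3

    # Placeholder endpoint and credentials for an S3-compatible object store.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:6007",
        aws_access_key_id="demo",
        aws_secret_access_key="demo",
    )

    state = {"processed": 0}

    def checkpoint(bucket="checkpoints"):
        """Persist the current in-memory state as a timestamped object."""
        key = "app/state-%d.json" % int(time.time())
        s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(state).encode())
        return key

    # The hot path stays in memory; the object store is only touched at
    # durability points.
    for _ in range(3000):
        state["processed"] += 1
        if state["processed"] % 1000 == 0:
            checkpoint()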
 
Another important aspect of integrating object storage and serverless computing is that data can be analyzed and optimized before being sent to the cloud (for example, data can be validated and analyzed locally, and only the results are sent to a central cloud repository).
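Here is a minimal edge-side sketch of that idea, assuming an S3-compatible central endpoint reachable through boto3; the validation rule, bucket name, and endpoint are hypothetical, but the shape of the workflow is the point: raw readings stay local, and only the aggregate goes upstream.

    import json
    import statistics
    import boto3

    # Placeholder central repository (any S3-compatible endpoint would do).
    central = boto3.client(
        "s3",
        endpoint_url="https://central.example.com",
        aws_access_key_id="demo",
        aws_secret_access_key="demo",
    )

    def summarize(readings):
        """Validate readings locally and ship only the aggregate upstream."""
        valid = [r for r in readings if -40.0 <= r <= 85.0]  # plausible range
        summary = {
            "count": len(valid),
            "rejected": len(readings) - len(valid),
            "mean": statistics.mean(valid) if valid else None,
        }
        central.put_object(
            Bucket="site-summaries",
            Key="edge-01/temperature.json",
            Body=json.dumps(summary).encode(),
        )
        return summary

    if __name__ == "__main__":
        print(summarize([21.4, 21.9, 300.0, 22.1]))  # 300.0 is rejected locally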

Object storage in IoT devices

Isn’t this crazy? No, it isn’t! 
Not every object store and compute framework can do this, but OpenIO SDS + G4A can. Our software can run on a single ARM core with less than 512 MB of RAM, which means it can be installed on practically any sort of device while providing the same level of functionality you find on cloud and edge infrastructures. But why?
 
The answer is simple: IoT takes the edge use case to the next level. Data storage is necessary for all types of applications that manage data, whether they are in the cloud, at the edge, or on a small IoT device. In addition, IoT devices are small, not always well connected, and usually not very resilient. Even though this is not a huge problem for consumer devices, it could become critical in Industrial IoT environments.
 
The ability to create a network of small devices that can share a resilient software-defined storage layer and compute resources might seem to be a bit of a stretch, but with OpenIO technology, it is just a matter of configuration. 
 
With this approach, it is possible to work locally on most of the data, select and analyze sensor data, and transfer only valuable information to the core. Moreover, it is possible to maintain a local storage layer that can safely store logs and other important information for a longer period of time.

OpenIO connects Cloud, Edge, and IoT

At OpenIO, we focus our message on four distinctive enablers: Flexibility, Lightweight Design, Serverless, and Open Source. Traditional object stores can’t match the range of applications we can target because they lack some or all of these characteristics. 
 
For example, look at our flexibility. Thanks to our Conscience technology, we can choose the best resource available for each discrete operation in the cluster. This is good for the cloud, even more important for the edge, and becomes fundamental in IoT environments, where every single resource is precious. At the same time, our technology avoids data rebalancing when new nodes are added: this improves efficiency in a datacenter, but now think about rebalancing data every time a new IoT device joins the network (or disappears for a while)!
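To give a feel for why this matters, here is a deliberately simplified, hypothetical sketch of score-based placement (not the actual Conscience implementation): each node advertises a score based on its free space and load, new data goes to the best-scoring nodes, and adding or losing a node only changes future placement decisions instead of forcing existing data to move.

    def score(node):
        """Higher is better: plenty of free space and low CPU load."""
        return node["free_gb"] * (1.0 - node["cpu_load"])

    def pick_nodes(nodes, copies=3):
        """Choose where the next object goes, based on live scores."""
        ranked = sorted(nodes, key=score, reverse=True)
        return [n["name"] for n in ranked[:copies]]

    nodes = [
        {"name": "node-a", "free_gb": 120, "cpu_load": 0.30},
        {"name": "node-b", "free_gb": 800, "cpu_load": 0.10},
        {"name": "node-c", "free_gb": 50,  "cpu_load": 0.70},
    ]
    print(pick_nodes(nodes))

    # A small IoT device joins: it starts receiving new data when its score
    # is competitive, and nothing already stored has to be rebalanced.
    nodes.append({"name": "iot-device-9", "free_gb": 8, "cpu_load": 0.05})
    print(pick_nodes(nodes))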
 
The same goes for our lightweight design. In a datacenter, it is good to improve efficiency and performance, lower costs, and get the most from any type of server. But the smaller the nodes, the more important lightweight design becomes. Any single saved resource contributes to the overall performance of the network.
 
Serverless computing is the glue. By intercepting events and passing them to applications or simple scripts, as with Google Cloud Functions or AWS Lambda, it is easy and efficient to offload many tasks to the storage layer. And you don’t need any other infrastructure layer to do that: no hypervisors, no orchestrators, no management tools. Just simple code and data, perfect for IoT environments (think of AWS Greengrass, for example).
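The sketch below illustrates that "just code and data" idea with a hypothetical event-to-function dispatcher: storage events are routed straight to small handler functions, with no hypervisor or orchestrator in between. The event names and handlers are invented for the example.

    HANDLERS = {}

    def on(event_type):
        """Register a function as the handler for a given storage event."""
        def register(func):
            HANDLERS[event_type] = func
            return func
        return register

    @on("object.created")
    def transcode(event):
        print("queueing transcode for", event["object"])

    @on("object.deleted")
    def cleanup_index(event):
        print("removing", event["object"], "from the search index")

    def dispatch(event):
        """Called by the storage layer whenever something happens."""
        handler = HANDLERS.get(event["type"])
        if handler:
            handler(event)

    # Two events arriving from the storage layer.
    dispatch({"type": "object.created", "object": "video/raw-footage.mov"})
    dispatch({"type": "object.deleted", "object": "video/old-clip.mov"})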
 
And last, but not least, OpenIO SDS and G4A are open source. The code is available on GitHub, alongside packages for several ARM and x86 Linux distributions. This allows end users to test and deploy OpenIO SDS and G4A without limitations. Open source helps avoid lock-in and gives end users maximum freedom and better ROI.

Takeaways

Object storage and serverless computing are two components that can help build large and small infrastructures capable of storing and processing data with high efficiency and at the lowest cost. This applies to the largest infrastructures, where $/GB is an important metric, as well as to the smallest IoT networks with only a handful of distributed resources.
 
Not all object stores have the right characteristics to do this, and most of them lack integration with a serverless computing framework. OpenIO, thanks to our unique technology, can tackle all of these scenarios, and it can be the right choice for any organization looking for an integrated strategy built around cloud, edge, and IoT.

Want to know more?

OpenIO SDS is available for testing in four different flavors: Linux packages, a Docker image, a simple ready-to-go virtualized 3-node cluster, and a version for the Raspberry Pi.

Stay in touch with us and our community through Twitter, our Slack community channel, GitHub, our blog RSS feed, and our web forum to receive the latest info, get support, and chat with other users.

Reserve your seat for one of our webinars! Check our Events page for upcoming sessions, or our YouTube channel for recorded videos.
