The benefits of being data-driven
Today's most competitive and successful companies are all data-driven. Examples range from service companies like Google, Facebook, and Netflix, to retailers like Amazon, and even hardware companies like Apple.
They all collect and analyze data in different forms and for different reasons. They do not work in the same sector, and the data they collect is meant to be used in different ways… but the results are very similar. Google, for example, collects data on everything you do, offering you a wide range of services in exchange, while reselling your profile to advertisers. One could complain about privacy, but some of their services are invaluable, and, because of that, their advertising business is so profitable that they can afford to spend millions on thoughtful R&D as well as on moonshots.
The same goes for Apple; while they do not use your data to build advertising profiles, they use the data they collect to constantly improve their products and the overall user experience of their platform.
Leaving aside ethics, regulations, security, and other concerns around data collection, which are beyond the scope of this article, a data-driven company has a huge advantage over competitors that are still building products and services in traditional ways:
- They can get real-time feedback from the field about their products and services, and use this data to steer product development quickly when necessary.
- They can improve products, or create new ones, by analyzing their customers’ behavior.
- They can find design and security flaws quickly and address them promptly.
- Support can be improved through predictive analytics and preventive actions.
And these are just a few examples. For every application and data set there are plenty of opportunities to shorten response times and improve products, services, or processes.
Collect, Analyze, Improve, Repeat!
Being data-driven is not a state; it's a repetitive process, and it should be integrated into product and service design from the ground up. For example, if you are manufacturing some sort of machine, all the sensors that you put in that device should be able to collect, store, and send data locally or to the cloud for further use (more on what that means below).
Once data is stored safely, you can start to analyze it and get insights directly from users. Big data tools are widely available now, and the number of people with the right skills to take full advantage of them is growing quickly. These tools are also becoming much easier to use than in the past, speeding up the learning process and enabling more organizations to use them efficiently. It takes time but, in my experience, organizations that adopt data-driven development processes extend the benefits to other departments, including support, marketing, and sales.
Where do you begin?
Let's start with the most obvious thing: data.
Data has to be collected and stored safely. This is the first step, and to do so, it is necessary to add sensors to your product (or service) and connect them to an external, scalable repository.
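To make this concrete, here is a minimal sketch of the collection step. The function names, the key layout, and the device IDs are illustrative assumptions, not part of any OpenIO API; the idea is simply that each reading becomes a JSON payload under a time-partitioned key, ready for a PUT to any object store.

```python
import json
import time

def make_object_key(device_id, timestamp):
    """Build a time-partitioned object key so readings stay easy to list and prune."""
    t = time.gmtime(timestamp)
    return "sensors/{}/{:04d}/{:02d}/{:02d}/{}.json".format(
        device_id, t.tm_year, t.tm_mon, t.tm_mday, int(timestamp))

def serialize_reading(device_id, timestamp, values):
    """Package one sensor reading as a (key, body) pair ready for an object PUT."""
    payload = {"device_id": device_id, "timestamp": timestamp, "values": values}
    return make_object_key(device_id, timestamp), json.dumps(payload)

key, body = serialize_reading("machine-42", 1700000000, {"temp_c": 71.5})
print(key)
```

Partitioning keys by device and date is one common choice; it keeps old data easy to find, archive, or expire without scanning the whole namespace.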
Data augmentation follows on from this. Sensors should be cheap and easy to replace, so the raw data they produce is often minimal; missing information can be added while the data is being saved. This step is fundamental to making data searchable and reusable.
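A sketch of what augmentation at save time might look like. The `augment` helper and the device registry are hypothetical names invented for this example; the pattern is simply to merge context the sensor does not send (site, firmware version, and so on) into each record before it is written.

```python
def augment(record, device_registry):
    """Fill in metadata that the sensor itself does not send."""
    enriched = dict(record)  # do not mutate the caller's record
    meta = device_registry.get(record["device_id"], {})
    enriched.setdefault("site", meta.get("site", "unknown"))
    enriched.setdefault("firmware", meta.get("firmware", "unknown"))
    return enriched

# A small lookup table mapping device IDs to context known only server-side.
registry = {"machine-42": {"site": "lille", "firmware": "2.1.0"}}
print(augment({"device_id": "machine-42", "temp_c": 71.5}, registry))
```

Because the enrichment happens as data lands, every stored object is immediately searchable by site or firmware without reprocessing.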
Data analysis is the core element of the process: it is what turns data into real value, which is converted to the information that helps drive decisions. Depending on the needs, it can be done in real-time or later, and with different tools. If the data sets are reusable over time, these tools can be replaced when business requirements change.
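As a small illustration of the "analyze later" case, here is a batch-style aggregation over stored readings. The field names and the grouping logic are assumptions for the sake of the example, not a prescribed schema.

```python
from statistics import mean

def daily_average(readings, field):
    """Batch aggregation: average a field across readings, grouped by UTC day."""
    by_day = {}
    for r in readings:
        day = r["timestamp"] // 86400  # seconds-since-epoch -> day index
        by_day.setdefault(day, []).append(r[field])
    return {day: mean(vals) for day, vals in by_day.items()}

readings = [
    {"timestamp": 0, "temp_c": 70.0},
    {"timestamp": 3600, "temp_c": 72.0},
    {"timestamp": 90000, "temp_c": 68.0},
]
print(daily_average(readings, "temp_c"))  # {0: 71.0, 1: 68.0}
```

The same data set could just as easily feed a real-time pipeline or a different tool later, which is the point the paragraph above makes about keeping data reusable.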
What can OpenIO do to make your organization data-driven?
OpenIO’s ultra scalable data storage and compute platform can be deployed both at the edge and in the cloud on commodity hardware. We do not provide analytics tools, but they can be easily integrated in the same infrastructure to simplify the entire stack.
OpenIO is a powerful scale-out object store that has unique design characteristics, allowing our customers to deploy it at the edge as well as in large data centers. Its lightweight design and Conscience technology, a dynamic load balancing mechanism, provide better and more consistent performance than other object stores. The result is a solution for storing unstructured and semi-structured data coming from anywhere and making it available to multiple applications.
At the same time, GridForApps is the best companion for OpenIO. It is a serverless computing framework that can be used to automate many operations, such as data augmentation or even data analytics. It intercepts events happening at the storage layer and triggers small pieces of code that can perform operations directly when data is ingested or read. This is a powerful tool that can also be used to convert data, validate it, and import/export relevant information to other systems such as databases.
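The event-triggered pattern described above can be sketched generically. This is not the GridForApps API; it is a minimal, assumed dispatcher showing the shape of the idea: handlers register for storage events, and each matching handler runs when the event fires.

```python
# Hypothetical event dispatcher: handlers are registered per event type and run
# when the storage layer reports that something happened to an object.
handlers = {}

def on(event_type):
    """Decorator that registers a handler for a given storage event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type, event):
    """Deliver an event to every handler registered for its type."""
    for fn in handlers.get(event_type, []):
        fn(event)

@on("object.created")
def tag_new_object(event):
    # Example augmentation step: mark the object as ingested.
    event["metadata"]["ingested"] = True

event = {"key": "sensors/machine-42/reading.json", "metadata": {}}
emit("object.created", event)
```

In a real deployment the equivalent of `emit` would be driven by the storage layer itself, so the code runs close to the data with no polling.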
The first step toward building a data-driven organization in any sector is to collect as much data as possible and, resources permitting, to store it forever. The only way to do this is to use a scale-out storage system. The cloud can be relatively cheap at the beginning, but as soon as you start to use the data, its cost will increase dramatically. OpenIO is cost-effective, scalable, and offers very high performance, making this first step affordable for businesses of any size.
GridForApps is the next step. It transforms OpenIO from a passive data store to an active one, allowing companies to build functions that can automate processes including data augmentation, while simplifying integration with other platforms, including analytics tools.
Nobody knows when or if the data we store today will become useful. But OpenIO and GridForApps enable you to create smart, scale-out storage infrastructures that can start small and grow big while remaining sustainable over time.