OpenIO SDS is designed to be efficient while simplifying data workflows and continuously optimizing available resources.
Groups of storage media, whether hard disks or SSDs, can be configured to isolate particular traffic or sets of objects from the rest. Data can be moved from one pool to another with simple operations or through automated tasks, enabling automated tiering. This improves multitenancy and performance consistency, and eases the organization of data across different media pools.
User-defined policies allow SDS to select the best storage pool and data protection scheme based on object size. This improves data footprint efficiency and overall system performance in mixed-workload environments.
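The idea of size-based policy selection can be sketched as a simple lookup: small objects land on fast media with replication, large ones on capacity media with erasure coding. The thresholds, pool names, and protection schemes below are illustrative assumptions, not OpenIO's actual configuration syntax.

```python
# Hypothetical size-based policy table; all names and thresholds are
# illustrative, not OpenIO SDS's real policy schema.
POLICIES = [
    # (max object size in bytes, storage pool, data protection scheme)
    (64 * 1024,     "ssd-pool", "3-way replication"),
    (100 * 1024**2, "hdd-pool", "erasure coding 6+3"),
    (float("inf"),  "hdd-pool", "erasure coding 14+4"),
]

def select_policy(size: int) -> tuple:
    """Return (pool, protection) for an object of the given size."""
    for max_size, pool, protection in POLICIES:
        if size <= max_size:
            return pool, protection
    raise ValueError("unreachable: last threshold is infinite")

# A 4 KiB object is replicated on SSD; a 1 GiB object is
# erasure-coded on the HDD pool.
print(select_policy(4 * 1024))
print(select_policy(1024**3))
```

The key design point is that the platform, not the client, picks the scheme per object, so small-object I/O amplification and large-object storage overhead are both kept in check.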
Data can be compressed at rest, either synchronously or asynchronously, to strike the best tradeoff among data footprint reduction, I/O optimization, and CPU consumption. Platform operators can choose the fit that best matches their use cases: compression saves space and improves overall system efficiency.
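The footprint-versus-CPU tradeoff the operator weighs can be made concrete with a small measurement sketch. This uses zlib as a stand-in codec and is purely illustrative of the tradeoff, not OpenIO's compression implementation.

```python
import time
import zlib

def compression_tradeoff(data: bytes, levels=(1, 6, 9)) -> dict:
    """Map each zlib level to (size ratio vs. original, seconds spent).

    Higher levels usually shrink data further at the cost of more CPU
    time, which is the tradeoff an operator tunes.
    """
    results = {}
    for level in levels:
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        results[level] = (len(compressed) / len(data), elapsed)
    return results

sample = b"an easily compressible payload " * 4096
for level, (ratio, secs) in compression_tradeoff(sample).items():
    print(f"level {level}: {ratio:.1%} of original size, {secs * 1e3:.2f} ms")
```

Asynchronous compression shifts this CPU cost off the write path entirely, at the price of temporarily storing data uncompressed.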
Data can be automatically moved across different storage pools thanks to an elaborate system of lifecycle management rules (based on metadata, size, tags, prefixes, etc.). This simplifies data movement within the system, keeping the fastest resources free for new hot data.
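A lifecycle rule engine of this kind can be sketched as matching each object against ordered rules and transitioning it to a target pool. The rule fields and pool names below are hypothetical assumptions for illustration, not OpenIO's actual rule schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical lifecycle rules: a prefix filter, a minimum age, and a
# target pool. Field names and pool names are illustrative only.
RULES = [
    {"prefix": "logs/", "older_than_days": 30,  "move_to": "capacity-pool"},
    {"prefix": "logs/", "older_than_days": 365, "move_to": "b2-archive"},
]

def pick_transition(key: str, mtime: datetime, now: datetime):
    """Return the target pool of the last matching rule, or None."""
    target = None
    for rule in RULES:
        old_enough = now - mtime >= timedelta(days=rule["older_than_days"])
        if key.startswith(rule["prefix"]) and old_enough:
            target = rule["move_to"]  # later rules override earlier ones
    return target

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(pick_transition("logs/app.log", now - timedelta(days=40), now))
print(pick_transition("logs/app.log", now - timedelta(days=400), now))
print(pick_transition("images/a.png", now - timedelta(days=400), now))
```

Run periodically over the namespace, such rules demote cold objects step by step, which is also how remote targets like a cloud archive tier can slot in.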
Automated tiering can also extend to the cloud. OpenIO SDS currently supports Backblaze B2, so infrequently accessed data can be offloaded from the system to a cheaper, remote repository.