COSBench-ing OpenIO (or Any Other Object Store)

I'm here again writing one more pseudo-technical tutorial. I'm having a lot of fun working with my home lab and rediscovering my rusty sysadmin skills (from the nineties).

Today I'll borrow the work of some of my colleagues to talk about COSBench, and explain how to set it up for a quick benchmark of OpenIO.

This is easier than you may think. In fact, I discovered that we already have a container ready to rock and roll!

What is COSBench?

COSBench stands for Cloud Object Storage Benchmark. It is an open-source tool originally developed by Intel and hosted on GitHub. It is the de facto standard for object storage benchmarks and is very easy to use; in fact, you can find it ready to use in several forms.

The good thing is that anybody can simulate their own workload by describing it in an XML file, which the tool parses and executes. This also makes it easy to compare the performance of different object stores in your lab.

Let's do it!

We will use the Docker container available here. I run it on my Mac, but you can do this on any other supported OS.

If you do not have Docker CE installed yet, it is available here. Once the installation is complete, you can run Docker commands from a Terminal window:


# docker pull openio/cosbench-openio
# docker run -p 19088:19088 -p 18088:18088 -it openio/cosbench-openio

The first command downloads the necessary Docker image; the second runs it and maps ports 19088 and 18088 to the same ports on your Mac. In short, Docker uses an internal private network, and you can map each container port to an external port (-p external_port:internal_port), as long as the external port is not already in use. Please note that this mechanism may be slightly different if you run Docker on a Windows PC or a Linux machine.
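
For example, if port 19088 were already in use on your machine, you could publish the controller on a different external port (29088 below is just an arbitrary choice) and reach the web UI there instead:

# docker run -p 29088:19088 -p 28088:18088 -it openio/cosbench-openio

In that case you would point your browser at http://localhost:29088/controller/index.html.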

Once you have the COSBench container up and running, you'll have to connect to http://localhost:19088/controller/index.html

A very basic user interface will appear.

[Screenshot: the COSBench controller web UI]

Now it is time to prepare the XML file that describes the workload. I have to say that my file is very simple. This is because I have a small ARM-based cluster made of three nodes and one load balancer all connected on the same 1Gbit/s network with my Mac accessing it via WiFi… not the ideal configuration for benchmarks.

You'll find several examples on GitHub showing how to build the perfect simulation for your workload and infrastructure.

Here is the file I used:


<?xml version="1.0" encoding="UTF-8"?>

The test is in five self-descriptive parts: init, prepare, main, cleanup, and dispose.
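
To give you an idea of the structure, here is a minimal sketch of a COSBench workload file built around those five workstages. The endpoint, access keys, bucket prefix, object counts, and sizes below are placeholders rather than the values from my lab, so adjust them to your own setup:

<?xml version="1.0" encoding="UTF-8"?>
<workload name="openio-test" description="simple read/write benchmark">

  <!-- S3-compatible endpoint and credentials: replace with your own -->
  <storage type="s3" config="accesskey=YOUR_ACCESS_KEY;secretkey=YOUR_SECRET_KEY;endpoint=http://YOUR_S3_ENDPOINT:PORT;path_style_access=true" />

  <workflow>

    <!-- init: create the buckets -->
    <workstage name="init">
      <work type="init" workers="1" config="cprefix=cosbench;containers=r(1,2)" />
    </workstage>

    <!-- prepare: pre-load objects so the read phase has something to fetch -->
    <workstage name="prepare">
      <work type="prepare" workers="1" config="cprefix=cosbench;containers=r(1,2);objects=r(1,100);sizes=c(64)KB" />
    </workstage>

    <!-- main: 80% reads / 20% writes for 60 seconds with 8 workers -->
    <workstage name="main">
      <work name="main" workers="8" runtime="60">
        <operation type="read" ratio="80" config="cprefix=cosbench;containers=u(1,2);objects=u(1,100)" />
        <operation type="write" ratio="20" config="cprefix=cosbench;containers=u(1,2);objects=u(101,200);sizes=c(64)KB" />
      </work>
    </workstage>

    <!-- cleanup: delete the objects -->
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="cprefix=cosbench;containers=r(1,2);objects=r(1,200)" />
    </workstage>

    <!-- dispose: delete the buckets -->
    <workstage name="dispose">
      <work type="dispose" workers="1" config="cprefix=cosbench;containers=r(1,2)" />
    </workstage>

  </workflow>
</workload>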

Click on "Submit new workloads," choose the file, then click "Submit." That's it.

The jobs start immediately.

[Screenshot: the workload running in the COSBench controller]

You can check the status by clicking on "View details," and, at the end of the process, you'll get results for operations/s, throughput, and so on.

[Screenshot: COSBench results]

Key Takeaways

My lab is not suited for speed benchmarks, but this procedure can easily be repeated in any lab and with infrastructures of all sizes. (With large-scale clusters, COSBench will be a little more complicated to configure, though.)

If you are evaluating object stores for your infrastructure, I strongly suggest that you test them with COSBench too. It is easy, it doesn't take a lot of time, and it gives you a clear idea of the efficiency and speed you could expect in production. It also gives you the chance to compare multiple solutions against your workloads on the same hardware. And why not include OpenIO in these benchmarks? It only takes a few minutes to install.

Enrico Signoretti
Former Strategist at OpenIO
@esignoretti
Enrico is an experienced IT professional and internationally renowned author/speaker on storage technologies. In 2017-2018 he was Product Strategist at OpenIO; today he continues to envision changes in the storage industry as a GigaOm Research Analyst. Enrico enjoys traveling, meeting people, and eating "gelato". He is also a keen sailor, kite surfer, and a lazy runner.