Guillaume Delaporte

Simple Metadata Indexing through Grid for Apps

It’s time for the second article in our series about the event-driven framework that is part of our SDS object storage solution. If you missed our first article, you can read A technical introduction to Grid for Apps. And note that we will be hosting a webinar about Grid for Apps on May 24 (Sign up here to learn more about our data processing framework).

The first article discussed enriching object metadata by adding a new metadata tag to files on the fly. We can think of many useful types of metadata that we can add to an object. For example, it could be a license plate extracted from a speed control camera picture, a document type, the resolution and bit rate of a video, or a pattern found in a picture. We will tackle all these examples in the coming weeks, but let’s answer a question first. Since an SDS object storage solution like ours can store petabytes of data and billions of files, how can we make use of it? How can we find the specific object we are looking for?
This is actually easy. We can use our event-driven solution to index all the metadata attached to our files in Elasticsearch. We will use the same event we used in the previous article (fired when a new file is uploaded) to push the object's metadata to Elasticsearch. You will then be able to query Elasticsearch for all the files matching any type of metadata.

Let’s do it!

As in our previous article, we will use our Docker container image to easily spawn an OpenIO SDS environment. We will also use the Elasticsearch Docker image to deploy it.

Retrieve the latest Elasticsearch Docker image (5.4.0 as of this writing):

# docker pull docker.elastic.co/elasticsearch/elasticsearch:5.4.0

And start an Elasticsearch instance:

# docker run -d -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.0
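
Elasticsearch should now be listening on port 9200. As a quick sanity check, you can query it with the 5.x image's default X-Pack credentials (elastic/changeme, the same ones we will use later in this article):

# curl http://elastic:changeme@localhost:9200/

It should answer with a small JSON document describing the cluster.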

Retrieve the OpenIO SDS Docker image:

# docker pull openio/sds

Start your new OpenIO SDS environment:

# docker run -ti openio/sds

You should now be at the prompt with an OpenIO SDS instance up and running.
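
Before going further, you can make sure all the services are up with the OpenIO CLI; this is just a quick sanity check:

# openio cluster list --oio-ns OPENIO

You should see the cluster services listed with their scores.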

Next, we will configure the trigger, so that each time you add a new object, the metadata from the object will be pushed to Elasticsearch. Add the following content to the file /etc/oio/sds/OPENIO/oio-event-agent-0/oio-event-handlers.conf:

[handler:storage.content.new]
pipeline = process

[handler:storage.content.deleted]
pipeline = content_cleaner

[handler:storage.container.new]
pipeline = account_update

[handler:storage.container.deleted]
pipeline = account_update

[handler:storage.container.state]
pipeline = account_update

[handler:storage.chunk.new]
pipeline = volume_index

[handler:storage.chunk.deleted]
pipeline = volume_index

[filter:content_cleaner]
use = egg:oio#content_cleaner

[filter:account_update]
use = egg:oio#account_update

[filter:volume_index]
use = egg:oio#volume_index

[filter:process]
use = egg:oio#notify
tube = oio-process
queue_url = beanstalk://127.0.0.1:6014

If you want to learn more about this configuration file, please refer to our previous blog post.
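
For the curious, the notify filter serializes each storage.content.new event as JSON and pushes it to the oio-process tube on Beanstalk. Here is a simplified sketch of such a payload, limited to the fields our script will actually read (the real event carries additional fields):

{
  "event": "storage.content.new",
  "url": {
    "ns": "OPENIO",
    "account": "myaccount",
    "user": "mycontainer",
    "path": "fstab"
  }
}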

Then, restart the OpenIO event agent to apply the modification:

# gridinit_cmd restart @oio-event-agent
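
You can check that the agent came back up properly; assuming gridinit_cmd status accepts the same @group syntax as restart, a quick status check looks like this:

# gridinit_cmd status @oio-event-agent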

Your event-driven system is now up and running. The next step is to write the script that will index the metadata into Elasticsearch. To do so, we first need to install the Elasticsearch Python module (the client's major version should match the server's, so pick a 5.x release here):

# pip install "elasticsearch>=5.0.0,<6.0.0"

And we can now write the script. Let’s call it index-metadata.py:

#!/usr/bin/env python
import json
from oio.api import object_storage
from oio.event.beanstalk import Beanstalk, ResponseError
from elasticsearch import Elasticsearch

# Open a connection to Beanstalk to fetch the events from the tube oio-process
b = Beanstalk.from_url("beanstalk://127.0.0.1:6014")
b.watch("oio-process")

# Wait for events
while True:
    try:
        # Reserve the event as soon as it appears
        event_id, data = b.reserve()
    except ResponseError:
        # Or keep waiting for the next one
        continue

    print(event_id)
    # Retrieve the information from the event (namespace, account, container, object name, ...)
    event = json.loads(data)
    print(event)
    url = event["url"]
    # Open a connection to the OpenIO cluster
    s = object_storage.ObjectStorageAPI(url["ns"], "127.0.0.1:6006")

    print("indexing")
    # Open a connection to Elasticsearch
    # /!\ Change the IP address to match your environment /!\
    es = Elasticsearch(['http://elastic:changeme@192.168.99.1:9200'])

    # Retrieve the metadata from the object (the data stream itself is not used here)
    meta, stream = s.object_fetch(url["account"], url["user"], url["path"])
    # Create the index in Elasticsearch if it does not exist yet
    if not es.indices.exists(url["account"]):
        es.indices.create(index=url["account"])

    # Push the metadata to Elasticsearch
    res = es.index(index=url["account"], doc_type=url["user"], body=meta)
    es.indices.refresh(index=url["account"])

    # Delete the event
    b.delete(event_id)

You will have to modify the IP address of the Elasticsearch instance to match your environment. In my case, the IP address of my machine was 192.168.99.1.

Finally, launch the script in the background:

# python index-metadata.py &

Please note that the script is written in Python, but you can write it in any other language.

How does it work?

It’s time to add a new object to see if this works.

Using the OpenIO CLI, let's upload a new object, /etc/fstab, to the container mycontainer in the account myaccount. We will also add the metadata type=configfile, which will help us find it in Elasticsearch.

# openio --oio-ns OPENIO --oio-account myaccount object create mycontainer /etc/fstab --property type=configfile
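
If you want to double-check that the property was stored with the object, the OpenIO CLI can display it (a quick verification step):

# openio --oio-ns OPENIO --oio-account myaccount object show mycontainer fstab

The type=configfile property should appear in the output.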

Well done! You've just uploaded the file fstab while, in the background, its metadata was indexed in Elasticsearch.

Now, we'll query Elasticsearch, asking it to find all the objects whose name or properties match configfile:

# curl -XPOST 'http://elastic:changeme@192.168.99.1:9200/myaccount/mycontainer/_search?pretty' -d '
{
  "query": {
    "multi_match" : {
      "query":    "configfile",
      "fields": [ "name", "properties.*" ]
    }
  }
}'

Searching for objects whose name or properties ("fields": [ "name", "properties.*" ]) match configfile ("query": "configfile"), we obtain the following result:

{
  "took" : 147,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.2876821,
    "hits" : [
      {
        "_index" : "myaccount",
        "_type" : "mycontainer",
        "_id" : "AVv0rLyke_i5VY9AuEZo",
        "_score" : 0.2876821,
        "_source" : {
          "hash" : "FB2B5EC6E6BC56CF7D02BE2B3D4AA5BA",
          "ctime" : "1494458539",
          "deleted" : "False",
          "container_id" : "594C8B26EA13E562391013AE6FC360C2C1691F314164DD457EF583B16712E360",
          "properties" : {
            "type" : "configfile"
          },
          "length" : "313",
          "hash_method" : "md5",
          "chunk_method" : "plain/nb_copy=1",
          "version" : "1494458538343358",
          "policy" : "SINGLE",
          "ns" : "OPENIO",
          "id" : "B43B4FBE334F05006C1396B21200CE3B",
          "mime_type" : "application/octet-stream",
          "name" : "fstab"
        }
      }
    ]
  }
}

All right, our newly uploaded file was detected by Elasticsearch as matching the request "query": "configfile".
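
If you prefer to query Elasticsearch from Python rather than with curl, here is a minimal sketch using the same elasticsearch-py client we installed earlier; it assumes the same IP address and default credentials as the rest of this article:

#!/usr/bin/env python
from elasticsearch import Elasticsearch

# Same connection settings as in index-metadata.py
es = Elasticsearch(['http://elastic:changeme@192.168.99.1:9200'])

# Same multi_match query as the curl example above
res = es.search(index="myaccount", doc_type="mycontainer", body={
    "query": {
        "multi_match": {
            "query": "configfile",
            "fields": ["name", "properties.*"]
        }
    }
})

# Print the name and properties of every matching object
for hit in res["hits"]["hits"]:
    print(hit["_source"]["name"], hit["_source"]["properties"])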

Join us on May 24

As I mentioned above, this series of articles will demonstrate our Grid for Apps technology with some interesting use cases (image recognition and manipulation, pattern recognition, and more). So stay tuned!

We are also planning a webinar for May 24, and we’ll give you a glimpse of what you can expect from Grid for Apps in the near future. This will be the chance for you to ask all your questions about how this technology works and how you can implement it in your environment.

Here’s the link to register for the webinar.

Want to know more about OpenIO SDS?

OpenIO SDS is available for testing in four different flavors: Linux packages, the Docker image, a simple ready-to-go virtualized 3-node cluster and Raspberry Pi.

Stay in touch with us and our community through Twitter, our Slack community channel, GitHub, our blog RSS feed, and our web forum to get the latest news and support, and to chat with other users.

What are you waiting for?! Sign up and join us on May 24 for the “Run Applications Directly on the Storage Infrastructure” webinar!