How to Install OpenIO in Standalone Mode to Perform Functional Tests

A step-by-step guide to deploy OpenIO on a single node for testing purposes.
Cédric Delgehier
Ops at OpenIO
@jacknemrod

By default, OpenIO is designed to be installed on a cluster of at least 3 nodes to ensure data security and high availability of the system itself: critical services are replicated on the different nodes so that the platform survives the loss of a machine.

To quickly discover the solution or to perform functional tests (e.g. to validate the proper integration of OpenIO with your applications), it may be useful to deploy an OpenIO platform on a single instance. It can be hosted by a cloud provider or on your workstation, using tools such as VirtualBox. This "single node / standalone" configuration also works on a physical machine.

There is a simple way to test OpenIO in a sandbox: our Docker image. By default, however, this configuration does not retain data when the machine is rebooted (unless a local mount point is created, as described here).
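For reference, a persistent sandbox based on that image might look like the sketch below. The openio/sds image is the one published by OpenIO, but the host directory and the container data path are assumptions; check the image documentation before relying on them.

# Hypothetical persistent sandbox: the host directory and the
# container data path (/var/lib/oio) are assumptions
[root@standalone ~]$ docker run -d --name oio-sds -v /srv/oio-data:/var/lib/oio openio/sds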

With the "single node / standalone" installation described below, which is based on Ansible as in a traditional deployment, data and configurations are persistent. Above all, it is possible to work with multiple disks, whereas the Docker installation allows only one storage medium to be used.

Installation of OpenIO in a standalone mode

Architecture

For this installation, I use an x86_64 virtual machine on OpenStack, running either CentOS 7 or Ubuntu Bionic. The machine has these specifications:

  • 1 CPU

  • 2 GB RAM

  • 1 disk of 20 GB for the system

  • 3 SSD disks of 1, 2 and 3 GB for the data

  • 1 NVMe disk of 1 GB for the metadata

  • 1 single network interface

In addition to being hardware agnostic, OpenIO does not use consistent hashing for data placement. This allows it to handle disks of mixed sizes, and to add disks at a later date without having to rebalance and move existing data.

Preparations

I start by updating my system:

# CentOS 7
[root@standalone ~]$ yum update -y

# Ubuntu Bionic
[root@standalone ~]$ apt update
[root@standalone ~]$ apt upgrade -y

To be able to start the installation, I install git and python-virtualenv:

# CentOS 7
[root@standalone ~]$ yum install python-virtualenv git

# Ubuntu Bionic
[root@standalone ~]$ apt install python-virtualenv git

Then I disable the firewall (firewalld on CentOS, ufw on Ubuntu) and SELinux/AppArmor:

# CentOS 7
[root@standalone ~]$ systemctl disable firewalld
[root@standalone ~]$ sed -i -e 's@^SELINUX=.*@SELINUX=disabled@' /etc/selinux/config


# Ubuntu Bionic
[root@standalone ~]$ systemctl disable ufw
[root@standalone ~]$ echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0"' > /etc/default/grub.d/apparmor.cfg
[root@standalone ~]$ update-grub
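Once the machine has been rebooted (see below), you can verify that these protections are really off. Note that aa-status ships with the AppArmor user-space tools and may need to be installed separately:

# CentOS 7: firewalld should report "disabled", getenforce should print "Disabled"
[root@standalone ~]$ systemctl is-enabled firewalld
[root@standalone ~]$ getenforce

# Ubuntu Bionic: ufw should report "disabled", aa-status should show AppArmor as not loaded
[root@standalone ~]$ systemctl is-enabled ufw
[root@standalone ~]$ aa-status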

I format my storage devices and present them to the system.

[root@standalone ~]$ parted -a optimal /dev/vdb -s mklabel gpt unit TB mkpart primary 0% 100%
[root@standalone ~]$ parted -a optimal /dev/vdc -s mklabel gpt unit TB mkpart primary 0% 100%
[root@standalone ~]$ parted -a optimal /dev/vdd -s mklabel gpt unit TB mkpart primary 0% 100%
[root@standalone ~]$ parted -a optimal /dev/vde -s mklabel gpt unit TB mkpart primary 0% 100%

[root@standalone ~]$ mkfs.xfs -f -L SSD-1 /dev/vdb1
[root@standalone ~]$ mkfs.xfs -f -L SSD-2 /dev/vdc1
[root@standalone ~]$ mkfs.xfs -f -L SSD-3 /dev/vdd1
[root@standalone ~]$ mkfs.xfs -f -L NVME-1 /dev/vde1
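A quick lsblk confirms the partitions, labels, and filesystems just created:

# The FSTYPE and LABEL columns should show xfs and SSD-1/SSD-2/SSD-3/NVME-1
[root@standalone ~]$ lsblk -f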

I add these entries to my /etc/fstab file, after using blkid to find the filesystem UUIDs:

[root@standalone ~]$ blkid
...(truncated)
/dev/vdb1: LABEL="SSD-1" UUID="cb91ed2e-0113-4183-8a10-57dbd93faae7" TYPE="xfs" PARTLABEL="primary" PARTUUID="fbdb0a76-6f63-42ec-86e0-d4d5973e9da1"
/dev/vdc1: LABEL="SSD-2" UUID="678e0118-3317-4430-b56b-410aa11e1495" TYPE="xfs" PARTLABEL="primary" PARTUUID="0bdd17bd-e7a8-490b-92a1-91c6b60f3a84"
/dev/vdd1: LABEL="SSD-3" UUID="91568f07-6757-4f0a-85d6-c5fdf4e07e6d" TYPE="xfs" PARTLABEL="primary" PARTUUID="3040a043-fdf7-4d3c-b6b6-93faa7840dc5"
/dev/vde1: LABEL="NVME-1" UUID="037fd7cf-b241-4584-a19b-c4d1cab480b2" TYPE="xfs" PARTLABEL="primary" PARTUUID="2d3734af-d28e-4c9b-9e3f-8237c0d0eeb2"

[root@standalone ~]$ grep /mnt/ /etc/fstab
UUID=cb91ed2e-0113-4183-8a10-57dbd93faae7 /mnt/data1 xfs defaults,noatime,noexec 0 0
UUID=678e0118-3317-4430-b56b-410aa11e1495 /mnt/data2 xfs defaults,noatime,noexec 0 0
UUID=91568f07-6757-4f0a-85d6-c5fdf4e07e6d /mnt/data3 xfs defaults,noatime,noexec 0 0
UUID=037fd7cf-b241-4584-a19b-c4d1cab480b2 /mnt/metadata1 xfs defaults,noatime,noexec 0 0

and I mount them.

[root@standalone ~]$ mkdir /mnt/{data1,data2,data3,metadata1}
[root@standalone ~]$ mount -a
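If everything mounted correctly, each mount point appears on its own device:

# Each of the four filesystems should show up on its /dev/vdX1 device
[root@standalone ~]$ df -h /mnt/data1 /mnt/data2 /mnt/data3 /mnt/metadata1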

Finally, I reboot so that the SELinux/AppArmor changes take effect.

Installation

Let's start by downloading the Ansible-based deployment playbooks.

[root@standalone ~]$ git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 19.10 oiosds
[root@standalone ~]$ cd oiosds/products/sds/

Ansible itself is installed in a Python virtual environment:

[root@standalone sds]$ virtualenv openio_venv
[root@standalone sds]$ source openio_venv/bin/activate
[root@standalone sds]$ pip install -r ansible.pip

I download the different Ansible roles and modules needed for the installation:

[root@standalone sds]$ ./requirements_install.sh
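A quick sanity check that the virtual environment now provides a working Ansible:

# Should print the Ansible version pinned by ansible.pip
[root@standalone sds]$ ansible --version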

I start customizing the inventory, which describes my single node:

---
all:
  hosts:
    node1:
      ansible_host: 172.30.1.8
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb1
        - mountpoint: /mnt/data2
          partition: /dev/vdc1
        - mountpoint: /mnt/data3
          partition: /dev/vdd1
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vde1
          meta2_count: 2
  vars:
    ansible_user: root
    ansible_connection: local

As my node has very few resources and cannot replicate data to other nodes, I scale the services down.

  children:
    openio:
      hosts:
        node1: {}
      vars:
        namespace: OPENIO
 
        # LESS RESOURCES PART
        openio_account_workers: 1
        openio_oioswift_workers: 1
        namespace_meta1_digits: "1"
        openio_event_agent_workers: 1
        openio_zookeeper_parallel_gc_threads: 1
        openio_zookeeper_memory: "256M"
        openio_minimal_score_for_volume_admin_bootstrap: 5
        openio_minimal_score_for_directory_bootstrap: 5
        # LESS RESOURCES PART
 
        # STANDALONE PART
        namespace_storage_policy: "SINGLE"
        openio_replicas: 1
        openio_namespace_zookeeper_url: ""
        openio_namespace_service_update_policy:
          - name: meta2
            policy: KEEP
            replicas: 1
            distance: 1
          - name: rdir
            policy: KEEP
            replicas: 1
            distance: 1
        # STANDALONE PART
   
        openio_bind_interface: '{{ ansible_default_ipv4.alias }}'
        openio_bind_address: '{{ ansible_default_ipv4.address }}'
        openio_oioswift_users:
          - name: "demo:demo"
            password: "DEMO_PASS"
            roles:
              - admin
        # Credentials for private features
        openio_repositories_credentials: {}
 
### SDS
    account:
      hosts:
        node1: {}
      vars:
        openio_account_redis_standalone: "{{ openio_bind_address }}:6011"
    beanstalkd:
      hosts:
        node1: {}
    conscience:
      hosts:
        node1: {}
    conscience-agent:
      hosts:
        node1: {}
    meta:
      children:
        meta0: {}
        meta1: {}
        meta2: {}
    meta0:
      hosts:
        node1: {}
    meta1:
      hosts:
        node1: {}
    meta2:
      hosts:
        node1: {}
    namespace:
      hosts:
        node1: {}
      vars:
        openio_namespace_conscience_url: "{{ hostvars['node1']['openio_bind_address'] }}:6000"
    oio-blob-indexer:
      hosts:
        node1: {}
    oio-blob-rebuilder:
      hosts:
        node1: {}
    oio-event-agent:
      hosts:
        node1: {}
    oioproxy:
      hosts:
        node1: {}
    oioswift:
      hosts:
        node1: {}
      vars:
        openio_oioswift_pipeline: "{{ pipeline_tempauth }}"
        openio_oioswift_filter_tempauth:
          "{{ {'use': 'egg:oioswift#tempauth'} | combine(openio_oioswift_users | dict_to_tempauth) }}"
    rawx:
      hosts:
        node1: {}
    rdir:
      hosts:
        node1: {}
    redis:
      hosts:
        node1: {}
    sentinel: {}
...
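Before launching the deployment, the inventory can be validated; with ansible_connection set to local, the ping should succeed immediately:

# Display the host/group tree parsed from the inventory
[root@standalone sds]$ ansible-inventory -i inventory.yml --graph

# Confirm Ansible can reach the node
[root@standalone sds]$ ansible -i inventory.yml all -m ping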

If you only want to install the packages (for example, to create an image for a later offline use case), run the install tag alone:

[root@standalone sds]$ ansible-playbook \
-i inventory.yml main.yml -t install

Then, run the installation with this command:

[root@standalone sds]$ ansible-playbook \
-i inventory.yml main.yml \
-e "openio_bootstrap=true" -e "openio_maintenance_mode=false" \
--skip-tags checks

This takes a few minutes to complete, depending on your internet bandwidth, because it downloads and installs packages.

Finally, I can check my installation by running the script /root/checks.sh.
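Beyond the check script, a quick functional test exercises the whole stack with the demo:demo account defined in the inventory. The sketch below assumes the Swift gateway listens on port 6007 (the usual oioswift default); adjust the IP and port to your deployment:

# List registered services and their scores
[root@standalone sds]$ openio cluster list --oio-ns OPENIO

# Request a token from tempauth (port 6007 is an assumption)
[root@standalone sds]$ curl -i http://172.30.1.8:6007/auth/v1.0 \
    -H "X-Auth-User: demo:demo" -H "X-Auth-Key: DEMO_PASS"

# Reuse the returned X-Storage-Url and X-Auth-Token headers
[root@standalone sds]$ curl -i -X PUT "$STORAGE_URL/mycontainer" -H "X-Auth-Token: $TOKEN"
[root@standalone sds]$ curl -i -X PUT "$STORAGE_URL/mycontainer/hello.txt" \
    -H "X-Auth-Token: $TOKEN" --data-binary "Hello OpenIO"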

