20.04 Instant scaling

Learn how to deploy OpenIO 20.04 and how to set up instant scaling
Cédric Delgehier
Ops at OpenIO
@jacknemrod

This article will show you how to deploy OpenIO 20.04 and how to set up instant scaling.

To do this, we will deploy a demonstration environment using Vagrant.

We will demonstrate:

  • How to deploy OpenIO SDS version 20.04
  • How to add a disk to a server that is already part of the cluster, even if it is not the same size as the existing ones
  • How to add a new server to an existing cluster
  • How to ensure high availability with a load balancer such as Traefik
[Figure: instant scaling overview schema]

Deployment

The reproducible environment is available at this address: https://github.com/open-io/oio-user-guides/tree/master/blog/instant_scale. It only requires Vagrant and VirtualBox.
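If the repository is not already on your machine, cloning it and moving into the environment directory is enough (the environment lives under blog/instant_scale in that repository):

[me@laptop ~]$ git clone https://github.com/open-io/oio-user-guides.git
[me@laptop ~]$ cd oio-user-guides/blog/instant_scale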

[me@laptop ~]$ vagrant up

We will use the load balancing server to install OpenIO SDS 20.04.

[me@laptop ~]$ vagrant ssh lb
[vagrant@lb ~]$ cd oiosds/products/sds/
[vagrant@lb sds]$ . openio_venv/bin/activate
(openio_venv) [vagrant@lb sds]$ ./requirements_install.sh
(openio_venv) [vagrant@lb sds]$ ./deploy_and_bootstrap.sh

While the solution is deploying, let’s take a closer look at the inventory:

Each server has 3 data disks and 1 metadata disk, except node1, which has an additional data disk; this kind of asymmetry is not a problem. It is also interesting to note that the disks do not need to be identical: they can be of different sizes, generations, or even brands.

node1:
   ansible_host: 192.168.4.10
   # Customized openio devices definition
   openio_data_mounts:
     - { 'partition': '/dev/loop0', 'mountpoint': '/mnt/disk0' }
     - { 'partition': '/dev/loop1', 'mountpoint': '/mnt/disk1' }
     - { 'partition': '/dev/loop2', 'mountpoint': '/mnt/disk2' }
     - { 'partition': '/dev/loop3', 'mountpoint': '/mnt/disk3' }  # additional disk
   openio_metadata_mounts:
     - { 'partition': '/dev/loop4', 'mountpoint': '/mnt/disk4', 'meta2_count': 2 }

We enable the multiple ‘conscience’ mechanism, available since version 19.04, so we do not have to rely on Traefik to make the conscience highly available.

openio_conscience_multiple_enable: true

We reduce the service refresh intervals so that the scaling is visible more quickly.

openio_conscience_services_common_timeout: 5
openio_conscienceagent_check_interval: 1
openio_namespace_options:
  - "proxy.period.cs.upstream=1"
  - "proxy.period.cs.downstream=2"

Finally, we protect the data with Erasure Coding 6+3.

namespace_storage_policy: "ECISAL63D1"
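With a 6+3 scheme, each object is split into 6 data chunks plus 3 parity chunks; any 6 of the 9 chunks are enough to rebuild the object, so up to 3 chunks can be lost while keeping the data readable, for a storage overhead of 9/6 = 1.5x (versus 3x for triple replication).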

Once the deployment is finished, you can ensure that the service is functional by creating your first bucket.

(openio_venv) [vagrant@lb sds]$ aws  --endpoint-url https://vagrant.demo.openio.io  s3 mb s3://bucketname
make_bucket: bucketname
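You can also list the buckets of the account to double-check that the gateway answers (the credentials used by the AWS CLI are assumed to have been set up by the deployment scripts, as in this demo environment):

(openio_venv) [vagrant@lb sds]$ aws --endpoint-url https://vagrant.demo.openio.io s3 ls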

The Traefik dashboard is available at: https://localhost:8081/dashboard/

Adding a data disk to an existing server

We deployed a rawx service per disk as shown here:

[me@laptop ~]$  vagrant ssh node3
[vagrant@node3 ~]$ sudo -i
[root@node3 ~]# openio cluster list rawx
+------+-------------------+-------------------+--------------------------+----------+-------+------+-------+--------+
| Type | Addr              | Service Id        | Volume                   | Location | Slots | Up   | Score | Locked |
+------+-------------------+-------------------+--------------------------+----------+-------+------+-------+--------+
| rawx | 192.168.4.30:6201 | 192.168.4.30:6201 | /mnt/disk1/OPENIO/rawx-1 | node3.1  | rawx  | True |    99 | False  |
| rawx | 192.168.4.30:6200 | 192.168.4.30:6200 | /mnt/disk0/OPENIO/rawx-0 | node3.0  | rawx  | True |    99 | False  |
| rawx | 192.168.4.30:6202 | 192.168.4.30:6202 | /mnt/disk2/OPENIO/rawx-2 | node3.2  | rawx  | True |    99 | False  |
| rawx | 192.168.4.20:6201 | 192.168.4.20:6201 | /mnt/disk1/OPENIO/rawx-1 | node2.1  | rawx  | True |    99 | False  |
| rawx | 192.168.4.20:6202 | 192.168.4.20:6202 | /mnt/disk2/OPENIO/rawx-2 | node2.2  | rawx  | True |    99 | False  |
| rawx | 192.168.4.20:6200 | 192.168.4.20:6200 | /mnt/disk0/OPENIO/rawx-0 | node2.0  | rawx  | True |    99 | False  |
| rawx | 192.168.4.10:6200 | 192.168.4.10:6200 | /mnt/disk0/OPENIO/rawx-0 | node1.0  | rawx  | True |    99 | False  |
| rawx | 192.168.4.10:6202 | 192.168.4.10:6202 | /mnt/disk2/OPENIO/rawx-2 | node1.2  | rawx  | True |    99 | False  |
| rawx | 192.168.4.10:6203 | 192.168.4.10:6203 | /mnt/disk3/OPENIO/rawx-3 | node1.3  | rawx  | True |    99 | False  |
| rawx | 192.168.4.10:6201 | 192.168.4.10:6201 | /mnt/disk1/OPENIO/rawx-1 | node1.1  | rawx  | True |    99 | False  |
+------+-------------------+-------------------+--------------------------+----------+-------+------+-------+--------+

[root@node3 ~]# gridinit_cmd status -c @rawx
KEY           STATUS      PID GROUP
OPENIO-rawx-0 UP         6684 OPENIO,rawx,0
OPENIO-rawx-1 UP         6693 OPENIO,rawx,1
OPENIO-rawx-2 UP         6702 OPENIO,rawx,2

To add a 4th disk to our node3 server, we replace the inventory with another one prepared for this purpose.

(openio_venv) [vagrant@lb sds]$ cp inventory_add_1disk.yml inventory.yml
(openio_venv) [vagrant@lb sds]$ ansible-playbook main.yml -e '{"openio_maintenance_mode":false}' --limit node3 --tags sds
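For reference, the node3 entry in this inventory simply gains one more line in openio_data_mounts. A minimal sketch of what it might look like (the loop-device names and the metadata mount are assumptions; the new mountpoint matches the cluster listing below):

node3:
   ansible_host: 192.168.4.30
   openio_data_mounts:
     - { 'partition': '/dev/loop0', 'mountpoint': '/mnt/disk0' }
     - { 'partition': '/dev/loop1', 'mountpoint': '/mnt/disk1' }
     - { 'partition': '/dev/loop2', 'mountpoint': '/mnt/disk2' }
     - { 'partition': '/dev/loop4', 'mountpoint': '/mnt/disk4' }  # additional disk
   openio_metadata_mounts:
     - { 'partition': '/dev/loop3', 'mountpoint': '/mnt/disk3', 'meta2_count': 2 }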

Now we need to restart the local ‘conscience’ agent so that the new disk is registered with the conscience.

[root@node3 ~]# gridinit_cmd restart @conscienceagent
DONE    	OPENIO-conscienceagent-0	Success

[root@node3 ~]# openio cluster list | grep node3.3
| rawx       | 192.168.4.30:6203 | 192.168.4.30:6203                    | /mnt/disk4/OPENIO/rawx-3       | node3.3  | rawx       | True |     0 | True   |
| rdir       | 192.168.4.30:6303 | n/a                                  | /mnt/disk4/OPENIO/rdir-3       | node3.3  | rdir       | True |     0 | True   |

The new services are not yet part of the data flow: they are locked with a score of 0. We first need to secure our new rawx by attaching it to an rdir service, which keeps the reverse index of its chunks. For example:

[root@node3 ~]# openio rdir assignments rawx
+-------------------+-------------------+---------------+---------------+
| Rdir              | Rawx              | Rdir location | Rawx location |
+-------------------+-------------------+---------------+---------------+
| 192.168.4.10:6300 | 192.168.4.30:6201 | node1.0       | node3.1       |
| 192.168.4.10:6302 | 192.168.4.20:6202 | node1.2       | node2.2       |
| 192.168.4.20:6300 | 192.168.4.10:6201 | node2.0       | node1.1       |
| 192.168.4.20:6300 | 192.168.4.30:6200 | node2.0       | node3.0       |
| 192.168.4.20:6301 | 192.168.4.10:6200 | node2.1       | node1.0       |
| 192.168.4.20:6302 | 192.168.4.10:6203 | node2.2       | node1.3       |
| 192.168.4.20:6302 | 192.168.4.30:6202 | node2.2       | node3.2       |
| 192.168.4.30:6300 | 192.168.4.20:6200 | node3.0       | node2.0       |
| 192.168.4.30:6301 | 192.168.4.20:6201 | node3.1       | node2.1       |
| 192.168.4.30:6302 | 192.168.4.10:6202 | node3.2       | node1.2       |
| n/a               | 192.168.4.30:6203 | None          | node3.3       |
+-------------------+-------------------+---------------+---------------+
[root@node3 ~]# openio rdir bootstrap rawx
+-------------------+-------------------+---------------+---------------+
| Rdir              | Rawx              | Rdir location | Rawx location |
+-------------------+-------------------+---------------+---------------+
| 192.168.4.10:6300 | 192.168.4.30:6201 | node1.0       | node3.1       |
| 192.168.4.10:6302 | 192.168.4.20:6202 | node1.2       | node2.2       |
| 192.168.4.10:6303 | 192.168.4.30:6203 | node1.3       | node3.3       |
| 192.168.4.20:6300 | 192.168.4.10:6201 | node2.0       | node1.1       |
| 192.168.4.20:6300 | 192.168.4.30:6200 | node2.0       | node3.0       |
| 192.168.4.20:6301 | 192.168.4.10:6200 | node2.1       | node1.0       |
| 192.168.4.20:6302 | 192.168.4.10:6203 | node2.2       | node1.3       |
| 192.168.4.20:6302 | 192.168.4.30:6202 | node2.2       | node3.2       |
| 192.168.4.30:6300 | 192.168.4.20:6200 | node3.0       | node2.0       |
| 192.168.4.30:6301 | 192.168.4.20:6201 | node3.1       | node2.1       |
| 192.168.4.30:6302 | 192.168.4.10:6202 | node3.2       | node1.2       |
+-------------------+-------------------+---------------+---------------+
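Note that the previously unassigned rawx (192.168.4.30:6203) is now indexed by the rdir on node1.3.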

Now we can add our two new services to the flow by unlocking them. They obtain a score from the conscience (here it is 99) and fill up according to that score.

[root@node3 ~]# openio cluster unlockall
[root@node3 ~]# openio cluster list | grep node3.3
| rawx       | 192.168.4.30:6203 | 192.168.4.30:6203                    | /mnt/disk4/OPENIO/rawx-3       | node3.3  | rawx       | True |    99 | False  |
| rdir       | 192.168.4.30:6303 | n/a                                  | /mnt/disk4/OPENIO/rdir-3       | node3.3  | rdir       | True |    99 | False  |

If we push data into the bucket created earlier, we can see that the new disk receives many data chunks.

(openio_venv) [vagrant@lb sds]$ aws  --endpoint-url https://vagrant.demo.openio.io  s3 sync /etc s3://bucketname

[root@node3 ~]# find /mnt/disk4/OPENIO/rawx-3 -type f | head
/mnt/disk4/OPENIO/rawx-3/C8F/C8FF5B868BE4371E8B523E7C245543E223DB276AE5A28C9D11C74A4415DE78A1
/mnt/disk4/OPENIO/rawx-3/841/841465319BC0C32739B97EFE255DDC4E7AC9140A3A8FC81ED0B71E51C44891EC
/mnt/disk4/OPENIO/rawx-3/B70/B70FB42F33773A5D33854F03E2D722D650E018F098F38FD00F7B1C775AB73F8E
/mnt/disk4/OPENIO/rawx-3/8FB/8FB2A88DC05FBAACF8250F126EFCA34A23F004B21F8B0FF3223EF74C70360866
/mnt/disk4/OPENIO/rawx-3/8FB/8FBB4ADE049410F4F2A3C97BF1DDEFA0E75F951D2142A508D4ED4490C54948B3
/mnt/disk4/OPENIO/rawx-3/511/511E2C35E1CB6354207FFB936D7423201B2782650411C452B31F95EB3D37DCCA
/mnt/disk4/OPENIO/rawx-3/F46/F46A70721E12DBB71F6D723564294E063CB54B69B18A44771CC032B3C255F5BF
/mnt/disk4/OPENIO/rawx-3/835/83577842D50F12F97A60DB952AF84D3E197C12ACFC3E4D60653DA13910814AA9
/mnt/disk4/OPENIO/rawx-3/FCA/FCA891A6E540E39EA2E608C99D1C5FE209A2CF2431DF2D2D5EE6604B58DA9261
/mnt/disk4/OPENIO/rawx-3/177/177629B971F91176583EC7A14D82A96E2820FE87E002C73882AE4F59CD535DA
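To get an idea of how the new volume fills up compared to the older ones, a simple chunk count per rawx volume is enough (plain shell, nothing OpenIO-specific):

[root@node3 ~]# for vol in /mnt/disk*/OPENIO/rawx-*; do printf '%s: ' "$vol"; find "$vol" -type f | wc -l; done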

Adding a server to an existing cluster

Storage platforms often need to grow, but the allocated budget is not always sufficient to scale in large steps.

With OpenIO SDS, it is possible to scale little by little without having to redeploy a new cluster on the side. There is also no need to rebalance existing data when a new server joins the cluster.

(openio_venv) [vagrant@lb sds]$ cp inventory_add_1disk_1server.yml inventory.yml
(openio_venv) [vagrant@lb sds]$ ./deploy.sh
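While the new server deploys, note that the node4 entry added to the inventory looks much like the other nodes. A minimal sketch (the loop-device names are assumptions):

node4:
   ansible_host: 192.168.4.40
   openio_data_mounts:
     - { 'partition': '/dev/loop0', 'mountpoint': '/mnt/disk0' }
     - { 'partition': '/dev/loop1', 'mountpoint': '/mnt/disk1' }
     - { 'partition': '/dev/loop2', 'mountpoint': '/mnt/disk2' }
   openio_metadata_mounts:
     - { 'partition': '/dev/loop3', 'mountpoint': '/mnt/disk3', 'meta2_count': 2 }

Once the playbook has finished, the node4 services appear in the cluster; its storage services (account, meta2, rawx, rdir) are still locked with a score of 0: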
[me@laptop ~]$  vagrant ssh node4
[vagrant@node4 ~]$ sudo -i
[root@node4 ~]# openio cluster list |grep node4
| account    | 192.168.4.40:6009 | n/a                                  | n/a                            | node4.0  | account    | True |     0 | True   |
| beanstalkd | 192.168.4.40:6014 | n/a                                  | /mnt/disk3/OPENIO/beanstalkd-0 | node4.0  | beanstalkd | True |    98 | False  |
| meta2      | 192.168.4.40:6121 | n/a                                  | /mnt/disk3/OPENIO/meta2-1      | node4.1  | meta2      | True |     0 | True   |
| meta2      | 192.168.4.40:6120 | n/a                                  | /mnt/disk3/OPENIO/meta2-0      | node4.0  | meta2      | True |     0 | True   |
| oioproxy   | 192.168.4.40:6006 | n/a                                  | n/a                            | node4.0  | oioproxy   | True |    97 | False  |
| oioswift   | 192.168.4.40:6007 | b1f205b3-50a5-5373-acea-e8ee6001a4ce | n/a                            | node4.0  | oioswift   | True |    98 | False  |
| rawx       | 192.168.4.40:6201 | 192.168.4.40:6201                    | /mnt/disk1/OPENIO/rawx-1       | node4.1  | rawx       | True |     0 | True   |
| rawx       | 192.168.4.40:6200 | 192.168.4.40:6200                    | /mnt/disk0/OPENIO/rawx-0       | node4.0  | rawx       | True |     0 | True   |
| rawx       | 192.168.4.40:6202 | 192.168.4.40:6202                    | /mnt/disk2/OPENIO/rawx-2       | node4.2  | rawx       | True |     0 | True   |
| rdir       | 192.168.4.40:6301 | n/a                                  | /mnt/disk1/OPENIO/rdir-1       | node4.1  | rdir       | True |     0 | True   |
| rdir       | 192.168.4.40:6300 | n/a                                  | /mnt/disk0/OPENIO/rdir-0       | node4.0  | rdir       | True |     0 | True   |
| rdir       | 192.168.4.40:6302 | n/a                                  | /mnt/disk2/OPENIO/rdir-2       | node4.2  | rdir       | True |     0 | True   |

Our new server is deployed and ready to take on its role.

We must not forget to secure the data and metadata by assigning the new rawx and meta2 services to rdir services.

[root@node4 ~]# openio rdir bootstrap rawx 
+-------------------+-------------------+---------------+---------------+
| Rdir              | Rawx              | Rdir location | Rawx location |
+-------------------+-------------------+---------------+---------------+
| 192.168.4.10:6300 | 192.168.4.30:6201 | node1.0       | node3.1       |
| 192.168.4.10:6300 | 192.168.4.40:6200 | node1.0       | node4.0       |
| 192.168.4.10:6302 | 192.168.4.20:6202 | node1.2       | node2.2       |
| 192.168.4.10:6303 | 192.168.4.30:6203 | node1.3       | node3.3       |
| 192.168.4.10:6303 | 192.168.4.40:6201 | node1.3       | node4.1       |
| 192.168.4.20:6300 | 192.168.4.10:6201 | node2.0       | node1.1       |
| 192.168.4.20:6300 | 192.168.4.30:6200 | node2.0       | node3.0       |
| 192.168.4.20:6301 | 192.168.4.10:6200 | node2.1       | node1.0       |
| 192.168.4.20:6302 | 192.168.4.10:6203 | node2.2       | node1.3       |
| 192.168.4.20:6302 | 192.168.4.30:6202 | node2.2       | node3.2       |
| 192.168.4.30:6300 | 192.168.4.20:6200 | node3.0       | node2.0       |
| 192.168.4.30:6301 | 192.168.4.20:6201 | node3.1       | node2.1       |
| 192.168.4.30:6301 | 192.168.4.40:6202 | node3.1       | node4.2       |
| 192.168.4.30:6302 | 192.168.4.10:6202 | node3.2       | node1.2       |
+-------------------+-------------------+---------------+---------------+
[root@node4 ~]# openio rdir bootstrap meta2
+-------------------+-------------------+---------------+----------------+
| Rdir              | Meta2             | Rdir location | Meta2 location |
+-------------------+-------------------+---------------+----------------+
| 192.168.4.10:6301 | 192.168.4.30:6121 | node1.1       | node3.1        |
| 192.168.4.10:6302 | 192.168.4.40:6120 | node1.2       | node4.0        |
| 192.168.4.10:6303 | 192.168.4.30:6120 | node1.3       | node3.0        |
| 192.168.4.20:6301 | 192.168.4.10:6121 | node2.1       | node1.1        |
| 192.168.4.30:6300 | 192.168.4.10:6120 | node3.0       | node1.0        |
| 192.168.4.30:6300 | 192.168.4.20:6121 | node3.0       | node2.1        |
| 192.168.4.30:6302 | 192.168.4.20:6120 | node3.2       | node2.0        |
| 192.168.4.30:6302 | 192.168.4.40:6121 | node3.2       | node4.1        |
+-------------------+-------------------+---------------+----------------+
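The two meta2 services of node4 are now indexed by rdir services as well (node1.2 and node3.2 in this run).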

Now we add it to the flow by unlocking its services.

[root@node4 ~]# openio cluster unlockall
+------------+-------------------+----------+
| Type       | Service           | Result   |
+------------+-------------------+----------+
| account    | 192.168.4.40:6009 | unlocked |
| account    | 192.168.4.30:6009 | unlocked |
| account    | 192.168.4.20:6009 | unlocked |
| account    | 192.168.4.10:6009 | unlocked |
| beanstalkd | 192.168.4.40:6014 | unlocked |
| beanstalkd | 192.168.4.30:6014 | unlocked |
| beanstalkd | 192.168.4.20:6014 | unlocked |
| beanstalkd | 192.168.4.10:6014 | unlocked |
| meta0      | 192.168.4.30:6001 | unlocked |
| meta0      | 192.168.4.20:6001 | unlocked |
| meta0      | 192.168.4.10:6001 | unlocked |
| meta1      | 192.168.4.30:6110 | unlocked |
| meta1      | 192.168.4.20:6110 | unlocked |
| meta1      | 192.168.4.10:6110 | unlocked |
| meta2      | 192.168.4.40:6121 | unlocked |
| meta2      | 192.168.4.40:6120 | unlocked |
| meta2      | 192.168.4.30:6120 | unlocked |
| meta2      | 192.168.4.30:6121 | unlocked |
| meta2      | 192.168.4.20:6121 | unlocked |
| meta2      | 192.168.4.20:6120 | unlocked |
| meta2      | 192.168.4.10:6120 | unlocked |
| meta2      | 192.168.4.10:6121 | unlocked |
| oioproxy   | 192.168.4.40:6006 | unlocked |
| oioproxy   | 192.168.4.30:6006 | unlocked |
| oioproxy   | 192.168.4.20:6006 | unlocked |
| oioproxy   | 192.168.4.10:6006 | unlocked |
| oioswift   | 192.168.4.40:6007 | unlocked |
| oioswift   | 192.168.4.30:6007 | unlocked |
| oioswift   | 192.168.4.10:6007 | unlocked |
| oioswift   | 192.168.4.20:6007 | unlocked |
| rawx       | 192.168.4.40:6201 | unlocked |
| rawx       | 192.168.4.40:6200 | unlocked |
| rawx       | 192.168.4.40:6202 | unlocked |
| rawx       | 192.168.4.30:6203 | unlocked |
| rawx       | 192.168.4.30:6201 | unlocked |
| rawx       | 192.168.4.30:6200 | unlocked |
| rawx       | 192.168.4.30:6202 | unlocked |
| rawx       | 192.168.4.20:6201 | unlocked |
| rawx       | 192.168.4.20:6202 | unlocked |
| rawx       | 192.168.4.20:6200 | unlocked |
| rawx       | 192.168.4.10:6200 | unlocked |
| rawx       | 192.168.4.10:6202 | unlocked |
| rawx       | 192.168.4.10:6203 | unlocked |
| rawx       | 192.168.4.10:6201 | unlocked |
| rdir       | 192.168.4.40:6301 | unlocked |
| rdir       | 192.168.4.40:6300 | unlocked |
| rdir       | 192.168.4.40:6302 | unlocked |
| rdir       | 192.168.4.30:6303 | unlocked |
| rdir       | 192.168.4.30:6301 | unlocked |
| rdir       | 192.168.4.30:6300 | unlocked |
| rdir       | 192.168.4.30:6302 | unlocked |
| rdir       | 192.168.4.20:6301 | unlocked |
| rdir       | 192.168.4.20:6302 | unlocked |
| rdir       | 192.168.4.20:6300 | unlocked |
| rdir       | 192.168.4.10:6302 | unlocked |
| rdir       | 192.168.4.10:6300 | unlocked |
| rdir       | 192.168.4.10:6303 | unlocked |
| rdir       | 192.168.4.10:6301 | unlocked |
+------------+-------------------+----------+
[root@node4 ~]# openio cluster list |grep node4
| account    | 192.168.4.40:6009 | n/a                                  | n/a                            | node4.0  | account    | True |    99 | False  |
| beanstalkd | 192.168.4.40:6014 | n/a                                  | /mnt/disk3/OPENIO/beanstalkd-0 | node4.0  | beanstalkd | True |    99 | False  |
| meta2      | 192.168.4.40:6121 | n/a                                  | /mnt/disk3/OPENIO/meta2-1      | node4.1  | meta2      | True |    99 | False  |
| meta2      | 192.168.4.40:6120 | n/a                                  | /mnt/disk3/OPENIO/meta2-0      | node4.0  | meta2      | True |    99 | False  |
| oioproxy   | 192.168.4.40:6006 | n/a                                  | n/a                            | node4.0  | oioproxy   | True |    99 | False  |
| oioswift   | 192.168.4.40:6007 | b1f205b3-50a5-5373-acea-e8ee6001a4ce | n/a                            | node4.0  | oioswift   | True |    99 | False  |
| rawx       | 192.168.4.40:6201 | 192.168.4.40:6201                    | /mnt/disk1/OPENIO/rawx-1       | node4.1  | rawx       | True |    99 | False  |
| rawx       | 192.168.4.40:6200 | 192.168.4.40:6200                    | /mnt/disk0/OPENIO/rawx-0       | node4.0  | rawx       | True |    99 | False  |
| rawx       | 192.168.4.40:6202 | 192.168.4.40:6202                    | /mnt/disk2/OPENIO/rawx-2       | node4.2  | rawx       | True |    99 | False  |
| rdir       | 192.168.4.40:6301 | n/a                                  | /mnt/disk1/OPENIO/rdir-1       | node4.1  | rdir       | True |    99 | False  |
| rdir       | 192.168.4.40:6300 | n/a                                  | /mnt/disk0/OPENIO/rdir-0       | node4.0  | rdir       | True |    99 | False  |
| rdir       | 192.168.4.40:6302 | n/a                                  | /mnt/disk2/OPENIO/rdir-2       | node4.2  | rdir       | True |    99 | False  |

We can check that data actually lands on this new server by pushing some objects.

(openio_venv) [vagrant@lb sds]$ aws  --endpoint-url https://vagrant.demo.openio.io  s3 mb s3://4thserver 
make_bucket: 4thserver
(openio_venv) [vagrant@lb sds]$ aws  --endpoint-url https://vagrant.demo.openio.io  s3 sync /etc s3://4thserver

[root@node4 ~]# find /mnt/disk1/OPENIO/rawx-1 -type f -ls | head
/mnt/disk1/OPENIO/rawx-1/A99/A998FD16C77943E8CE4A288F610E2A00B1116E88CEC26AFB8D8B15ACFDF90CA0
/mnt/disk1/OPENIO/rawx-1/CB4/CB40D0AEB42F28E096D3ACC3F69D21241DA5734DF5C8C6ADEC2E9A0DAF70BC70
/mnt/disk1/OPENIO/rawx-1/745/7455ECEE698D2BB51D2EFD1D88DEBD7E6A6D9E3152AF34E43E09879F0EC1AE72
/mnt/disk1/OPENIO/rawx-1/C8F/C8FC90FFE5023AE6E23FA5F50E5481D48FB785CDA13C0E3E941B8930CE7914F8
/mnt/disk1/OPENIO/rawx-1/EDE/EDE772BB09862B383F380A3B2677CB9AA42D6F52C63455F2CED5520B289EE5F9
/mnt/disk1/OPENIO/rawx-1/309/3093662384CF9CB582DC429182F76584363BE2F1A2B42978951B6A2E52F20F1E
/mnt/disk1/OPENIO/rawx-1/F55/F55135C2B0A99FB593D036E33C2466D37AFBBEF2E562A4F292E24819AB07219A
/mnt/disk1/OPENIO/rawx-1/589/589158F576254347A9D942002C2A677F66B8D0707CF0139026B91559389131C6
/mnt/disk1/OPENIO/rawx-1/E69/E69172B74549929F295ED37384CAC2E786B021C68DE5AB8EC46D5480DE02590D
/mnt/disk1/OPENIO/rawx-1/E69/E69FF7E48D85EF236A2E99C98B432C27520FCA636B8754A44C025D1C9B40365F

One last note: this server also hosts an oioswift service.

This service handles the S3 gateway part of our stack, so we have to tell Traefik that it can also load balance on it.

Just add 192.168.4.40 to the s3-gateway-svc service:

(openio_venv) [vagrant@lb sds]$ sudo -i
(openio_venv) [root@lb ~]# vi /etc/traefik.d/openio.yml
...
services:
  s3-gateway-svc:
    loadBalancer:
      healthCheck:
        path: /healthcheck
        interval: "10s"
        timeout: "3s"
      servers:
        - url: http://192.168.4.10:6007/
        - url: http://192.168.4.20:6007/
        - url: http://192.168.4.30:6007/
        - url: http://192.168.4.40:6007/


(openio_venv) [root@lb ~]# systemctl restart traefik.service  
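You can also check that the new backend answers on the health-check path declared in the configuration above; it should return HTTP 200 if the gateway is up:

(openio_venv) [root@lb ~]# curl -s -o /dev/null -w '%{http_code}\n' http://192.168.4.40:6007/healthcheck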

So now you know how to deploy OpenIO 20.04! As we have shown, it is simple to do, and most importantly, the new disks and servers that you add are recognized in real time. This is what we call 'instant scaling'; you can read more about it in a blog post dedicated to the subject. Let me know if you have questions, I'd be happy to discuss!
