How to export Gluster volume (via Gluster-block) as an S3 compliant object store

Vinayak Hariharmath
Dec 2, 2020

Gluster is a highly scalable, distributed filesystem, well known in the storage world for its rich set of data management services and its stability. With so much potential, Gluster can naturally serve as the backend for a distributed object store. So I was thinking of clubbing the available pieces together and building an object storage service on top of Gluster, effectively utilizing the power of distributed storage and its services. The idea is to use Gluster for the data management services and Minio as the S3 endpoint. To make this happen, I chose the combination of Gluster-block (to export the Gluster volume as block storage) + the Minio object store on top of it.

I have divided this write-up into 3 parts:

  1. Build Gluster volume
  2. Export Gluster volume as block storage using Gluster-block
  3. Launch Minio server using Gluster block storage

So let's start:

Part 1: Build Gluster volume

Bringing up a Gluster volume is very simple and can be done on a single node. (Of course, a multinode setup is preferred to get the most out of Gluster, but for this experiment I am using a single-node setup. If you are interested in a multinode/replicated/feature-rich setup, please refer to the Gluster quick start guide.) The steps below bring up a simple distributed volume with 4 bricks on a single node; a quick sanity check of the volume is shown right after the steps.

1. # dnf -y install glusterfs-server

2. # systemctl start glusterd.service

3. # systemctl status glusterd.service

4. # gluster volume create testvol 127.0.0.2:/bricks/brick{1..4} force

Note: As we are running the Gluster volume on a single-node setup, I am using a loopback IP address here; testvol is the volume name.

5. # gluster volume start testvol

6. # gluster vol status

7. # gluster vol info
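If you want to sanity-check the volume before moving on, a quick FUSE mount works. This is just a sketch (the mount point below is something I made up for the test, and it needs the glusterfs-fuse bits, which usually come along with the server packages):

# mkdir -p /mnt/testvol-check
# mount -t glusterfs 127.0.0.2:/testvol /mnt/testvol-check
# echo "hello gluster" > /mnt/testvol-check/hello.txt
# umount /mnt/testvol-check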

Part 2: Export Gluster volume as block storage using Gluster-block

We have a very neat way of exporting the Gluster volume created above (testvol) as block storage, using the Gluster-block project. Now let's jump in, create the block storage, and export it as an iSCSI block device.

1. # git clone https://github.com/gluster/gluster-block.git

2. # cd gluster-block/

3. # dnf -y install gcc autoconf automake make file libtool libuuid-devel json-c-devel glusterfs-api-devel glusterfs-server tcmu-runner targetcli

Note: The configure flags depend on your distribution's RPC implementation.

On Fedora 27 and CentOS 7 (which use the legacy glibc RPC), pass the '--enable-tirpc=no' flag at configure time:
# ./autogen.sh && ./configure --enable-tirpc=no && make -j install

On Fedora 28 and higher (which use TIRPC), in addition to the packages above, we should also install:
# dnf install rpcgen libtirpc-devel

and then pass the '--enable-tirpc=yes' flag (the default) or nothing at configure time:
# ./autogen.sh && ./configure && make -j install

4. # systemctl daemon-reload

5. # systemctl start gluster-blockd

6. # systemctl status gluster-blockd
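Note: gluster-blockd relies on tcmu-runner and the LIO target stack underneath, so if the service does not come up cleanly, it is worth checking those as well. Two quick checks (exact service names may vary slightly across distributions):

# systemctl status tcmu-runner
# targetcli ls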

7. # gluster-block create testvol/block-volume 127.0.0.1 5GiB --json-pretty

Note: testvol is the Gluster volume created in Part 1

8. # gluster-block list testvol --json-pretty

9. # gluster-block info testvol/block-volume --json-pretty
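On a multinode setup, gluster-block can also create the block device with multipath-based high availability by passing an 'ha' count and a comma-separated list of hosts. Just as a hedged sketch (the host names are made up, and this does not apply to the single-node setup used here):

# gluster-block create testvol/ha-block ha 3 host1,host2,host3 5GiB --json-pretty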

10. Install the iSCSI initiator utilities

# dnf -y install iscsi-initiator-utils

11. # systemctl start iscsid.service

12. # systemctl status iscsid.service

13. Log in to the iSCSI target/block device

# iscsiadm -m discovery -t st -p 127.0.0.1 -l

Note: I am using the loopback address here. Feel free to use your system's IP address, depending on your configuration.
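The '-l' at the end discovers and logs in to the target in one shot. If you prefer to inspect the target before logging in, the same can be done in two steps:

# iscsiadm -m discovery -t st -p 127.0.0.1
# iscsiadm -m node -l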

14. # lsblk

15. # mkfs.xfs /dev/sdb

Note: The new iSCSI disk showed up as /dev/sdb on my system; confirm the device name from the lsblk output above before formatting.

16. # mkdir /mnt/gluster-blk-storage

17. # mount -t xfs /dev/sdb /mnt/gluster-blk-storage/

18. # df -Th
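If you want this mount to survive a reboot, the usual approach is an /etc/fstab entry with the _netdev option, so that mounting waits for the network and the iSCSI session. A rough sketch for my setup (in practice, use the filesystem UUID from 'blkid' instead of /dev/sdb, since device names can change across reboots):

# echo '/dev/sdb /mnt/gluster-blk-storage xfs _netdev 0 0' >> /etc/fstab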

Part 3: Launch Minio server using Gluster block storage

To start with, I just love Minio for its simplicity; it opened up a whole new way of looking at storage for me. The Minio server binary simply takes a directory/mount point as an argument and exports it as an object store. How cool is that!! Here are the steps to get Minio going.

1. # cd .. && mkdir minio && cd minio

2. # wget https://dl.min.io/server/minio/release/linux-amd64/minio

3. # chmod +x minio && cp minio /usr/local/bin/.

4. # minio server /mnt/gluster-blk-storage/

Note: Keep the Minio server process running in this terminal; stop it only when you want to shut down the object server.
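Instead of relying on the auto-generated credentials printed at startup, you can set your own before launching the server. At the time of writing, the environment variables were MINIO_ACCESS_KEY/MINIO_SECRET_KEY; newer Minio releases use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD instead, so check your version. The values below are just placeholders:

# export MINIO_ACCESS_KEY=myaccesskey
# export MINIO_SECRET_KEY=mysecretkey123
# minio server /mnt/gluster-blk-storage/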

5. Open "http://192.168.56.101:9000" in your browser (192.168.56.101 is my machine's IP address; use your own server's IP here)

6. Log in with the default credentials printed when the Minio server starts, create a bucket, and upload a file

7. Now go back to the terminal and observe (with df -Th) that the free space on "/dev/sdb" has gone down by roughly the size of the file we just uploaded
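Everything done in the browser can also be done from the command line against the same S3 endpoint. For example, with the AWS CLI installed, something along these lines should work (the bucket name 'testbucket' and the credentials are whatever you created above):

# export AWS_ACCESS_KEY_ID=myaccesskey
# export AWS_SECRET_ACCESS_KEY=mysecretkey123
# aws --endpoint-url http://192.168.56.101:9000 s3 ls
# aws --endpoint-url http://192.168.56.101:9000 s3 cp ./somefile.bin s3://testbucket/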

So if you want to shut down the object server, just go to the terminal running Minio and press Ctrl+C.

Depending on the use case, we can enable different Gluster data management services on the backing volume, such as replication (for highly available data), sharding (which lets us store files bigger than a single brick), erasure-coded (EC) volumes, and so on.
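For example, sharding can be enabled with a couple of volume options on the backing volume (the shard block size below is just an illustrative value, and sharding only applies to files created after it is turned on):

# gluster volume set testvol features.shard on
# gluster volume set testvol features.shard-block-size 64MB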

Thanks to pkalever for the contributions towards the Gluster-block demos; they helped a lot in drafting this blog.

Thanks for reading. See you soon
