Install GlusterFS Server And Client On CentOS 7

GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual-licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

GlusterFS has a client and server component. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported. The final volume may then be mounted by the client host using its own native protocol via the FUSE mechanism, using NFS v3 protocol using a built-in server translator, or accessed via gfapi client library. Native-protocol mounts may then be re-exported e.g. via the kernel NFSv4 server, SAMBA, or the object-based OpenStack Storage (Swift) protocol using the “UFO” (Unified File and Object) translator.

Please read these Gluster terms:

brick

A brick is the storage filesystem that has been assigned to a volume, e.g. /data on a server.

client

The machine which mounts the volume (this may also be a server).

server

The machine (physical, virtual, or bare metal) which hosts the actual filesystem in which data will be stored.

volume

A volume is a logical collection of bricks, where each brick is an export directory on a server. A volume can be of several types, and any of them can be created in a storage pool.

Distributed – Distributed volumes distribute files across the bricks in the volume. Use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across the bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical.

Striped – Striped volumes stripe data across the bricks in the volume. For best results, use striped volumes only in high-concurrency environments accessing very large files.

I am using two CentOS 7 nodes with hostnames gluster1 and gluster2, plus one client.

Servers:

 [root@gluster1 ~]# cat /etc/os-release
 NAME="CentOS Linux"
 VERSION="7 (Core)"
 ID="centos"
 ID_LIKE="rhel fedora"
 VERSION_ID="7"
 PRETTY_NAME="CentOS Linux 7 (Core)"
 ANSI_COLOR="0;31"
 CPE_NAME="cpe:/o:centos:centos:7"
 HOME_URL="https://www.centos.org/"
 BUG_REPORT_URL="https://bugs.centos.org/"
 [root@gluster2 ~]# cat /etc/os-release
 NAME="CentOS Linux"
 VERSION="7 (Core)"
 ID="centos"
 ID_LIKE="rhel fedora"
 VERSION_ID="7"
 PRETTY_NAME="CentOS Linux 7 (Core)"
 ANSI_COLOR="0;31"
 CPE_NAME="cpe:/o:centos:centos:7"
 HOME_URL="https://www.centos.org/"
 BUG_REPORT_URL="https://bugs.centos.org/"

Gluster Client 

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

 

Add the following to /etc/hosts on both Gluster servers, and later on the Gluster client as well:

172.16.217.128 gluster1
172.16.217.129 gluster2

To avoid dependency problems, add the EPEL repo to your system:

rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Installing on CentOS:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.5/CentOS/glusterfs-epel.repo
yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd.service

Please note that firewalld is disabled in this installation; otherwise you have to customize your firewall settings.
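If you would rather keep firewalld running, GlusterFS needs its management ports (24007-24008/tcp) plus one port per brick starting at 49152. Here is a small sketch that prints the needed firewall-cmd commands; brick_port_range is my own helper name, and the brick count of 4 is only illustrative:

```shell
# Print the firewall-cmd commands needed for a GlusterFS node.
# glusterd management uses 24007-24008/tcp; each brick hosted on the
# node gets its own port, starting at 49152.
brick_port_range() {
  # $1 = number of bricks hosted on this node
  echo "49152-$((49152 + $1 - 1))"
}

echo "firewall-cmd --permanent --add-port=24007-24008/tcp"
echo "firewall-cmd --permanent --add-port=$(brick_port_range 4)/tcp"
echo "firewall-cmd --reload"
```

Run the printed commands (as root) on each server instead of disabling firewalld.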

 

I added gluster2 to gluster1's hosts file and tested the config:

 [root@gluster1 ~]# gluster peer probe gluster2
 peer probe: success.
 [root@gluster2 ~]# gluster peer probe gluster1
 peer probe: success. Host gluster1 port 24007 already in peer list

Check the status:

From gluster1

[root@gluster1 yum.repos.d]# gluster peer status
Number of Peers: 1
Hostname: gluster2
Uuid: 2dd45746-eba1-4002-ba7b-325e9e282077
State: Peer in Cluster (Connected)

From gluster2

[root@gluster2 ~]# gluster peer status
Number of Peers: 1
Hostname: gluster1
Uuid: 9e4b62db-7d2a-4b94-8cf5-71da19078e1c
State: Peer in Cluster (Connected)
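The peer status output lends itself to simple health checks. Below is a hedged sketch that counts connected peers by grepping the state line; it uses the sample output from above in place of a live `gluster peer status` call:

```shell
# Count peers in "Peer in Cluster (Connected)" state. On a live node
# you would pipe `gluster peer status` instead of the sample below.
peer_output='Number of Peers: 1
Hostname: gluster1
Uuid: 9e4b62db-7d2a-4b94-8cf5-71da19078e1c
State: Peer in Cluster (Connected)'

connected=$(printf '%s\n' "$peer_output" | grep -c 'Peer in Cluster (Connected)')
echo "connected peers: $connected"
```

A cron job comparing this count against the expected peer count is a cheap way to catch a dropped node.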

 

At this time I can test the storage pool:

 [root@gluster1 glusterfs]# gluster pool list
 UUID                                    Hostname        State
 4cf47688-74ba-4c5b-bf3f-3270bb9a4871    gluster2        Connected
 a3ce0329-35d8-4774-a061-148a735657c4    localhost       Connected
[root@gluster1 ~]# gluster volume status
No volumes present

Create and use Distributed volumes

 

gluster volume create dist-volume gluster1:/dist1 gluster2:/dist2  force

###### If volume creation fails for some reason, run # setfattr -x trusted.glusterfs.volume-id /data/gluster/brick and restart glusterd.

Start the volume:

[root@gluster1 ~]# gluster volume start dist-volume
volume start: dist-volume: success

Check the volume info and status:

gluster volume info
Volume Name: dist-volume
Type: Distribute
Volume ID: eb896d27-0d43-499b-8ac9-62199f184e0a
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/dist1
Brick2: gluster2:/dist2
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster1 ~]#
 
[root@gluster1 ~]# gluster volume status
Status of volume: dist-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/dist1                       49152     0          Y       12959
Brick gluster2:/dist2                       49152     0          Y       41438
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on gluster2                      N/A       N/A        N       N/A  
 
Task Status of Volume dist-volume
------------------------------------------------------------------------------
There are no active volume tasks

Install Gluster Client

Add the Gluster EPEL repo:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
Install the client:

yum -y install glusterfs glusterfs-fuse

Let's make a directory on the client and try to mount the volume from the Gluster servers:

[root@client ~]# mkdir /mnt/gluster1-2
[root@client ~]# mount.glusterfs gluster1:/dist-volume /mnt/gluster1-2/

[root@client ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda3               18G  3.9G   14G  22% /
devtmpfs               728M     0  728M   0% /dev
tmpfs                  736M     0  736M   0% /dev/shm
tmpfs                  736M  8.9M  727M   2% /run
tmpfs                  736M     0  736M   0% /sys/fs/cgroup
/dev/sda1              297M  151M  146M  51% /boot
gluster1:/dist-volume   36G  7.7G   28G  22% /mnt/gluster1-2

You can see more detail with the mount command:

 

/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
gluster1:/dist-volume on /mnt/gluster1-2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
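That mount output can also be parsed in a script to confirm a path really is a GlusterFS FUSE mount. Here is a sketch; is_glusterfs_mount is my own name, and on a live client you would feed it the output of mount:

```shell
# Return success if the given mount point appears as fuse.glusterfs in
# mount(8)-style output ("SRC on MNT type FSTYPE (opts)").
is_glusterfs_mount() {
  # $1 = mount output text, $2 = mount point
  printf '%s\n' "$1" | \
    awk -v mp="$2" '$2 == "on" && $3 == mp && $5 == "fuse.glusterfs" { found = 1 } END { exit !found }'
}

sample='gluster1:/dist-volume on /mnt/gluster1-2 type fuse.glusterfs (rw,relatime,user_id=0)'
if is_glusterfs_mount "$sample" /mnt/gluster1-2; then
  echo "/mnt/gluster1-2 is a GlusterFS mount"
fi
```

This kind of check is handy in boot scripts that must not write to the local directory when the volume failed to mount.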

To make this mount permanent across reboots, add the following to /etc/fstab on the client:

vi  /etc/fstab

gluster1:/dist-volume   /mnt/gluster1-2  glusterfs defaults,_netdev 0 0
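The fstab entry can also be generated rather than typed, which helps when you mount several volumes. A small sketch; gluster_fstab_line is my own helper name:

```shell
# Emit a glusterfs /etc/fstab line. _netdev delays the mount until the
# network is up, which a GlusterFS mount requires.
gluster_fstab_line() {
  # $1 = server:/volume, $2 = mount point
  printf '%s %s glusterfs defaults,_netdev 0 0\n' "$1" "$2"
}

gluster_fstab_line gluster1:/dist-volume /mnt/gluster1-2
```

Append its output to /etc/fstab with `>> /etc/fstab` once you have verified the line.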

Check how to access the Distributed volume

From the Client:

cd /mnt/gluster1-2; touch  X1 X2 X3 X4

The files will be distributed across both Gluster servers:

 [root@gluster2 ~]# ls /dist2/
X1 X3
[root@gluster1 ~]# ls /dist1/
X2  X4
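Why did X1/X3 land on one brick and X2/X4 on the other? A distributed volume places each file by hashing its name (Gluster's elastic hashing, done by the DHT translator) and mapping the hash onto a brick. The toy sketch below mimics the idea with cksum; it is NOT the real Gluster hash, and the brick names just reuse the ones from this setup:

```shell
# Toy illustration of distributed placement: hash the file NAME and
# map the hash onto one of two bricks. cksum stands in for Gluster's
# real hash function here.
pick_brick() {
  hash=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  if [ $((hash % 2)) -eq 0 ]; then
    echo "gluster1:/dist1"
  else
    echo "gluster2:/dist2"
  fi
}

for f in X1 X2 X3 X4; do
  echo "$f -> $(pick_brick "$f")"
done
```

The practical consequence: placement is per file, so a single huge file still lives entirely on one brick; only striping splits file contents.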

Create and use Replicated volumes

Use replicated volumes in high-availability environments.

 

From gluster1, run:

gluster volume create rep-volume replica  gluster1:/replica1  gluster2:/replica2 force

Start the volume and check its info:

gluster volume start rep-volume

 

 [root@gluster1 ~]# gluster volume info rep-volume
 
Volume Name: rep-volume
Type: Replicate
Volume ID: 49fd382a-378e-4fe8-8c1b-3acc0319d399
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/replica1
Brick2: gluster2:/replica2
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster1 ~]#

 

Mount the replica volume from the Gluster client:

mkdir  /mnt/replica
[root@client gluster1-2]# mount.glusterfs gluster1:/rep-volume /mnt/replica/
[root@client gluster1-2]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda3               18G  3.9G   14G  22% /
devtmpfs               728M     0  728M   0% /dev
tmpfs                  736M     0  736M   0% /dev/shm
tmpfs                  736M  8.9M  727M   2% /run
tmpfs                  736M     0  736M   0% /sys/fs/cgroup
/dev/sda1              297M  151M  146M  51% /boot
gluster1:/dist-volume   36G  7.7G   28G  22% /mnt/gluster1-2
gluster1:/rep-volume    18G  3.9G   14G  22% /mnt/replica

To make this mount permanent, add this to /etc/fstab:

gluster1:/rep-volume /mnt/replica glusterfs defaults,_netdev 0 0

Test the volume.

From the Client:

[root@client replica]# cd /mnt/replica/; touch R1 R2 R3 R4 R5 R6
[root@client replica]# ls -altr
total 1
drwxr-xr-x. 4 root root 37 Aug 11 14:02 ..
drwxr-xr-x. 3 root root 24 Aug 11 14:04 .trashcan
-rw-r--r--. 1 root root  0 Aug 11 14:10 R1
-rw-r--r--. 1 root root  0 Aug 11 14:10 R2
-rw-r--r--. 1 root root  0 Aug 11 14:10 R3
-rw-r--r--. 1 root root  0 Aug 11 14:10 R4
-rw-r--r--. 1 root root  0 Aug 11 14:10 R5
drwxr-xr-x. 4 root root 93 Aug 11 14:10 .
-rw-r--r--. 1 root root  0 Aug 11 14:10 R6
[root@client replica]#

Let's check the files on the servers.

Gluster1

[root@gluster1 ~]# ls -altr /replica1/
total 12
dr-xr-xr-x. 19 root root 4096 Aug 11 13:57 ..
drwxr-xr-x.  3 root root   24 Aug 11 14:04 .trashcan
-rw-r--r--.  2 root root    0 Aug 11 14:10 R1
-rw-r--r--.  2 root root    0 Aug 11 14:10 R2
-rw-r--r--.  2 root root    0 Aug 11 14:10 R3
-rw-r--r--.  2 root root    0 Aug 11 14:10 R4
-rw-r--r--.  2 root root    0 Aug 11 14:10 R5
drw-------. 12 root root 4096 Aug 11 14:10 .glusterfs
drwxr-xr-x.  4 root root   93 Aug 11 14:10 .
-rw-r--r--.  2 root root    0 Aug 11 14:10 R6
[root@gluster1 ~]#

Gluster2

[root@gluster2 ~]# ls -altr /replica2/
total 12
dr-xr-xr-x. 19 root root 4096 Aug 11 13:57 ..
drwxr-xr-x.  3 root root   24 Aug 11 14:04 .trashcan
-rw-r--r--.  2 root root    0 Aug 11 14:10 R1
-rw-r--r--.  2 root root    0 Aug 11 14:10 R2
-rw-r--r--.  2 root root    0 Aug 11 14:10 R3
-rw-r--r--.  2 root root    0 Aug 11 14:10 R4
-rw-r--r--.  2 root root    0 Aug 11 14:10 R5
drwxr-xr-x.  4 root root   93 Aug 11 14:10 .
drw-------. 12 root root 4096 Aug 11 14:10 .glusterfs
-rw-r--r--.  2 root root    0 Aug 11 14:10 R6
[root@gluster2 ~]#
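Since every file exists on both bricks of a replica-2 volume, a quick sanity check is to compare the brick listings. This sketch uses the file sets shown above in place of live ls output from the two servers:

```shell
# With replica 2, both bricks must hold the same file set (ignoring the
# .glusterfs and .trashcan internals). Sorted listings should match.
brick1=$(printf '%s\n' R1 R2 R3 R4 R5 R6 | sort)
brick2=$(printf '%s\n' R6 R5 R4 R3 R2 R1 | sort)

if [ "$brick1" = "$brick2" ]; then
  echo "bricks in sync"
else
  echo "bricks differ"
fi
```

On a real cluster a difference usually means a node was down during writes; `gluster volume heal rep-volume info` is the proper tool to inspect and repair that.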

Create and use Striped volumes

Striped volumes are highly recommended for big files such as ISO or IMG images.

You can proceed like this:

gluster volume create strip-volume stripe 2 gluster1:/strip1 gluster2:/strip2 force
Start the volume:

gluster volume start strip-volume

From the client, mount and use the volume:

[root@client replica]# mount.glusterfs gluster1:/strip-volume /mnt/stripped/
[root@client replica]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3                18G  3.9G   14G  22% /
devtmpfs                728M     0  728M   0% /dev
tmpfs                   736M     0  736M   0% /dev/shm
tmpfs                   736M  8.9M  728M   2% /run
tmpfs                   736M     0  736M   0% /sys/fs/cgroup
/dev/sda1               297M  151M  146M  51% /boot
gluster1:/dist-volume    36G  7.7G   28G  22% /mnt/gluster1-2
gluster1:/rep-volume     18G  3.9G   14G  22% /mnt/replica
gluster1:/strip-volume   36G  7.7G   28G  22% /mnt/stripped
[root@client replica]#
To make this permanent, add this to /etc/fstab:
gluster1:/strip-volume /mnt/stripped glusterfs defaults,_netdev 0 0

Now the files created in /mnt/stripped will be striped across the volume.
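To see what "striped across the volume" means in numbers: with the default 128 KB stripe block size (the cluster.stripe-block-size option) and two bricks, a file is cut into chunks laid out round-robin. The 1 MB file size below is only illustrative:

```shell
# Round-robin chunk layout for a striped volume.
stripe_kb=128    # default cluster.stripe-block-size is 128 KB
bricks=2
file_kb=1024     # a 1 MB file

chunks=$((file_kb / stripe_kb))
per_brick=$((chunks / bricks))
echo "$chunks chunks of ${stripe_kb}KB, $per_brick per brick"
```

Because chunks alternate between bricks, large sequential reads can pull from both servers at once, which is where the striped type earns its keep.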

Separate disks for the volumes

If you want to keep your system clean and keep the volume data on separate disks, follow this small example from my VMware test machines.

 

Partition the disk /dev/sdb (same size on both machines) with fdisk:

Type ‘n’ for a new partition, choose ‘p’ for primary, follow the wizard to complete, and ‘w’ to write the data to disk.

Create file system:

mkfs.ext4 /dev/sdb1

Create the sync directory on both machines:

Gluster1

mkdir -p /replica1
mount -t ext4 /dev/sdb1 /replica1

Gluster2

mkdir -p /replica2
mount -t ext4 /dev/sdb1 /replica2

You can add this to fstab to make it ready for the next reboot. (gluster1)

/dev/sdb1    /replica1    ext4    defaults    1 2

That’s it.

Cheers!

Resources used: gluster.org, sohailriaz.com