Glusterfs
Install glusterfs
Add gluster repo
Create the repo file "/etc/yum.repos.d/glusterfs.repo":
[gluster312]
name=Gluster 3.12
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
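Optionally, confirm that the repo is active before installing:
yum repolist enabled | grep -i gluster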
Install packages
yum install glusterfs-server
yum install glusterfs-client
yum install glusterfs-fuse
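A simple sanity check that the packages were installed from the new repo:
glusterfs --version
rpm -q glusterfs-server glusterfs-fuse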
Start services
systemctl start glusterd
systemctl enable glusterd
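To confirm the daemon is running and enabled on boot (assuming systemd, as on CentOS 7):
systemctl status glusterd
systemctl is-enabled glusterd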
Connect peers
Peers need to trust each other. To do that, run the following command from one node for each of the other hostnames in the cluster (optionally probe the first node back from a second node so it is registered by hostname everywhere; see the sketch below).
gluster peer probe <host>
Verify with:
gluster pool list
gluster peer status
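A minimal sketch, assuming three nodes with the hypothetical hostnames host1, host2 and host3:
# run on host1
gluster peer probe host2
gluster peer probe host3
# optionally run on host2, so host1 is registered by hostname as well
gluster peer probe host1
# verify from any node
gluster pool list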
Add storage
First add the disk to every system and mount it under a fixed location. This location is going to be used as a "brick" for the GlusterFS volume. In case you use vSphere to provision the volumes, make sure to mount the disk based on its id and not on its name: the name can easily change, but the id stays the same (make sure to set "disk.enableUUID=1" on the vSphere machine).
For example:
mkdir -p /mnt/gluster-storage
echo "/dev/disk/by-id/wwn-<id> /mnt/gluster-storage xfs defaults 0 0" > /etc/fstab
mkfs.xfs -i size=512 /dev/disk/by-id/wwn-<id>
mount /mnt/gluster-storage
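A quick check that the brick filesystem actually came up under the fixed location:
mount | grep gluster-storage
df -h /mnt/gluster-storage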
Add bricks
gluster volume create gluster replica 3 <host1>:/mnt/gluster-storage/brick1 <host2>:/mnt/gluster-storage/brick1 <host3>:/mnt/gluster-storage/brick1
gluster volume start gluster
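Once the volume is started it can be mounted over FUSE from any machine that has glusterfs-fuse installed; a quick test, with <host1> and /mnt/test standing in for your own values:
mkdir -p /mnt/test
mount -t glusterfs <host1>:/gluster /mnt/test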
Integrate with Kubernetes
kubectl apply -f gluster-service.yml
kubectl apply -f gluster-endpoint.yml
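The persistent volume and claim from the "Files" section below are applied the same way, and the service endpoints can be checked afterwards:
kubectl apply -f persistentvolume.yml
kubectl apply -f persistentvolumeclaim.yml
kubectl get svc,endpoints glusterfs-cluster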
Links
Debugging
gluster pool list / gluster peer status
gluster vol status
gluster volume list
gluster volume stop <volume>
gluster volume delete <volume>
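For replica volumes the self-heal status is often worth checking as well:
gluster volume heal <volume> info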
Files
gluster-service.yml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
gluster-endpoint.yml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: <node1-ip>
    ports:
      - port: 1
  - addresses:
      - ip: <node2-ip>
    ports:
      - port: 1
  - addresses:
      - ip: <node3-ip>
    ports:
      - port: 1
persistentvolume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: gluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
persistentvolumeclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
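A minimal pod that mounts the claim could look like this (a sketch; the pod name, image and mount path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: gluster-vol
          mountPath: /data    # hypothetical mount path inside the container
  volumes:
    - name: gluster-vol
      persistentVolumeClaim:
        claimName: gluster-pvc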
Glusterfs tuning
Tuning is done on a per-volume basis. This allows for different usage patterns for each volume.
General tuning
Startup time is not very long, but if you don't use NFS you could just as well switch the support off.…
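For example, switching the built-in NFS server off on the volume created above (named gluster):
gluster volume set gluster nfs.disable on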