Glusterfs
Install glusterfs
Add gluster repo
Create the repo file "/etc/yum.repos.d/glusterfs.repo":
[gluster312]
name=Gluster 3.12
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
Install packages
yum install glusterfs-server
yum install glusterfs-client
yum install glusterfs-fuse
Start services
service glusterd start
systemctl enable glusterd
Connect peers
Peers need to trust each other. To do that, run the following command on all nodes, once for every hostname in the cluster.
gluster peer probe <host>
gluster pool list / gluster peer status
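For example, on a three-node cluster (the hostnames node2 and node3 are placeholders), probing from the first node could look like this:

```shell
# Run on node1; node2 and node3 are hypothetical hostnames.
gluster peer probe node2
gluster peer probe node3

# Verify that all peers are connected.
gluster pool list
gluster peer status
```

A probe in one direction is enough for two nodes to trust each other; running the probes on every node is simply a safe way to cover the whole cluster.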
Add storage
First add the disk to every system and mount it under a fixed location. This location is going to be used as a "brick" for the GlusterFS volume. In case you use vSphere to provision the volumes, make sure to mount the disk based on its id and not on its name. The name can easily change, but the id stays the same (make sure to set "disk.enableUUID=1" on the vSphere machine).
For example:
mkdir -p /mnt/gluster-storage
mkfs.xfs -i size=512 /dev/disk/by-id/wwn-<id>
echo "/dev/disk/by-id/wwn-<id> /mnt/gluster-storage xfs defaults 0 0" >> /etc/fstab
mount /mnt/gluster-storage
Add bricks
gluster volume create gluster replica 3 <host1>:/mnt/gluster-storage/brick1 <host2>:/mnt/gluster-storage/brick1 <host3>:/mnt/gluster-storage/brick1
gluster volume start gluster
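Once the volume is started, it can be mounted on any machine that has the glusterfs-fuse package installed. A sketch, where the mount point /mnt/gluster is an arbitrary choice:

```shell
mkdir -p /mnt/gluster
mount -t glusterfs <host1>:/gluster /mnt/gluster

# Optionally make the mount permanent; _netdev delays it until the network is up.
echo "<host1>:/gluster /mnt/gluster glusterfs defaults,_netdev 0 0" >> /etc/fstab
```

Any of the peers can serve as the mount host; the client learns about the other bricks from it.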
Integrate with Kubernetes
kubectl apply -f gluster-service.yml
kubectl apply -f gluster-endpoint.yml
Debugging
gluster pool list / gluster peer status
gluster vol status
gluster volume list
gluster volume stop <volume>
gluster volume delete <volume>
Files
gluster-service.yml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
gluster-endpoint.yml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip:
  ports:
  - port: 1
- addresses:
  - ip:
  ports:
  - port: 1
- addresses:
  - ip:
  ports:
  - port: 1
persistentvolume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: gluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
persistentvolumeclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
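As a sketch of how the claim above could be consumed, a pod can mount it like any other PVC (the pod and container names here are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: gluster-vol
      mountPath: /data
  volumes:
  - name: gluster-vol
    persistentVolumeClaim:
      claimName: gluster-pvc
```

Because the volume is ReadWriteMany, several pods on different nodes can mount the same claim at once.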
Glusterfs tuning
Tuning is done on a per-volume basis. This allows for different usage patterns for each volume.
General tuning
Startup time is not very long, but if you don't use NFS you could just as well switch the support off.…
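For example, NFS support can be switched off per volume. This sketch assumes the volume created above, named "gluster":

```shell
# Disable the built-in NFS server for this volume.
gluster volume set gluster nfs.disable on

# Inspect the current value of every option on the volume.
gluster volume get gluster all
```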