👋 K8S: Dynamically Creating PVs Based on NFS
⚽️ Requirement
In a production environment, creating PVs by hand is tedious and error-prone. Kubernetes provides an official way to solve this: dynamically creating PVs through a StorageClass, which simplifies operations and improves development efficiency.
⚽️ Environment
- Components

| Name | Version |
| --- | --- |
| CentOS | v7.9.2009 |
| Kubernetes | v1.24.0 |
| NFS | v1.3.0 |

- Nodes

| Name | Node address |
| --- | --- |
| main | 192.168.81.128 |
| node1 | 192.168.81.129 |
| node2 | 192.168.81.130 |
⚽️ Notes
- Fix for the SelfLink error when creating a PVC
  This cluster runs Kubernetes v1.24.0, where the `RemoveSelfLink` feature gate defaults to true and can no longer be set to false; passing `--feature-gates=RemoveSelfLink=false` keeps the apiserver from starting. The fix is to update the provisioner container image to a recent release, e.g. registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0 (a quick way to check the image in a running cluster is sketched below).
- Issue link: selfLink issues
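Once the provisioner Deployment described later in this article is running, the following check (assuming the `loongstudio` namespace and the `nfs-client-provisioner` Deployment name used in the manifests below) prints the container image actually in use:

```bash
# Print the provisioner image currently deployed (names follow the manifests below)
kubectl -n loongstudio get deployment nfs-client-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```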
⚽️ Workflow
- First, install and configure the NFS service on the main node
- Then create the following Kubernetes storage objects:
- StorageClass
- ServiceAccount
- ClusterRole
- ClusterRoleBinding
- Role
- RoleBinding
- Deployment
- PersistentVolumeClaim
- Test (a sample test Pod is sketched at the end of this article)
⚽️ Setting Up the NFS Service
- Install and configure the NFS service on the main node
```bash
# Install nfs-utils and rpcbind
yum install -y nfs-utils rpcbind
# Enable and start the service
systemctl enable nfs
systemctl start nfs
# Create the shared directory
mkdir -pv /data/nfs
# Configure the export
echo "/data/nfs 192.168.81.128/24(rw,sync,no_root_squash,no_all_squash)" > /etc/exports
# Restart the NFS service
systemctl restart nfs
# Verify
showmount -e 127.0.0.1
```
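As an optional server-side sanity check, you can confirm that the services are up and the export is active before moving on:

```bash
# Optional server-side checks
systemctl status nfs rpcbind --no-pager   # both services should be active
exportfs -v                               # lists the active exports and their options
```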
- Install the NFS client on the worker nodes
```bash
# Install nfs-utils
yum install -y nfs-utils
# Verify that the export on the main node is visible
showmount -e 192.168.81.128
```
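Before installing the provisioner it is worth verifying that a worker node can actually mount the share. A minimal sketch, assuming the mount point /mnt/nfs-test is free to use:

```bash
# Temporarily mount the export on a worker node and write a test file
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.81.128:/data/nfs /mnt/nfs-test
touch /mnt/nfs-test/mount-ok && ls -l /mnt/nfs-test
umount /mnt/nfs-test
```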
⚽️ Creating the K8S Storage Objects
- Create the StorageClass
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
    storageclass.kubernetes.io/is-default-class: 'true'
  labels:
    environment: test
provisioner: fuseim.pri/ifs   # Name of the external provisioner; must match PROVISIONER_NAME in the provisioner Deployment
reclaimPolicy: Retain         # Reclaim policy; defaults to Delete, set here to Retain
volumeBindingMode: Immediate  # Immediate (the default) binds as soon as the PVC is created; WaitForFirstConsumer delays binding until a Pod uses the PVC
```
```bash
# Create the StorageClass
kubectl apply -f storageClass.yaml
# Remove it when cleaning up
kubectl delete -f storageClass.yaml
```
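After applying the manifest, the new class should be listed as the default and show the fuseim.pri/ifs provisioner:

```bash
kubectl get storageclass
# Inspect the full object, including reclaimPolicy and volumeBindingMode
kubectl describe storageclass nfs-storage
```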
- Create the RBAC permissions
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: loongstudio
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: loongstudio
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: loongstudio
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: loongstudio
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: loongstudio
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Create the RBAC objects
kubectl apply -f rbac.yaml
# Remove them when cleaning up
kubectl delete -f rbac.yaml
```
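A quick way to confirm the RBAC objects were created (the grep is just a convenience filter):

```bash
# Namespaced objects
kubectl -n loongstudio get serviceaccount,role,rolebinding | grep nfs-client-provisioner
# Cluster-scoped objects
kubectl get clusterrole,clusterrolebinding | grep nfs-client-provisioner
```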
- Create the Provisioner
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: loongstudio
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs    # Must match the provisioner name declared in the StorageClass
            - name: NFS_SERVER
              value: 192.168.81.128    # IP address of the NFS server
            - name: NFS_PATH
              value: /data/nfs         # Shared export directory on the NFS server
      volumes:
        - name: nfs-client-root        # Volume name; must match the name used in volumeMounts above
          nfs:
            server: 192.168.81.128     # IP address of the NFS server
            path: /data/nfs            # Shared export directory on the NFS server
```
```bash
# Deploy the provisioner
kubectl apply -f provisioner.yaml
# Remove it when cleaning up
kubectl delete -f provisioner.yaml
```
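The provisioner Pod must be Running before any PVC can be bound; checking the Pod and tailing its logs is the fastest way to spot NFS mount or permission problems:

```bash
kubectl -n loongstudio get pods -l app=nfs-client-provisioner
# Follow the logs while creating a PVC to watch the provisioning events
kubectl -n loongstudio logs deployment/nfs-client-provisioner -f
```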
- Create the PVC
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
  namespace: loongstudio
  labels:
    environment: test
    app: nginx
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Mi
```
```bash
# Create the PVC
kubectl apply -f pvc.yaml
# Remove it when cleaning up
kubectl delete -f pvc.yaml
# Check its status
kubectl get pvc -n loongstudio
```
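To complete the Test step from the workflow, a minimal Pod can mount nginx-pvc and write a file. This is only a sketch: the Pod name, the busybox image, and the file name are arbitrary choices, not part of the original setup. If dynamic provisioning works, the PVC becomes Bound and the file shows up in a per-PVC subdirectory under /data/nfs on the NFS server.

```yaml
# test-pod.yaml -- minimal test Pod writing to the dynamically provisioned volume
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
  namespace: loongstudio
spec:
  containers:
    - name: app
      image: busybox:1.35
      command: ["sh", "-c", "echo hello-from-nfs > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc
```

```bash
kubectl apply -f test-pod.yaml
kubectl -n loongstudio get pvc nginx-pvc   # STATUS should be Bound
# On the NFS server, the provisioner creates a subdirectory per PVC:
ls /data/nfs/
```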