Introduction to nfs-client-provisioner
nfs-client-provisioner dynamically provisions PVs for Kubernetes. It is a simple external NFS provisioner: it does not provide NFS itself, but relies on an existing NFS server for storage. Provisioned volume directories are named ${namespace}-${pvcName}-${pvName}.
External NFS drivers for Kubernetes fall into two categories, depending on whether they act as an NFS server or as an NFS client:
nfs-client
This driver uses the NFS driver built into Kubernetes to mount a remote NFS server onto a local directory, then registers itself as a storage provider associated with a StorageClass. When a user creates a PVC to request a PV, the provisioner compares the PVC's requirements with its own properties; once they match, it creates the PV's subdirectory inside the locally mounted NFS directory, providing dynamic storage for Pods.
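The subdirectory step and the ${namespace}-${pvcName}-${pvName} naming rule can be sketched in shell (names here are illustrative; the real provisioner does this internally inside its pod, under its mounted NFS root):

```shell
# Simulate the provisioner's client-side behaviour. $NFS_ROOT stands in
# for the locally mounted NFS export (/persistentvolumes inside the pod).
NFS_ROOT=$(mktemp -d)

# Hypothetical claim: namespace, PVC name and bound PV name.
namespace=default
pvcName=nfs-storage
pvName=pvc-68becff5

# The PV's directory follows the documented naming rule.
pvdir="${NFS_ROOT}/${namespace}-${pvcName}-${pvName}"
mkdir -p "$pvdir"
ls "$NFS_ROOT"    # prints: default-nfs-storage-pvc-68becff5
```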
nfs-server
Unlike nfs-client, this driver does not use the Kubernetes NFS driver to mount a remote NFS share locally and then carve it up. Instead, it maps local files directly into the container and runs ganesha.nfsd inside the container to serve NFS. Each time a PV is created, it creates the corresponding folder under its local NFS root and exports that subdirectory.
This article shows how to use nfs-client-provisioner with an NFS server as the persistent-storage backend for Kubernetes, provisioning PVs dynamically. Prerequisites: an NFS server is already installed and is reachable over the network from the Kubernetes worker nodes. The nfs-client driver is deployed into the cluster as a Deployment, which then provides the storage service.
Prepare the NFS server
Current environment
[root@k8s-master1 ~]# kubectl get node -o wide
NAME          STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master1   Ready    control-plane   33h   v1.24.3   192.168.0.220   <none>        CentOS Linux 7 (Core)   6.0.0-1.el7.elrepo.x86_64    containerd://1.6.8
k8s-node1     Ready    <none>          19h   v1.24.3   192.168.0.221   <none>        CentOS Linux 7 (Core)   5.19.6-1.el7.elrepo.x86_64   containerd://1.6.8
k8s-node2     Ready    <none>          19h   v1.24.3   192.168.0.222   <none>        CentOS Linux 7 (Core)   5.19.6-1.el7.elrepo.x86_64   containerd://1.6.8
[root@k8s-master1 ~]#
Install the NFS server packages
# Check whether the packages are already installed
rpm -qa | egrep "nfs|rpc"
# Install the NFS server and rpcbind
yum -y install nfs-utils rpcbind
Enable the services at boot
# Start rpcbind and nfs-server, and enable them at boot
systemctl start rpcbind.service
systemctl enable rpcbind.service
systemctl start nfs
systemctl enable nfs-server --now
# Check that nfs-server started correctly
systemctl status nfs-server
Note: the NFS client (nfs-utils) must also be installed on every node.
Configure the shared directory
# vi /etc/exports
/nfs_dir/nfs_provisioner 192.168.0.0/24(rw,no_root_squash)

# Re-export; the configuration takes effect without restarting the NFS service
exportfs -arv
Common options for the NFS exports configuration file (see exports(5) for the full list):
rw / ro — allow read-write / read-only access
sync / async — reply only after data is committed to disk / reply before writes are committed
root_squash / no_root_squash — map remote root to the anonymous user / let remote root keep root privileges
all_squash — map all remote users to the anonymous user
Deploy nfs-provisioner
Create the ServiceAccount and RBAC resources
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Run the following to create them:
[root@k8s-master1 nfs]# kubectl apply -f rbac.yaml
[root@k8s-master1 nfs]# kubectl get sa | grep nfs
nfs-client-provisioner   0     22m
Create the Deployment
Note: do not use the old workaround for the selfLink issue (setting the apiserver feature gate RemoveSelfLink=false). In Kubernetes 1.24 the gate defaults to true and can no longer be set to false; the apiserver will fail to start.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner  # must match the provisioner field of the StorageClass
            - name: NFS_SERVER
              value: 192.168.0.220
            - name: NFS_PATH
              value: /nfs_dir/nfs_provisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.220
            path: /nfs_dir/nfs_provisioner
Run the following to create the Deployment:
kubectl apply -f deployment.yaml
[root@k8s-master1 nfs]# kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           23m
[root@k8s-master1 nfs]# kubectl get pod | grep nfs
nfs-client-provisioner-5bb8b9dbb-pxx9n   1/1   Running   0   24m
Create the StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: nfs-storage
provisioner: nfs-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
Note: the provisioner value must be identical to the PROVISIONER_NAME value in the Deployment.
Run the following to create it:
kubectl apply -f sc.yaml
[root@k8s-master1 nfs]# kubectl get sc
NAME                    PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   nfs-provisioner   Delete          Immediate           false                  27m
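The class above is the cluster default and deletes provisioned volumes when their claims are removed. If you would rather keep the data, reclaimPolicy can be changed; a sketch of a second class under the same provisioner (the class name here is hypothetical):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-storage-retain        # hypothetical second class, not the default
provisioner: nfs-provisioner      # must still match PROVISIONER_NAME in the Deployment
volumeBindingMode: Immediate
reclaimPolicy: Retain             # keep the PV and its NFS subdirectory after PVC deletion
```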
Create a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Tip: the value of the volume.beta.kubernetes.io/storage-class annotation must match a StorageClass name as reported by kubectl get sc.
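The volume.beta.kubernetes.io/storage-class annotation is the legacy form; on current clusters the same claim is usually written with the spec.storageClassName field instead. An equivalent sketch:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-storage
spec:
  storageClassName: nfs-storage   # same value as shown by kubectl get sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```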
Run the following to create it:
[root@k8s-master1 nfs]# kubectl apply -f nfs-pvc.yaml
[root@k8s-master1 nfs]# kubectl get pvc,pv
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-storage   Bound    pvc-68becff5-c146-4e33-b104-dcd02f2b19c0   1Mi        RWX            nfs-storage    27m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-68becff5-c146-4e33-b104-dcd02f2b19c0   1Mi        RWX            Delete           Bound    default/nfs-storage   nfs-storage             27m
Create an nginx application and mount the PV
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
            - name: test-pod-nginx-storage
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: test-pod-nginx-storage
          persistentVolumeClaim:
            claimName: nfs-storage
Note: claimName must match the name of the PVC — nfs-storage in the example above.
nfs-provisioner
Below is the complete nfs-provisioner YAML manifest:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: nfs-provisioner
    app.kubernetes.io/name: nfs-provisioner
  name: nfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-data-provisioner  # must match the provisioner field of the StorageClass
            - name: NFS_SERVER
              value: 192.168.100.200
            - name: NFS_PATH
              value: /opt/nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.200
            path: /opt/nfsdata
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: nfs-storage    # StorageClass is cluster-scoped, so no namespace field
provisioner: nfs-data-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete