Preface:
A k8s cluster can deploy all kinds of services quickly, but MySQL is a stateful service and therefore needs persistent data storage; in plain terms, a volume.
In k8s, a volume can be backed by a local directory, or provisioned dynamically by network storage such as NFS or by a block storage service (Ceph, iSCSI, and so on). This article uses an NFS network storage service to provide the persistent volume.
Environment:
The prerequisite is a properly running k8s cluster. Mine is version 1.19 (client v1.19.4, server v1.19.3):
[root@master ~]# k version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
The cluster is already running normally:
[root@master ~]# k get po -A
NAMESPACE       NAME                                                 READY   STATUS    RESTARTS   AGE
database        mysql-7c545744db-xzvnl                               1/1     Running   2          23h
database        mysql2-5db57c8bc8-xrnz6                              1/1     Running   1          13h
ingress-nginx   c7n3-ingress-nginx-controller-kmhz9                  1/1     Running   8          5d10h
ingress-nginx   c7n3-ingress-nginx-controller-z84cg                  1/1     Running   10         4d13h
ingress-nginx   c7n3-ingress-nginx-defaultbackend-7d64b4f74f-t7q7p   1/1     Running   7          4d
kube-system     coredns-6c76c8bb89-qt6zj                             1/1     Running   12         5d15h
kube-system     coredns-6c76c8bb89-r5vhq                             1/1     Running   12         5d15h
kube-system     etcd-c7n.gzinfo                                      1/1     Running   13         6d21h
kube-system     kube-apiserver-c7n.gzinfo                            1/1     Running   13         6d21h
kube-system     kube-controller-manager-c7n.gzinfo                   1/1     Running   14         6d21h
kube-system     kube-flannel-ds-d8sk8                                1/1     Running   8          4d14h
kube-system     kube-flannel-ds-jmnqj                                1/1     Running   8          4d13h
kube-system     kube-flannel-ds-lv57p                                1/1     Running   15         6d20h
kube-system     kube-proxy-dkxqz                                     1/1     Running   11         6d21h
kube-system     kube-proxy-llxrd                                     1/1     Running   14         6d21h
kube-system     kube-proxy-m49g2                                     1/1     Running   8          4d14h
kube-system     kube-scheduler-c7n.gzinfo                            1/1     Running   14         6d21h
The cluster uses the 192.168.217.0/24 network segment.
NFS configuration:
[root@master ~]# cat /etc/exports
/data/k8s     10.244.0.0/16(rw,no_root_squash,no_subtree_check) 192.168.217.16(rw,no_root_squash,no_subtree_check) 192.168.217.0/24(rw,no_root_squash,no_subtree_check)
/data/nfs-sc  10.244.0.0/16(rw,no_root_squash,no_subtree_check) 192.168.217.16(rw,no_root_squash,no_subtree_check) 192.168.217.0/24(rw,no_root_squash,no_subtree_check)
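If you add or change entries in /etc/exports later, the export table has to be reloaded on the NFS server. A minimal sketch, assuming the NFS server service is already running and the nfs client utilities are installed on the nodes for the second check:
# on the NFS server: re-export everything in /etc/exports without restarting the service
exportfs -ra
# from any k8s node: confirm which directories the server actually exports
showmount -e 192.168.217.16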
Steps:
1. Look up the name of the StorageClass
[root@master ~]# k get sc -A
NAME              PROVISIONER                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
mynfs             mynfs                                 Delete          Immediate           true                   16h
nfs (default)     nfs                                   Delete          Immediate           true                   16h
nfs-provisioner   choerodon.io/nfs-client-provisioner   Delete          Immediate           false                  4d8h
nfs-sc            storage.pri/nfs                       Delete          Immediate           true                   17h
You can see that nfs is the name of the StorageClass I set as the default, and the detailed output below confirms it. (How the StorageClass itself was created is not covered here; this article only deals with binding a PVC to a StorageClass.)
[root@master ~]# k describe sc nfs
Name:            nfs
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs"},"provisioner":"nfs","reclaimPolicy":"Delete"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           nfs
Parameters:            <none>
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
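For reference, the default flag is just the storageclass.kubernetes.io/is-default-class annotation shown above; a minimal sketch of setting it on an existing StorageClass (substitute your own class name for nfs):
k patch storageclass nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'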
2. Create a PVC and bind it to the default StorageClass nfs
[root@master mysql]# cat pvc_mysql.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-test
  namespace: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1.5Gi
  storageClassName: nfs
One thing to note: the manifest references a namespace, so that namespace has to exist first. Create it with:
k create namespace database
3. Create the PV
[root@master mysql]# cat pv_mysql.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1.5Gi
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs_sc/nfs-pv-test
    server: 192.168.217.16
Because the path here is /data/nfs_sc/nfs-pv-test, the directory has to be created on the NFS server first:
mkdir /data/nfs_sc/nfs-pv-test && chmod a+x /data/nfs_sc/nfs-pv-test
4. Deploy MySQL
[root@master mysql]# cat deploy_mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql2
  namespace: database
spec:
  selector:
    matchLabels:
      app: mysql2
  template:
    metadata:
      labels:
        app: mysql2
    spec:
      containers:
      - name: mysql2
        image: mysql:5.7.23
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "mima"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: nfs-pvc-test
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: nfs-pvc-test
        persistentVolumeClaim:
          claimName: nfs-pvc-test
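Because of subPath: mysql, the database files are written into a mysql/ subdirectory of the PV path rather than its root. Once the pod is running, a quick check on the NFS server would look like the sketch below (the exact file list depends on the MySQL version):
# on the NFS server (192.168.217.16): the data directory created through the subPath
ls /data/nfs_sc/nfs-pv-test/mysql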
5. Expose the deployed MySQL as a Service so it can be reached and maintained
[root@master mysql]# cat svc_mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql2
  namespace: database
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 32222
  selector:
    app: mysql2
6. Apply the manifests. No particular order is required, but it is best to follow this one:
k apply -f pv_mysql.yaml
k apply -f pvc_mysql.yaml
k apply -f deploy_mysql.yaml
k apply -f svc_mysql.yaml
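Right after applying, everything created in this article can be listed in one shot (PVs are cluster-scoped, so the namespace flag only affects the other resource types):
k get pv,pvc,deploy,svc -n database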
7. Verify the deployment
Check the pods:
[root@master mysql]# k get po -A
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE
database    mysql-7c545744db-xzvnl    1/1     Running   2          24h
database    mysql2-5db57c8bc8-xrnz6   1/1     Running   1          13h
Check the port:
[root@master mysql]# netstat -antup | grep 32222
tcp        0      0 0.0.0.0:32222           0.0.0.0:*               LISTEN      3610/kube-proxy
Port 32222 is opened by kube-proxy, so MySQL can be reached through the IP of any host in the cluster. (My cluster has three servers with IPs 192.168.217.16/17/18; any of them works.)
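A quick connectivity sketch from any machine that can reach the nodes, assuming a mysql command-line client is installed there (the node IP, port, and root password all come from the manifests above):
# requires a local mysql client; password "mima" was set in the Deployment
mysql -h 192.168.217.16 -P 32222 -uroot -pmima -e 'select version();'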
Check the PVCs; the one on the second row is bound to the default nfs StorageClass:
[root@master mysql]# k get pvc -A
NAMESPACE   NAME             STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS      AGE
database    mysql-pvc-test   Bound    mysql-pv-test   1Gi        RWO            nfs-provisioner   47h
database    nfs-pvc-test     Bound    nfs-pv-test     1536Mi     RWO            nfs               16h
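As a final sanity check, you can delete the MySQL pod and let the Deployment recreate it; since the data lives on NFS, anything written before the deletion should still be there afterwards. A sketch using the pod name from the listing above:
k delete po -n database mysql2-5db57c8bc8-xrnz6
# watch the replacement pod come up
k get po -n database -w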
With that, deploying a standalone MySQL inside the k8s cluster has succeeded.