Helm is a package manager for Kubernetes, analogous to Linux package managers such as yum or apt. It makes it easy to deploy pre-packaged collections of YAML manifests onto a Kubernetes cluster.
Helm has three important concepts:
(1) helm: the command-line client tool, used mainly to create, package, publish, and manage charts for Kubernetes applications.
(2) Chart: the application description — a collection of files describing the related Kubernetes resources.
(3) Release: a deployment instance based on a chart. Each time Helm runs a chart it generates a corresponding release, and the actual running resource objects are created in Kubernetes.
A chart repository works much like a Maven repository, for example:
https://artifacthub.io/packages/helm/bitnami/redis
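As a quick illustration of how a chart turns into a release (a minimal sketch; the Bitnami repo URL and the release name `my-redis` are examples, not part of the setup described below):

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami  # register the chart repository
helm repo update
helm install my-redis bitnami/redis   # the chart bitnami/redis becomes the release my-redis
helm list                             # releases are tracked per namespace
helm uninstall my-redis               # removes the resources the release created
```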
Deploying the Helm client
Helm client downloads: https://github.com/helm/helm/releases
Download the archive, extract it, and move the binary to /usr/bin/:
```bash
wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
tar zxvf helm-v3.2.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
```
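A quick sanity check after installation (`helm version` is a standard subcommand):

```bash
# Verify that the client is on the PATH and runs
helm version
```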
Common Helm commands
| Command | Description |
| --- | --- |
| create | Create a chart with the given name |
| dependency | Manage chart dependencies |
| get | Download extended information of a named release. Subcommands: all, hooks, manifest, notes, values |
| history | Fetch release history |
| install | Install a chart |
| list | List releases |
| package | Package a chart directory into a chart archive |
| pull | Download a chart from a remote repository and (optionally) unpack it locally, e.g. `helm pull stable/mysql --untar` |
| repo | Add, list, remove, update, and index chart repositories. Subcommands: add, index, list, remove, update |
| rollback | Roll back a release to a previous revision |
| search | Search for charts by keyword. Subcommands: hub, repo |
| show | Show detailed information about a chart. Subcommands: all, chart, readme, values |
| status | Show the status of a named release |
| template | Render chart templates locally |
| uninstall | Uninstall a release |
| upgrade | Upgrade a release |
| version | Show the Helm client version |
Configuring chart repositories (mirrors in China)
Microsoft mirror (http://mirror.azure.cn/kubernetes/charts/) — recommended; it carries essentially every chart found in the official repository.
Alibaba Cloud mirror (https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts)
Official repository (https://hub.kubeapps.com/charts/incubator) — the official chart hub; access from inside China can be unreliable.
Add repositories
```bash
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
```
List the configured repositories
```bash
helm repo list
helm search repo stable
```
Remove a repository
```bash
helm repo remove aliyun
```
Deploying an application with a chart
```bash
# Search for a chart
[root@node1 redis]# helm search repo weave
NAME                 CHART VERSION  APP VERSION  DESCRIPTION
aliyun/weave-cloud   0.1.2                       Weave Cloud is a add-on to Kubernetes which pro...
aliyun/weave-scope   0.9.2          1.6.5        A Helm chart for the Weave Scope cluster visual...
stable/weave-cloud   0.3.9          1.4.0        DEPRECATED - Weave Cloud is a add-on to Kuberne...
stable/weave-scope   1.1.12         1.12.0       DEPRECATED - A Helm chart for the Weave Scope c...

# Show the default values of a chart
[root@node1 ~]# helm show values aliyun/weave-cloud

# The same command works for any chart
[root@node1 ~]# helm show values aliyun/redis | more

# A chart can also be pulled as a local package and inspected or installed from there
[root@node1 redis]# helm pull aliyun/redis
[root@node1 redis]# ls
redis  redis-1.1.15.tgz
[root@node1 redis]# cd redis/
[root@node1 redis]# ls
Chart.yaml  README.md  templates  values.yaml

# Install from inside the unpacked chart directory
[root@node1 redis]# helm install test-redis .

# Show chart metadata
[root@node1 redis]# helm show chart aliyun/weave-cloud
apiVersion: v1
description: |
  Weave Cloud is a add-on to Kubernetes which provides Continuous Delivery,
  along with hosted Prometheus Monitoring and a visual dashboard for exploring
  & debugging microservices
home: https://weave.works
icon: https://www.weave.works/assets/images/bltd108e8f850ae9e7c/weave-logo-512.png
maintainers:
- email: ilya@weave.works
  name: Ilya Dmitrichenko
name: weave-cloud
version: 0.1.2

# Install a chart and name the release ui-test
[root@node1 redis]# helm install ui-test stable/weave-scope
NAME: ui-test
LAST DEPLOYED: Sun Apr  9 10:56:24 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-test-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:
https://www.weave.works/docs/scope/latest/introducing/

# Check release status
[root@node1 redis]# helm list
NAME     NAMESPACE  REVISION  UPDATED                                  STATUS    CHART               APP VERSION
ui       default    1         2023-03-01 23:14:36.243624614 +0800 CST  deployed  weave-scope-1.1.12  1.12.0
ui-test  default    1         2023-04-09 10:56:24.010413533 +0800 CST  deployed  weave-scope-1.1.12  1.12.0

# Show detailed release information
[root@node1 redis]# helm status ui-test
NAME: ui-test
LAST DEPLOYED: Sun Apr  9 10:56:24 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-test-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:
https://www.weave.works/docs/scope/latest/introducing/

# Check the pods
[root@node1 redis]# kubectl get pods -o wide
NAME                                                READY  STATUS   RESTARTS  AGE  IP              NODE   NOMINATED NODE  READINESS GATES
weave-scope-agent-ui-test-9g67k                     1/1    Running  0         17m  192.168.31.138  node1  <none>          <none>
weave-scope-cluster-agent-ui-test-6db7576b54-mtftt  1/1    Running  0         17m  10.244.1.143    node1  <none>          <none>

# At this point the Service has no externally exposed port
[root@node1 redis]# kubectl get svc -A
NAMESPACE  NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
default    kubernetes           ClusterIP  10.96.0.1       <none>       443/TCP  202d
default    ui-test-weave-scope  ClusterIP  10.100.152.114  <none>       80/TCP   3m20s

# Edit the Service and change it to type: NodePort so it can be reached from outside
[root@node1 redis]# kubectl edit svc ui-test-weave-scope

# Check again: a node port has been allocated
[root@node1 redis]# kubectl get svc -A
NAMESPACE  NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
default    kubernetes           ClusterIP  10.96.0.1       <none>       443/TCP       202d
default    ui-test-weave-scope  NodePort   10.100.152.114  <none>       80:30052/TCP  21m

# Uninstall the release; its pods and Services are removed with it
[root@node1 redis]# helm uninstall ui-test
release "ui-test" uninstalled
```
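As an aside, the interactive `kubectl edit` step above could also be done non-interactively with `kubectl patch` (a sketch using the same Service name as above):

```bash
# Switch the release's Service to NodePort without opening an editor
kubectl -n default patch svc ui-test-weave-scope -p '{"spec": {"type": "NodePort"}}'
```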
Creating a custom chart
```bash
# Create a chart skeleton
[root@node1 test]# helm create mychart
Creating mychart
[root@node1 test]# cd mychart/

# The skeleton contains several template files
[root@node1 mychart]# ls
charts  Chart.yaml  templates  values.yaml
# Chart.yaml  - chart metadata (name, version, description)
# templates   - the YAML manifests; they can reference global values
# values.yaml - default values used by the templates

# Empty the templates directory, then generate plain manifests with kubectl
[root@node1 templates]# kubectl create deploy web-test --image=nginx --dry-run=client -o yaml > web-deploy.yaml
[root@node1 templates]# ls
web-deploy.yaml

# Generate a Service manifest the same way
[root@node1 templates]# kubectl expose deployment web-test --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > service.yaml

# Both generated files are in place
[root@node1 templates]# ls
service.yaml  web-deploy.yaml

# Install the custom chart with helm
[root@node1 test]# helm install web1 mychart/
NAME: web1
LAST DEPLOYED: Sun Apr  9 16:26:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# The pod and Service have been created
[root@node1 test]# kubectl get svc -A
NAMESPACE  NAME        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
default    kubernetes  ClusterIP  10.96.0.1       <none>       443/TCP       203d
default    web-test    NodePort   10.103.178.251  <none>       80:30187/TCP  39s
[root@node1 test]# kubectl get pod -A
NAMESPACE  NAME                        READY  STATUS             RESTARTS  AGE
default    nginx-dep1-cf6cfcf66-2sh46  0/1    Completed          0         37d
default    web-test-5f547489c6-rm5c8   0/1    ContainerCreating  0         63s

# Uninstall through helm
[root@node1 test]# helm uninstall web1
release "web1" uninstalled

# If the YAML in the chart changes, upgrade the release (it must still be installed);
# REVISION becomes 2
[root@node1 test]# helm upgrade web1 mychart/
Release "web1" has been upgraded. Happy Helming!
NAME: web1
LAST DEPLOYED: Sun Apr  9 16:32:21 2023
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
[root@node1 test]# helm list
NAME  NAMESPACE  REVISION  UPDATED                                 STATUS    CHART          APP VERSION
web1  default    2         2023-04-09 16:32:21.26344832 +0800 CST  deployed  mychart-0.1.0  1.16.0
```
Making templates reusable with dynamically generated parameters
```bash
# Value reference syntax inside templates:
#   {{ .Values.<name> }}  - a value defined in values.yaml
#   {{ .Release.Name }}   - the release name
# Edit values.yaml and define a few variables
[root@node1 mychart]# ls
charts  Chart.yaml  templates  values.yaml
[root@node1 mychart]# vim values.yaml
```
values.yaml
```yaml
# number of replicas
replicas: 1
# image
image: nginx
# image tag
tag: 1.16
# label
label: nginx
# port
port: 80
```
Edit the YAML files in the templates directory
```bash
[root@node1 mychart]# ls
charts  Chart.yaml  templates  values.yaml
[root@node1 mychart]# cd templates/
[root@node1 templates]# ls
deploy.yaml  service.yaml
```
deploy.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: {{ .Values.label }}
  name: {{ .Release.Name }}-deploy
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - image: {{ .Values.image }}
        name: {{ .Values.image }}
        resources: {}
status: {}
```
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web-test
  name: {{ .Release.Name }}-svc
spec:
  ports:
  - port: {{ .Values.port }}
    protocol: TCP
    targetPort: 80
  selector:
    app: {{ .Values.label }}
  type: NodePort
```
Test the rendering with a dry run; nothing is actually deployed:

```bash
[root@node1 test]# helm install --dry-run web2 mychart/
NAME: web2
LAST DEPLOYED: Sun Apr  9 17:02:42 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web-test
  name: web2-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
---
# Source: mychart/templates/deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: web2-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```
Run the templated chart for real
```bash
[root@node1 test]# helm install web2 mychart/
NAME: web2
LAST DEPLOYED: Sun Apr  9 17:08:23 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# The release is listed
[root@node1 test]# helm list
NAME  NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
web2  default    1         2023-04-09 17:08:23.614684245 +0800 CST  deployed  mychart-0.1.0  1.16.0

# The Service created from the template is running
[root@node1 test]# kubectl get svc -A
NAMESPACE  NAME        TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)       AGE
default    kubernetes  ClusterIP  10.96.0.1    <none>       443/TCP       203d
default    web2-svc    NodePort   10.97.119.8  <none>       80:30961/TCP  76s
```
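Values can also be overridden on the command line instead of editing values.yaml (a sketch; the release name web3 and the numbers are only examples, and `replicas` takes effect because the deploy.yaml template above references `.Values.replicas`):

```bash
# Override individual values at install time
helm install web3 mychart/ --set replicas=3 --set image=nginx
# Or upgrade an existing release with new values
helm upgrade web2 mychart/ --set port=8080
```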
Installing MySQL with Helm
```bash
# Find a chart version to install
[root@node1 ~]# helm search repo mysql
NAME                              CHART VERSION  APP VERSION  DESCRIPTION
aliyun/mysql                      0.3.5                       Fast, reliable, scalable, and easy to use open-...
bitnami/mysql                     9.7.1          8.0.32       MySQL is a fast, reliable, scalable, and easy t...
stable/mysql                      1.6.9          5.7.30       DEPRECATED - Fast, reliable, scalable, and easy...
stable/mysqldump                  2.6.2          2.4.1        DEPRECATED! - A Helm chart to help backup MySQL...
stable/prometheus-mysql-exporter  0.7.1          v0.11.0      DEPRECATED A Helm chart for prometheus mysql ex...
aliyun/percona                    0.3.0                       free, fully compatible, enhanced, open source d...
aliyun/percona-xtradb-cluster     0.0.2          5.7.19       free, fully compatible, enhanced, open source d...
bitnami/phpmyadmin                10.4.6         5.2.1        phpMyAdmin is a free software tool written in P...
stable/percona                    1.2.3          5.7.26       DEPRECATED - free, fully compatible, enhanced, ...
stable/percona-xtradb-cluster     1.0.8          5.7.19       DEPRECATED - free, fully compatible, enhanced, ...
stable/phpmyadmin                 4.3.5          5.0.1        DEPRECATED phpMyAdmin is an mysql administratio...
aliyun/gcloud-sqlproxy            0.2.3                       Google Cloud SQL Proxy
aliyun/mariadb                    2.1.6          10.1.31      Fast, reliable, scalable, and easy to use open-...
bitnami/mariadb                   11.5.6         10.6.12      MariaDB is an open source, community-developed ...
bitnami/mariadb-galera            7.5.5          10.6.12      MariaDB Galera is a multi-primary database clus...
stable/gcloud-sqlproxy            0.6.1          1.11         DEPRECATED Google Cloud SQL Proxy
stable/mariadb                    7.3.14         10.3.22      DEPRECATED Fast, reliable, scalable, and easy t...

# Install a release named mydb. The NOTES printed at the end explain how to
# retrieve the initial root password, how to test from inside a client pod,
# and what the in-cluster connection address is.
[root@node1 ~]# helm install mydb stable/mysql
NAME: mydb
LAST DEPLOYED: Mon Apr 10 21:28:49 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mydb-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mydb-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:

    $ mysql -h mydb-mysql -p

To connect to your database directly from outside the K8s cluster:

    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mydb-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

# The release is deployed
[root@node1 ~]# helm list
NAME  NAMESPACE  REVISION  UPDATED                                  STATUS    CHART        APP VERSION
mydb  default    1         2023-04-10 21:28:49.550686117 +0800 CST  deployed  mysql-1.6.9  5.7.30

# But the pod itself is stuck in Pending
[root@node1 ~]# kubectl get pods -A
NAMESPACE  NAME                         READY  STATUS   RESTARTS  AGE
default    mydb-mysql-746cbdbff6-ttqvq  0/1    Pending  0         3m55s

# The events show why: the PersistentVolumeClaim is unbound
[root@node1 ~]# kubectl describe pods mydb-mysql-746cbdbff6-ttqvq
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  26s (x6 over 4m50s)  default-scheduler  running "VolumeBinding" filter plugin for pod "mydb-mysql-746cbdbff6-ttqvq": pod has unbound immediate PersistentVolumeClaims

# The PVC created by the chart is also Pending; as soon as a matching PV
# exists it will bind and the pod can run
[root@node1 ~]# kubectl get pvc
NAME        STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mydb-mysql  Pending                                                6m29s

# Inspect the PVC to see what kind of PV it needs
[root@node1 ~]# kubectl get pvc mydb-mysql -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:

# Now create the PV; the YAML is shown below
[root@node1 pv]# vim pv.yaml
```
Create pv.yaml
```yaml
apiVersion: v1            # API version; v1 is the stable version
kind: PersistentVolume    # resource type: PersistentVolume
metadata:                 # metadata block for this resource
  name: pvc0001           # PV name, user-defined
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  # An NFS backend is used here; a local volume would also work
  nfs:
    path: /home/nfs/mydb   # path exported by the NFS server
    server: 192.168.31.119
```
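Before applying the PV it can help to confirm the NFS export is actually reachable from the node (a sketch; it assumes the showmount tool from nfs-utils is installed and that /home/nfs/mydb may still need creating on the server):

```bash
# List the exports offered by the NFS server used in pv.yaml
showmount -e 192.168.31.119
# On the NFS server, make sure the subdirectory referenced by the PV exists
mkdir -p /home/nfs/mydb
```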
```bash
# Create the PV
[root@node1 pv]# kubectl apply -f pv.yaml
persistentvolume/pvc0001 created

# Check the PV: it is already bound to the chart's PVC
[root@node1 pv]# kubectl get pv
NAME     CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM               STORAGECLASS  REASON  AGE
pvc0001  8Gi       RWO           Retain          Bound   default/mydb-mysql                        15s

# The PVC is now Bound as well
[root@node1 pv]# kubectl get pvc
NAME        STATUS  VOLUME   CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mydb-mysql  Bound   pvc0001  8Gi       RWO                         26m

# The pod is now Running
NAMESPACE  NAME                         READY  STATUS   RESTARTS  AGE
default    mydb-mysql-746cbdbff6-jpsgn  1/1    Running  0         4m19s

# Show the release details again
[root@node1 pv]# helm status mydb
NAME: mydb
LAST DEPLOYED: Mon Apr 10 22:11:22 2023

# Print the generated MySQL root password
[root@node1 pv]# kubectl get secret --namespace default mydb-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
uaOA6xlSOQ

# No need to install a separate client; exec into the container and test there
[root@node1 pv]# kubectl exec -it mydb-mysql-746cbdbff6-jpsgn /bin/sh

# Inside the container, log in
# mysql -uroot -puaOA6xlSOQ
mysql: [Warning] Using a password on the command line interface can be insecure.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

# List the databases
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

# Create a database
mysql> create database test;
Query OK, 1 row affected (0.01 sec)
```
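Connecting from the host instead of exec'ing into the pod is also possible, reusing the commands from the chart NOTES above (a sketch; it assumes a mysql client is installed on the host):

```bash
# Pull the generated root password into a variable
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mydb-mysql \
  -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
# Forward the MySQL port to the local machine, then connect
kubectl port-forward svc/mydb-mysql 3306 &
mysql -h 127.0.0.1 -P 3306 -u root -p"${MYSQL_ROOT_PASSWORD}"
```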
Binding PV and PVC automatically
Kubernetes persistent storage: a StorageClass binds PVs dynamically. A StorageClass describes two things: the properties of the PVs it creates (storage type, volume size, and so on) and the storage plugin (provisioner) used to create them, such as NFS or Ceph.

Official example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
```

With these two pieces of information, Kubernetes can take a user-submitted PVC, find the matching StorageClass, and call the provisioner that the StorageClass declares to create the PV.

Reclaim policies:
- Retain: manual reclamation
- Recycle: the volume must be scrubbed before it can be reused
- Delete: when the corresponding PersistentVolumeClaim is deleted, the dynamically provisioned volume is deleted automatically

Phases a volume can be in:
- Available: free resource, not yet bound to a PVC
- Bound: successfully bound to a PVC
- Released: the PVC has been deleted, but the PV has not yet been reclaimed by the cluster
- Failed: reclamation failed

```bash
# Pull the chart we need locally so its contents are easy to inspect
[root@node1 ~]# helm search repo nfs-client-provisioner
NAME                           CHART VERSION  APP VERSION  DESCRIPTION
stable/nfs-client-provisioner  1.2.11         3.1.0        DEPRECATED - nfs-client is an automatic provisi...

# Pull it locally
[root@node1 ~]# helm pull stable/nfs-client-provisioner
[root@node1 ~]# ls

# Unpack it
[root@node1 ~]# tar -zvxf nfs-client-provisioner-1.2.11.tgz
nfs-client-provisioner/Chart.yaml

# Enter the directory
[root@node1 ~]# cd nfs-client-provisioner/
[root@node1 nfs-client-provisioner]# ls
Chart.yaml  ci  README.md  templates  values.yaml

# Edit the values; only these parts need changing
[root@node1 nfs-client-provisioner]# vim values.yaml
nfs:
  server: 192.168.31.119   # NFS server address
  path: /home/nfs          # directory exported by the NFS server
  mountOptions:

# Set a StorageClass name
# Ignored if storageClass.create is false
  name: nfs-client         # change the name to your own

# Create a namespace for the provisioner
[root@ycloud nfs-client-provisioner]# kubectl create ns nfs-pro

# Install
[root@node1 ~]# helm install nfs-provisioner ./nfs-client-provisioner -n nfs-pro
NAME: nfs-provisioner
LAST DEPLOYED: Tue Apr 11 21:09:14 2023
NAMESPACE: nfs-pro
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Check the release
[root@node1 ~]# helm list -n nfs-pro
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                          APP VERSION
nfs-provisioner  nfs-pro    1         2023-04-11 21:09:14.348090337 +0800 CST  deployed  nfs-client-provisioner-1.2.11  3.1.0

# Check the StorageClass
[root@node1 ~]# kubectl get sc
NAME        PROVISIONER                                            RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
nfs-client  cluster.local/nfs-provisioner-nfs-client-provisioner  Delete         Immediate          true                  4m51s

# Details
[root@node1 ~]# kubectl describe sc nfs-client
Name:                  nfs-client
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           cluster.local/nfs-provisioner-nfs-client-provisioner
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

# Check the provisioner pod
[root@node1 ~]# kubectl get pods -A
NAMESPACE  NAME                                                      READY  STATUS   RESTARTS  AGE
nfs-pro    nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7   1/1    Running  0         10m
```
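If you prefer not to reference the class from every PVC, the class could instead be marked as the cluster default (a sketch; the annotation is the standard Kubernetes one, and nfs-client is the class created above):

```bash
# Mark nfs-client as the default StorageClass for PVCs that name no class
kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```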
Otherwise, all a PVC has to do is reference the StorageClass by name:
```bash
[root@ycloud ycloud]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test
spec:
  storageClassName: nfs-client   # the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```
The PV is bound automatically:
```bash
[root@node1 ~]# kubectl get pvc
NAME     STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
my-test  Bound   pvc-d253f2a3-12ea-40c8-bbb7-298936a80a1e  500Mi     RWO           nfs-client    2m47s

[root@node1 ~]# kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM            STORAGECLASS  REASON  AGE
pvc-d253f2a3-12ea-40c8-bbb7-298936a80a1e  500Mi     RWO           Delete          Bound   default/my-test  nfs-client

[root@node1 ~]# kubectl describe pvc my-test
Name:          my-test
Namespace:     default
StorageClass:  nfs-client
Status:        Bound
Volume:        pvc-d253f2a3-12ea-40c8-bbb7-298936a80a1e
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-provisioner-nfs-client-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      500Mi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age                  From                                                                 Message
  ----    ------                 ----                 ----                                                                 -------
  Normal  Provisioning           108s                 cluster.local/nfs-provisioner-nfs-client-provisioner_nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7_133bb9c2-d86a-11ed-9e13-3ad5ec42bf04  External provisioner is provisioning volume for claim "default/my-test"
  Normal  ExternalProvisioning   108s (x2 over 108s)  persistentvolume-controller                                          waiting for a volume to be created, either by external provisioner "cluster.local/nfs-provisioner-nfs-client-provisioner" or manually created by system administrator
  Normal  ProvisioningSucceeded  108s                 cluster.local/nfs-provisioner-nfs-client-provisioner_nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7_133bb9c2-d86a-11ed-9e13-3ad5ec42bf04  Successfully provisioned volume pvc-d253f2a3-12ea-40c8-bbb7-298936a80a1e
```
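To see the claim actually being used, a throwaway pod can mount it (a minimal sketch; the pod name pvc-demo, image, and mount path are made up for illustration, while claimName my-test is the PVC created above):

```bash
# Create a pod that writes into the dynamically provisioned volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-test
EOF
```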
Testing with the MySQL example
```bash
# Export a copy of the default values
[root@node1 ~]# helm show values stable/mysql > values.yaml

# Delete everything that is not needed; storageClass must match the class created earlier
[root@node1 ~]# vim values.yaml
```

```yaml
# root password
mysqlRootPassword: testing
# an additional user and password
mysqlUser: k8s
mysqlPassword: k8sasd@123
mysqlDatabase: k8s
persistence:
  enabled: true
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 8Gi
  annotations: {}
```

```bash
# Install using the customized values file
[root@node1 ~]# helm install mydb -f values.yaml stable/mysql

# The pod is already running
[root@node1 ~]# kubectl get pods -A
NAMESPACE  NAME                         READY  STATUS   RESTARTS  AGE
default    mydb-mysql-7c47598cdc-v6x7c  1/1    Running  0         15s

# The PVC was bound to a dynamically provisioned PV automatically
[root@node1 ~]# kubectl get pvc
NAME        STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mydb-mysql  Bound   pvc-76ce92b0-a90a-4b0c-a8de-d0f6c8cdf361  8Gi       RWO           nfs-client    28m

# The manually created PV from earlier shows the claim it had been bound to
# and is now in Released state
[root@node1 ~]# kubectl get pv
NAME     CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS    CLAIM               STORAGECLASS  REASON  AGE
pvc0001  8Gi       RWO           Retain          Released  default/mydb-mysql
```
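To clean up after the experiment, something like the following would work (a sketch; it assumes the releases, PVCs, and PVs shown above still exist):

```bash
# Remove the MySQL release; its PVC and the dynamically provisioned PV
# (reclaimPolicy: Delete) are cleaned up by the nfs-client provisioner
helm uninstall mydb
# Remove the test claim and the manually created PV left in Released state
kubectl delete pvc my-test
kubectl delete pv pvc0001
```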