Attaching RBD block devices from k8s with keyring-file authentication

Summary: This article explains how a Kubernetes cluster can authenticate to Ceph with a keyring file and attach RBD block devices, covering both the admin user and a custom user, with detailed steps and caveats.

I. Keyring-file authentication to Ceph from k8s - the admin user

1. Create an RBD block device on the Ceph cluster

    1 Create a storage pool dedicated to K8S
[root@ceph141 ~]# ceph osd pool create yinzhengjie-k8s 128 128
pool 'yinzhengjie-k8s' created
[root@ceph141 ~]# 


    2 Create a 5G image and verify its features
[root@ceph141 ~]# rbd create -s 5G yinzhengjie-k8s/nginx-web --image-feature layering,exclusive-lock
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd info yinzhengjie-k8s/nginx-web | grep "\sfeatures"
    features: layering, exclusive-lock
[root@ceph141 ~]#

2. Install the ceph-common client package on all k8s worker nodes

curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF
yum -y install ceph-common

3. Copy the Ceph admin keyring from the cluster to all worker nodes

[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.231:/etc/ceph/
[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.232:/etc/ceph/
[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.233:/etc/ceph/
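The three scp commands above can be collapsed into one loop. A minimal sketch, assuming the worker IPs from this article's environment (substitute your own); the `echo` makes it a dry run, so remove it to actually copy:

```shell
# Print (dry run) the scp command for each worker node.
# Drop the leading 'echo' inside the loop to perform the copies for real.
distribute_keyring() {
  keyring="$1"; shift
  for node in "$@"; do
    echo scp "$keyring" "${node}:/etc/ceph/"
  done
}
distribute_keyring /etc/ceph/ceph.client.admin.keyring 10.0.0.231 10.0.0.232 10.0.0.233
```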

4. Write the resource manifest

[root@master231 rbd]# cat 01-deploy-svc-volume-rbd-admin-keyring.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-volume-rbd-admin-keyring
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: ceph-rbd
  template:
    metadata:
      labels:
        apps: ceph-rbd
    spec:
      volumes:
      - name: data
        # Use a volume of type rbd
        rbd:
          # Addresses of the ceph cluster's mon daemons
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          # Storage pool to use
          pool: yinzhengjie-k8s
          # Block device image to attach
          image: nginx-web
          # Filesystem type; currently only "ext4", "xfs" and "ntfs" are supported.
          fsType: xfs
          # Whether the device is mounted read-only; defaults to false.
          readOnly: false
          # Ceph user to connect as; defaults to admin if unset
          user: admin
          # Path to the ceph keyring; defaults to "/etc/ceph/keyring"
          keyring: "/etc/ceph/ceph.client.admin.keyring"
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /yinzhengjie-data
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-rbd
spec:
  type: NodePort
  selector:
    apps: ceph-rbd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 20080
[root@master231 rbd]# 
[root@master231 rbd]# kubectl apply -f 01-deploy-svc-volume-rbd-admin-keyring.yaml 
deployment.apps/deploy-volume-rbd-admin-keyring created
service/svc-rbd created
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get pods -o wide
NAME                                               READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-volume-rbd-admin-keyring-5c75b54fd4-vhvls   1/1     Running   0          10s   10.100.2.30    worker233   <none>           <none>

...
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get svc svc-rbd 
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc-rbd   NodePort   10.200.40.135   <none>        80:20080/TCP   70s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get ep svc-rbd 
NAME      ENDPOINTS        AGE
svc-rbd   10.100.2.30:80   2m20s
[root@master231 rbd]#

5. Access test

http://10.0.0.233:20080

6. Scale the replicas to 3 and compare the result

[root@master231 rbd]# kubectl get pods -o wide
NAME                                               READY   STATUS              RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-volume-rbd-admin-keyring-5c75b54fd4-5qr59   0/1     Pending             0          79s     <none>         <none>      <none>           <none>
deploy-volume-rbd-admin-keyring-5c75b54fd4-5r5cp   1/1     Running             0          2m42s   10.100.2.26    worker233   <none>           <none>
deploy-volume-rbd-admin-keyring-5c75b54fd4-9mgr6   0/1     ContainerCreating   0          79s     <none>         worker232   <none>           <none>
...
[root@master231 rbd]# 


Tip:
    Two of the Pods cannot run, because the same block device cannot be used by multiple Pods at the same time.
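The behavior in the tip can be pictured with a toy exclusive lock; this is only an analogy for RBD's exclusive-lock feature, not how Ceph actually implements it. The first "Pod" acquires the lock, and any later one stays busy (Pending) until it is released:

```shell
# Toy model: mkdir is atomic, so the lock directory can be created by exactly
# one caller at a time, mirroring one Pod holding the image's exclusive lock.
LOCKDIR="$(mktemp -d)/nginx-web.lock"
acquire() { mkdir "$LOCKDIR" 2>/dev/null && echo acquired || echo busy; }
release() { rmdir "$LOCKDIR" 2>/dev/null; }
acquire   # the first Pod gets the device
acquire   # a second Pod cannot, and would sit in Pending
release
```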

7. Before deleting the Pod resources, observe the block device's lock state on the Ceph cluster

[root@ceph141 ~]# rbd ls -p yinzhengjie-k8s -l
NAME      SIZE  PARENT FMT PROT LOCK 
nginx-web 5 GiB          2      excl 
[root@ceph141 ~]#

8. Delete the resources

[root@master231 rbd]# kubectl delete -f 01-deploy-svc-volume-rbd-admin-keyring.yaml 
deployment.apps "deploy-volume-rbd-admin-keyring" deleted
service "svc-rbd" deleted
[root@master231 rbd]#

9. After deleting the Pod resources, observe that the block device's lock has been released

[root@ceph141 ~]# rbd ls -p yinzhengjie-k8s -l
NAME      SIZE  PARENT FMT PROT LOCK 
nginx-web 5 GiB          2           
[root@ceph141 ~]#

II. Keyring-file authentication to Ceph from k8s - a custom user

1. Create a custom user on the Ceph cluster

[root@ceph141 ~]# ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx'
AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph auth get client.k8s 
[client.k8s]
    key = AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
    caps mon = "allow r"
    caps osd = "allow rwx"
exported keyring for client.k8s
[root@ceph141 ~]#

2. Export the keyring

[root@ceph141 ~]# ceph auth export client.k8s -o ceph.client.k8s.keyring
export auth(key=AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==)
[root@ceph141 ~]# 
[root@ceph141 ~]# ll ceph.client.k8s.keyring 
-rw-r--r-- 1 root root 107 Feb  3 09:39 ceph.client.k8s.keyring
[root@ceph141 ~]# 
[root@ceph141 ~]# cat ceph.client.k8s.keyring 
[client.k8s]
    key = AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
    caps mon = "allow r"
    caps osd = "allow rwx"
[root@ceph141 ~]#
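If you later need the bare key on its own (for example to base64-encode it into a k8s Secret, as the secretRef approach does), it can be pulled out of the keyring file with awk. A small sketch, assuming the keyring format shown above:

```shell
# Print the cephx key from a keyring file of the form:
#   [client.k8s]
#       key = AQ...==
#       caps mon = "allow r"
extract_key() {
  awk -F' = ' '/^[[:space:]]*key/ {print $2; exit}' "$1"
}
# Usage: extract_key /etc/ceph/ceph.client.k8s.keyring
```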

3. Copy the file to all worker nodes. Note that it has to land in the "/etc/ceph" directory; with this Ceph version a keyring placed elsewhere will not be found.

[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.231:/etc/ceph/
[root@ceph141 ~]# 
[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.232:/etc/ceph/
[root@ceph141 ~]# 
[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.233:/etc/ceph/
[root@ceph141 ~]#

4. Write the k8s resource manifest

[root@master231 rbd]# cat 02-deploy-svc-ing-volume-rbd-k8s-keyring.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-volume-rbd-k8s-keyring
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: ceph-rbd
  template:
    metadata:
      labels:
        apps: ceph-rbd
    spec:
      volumes:
      - name: data
        rbd:
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          pool: yinzhengjie-k8s
          image: nginx-web
          fsType: xfs
          readOnly: false
          # Ceph user to connect as
          user: k8s
          # Path to the keyring. Note: it must live in the "/etc/ceph" directory on the worker nodes.
          # If you copy the auth file to any other directory, our ceph nautilus (v14.2.22) client will not find it; ceph itself does not honor an arbitrary keyring path here.
          keyring: "/etc/ceph/ceph.client.k8s.keyring"
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        volumeMounts:
        - name: data
          mountPath: /yinzhengjie-data
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-rbd-k8s
spec:
  selector:
    apps: ceph-rbd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  # Name of the Ingress controller (IngressClass)
  ingressClassName: mytraefik
  rules:
  - host: v2.yinzhengjie.com
    http:
      paths:
      - backend:
          service:
            name: svc-rbd-k8s
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 rbd]#
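Because of the "/etc/ceph" restriction noted in the manifest, a quick preflight over the manifest file can catch a bad keyring path before `kubectl apply`. A hedged sketch (`check_manifest_keyrings` is a helper invented here, not a kubectl feature):

```shell
# Scan a manifest for keyring: entries and flag any path outside /etc/ceph,
# the only directory the nautilus client searches.
check_manifest_keyrings() {
  grep '^[[:space:]]*keyring:' "$1" | tr -d '"' | awk '{print $2}' |
  while read -r p; do
    case "$p" in
      /etc/ceph/*) echo "ok: $p" ;;
      *)           echo "bad location: $p" ;;
    esac
  done
}
# Usage: check_manifest_keyrings 02-deploy-svc-ing-volume-rbd-k8s-keyring.yaml
```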

5. Run the services

[root@master231 rbd]# kubectl apply -f 02-deploy-svc-ing-volume-rbd-k8s-keyring.yaml 
deployment.apps/deploy-volume-rbd-k8s-keyring created
service/svc-rbd-k8s created
ingress.networking.k8s.io/apps-ingress created
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
deploy-volume-rbd-k8s-keyring-878f87d5c-nwcwr   1/1     Running   0          51s
...
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get ingress apps-ingress 
NAME           CLASS       HOSTS              ADDRESS   PORTS   AGE
apps-ingress   mytraefik   v2.yinzhengjie.com             80      58s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl describe ingress apps-ingress 
Name:             apps-ingress
Labels:           <none>
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host              Path  Backends
  ----              ----  --------
  v2.yinzhengjie.com  
                    /   svc-rbd-k8s:80 (10.100.2.29:80)
Annotations:        <none>
Events:             <none>
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get svc svc-rbd-k8s 
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
svc-rbd-k8s   ClusterIP   10.200.66.84   <none>        80/TCP    71s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl -n yinzhengjie-traefik get svc mytraefik 
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
mytraefik   LoadBalancer   10.200.205.77   10.0.0.189    80:18238/TCP,443:13380/TCP   12d
[root@master231 rbd]# 


A possible error:
2024-02-03 09:53:54.855 7f5c52484c80 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 AuthRegistry(0x55750150e088) no keyring found at /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2024-02-03 09:53:54.881 7f5c52484c80 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 AuthRegistry(0x7ffe24b3e658) no keyring found at /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2024-02-03 09:53:54.889 7f5c52484c80 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
rbd: couldn't connect to the cluster!


Cause:
    The ceph authentication file cannot be found. Check that the keyring was copied to the worker successfully.
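The quickest check for this error is to confirm the file exists on every worker node. A small helper sketch (the path defaults to the one this article uses; adjust if yours differs):

```shell
# Report whether the keyring the rbd client will look for is present on this node.
check_keyring() {
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "missing: $1, re-copy it from the ceph node (see step 3)"
  fi
}
check_keyring /etc/ceph/ceph.client.k8s.keyring
```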

6. Configure name resolution on Windows (hosts entry)

10.0.0.189  v2.yinzhengjie.com

7. Access test

http://v2.yinzhengjie.com/