Creating LVM Volumes on Local Disks in an ACK Cluster

Summary:

The CSI storage plugin provided by the Alibaba Cloud Container Service team supports a variety of storage services in container scenarios, including cloud disks, NAS, OSS, LVM, and memory volumes. This article describes how to use the CSI plugin to automatically provision LVM volumes on Alibaba Cloud local disks.

Overview:

LVM volumes are one type of local storage. The CSI plugin manages several kinds of local volumes under a single driver, localplugin.csi.alibabacloud.com; LVM is one of the storage types handled by this local plugin. The plugin consists of two parts: a ControllerServer and a NodeServer.

ControllerServer: creates and deletes LVM volumes;

NodeServer: mounts, unmounts, and formats LVM volumes, and creates and removes the underlying logical volumes on the node;

Currently supported LVM features include:

Full lifecycle management of LVM volumes;

Basic operations such as formatting, mounting, and deleting LVM volumes;

Automatic VG creation;

Automatic expansion of LVM volumes;
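The automatic expansion listed above is driven entirely from the PVC object once a volume is bound. A minimal sketch, assuming a bound PVC named lvm-pvc whose StorageClass sets allowVolumeExpansion: true:

```shell
# Raise the requested size; the CSI plugin is expected to run lvextend
# and an online filesystem resize on the node.
kubectl patch pvc lvm-pvc -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'

# Watch the new capacity propagate to the PVC status.
kubectl get pvc lvm-pvc -w
```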

Prerequisites:

A Kubernetes cluster; Alibaba Cloud ACK is recommended;

ECS nodes with local disks added to the cluster;

Cluster version 1.14 or later;

Deployment:

Deploy the CSI NodeServer (csi-local-plugin):

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: localplugin.csi.alibabacloud.com
spec:
  attachRequired: false
  podInfoOnMount: true
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-local-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-plugin
  template:
    metadata:
      labels:
        app: csi-local-plugin
    spec:
      tolerations:
        - operator: Exists
      serviceAccount: admin
      priorityClassName: system-node-critical
      hostNetwork: true
      hostPID: true
      containers:
        - name: driver-registrar
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.1.0
          imagePullPolicy: Always
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration

        - name: csi-localplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/plugins/csi-plugin:v1.14-9fa71837c
          imagePullPolicy: "Always"
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--driver=localplugin.csi.alibabacloud.com"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate

Deploy the ControllerServer (csi-local-provisioner):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-local-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-provisioner
  replicas: 2
  template:
    metadata:
      labels:
        app: csi-local-provisioner
    spec:
      tolerations:
      - operator: "Exists"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      priorityClassName: system-node-critical
      serviceAccount: admin
      hostNetwork: true
      containers:
        - name: external-local-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1-5f99079e0-ack
          args:
            - "--csi-address=$(ADDRESS)"
            - "--feature-gates=Topology=True"
            - "--volume-name-prefix=local"
            - "--strict-topology=true"
            - "--timeout=150s"
            - "--extra-create-metadata=true"
            - "--enable-leader-election=true"
            - "--leader-election-type=leases"
            - "--retry-interval-start=500ms"
            - "--v=5"
          env:
            - name: ADDRESS
              value: /socketDir/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /socketDir
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate

Once the plugins are deployed:

# kubectl get pod -n kube-system | grep local
csi-local-plugin-54zvk                             2/2     Running   0          48m
csi-local-plugin-dxxq9                             2/2     Running   0          48m
csi-local-plugin-f5gr4                             2/2     Running   0          48m
csi-local-plugin-fq88g                             2/2     Running   0          48m
csi-local-plugin-tn5vh                             2/2     Running   0          48m
csi-local-provisioner-759699c8d4-6srrq             1/1     Running   3          3h42m
csi-local-provisioner-759699c8d4-fxlb5             1/1     Running   3          3h42m

Usage:

1. Add a node with local disks to the cluster:

# kubectl get node
cn-shanghai.192.168.2.70   Ready    <none>   86s     v1.16.6-aliyun.1

# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vdb  /dev/vdc

Running vgdisplay at this point shows nothing, since no VG exists yet:
# vgdisplay
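With pvType set to localdisk, the plugin builds the VG from the node's local disks automatically. For reference, the manual equivalent of what it does is roughly the following (a sketch only; do not run this yourself if the plugin manages the VG):

```shell
# Initialize the local disks as LVM physical volumes...
pvcreate /dev/vdb /dev/vdc
# ...and group them into the VG named by the StorageClass.
vgcreate volumegroup1 /dev/vdb /dev/vdc
```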

2. Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
    volumeType: LVM
    vgName: volumegroup1
    fsType: ext4
    pvType: "localdisk"
    LvmType: "striping"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

provisioner: the driver that provisions the volume, here the local storage driver;

volumeType: which local storage type to use; LVM is chosen here;

fsType: the filesystem type the LVM volume is formatted with;

pvType: the type of underlying disk used to build the VG; localdisk denotes Alibaba Cloud local disks;

LvmType: the LVM allocation policy; linear and striping are supported;

volumeBindingMode: WaitForFirstConsumer enables delayed binding, so the PV is created only when a pod actually consumes the PVC;

allowVolumeExpansion: whether automatic volume expansion is allowed;
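For comparison, a linear-allocation variant of the same class might look as follows (a sketch: only the LvmType value differs from the class above, and the name csi-local-linear is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: csi-local-linear
provisioner: localplugin.csi.alibabacloud.com
parameters:
    volumeType: LVM
    vgName: volumegroup1
    fsType: ext4
    pvType: "localdisk"
    LvmType: "linear"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```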

3. Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local

# kubectl create -f pvc.yaml
persistentvolumeclaim/lvm-pvc created

Because delayed binding is used, the PVC stays Pending; the PV is created only once a pod consumes the PVC:
# kubectl get pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Pending                                      csi-local      11s

4. Create a Pod that consumes the PVC:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lvm
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        volumeMounts:
          - name: lvm-pvc
            mountPath: "/data"
      volumes:
        - name: lvm-pvc
          persistentVolumeClaim:
            claimName: lvm-pvc

Create the pod as follows:

# kubectl create -f deploy.yaml
deployment.apps/deployment-lvm created

# kubectl get pvc
NAME      STATUS   VOLUME                                       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Bound    local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf   2Gi        RWO            csi-local      47s

# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
deployment-lvm-9f798687c-4h2bs   1/1     Running   0          56s

The LVM volume's PV information:
# kubectl get pv local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: localplugin.csi.alibabacloud.com
  name: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  csi:
    driver: localplugin.csi.alibabacloud.com
    fsType: ext4
    volumeAttributes:
      LvmTypeTag: striping
      csi.storage.k8s.io/pv/name: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
      csi.storage.k8s.io/pvc/name: lvm-pvc
      csi.storage.k8s.io/pvc/namespace: default
      fsType: ext4
      pvType: localdisk
      storage.kubernetes.io/csiProvisionerIdentity: 1584985469710-8081-localplugin.csi.alibabacloud.com
      vgName: volumegroup1
      volume.beta.kubernetes.io/storage-provisioner: localplugin.csi.alibabacloud.com
      volume.kubernetes.io/selected-node: cn-shanghai.192.168.2.70
      volumeType: LVM
    volumeHandle: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cn-shanghai.192.168.2.70
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-local
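At this point the volume can be exercised from inside the pod. A quick check (assuming a reasonably recent kubectl, which resolves deploy/deployment-lvm to one of the deployment's pods):

```shell
# The mount should show up as an ext4 filesystem of the requested size.
kubectl exec deploy/deployment-lvm -- df -h /data

# Write and read back a file to confirm the volume is usable.
kubectl exec deploy/deployment-lvm -- sh -c 'echo hello > /data/test && cat /data/test'
```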

5. Log in to the node and verify the LVM volume:

VG information:

# vgdisplay
  --- Volume group ---
  VG Name               volumegroup1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               207.99 GiB
  PE Size               4.00 MiB
  Total PE              53246
  Alloc PE / Size       512 / 2.00 GiB
  Free  PE / Size       52734 / 205.99 GiB
  VG UUID               04A5WZ-6dtd-LVPe-EvUh-n9uG-4ndy-fvRfaf

Physical volume (PV) information:

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               volumegroup1
  PV Size               104.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              26623
  Free PE               26111
  Allocated PE          512
  PV UUID               nBDn6O-7lf7-333d-PjvR-mezx-js6Y-pOOd57

  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               volumegroup1
  PV Size               104.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              26623
  Free PE               26623
  Allocated PE          0
  PV UUID               9ELK0x-Gw6e-VAUq-DcP2-kYgR-Mxva-pAUB48  

Logical volume (LV) information:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/volumegroup1/local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  LV Name                local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  VG Name                volumegroup1
  LV UUID                ZHb6N9-XN7t-DxER-OWi0-rUXi-XcbL-YoY1aK
  LV Write Access        read/write
  LV Creation host, time iZuf667ifeuz5zz5j9oteeZ, 2020-03-24 01:44:31 +0800
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0
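To tie the LV back to the pod, it can help to confirm the block device and its mount point on the node (the paths below are typical for this setup; the device-mapper name and pod directory will differ per node):

```shell
# The LV appears as a device-mapper block device...
lsblk /dev/volumegroup1/local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf

# ...mounted under the kubelet pod volume directory.
mount | grep volumegroup1
```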

6. Delete the Pod and PVC:

# kubectl delete -f deploy.yaml
deployment.apps "deployment-lvm" deleted

# kubectl delete -f pvc.yaml
persistentvolumeclaim "lvm-pvc" deleted


On the node, check the LVM volume again:
# vgdisplay
  --- Volume group ---
  VG Name               volumegroup1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               207.99 GiB
  PE Size               4.00 MiB
  Total PE              53246
  Alloc PE / Size       0 / 0
  Free  PE / Size       53246 / 207.99 GiB
  VG UUID               04A5WZ-6dtd-LVPe-EvUh-n9uG-4ndy-fvRfaf

The LVM volume has been deleted:
# lvdisplay
#