Flink-12: Flink Java in 3 Minutes - Flink Cluster on Cloud-Native Kubernetes, Rancher, Detailed StatefulSet YAML, Scale-Out/Scale-In Deployment, Docker Container Orchestration

Code Repository

The code is kept in sync on GitHub:

https://github.com/turbo-duck/flink-demo

Background

Standalone Mode

We already verified standalone mode in earlier chapters, but for real workloads the job has to be submitted to a server to run.


Cluster Mode

We also verified this in earlier chapters, deploying with Docker for convenience during testing and then using the container-orchestration tool docker-compose to bring up a cluster in the following configurations:


  • Simple test: 1 x JobManager + 1 x TaskManager
  • Cluster mode: 1 x JobManager + 3 x TaskManager
  • HA mode: 2 x JobManager + ZooKeeper + 3 x TaskManager

However, a docker-compose deployment like the above still runs on a single machine. In real production we may need to deploy across multiple machines, and ideally we want dynamic scale-out and scale-in so that resources are used as efficiently as possible.


Cloud-Native Deployment

First, make sure you have a Kubernetes cluster available.

To keep things simple, we will use Rancher's UI to drive the operations here.

Once you are comfortable with that, you can try deploying with plain YAML.

Finally, for production, Helm plus the Flink Kubernetes Operator can simplify the whole process considerably; a sketch follows.
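This chapter sticks to Rancher-managed StatefulSets, but as a hedged sketch of the operator route: once the Flink Kubernetes Operator is installed (typically via its Helm chart), a session cluster can be declared with a FlinkDeployment custom resource, roughly along the lines of the operator's basic example. The image tag, flinkVersion value and resource numbers below are placeholders, not taken from this article:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-session
  namespace: flink-server
spec:
  image: flink:1.17                 # placeholder image tag
  flinkVersion: v1_17
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink             # service account from the operator's examples
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    replicas: 3                     # TaskManagers are scaled declaratively here
    resource:
      memory: "2048m"
      cpu: 1

With the operator in place, scaling and upgrades become edits to this one resource instead of changes to individual StatefulSets.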

JobManager

The JobManager needs the following ports exposed (they match the container ports in the StatefulSet below):
  • 8081 - web UI / REST
  • 6123 - RPC
  • 6124 - blob server

The full YAML is shown below. It was exported from the running cluster, so server-managed fields (managedFields, status, creationTimestamp, resourceVersion, uid) are still present; they can be stripped if you want to re-apply the manifest elsewhere:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    field.cattle.io/creatorId: u-s4mkr
    field.cattle.io/publicEndpoints: '[{"addresses":["10.10.52.11"],"port":31120,"protocol":"TCP","serviceName":"flink-server:jobmanager-nodeport","allNodes":true},{"addresses":["10.10.52.11"],"port":30985,"protocol":"TCP","serviceName":"flink-server:jobmanager-nodeport","allNodes":true},{"addresses":["10.10.52.11"],"port":30166,"protocol":"TCP","serviceName":"flink-server:jobmanager-nodeport","allNodes":true}]'
  creationTimestamp: "2023-06-08T08:49:14Z"
  generation: 21
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: statefulSet-flink-server-jobmanager
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/creatorId: {}
          f:field.cattle.io/publicEndpoints: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
          f:workload.user.cattle.io/workloadselector: {}
      f:spec:
        f:podManagementPolicy: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:serviceName: {}
        f:template:
          f:metadata:
            f:annotations:
              .: {}
              f:cattle.io/timestamp: {}
              f:field.cattle.io/ports: {}
            f:labels:
              .: {}
              f:workload.user.cattle.io/workloadselector: {}
          f:spec:
            f:containers:
              k:{"name":"jobmanager"}:
                .: {}
                f:args: {}
                f:env:
                  .: {}
                  k:{"name":"JOB_MANAGER_RPC_ADDRESS"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":6123,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":6124,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":8081,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources: {}
                f:securityContext:
                  .: {}
                  f:allowPrivilegeEscalation: {}
                  f:capabilities: {}
                  f:privileged: {}
                  f:readOnlyRootFilesystem: {}
                  f:runAsNonRoot: {}
                f:stdin: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:tty: {}
            f:dnsConfig: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
        f:updateStrategy:
          f:type: {}
    manager: rancher
    operation: Update
    time: "2023-06-08T11:20:03Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:collisionCount: {}
        f:currentReplicas: {}
        f:currentRevision: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updateRevision: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2024-06-27T01:24:45Z"
  name: jobmanager
  namespace: flink-server
  resourceVersion: "126393476"
  uid: 461079aa-69f6-423e-85b7-36fdc5c513e7
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: statefulSet-flink-server-jobmanager
  serviceName: jobmanager
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2023-06-08T11:18:06Z"
        field.cattle.io/ports: '[[{"containerPort":8081,"dnsName":"jobmanager-nodeport","hostPort":0,"kind":"NodePort","name":"jm81","protocol":"TCP","sourcePort":31120},{"containerPort":6123,"dnsName":"jobmanager-nodeport","hostPort":0,"kind":"NodePort","name":"jm23","protocol":"TCP"},{"containerPort":6124,"dnsName":"jobmanager-nodeport","hostPort":0,"kind":"NodePort","name":"jm24","protocol":"TCP"}]]'
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: statefulSet-flink-server-jobmanager
    spec:
      containers:
      - args:
        - jobmanager
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: jobmanager
        image: 10.10.52.8/flink/flink
        imagePullPolicy: Always
        name: jobmanager
        ports:
        - containerPort: 8081
          name: jm81
          protocol: TCP
        - containerPort: 6123
          name: jm23
          protocol: TCP
        - containerPort: 6124
          name: jm24
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsConfig: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    type: RollingUpdate
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: jobmanager-6ddbf8767b
  observedGeneration: 21
  readyReplicas: 1
  replicas: 1
  updateRevision: jobmanager-6ddbf8767b
  updatedReplicas: 1
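The StatefulSet above references serviceName: jobmanager, and the publicEndpoints annotation points at a NodePort service named jobmanager-nodeport; Rancher created both services when the ports were configured in its UI. As a rough hand-written sketch of what those services look like (only the 31120 -> 8081 mapping is taken from the annotation above, the other nodePort values are left for Kubernetes to assign):

apiVersion: v1
kind: Service
metadata:
  name: jobmanager              # must match the StatefulSet's serviceName
  namespace: flink-server
spec:
  clusterIP: None               # headless service for stable pod DNS
  selector:
    workload.user.cattle.io/workloadselector: statefulSet-flink-server-jobmanager
  ports:
  - name: jm81
    port: 8081
  - name: jm23
    port: 6123
  - name: jm24
    port: 6124
---
apiVersion: v1
kind: Service
metadata:
  name: jobmanager-nodeport
  namespace: flink-server
spec:
  type: NodePort
  selector:
    workload.user.cattle.io/workloadselector: statefulSet-flink-server-jobmanager
  ports:
  - name: jm81
    port: 8081
    targetPort: 8081
    nodePort: 31120             # web UI port exposed in the annotation above
  - name: jm23
    port: 6123
    targetPort: 6123
  - name: jm24
    port: 6124
    targetPort: 6124

The JOB_MANAGER_RPC_ADDRESS=jobmanager environment variable used by both StatefulSets resolves through this headless service name inside the flink-server namespace.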

TaskManager

The TaskManager needs the following ports exposed (matching the container ports in the StatefulSet below):
  • 6121 - data transfer
  • 6122 - RPC

The full YAML is shown below; like the JobManager manifest, it is a cluster export and still carries the server-managed fields:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    field.cattle.io/creatorId: u-s4mkr
    field.cattle.io/publicEndpoints: '[{"addresses":["10.10.52.11"],"port":31538,"protocol":"TCP","serviceName":"flink-server:taskmanager-nodeport","allNodes":true},{"addresses":["10.10.52.11"],"port":30067,"protocol":"TCP","serviceName":"flink-server:taskmanager-nodeport","allNodes":true}]'
  creationTimestamp: "2023-06-08T08:49:30Z"
  generation: 10
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: statefulSet-flink-server-taskmanager
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/creatorId: {}
          f:field.cattle.io/publicEndpoints: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
          f:workload.user.cattle.io/workloadselector: {}
      f:spec:
        f:podManagementPolicy: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:serviceName: {}
        f:template:
          f:metadata:
            f:annotations:
              .: {}
              f:cattle.io/timestamp: {}
              f:field.cattle.io/ports: {}
              f:field.cattle.io/publicEndpoints: {}
            f:labels:
              .: {}
              f:workload.user.cattle.io/workloadselector: {}
          f:spec:
            f:containers:
              k:{"name":"taskmanager"}:
                .: {}
                f:args: {}
                f:env:
                  .: {}
                  k:{"name":"JOB_MANAGER_RPC_ADDRESS"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":6121,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":6122,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources: {}
                f:securityContext:
                  .: {}
                  f:allowPrivilegeEscalation: {}
                  f:privileged: {}
                  f:readOnlyRootFilesystem: {}
                  f:runAsNonRoot: {}
                f:stdin: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:tty: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
        f:updateStrategy:
          f:type: {}
    manager: rancher
    operation: Update
    time: "2023-06-08T11:20:40Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:collisionCount: {}
        f:currentReplicas: {}
        f:currentRevision: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updateRevision: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2024-06-27T01:24:58Z"
  name: taskmanager
  namespace: flink-server
  resourceVersion: "126393537"
  uid: 9ddfa3b3-b497-43af-9bb6-af83ffa0aed3
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: statefulSet-flink-server-taskmanager
  serviceName: taskmanager
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2023-06-08T11:19:53Z"
        field.cattle.io/ports: '[[{"containerPort":6121,"dnsName":"taskmanager-nodeport","kind":"NodePort","name":"tm21","protocol":"TCP"},{"containerPort":6122,"dnsName":"taskmanager-nodeport","kind":"NodePort","name":"tm22","protocol":"TCP"}]]'
        field.cattle.io/publicEndpoints: '[{"addresses":["10.10.52.11"],"allNodes":true,"port":31538,"protocol":"TCP","serviceId":"flink-server:taskmanager-nodeport"},{"addresses":["10.10.52.11"],"allNodes":true,"port":30067,"protocol":"TCP","serviceId":"flink-server:taskmanager-nodeport"}]'
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: statefulSet-flink-server-taskmanager
    spec:
      containers:
      - args:
        - taskmanager
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: jobmanager
        image: 10.10.52.8/flink/flink
        imagePullPolicy: Always
        name: taskmanager
        ports:
        - containerPort: 6121
          name: tm21
          protocol: TCP
        - containerPort: 6122
          name: tm22
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    type: RollingUpdate
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: taskmanager-6fd48fdb8d
  observedGeneration: 10
  readyReplicas: 1
  replicas: 1
  updateRevision: taskmanager-6fd48fdb8d
  updatedReplicas: 1
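As with the JobManager, serviceName: taskmanager expects a headless service of that name in the namespace (plus the taskmanager-nodeport service referenced in the annotations). A minimal hand-written sketch of the headless service:

apiVersion: v1
kind: Service
metadata:
  name: taskmanager             # must match the StatefulSet's serviceName
  namespace: flink-server
spec:
  clusterIP: None               # headless service for stable pod DNS
  selector:
    workload.user.cattle.io/workloadselector: statefulSet-flink-server-taskmanager
  ports:
  - name: tm21
    port: 6121
  - name: tm22
    port: 6122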

Testing the Result

Web UI

(adjust the address for your own environment)

http://10.10.52.11:31120/#/overview

Scaling Out the TaskManager

When we scale out the TaskManager (here I increased it to 5 replicas), the new pods are created one after another; once they are all ready, the additional TaskManagers and Task Slots appear in the web UI.
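In Rancher this is just changing the replica count of the taskmanager workload; the same change can be made directly on the StatefulSet, for example with kubectl scale statefulset taskmanager -n flink-server --replicas=5, or by editing the manifest. The only field that changes is:

spec:
  replicas: 5    # was 1; with podManagementPolicy: OrderedReady the pods start one by one

Because the session cluster's TaskManagers register themselves with the JobManager via JOB_MANAGER_RPC_ADDRESS, no other change is needed for the new slots to show up.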

HA模式

With the same approach, you can build a high-availability setup in the Kubernetes cluster using the following topology (a configuration sketch follows the list):

  • 3 x JobManager
  • N x TaskManager
  • ZooKeeper
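Running three JobManagers only helps if Flink itself is configured for ZooKeeper-based high availability; that part is Flink configuration rather than Kubernetes. A minimal sketch, assuming the official Flink image (which appends settings from the FLINK_PROPERTIES environment variable to its configuration), a ZooKeeper service reachable as zookeeper:2181, and a storage path shared by all JobManagers; none of these are defined in this article:

        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: jobmanager
        # added to both StatefulSets; storageDir must point at storage
        # shared by all JobManagers (a PVC, HDFS, S3, ...)
        - name: FLINK_PROPERTIES
          value: |
            high-availability: zookeeper
            high-availability.zookeeper.quorum: zookeeper:2181
            high-availability.storageDir: file:///flink-ha
            high-availability.cluster-id: /flink-server

The JobManager StatefulSet's replicas is then raised to 3, and the TaskManagers discover the current leader through ZooKeeper.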