Deploying DolphinScheduler with MySQL in a Kubernetes environment (without Helm)

A few words up front

  • The official documentation offers several deployment methods, but the Kubernetes one only covers Helm, which does not fit how our product is shipped, so the chart had to be converted into plain YAML (crossing the river by feeling the stones on my own); a sketch of the export step follows below.
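
The export itself is just a helm template render. A minimal sketch, assuming the apache-dolphinscheduler-2.0.6 source tarball has been unpacked and its Helm chart sits under docker/kubernetes/dolphinscheduler (the chart path can differ between releases):

helm template dolphinscheduler ./docker/kubernetes/dolphinscheduler \
  --namespace bigdata > dolphinscheduler-all.yaml

The rendered output was then split per component and trimmed into the files shown later in this article.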

About DolphinScheduler

  • Apache DolphinScheduler is a distributed, easily extensible, visual DAG workflow task scheduling open-source system.
  • It untangles the complex dependencies of data-engineering ETL and addresses the lack of an intuitive view of task health.
  • DolphinScheduler assembles Tasks into DAG flows, monitors task state in real time, and supports retries, resuming failed workflows from a chosen node, pausing, and killing tasks.

Simple and easy to use

  • Visual DAG monitoring UI: all process definitions are visualized, DAGs are built by dragging and dropping tasks, third-party systems integrate through the API, and deployment is one-click.

High reliability

  • Decentralized multi-Master and multi-Worker architecture with built-in HA; a task queue prevents overload so machines do not get stuck.

Rich usage scenarios

  • Supports pause and resume operations and multi-tenancy, which suits big-data scenarios well.
  • Supports many task types, such as spark, hive, mr, python, sub_process, and shell.

High scalability

  • Supports custom task types; scheduling is distributed, so scheduling capacity grows linearly with the cluster, and Masters and Workers can be brought online or taken offline dynamically.

Default ports

Component              Default port
MasterServer           5678
WorkerServer           1234
ApiApplicationServer   12345

Module overview

  • dolphinscheduler-alert - alerting module, provides the AlertServer service
  • dolphinscheduler-api - web application module, provides the ApiServer service
  • dolphinscheduler-common - common constants, enums, utility classes, data structures, and base classes
  • dolphinscheduler-dao - database access layer
  • dolphinscheduler-remote - netty-based client and server
  • dolphinscheduler-server - MasterServer and WorkerServer services
  • dolphinscheduler-service - service module

    • contains the Quartz, Zookeeper, and log client access services, for use by the server and api modules
  • dolphinscheduler-ui - front-end module

Building the image

  • DolphinScheduler stores its metadata in a relational database; PostgreSQL and MySQL are currently supported. When using MySQL you have to download the mysql-connector-java (8.0.16) driver yourself and move it into DolphinScheduler's lib directory.
  • Download the mysql driver jar mysql-connector-java-8.0.16.jar (version >= 8.0.1 is required)
Prepare an Aliyun apt source list for debian, name it sources.list, and put it next to the downloaded mysql driver jar
deb http://mirrors.cloud.aliyuncs.com/debian stable main contrib non-free
deb http://mirrors.cloud.aliyuncs.com/debian stable-proposed-updates main contrib non-free
deb http://mirrors.cloud.aliyuncs.com/debian stable-updates main contrib non-free
deb-src http://mirrors.cloud.aliyuncs.com/debian stable main contrib non-free
deb-src http://mirrors.cloud.aliyuncs.com/debian stable-proposed-updates main contrib non-free
deb-src http://mirrors.cloud.aliyuncs.com/debian stable-updates main contrib non-free

deb http://mirrors.aliyun.com/debian stable main contrib non-free
deb http://mirrors.aliyun.com/debian stable-proposed-updates main contrib non-free
deb http://mirrors.aliyun.com/debian stable-updates main contrib non-free
deb-src http://mirrors.aliyun.com/debian stable main contrib non-free
deb-src http://mirrors.aliyun.com/debian stable-proposed-updates main contrib non-free
deb-src http://mirrors.aliyun.com/debian stable-updates main contrib non-free
Add a few extra tools the team's heavy users need on top of the official image
FROM apache/dolphinscheduler:2.0.6

ENV PIP_CMD='pip3 install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple'

COPY mysql-connector-java-8.0.16.jar /opt/apache-dolphinscheduler-2.0.6-bin/lib/mysql-connector-java-8.0.16.jar
COPY ./sources.list /tmp/

RUN cat /tmp/sources.list > /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y libsasl2-dev python3-pip && \
    apt-get autoclean

RUN ${PIP_CMD} \
    pyhive \
    thrift \
    thrift-sasl \
    pymysql \
    pandas \
    faker \
    sasl \
    setuptools_rust \
    wheel \
    rust \
    oss2
Build the image
docker build -t dolphinscheduler_mysql:2.0.6 .
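
If the cluster nodes cannot see the locally built image, tag it and push it to a registry the cluster can pull from; registry.example.com below is only a placeholder, and the image: fields in the YAML that follows would then need the same prefix:

docker tag dolphinscheduler_mysql:2.0.6 registry.example.com/bigdata/dolphinscheduler_mysql:2.0.6
docker push registry.example.com/bigdata/dolphinscheduler_mysql:2.0.6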

Preparing the YAML files

The YAML below was exported from a dolphinscheduler instance started with helm. Many parameters were left as-is and need to be adjusted to your actual environment before use; treat it as a reference only.

All of the YAML below uses the namespace bigdata and assumes mysql and zookeeper are already deployed there.

The default mysql username/password is dolphinscheduler/dolphinscheduler.
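
A quick sanity check before applying anything, assuming the service names used in the manifests (mysql-svc and zk-svc) match your environment; the apply step itself should only run after MySQL has been initialized (see the end of this article):

kubectl create namespace bigdata
kubectl -n bigdata get svc mysql-svc zk-svc
kubectl apply -f dolphinscheduler-master.yaml \
              -f dolphinscheduler-alert.yaml \
              -f dolphinscheduler-worker.yaml \
              -f dolphinscheduler-api.yaml \
              -f dolphinscheduler-ingress.yaml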

dolphinscheduler-master.yaml

---
apiVersion: v1
data:
  LOGGER_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
  MASTER_DISPATCH_TASK_NUM: "3"
  MASTER_EXEC_TASK_NUM: "20"
  MASTER_EXEC_THREADS: "100"
  MASTER_FAILOVER_INTERVAL: "10"
  MASTER_HEARTBEAT_INTERVAL: "10"
  MASTER_HOST_SELECTOR: LowerWeight
  MASTER_KILL_YARN_JOB_WHEN_HANDLE_FAILOVER: "true"
  MASTER_MAX_CPULOAD_AVG: "-1"
  MASTER_PERSIST_EVENT_STATE_THREADS: "10"
  MASTER_RESERVED_MEMORY: "0.3"
  MASTER_SERVER_OPTS: -Xms1g -Xmx1g -Xmn512m
  MASTER_TASK_COMMIT_INTERVAL: "1000"
  MASTER_TASK_COMMIT_RETRYTIMES: "5"
  ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
  ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-master
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-master
  namespace: bigdata
---
apiVersion: v1
data:
  DATA_BASEDIR_PATH: /tmp/dolphinscheduler
  DATASOURCE_ENCRYPTION_ENABLE: "false"
  DATASOURCE_ENCRYPTION_SALT: '!@#$%^&*'
  DATAX_HOME: /opt/soft/datax
  DOLPHINSCHEDULER_OPTS: ""
  HADOOP_CONF_DIR: /opt/soft/hadoop/etc/hadoop
  HADOOP_HOME: /opt/soft/hadoop
  HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE: "false"
  HDFS_ROOT_USER: hdfs
  HIVE_HOME: /opt/soft/hive
  JAVA_HOME: /usr/local/openjdk-8
  LOGIN_USER_KEYTAB_USERNAME: hdfs@HADOOP.COM
  ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
  ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
  PYTHON_HOME: /usr/bin/python
  RESOURCE_MANAGER_HTTPADDRESS_PORT: "8088"
  RESOURCE_STORAGE_TYPE: HDFS
  RESOURCE_UPLOAD_PATH: /dolphinscheduler
  SESSION_TIMEOUT_MS: "60000"
  SPARK_HOME1: /opt/soft/spark1
  SPARK_HOME2: /opt/soft/spark2
  SUDO_ENABLE: "true"
  YARN_APPLICATION_STATUS_ADDRESS: http://ds1:%s/ws/v1/cluster/apps/%s
  YARN_JOB_HISTORY_STATUS_ADDRESS: http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
  YARN_RESOURCEMANAGER_HA_RM_IDS: ""
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-common
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-common
  namespace: bigdata
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: dolphinscheduler
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dolphinscheduler-master-headless
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-master-svc
  namespace: bigdata
spec:
  ports:
  - name: master-port
    port: 5678
    protocol: TCP
  selector:
    app.kubernetes.io/name: dolphinscheduler-master
    app.kubernetes.io/version: 2.0.6
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-master
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-master
  namespace: bigdata
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: dolphinscheduler-master
      app.kubernetes.io/version: 2.0.6
  serviceName: dolphinscheduler-master-svc
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: dolphinscheduler-master
        app.kubernetes.io/version: 2.0.6
    spec:
      containers:
      - args:
        - master-server
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: DATABASE_TYPE
          value: mysql
        # The officially required jdbc jar is the 8.0 one
        ## so the driver must be written as com.mysql.cj.jdbc.Driver
        ## for 5.x and earlier drivers, write com.mysql.jdbc.Driver instead
        - name: DATABASE_DRIVER
          value: com.mysql.cj.jdbc.Driver
        # Change the value to match your own environment
        ## my mysql runs inside k8s, so the svc address is used directly
        - name: DATABASE_HOST
          value: mysql-svc.bigdata.svc.cluster.local
        - name: DATABASE_PORT
          value: "3306"
        # If the mysql user you created is not dolphinscheduler
        ## change the value here accordingly
        - name: DATABASE_USERNAME
          value: dolphinscheduler
        # Same as above: change the value if the password differs
        - name: DATABASE_PASSWORD
          value: dolphinscheduler
        # Same as above: change the value if the database name differs
        - name: DATABASE_DATABASE
          value: dolphinscheduler
        # jdbc 6.x and above must append useSSL=false&serverTimezone=Asia/Shanghai
        # jdbc 5.x and below do not need useSSL=false&serverTimezone=Asia/Shanghai
        - name: DATABASE_PARAMS
          value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
        - name: REGISTRY_PLUGIN_NAME
          value: zookeeper
        # Same as the mysql address: zk is deployed inside k8s, so the svc address is used
        - name: REGISTRY_SERVERS
          value: zk-svc.bigdata.svc.cluster.local:2181
        envFrom:
        - configMapRef:
            name: dolphinscheduler-common
        - configMapRef:
            name: dolphinscheduler-master
        image: dolphinscheduler_mysql:2.0.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - MasterServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: dolphinscheduler-master
        ports:
        - containerPort: 5678
          name: master-port
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - MasterServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /opt/dolphinscheduler/logs
          name: dolphinscheduler-master
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - emptyDir: {}
        name: dolphinscheduler-master
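
A quick way to confirm the master came up and registered, assuming kubectl points at the right cluster:

kubectl -n bigdata get pods -l app.kubernetes.io/name=dolphinscheduler-master
kubectl -n bigdata logs statefulset/dolphinscheduler-master --tail=20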

dolphinscheduler-alert.yaml

---
apiVersion: v1
data:
  ALERT_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-alert
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-alert
  namespace: bigdata
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-alert
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-alert
  namespace: bigdata
spec:
  ports:
  - name: alert-port
    port: 50052
    protocol: TCP
  selector:
    # the pod template below only carries the name/version labels,
    # so the component label must not be part of the selector
    app.kubernetes.io/name: dolphinscheduler-alert
    app.kubernetes.io/version: 2.0.6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app.kubernetes.io/name: dolphinscheduler-alert
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-alert
  namespace: bigdata
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: dolphinscheduler-alert
      app.kubernetes.io/version: 2.0.6
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: dolphinscheduler-alert
        app.kubernetes.io/version: 2.0.6
    spec:
      containers:
      - args:
        - alert-server
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: DATABASE_TYPE
          value: mysql
        # The officially required jdbc jar is the 8.0 one
        ## so the driver must be written as com.mysql.cj.jdbc.Driver
        ## for 5.x and earlier drivers, write com.mysql.jdbc.Driver instead
        - name: DATABASE_DRIVER
          value: com.mysql.cj.jdbc.Driver
        # Change the value to match your own environment
        ## my mysql runs inside k8s, so the svc address is used directly
        - name: DATABASE_HOST
          value: mysql-svc.bigdata.svc.cluster.local
        - name: DATABASE_PORT
          value: "3306"
        # If the mysql user you created is not dolphinscheduler
        ## change the value here accordingly
        - name: DATABASE_USERNAME
          value: dolphinscheduler
        # Same as above: change the value if the password differs
        - name: DATABASE_PASSWORD
          value: dolphinscheduler
        # Same as above: change the value if the database name differs
        - name: DATABASE_DATABASE
          value: dolphinscheduler
        # jdbc 6.x and above must append useSSL=false&serverTimezone=Asia/Shanghai
        # jdbc 5.x and below do not need useSSL=false&serverTimezone=Asia/Shanghai
        - name: DATABASE_PARAMS
          value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
        envFrom:
        - configMapRef:
            name: dolphinscheduler-common
        - configMapRef:
            name: dolphinscheduler-alert
        image: dolphinscheduler_mysql:2.0.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - AlertServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: dolphinscheduler-alert
        ports:
        - containerPort: 50052
          name: alert-port
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - AlertServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/dolphinscheduler/logs
          name: dolphinscheduler-alert
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: dolphinscheduler-alert

dolphinscheduler-worker.yaml

---
apiVersion: v1
data:
  LOGGER_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
  WORKER_EXEC_THREADS: "100"
  WORKER_GROUPS: default
  WORKER_HEARTBEAT_INTERVAL: "10"
  WORKER_HOST_WEIGHT: "100"
  WORKER_MAX_CPULOAD_AVG: "-1"
  WORKER_RESERVED_MEMORY: "0.3"
  WORKER_RETRY_REPORT_TASK_STATUS_INTERVAL: "600"
  WORKER_SERVER_OPTS: -Xms1g -Xmx1g -Xmn512m
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-worker
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-worker
  namespace: bigdata
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-worker-headless
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-worker-headless
  namespace: bigdata
spec:
  ports:
  - name: worker-port
    port: 1234
    protocol: TCP
  - name: logger-port
    port: 50051
    protocol: TCP
  selector:
    app.kubernetes.io/component: worker
    app.kubernetes.io/instance: dolphinscheduler
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dolphinscheduler-worker
    app.kubernetes.io/version: 2.0.6
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: worker
    app.kubernetes.io/instance: dolphinscheduler
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dolphinscheduler-worker
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-worker
  namespace: bigdata
spec:
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: worker
      app.kubernetes.io/instance: dolphinscheduler
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: dolphinscheduler-worker
      app.kubernetes.io/version: 2.0.6
  serviceName: dolphinscheduler-worker-headless
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: worker
        app.kubernetes.io/instance: dolphinscheduler
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: dolphinscheduler-worker
        app.kubernetes.io/version: 2.0.6
    spec:
      containers:
      - args:
        - worker-server
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ALERT_LISTEN_HOST
          value: dolphinscheduler-alert
        - name: DATABASE_TYPE
          value: mysql
        # The officially required jdbc jar is the 8.0 one
        ## so the driver must be written as com.mysql.cj.jdbc.Driver
        ## for 5.x and earlier drivers, write com.mysql.jdbc.Driver instead
        - name: DATABASE_DRIVER
          value: com.mysql.cj.jdbc.Driver
        # Change the value to match your own environment
        ## my mysql runs inside k8s, so the svc address is used directly
        - name: DATABASE_HOST
          value: mysql-svc.bigdata.svc.cluster.local
        - name: DATABASE_PORT
          value: "3306"
        # If the mysql user you created is not dolphinscheduler
        ## change the value here accordingly
        - name: DATABASE_USERNAME
          value: dolphinscheduler
        # Same as above: change the value if the password differs
        - name: DATABASE_PASSWORD
          value: dolphinscheduler
        # Same as above: change the value if the database name differs
        - name: DATABASE_DATABASE
          value: dolphinscheduler
        # jdbc 6.x and above must append useSSL=false&serverTimezone=Asia/Shanghai
        # jdbc 5.x and below do not need useSSL=false&serverTimezone=Asia/Shanghai
        - name: DATABASE_PARAMS
          value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
        - name: REGISTRY_PLUGIN_NAME
          value: zookeeper
        # Same as the mysql address: zk is deployed inside k8s, so the svc address is used
        - name: REGISTRY_SERVERS
          value: zk-svc.bigdata.svc.cluster.local:2181
        envFrom:
        - configMapRef:
            name: dolphinscheduler-common
        - configMapRef:
            name: dolphinscheduler-worker
        - configMapRef:
            name: dolphinscheduler-alert
        image: dolphinscheduler_mysql:2.0.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - WorkerServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: dolphinscheduler-worker
        ports:
        - containerPort: 1234
          name: worker-port
          protocol: TCP
        - containerPort: 50051
          name: logger-port
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - WorkerServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/dolphinscheduler
          name: dolphinscheduler-worker-data
        - mountPath: /opt/dolphinscheduler/logs
          name: dolphinscheduler-worker-logs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      # Persist the data directory with a hostPath volume
      - hostPath:
          path: /data/k8s_data/dolphinscheduler
          type: DirectoryOrCreate
        name: dolphinscheduler-worker-data
      - emptyDir: {}
        name: dolphinscheduler-worker-logs
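
The hostPath volume keeps the worker's working directory on whichever node each pod lands on, and the directory has to exist (or be creatable) on every node. If the cluster has a usable StorageClass, a volumeClaimTemplates entry on the StatefulSet is an alternative worth sketching; the size and class name below are placeholders:

  volumeClaimTemplates:
  - metadata:
      name: dolphinscheduler-worker-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 20Gi

With a claim template in place, the hostPath entry under volumes is dropped and the volumeMounts stay as they are.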

dolphinscheduler-api.yaml

---
apiVersion: v1
data:
  API_SERVER_OPTS: -Xms512m -Xmx512m -Xmn256m
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-api
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-api
  namespace: bigdata
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: dolphinscheduler-api
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-api
  namespace: bigdata
spec:
  ports:
  - name: api-port
    port: 12345
    protocol: TCP
  selector:
    app.kubernetes.io/name: dolphinscheduler-api
    app.kubernetes.io/version: 2.0.6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app.kubernetes.io/name: dolphinscheduler-api
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler-api
  namespace: bigdata
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: api
      app.kubernetes.io/instance: dolphinscheduler
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: dolphinscheduler-api
      app.kubernetes.io/version: 2.0.6
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: api
        app.kubernetes.io/instance: dolphinscheduler
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: dolphinscheduler-api
        app.kubernetes.io/version: 2.0.6
    spec:
      containers:
      - args:
        - api-server
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: DATABASE_TYPE
          value: mysql
        # The officially required jdbc jar is the 8.0 one
        ## so the driver must be written as com.mysql.cj.jdbc.Driver
        ## for 5.x and earlier drivers, write com.mysql.jdbc.Driver instead
        - name: DATABASE_DRIVER
          value: com.mysql.cj.jdbc.Driver
        # Change the value to match your own environment
        ## my mysql runs inside k8s, so the svc address is used directly
        - name: DATABASE_HOST
          value: mysql-svc.bigdata.svc.cluster.local
        - name: DATABASE_PORT
          value: "3306"
        # If the mysql user you created is not dolphinscheduler
        ## change the value here accordingly
        - name: DATABASE_USERNAME
          value: dolphinscheduler
        # Same as above: change the value if the password differs
        - name: DATABASE_PASSWORD
          value: dolphinscheduler
        # Same as above: change the value if the database name differs
        - name: DATABASE_DATABASE
          value: dolphinscheduler
        # jdbc 6.x and above must append useSSL=false&serverTimezone=Asia/Shanghai
        # jdbc 5.x and below do not need useSSL=false&serverTimezone=Asia/Shanghai
        - name: DATABASE_PARAMS
          value: useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8
        - name: REGISTRY_PLUGIN_NAME
          value: zookeeper
        # Same as the mysql address: zk is deployed inside k8s, so the svc address is used
        - name: REGISTRY_SERVERS
          value: zk-svc.bigdata.svc.cluster.local:2181
        envFrom:
        - configMapRef:
            name: dolphinscheduler-common
        - configMapRef:
            name: dolphinscheduler-api
        image: dolphinscheduler_mysql:2.0.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - ApiApplicationServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: dolphinscheduler-api
        ports:
        - containerPort: 12345
          name: api-port
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - /root/checkpoint.sh
            - ApiApplicationServer
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/dolphinscheduler/logs
          name: dolphinscheduler-api
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: dolphinscheduler-api
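
Before wiring up the ingress, the web UI can be reached through a port-forward; in 2.0.x the api server also serves the UI, and the default login is admin / dolphinscheduler123 unless it has been changed:

kubectl -n bigdata port-forward svc/dolphinscheduler-api 12345:12345
# then open http://localhost:12345/dolphinscheduler/ui in a browser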

dolphinscheduler-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  generation: 1
  labels:
    app.kubernetes.io/name: dolphinscheduler
    app.kubernetes.io/version: 2.0.6
  name: dolphinscheduler
  namespace: bigdata
spec:
  rules:
  - host: dolphinscheduler.org
    http:
      paths:
      - backend:
          serviceName: dolphinscheduler-api
          servicePort: api-port
        path: /dolphinscheduler
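
Note that the extensions/v1beta1 Ingress API was removed in Kubernetes 1.22. On 1.19+ clusters the same rule can be expressed with networking.k8s.io/v1; a sketch (set ingressClassName to whatever controller the cluster runs):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dolphinscheduler
  namespace: bigdata
spec:
  # ingressClassName: nginx
  rules:
  - host: dolphinscheduler.org
    http:
      paths:
      - path: /dolphinscheduler
        pathType: Prefix
        backend:
          service:
            name: dolphinscheduler-api
            port:
              name: api-port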

MySQL initialization

Create the user
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler';
FLUSH PRIVILEGES;
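
The GRANT ... IDENTIFIED BY form only works on MySQL 5.7 and earlier. On a MySQL 8.0 server the user has to be created first; a sketch assuming an 8.0 server:

CREATE USER IF NOT EXISTS 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler';
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
FLUSH PRIVILEGES;
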
Create the database and tables
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Database of dolphinscheduler
-- ----------------------------
CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

USE dolphinscheduler;

-- ----------------------------
-- Table structure for QRTZ_BLOB_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_BLOB_TRIGGERS`;
CREATE TABLE `QRTZ_BLOB_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `BLOB_DATA` blob,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  KEY `SCHED_NAME` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  CONSTRAINT `QRTZ_BLOB_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_BLOB_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_CALENDARS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_CALENDARS`;
CREATE TABLE `QRTZ_CALENDARS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `CALENDAR_NAME` varchar(200) NOT NULL,
  `CALENDAR` blob NOT NULL,
  PRIMARY KEY (`SCHED_NAME`,`CALENDAR_NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_CALENDARS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_CRON_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_CRON_TRIGGERS`;
CREATE TABLE `QRTZ_CRON_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `CRON_EXPRESSION` varchar(120) NOT NULL,
  `TIME_ZONE_ID` varchar(80) DEFAULT NULL,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  CONSTRAINT `QRTZ_CRON_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_CRON_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_FIRED_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_FIRED_TRIGGERS`;
CREATE TABLE `QRTZ_FIRED_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `ENTRY_ID` varchar(200) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `INSTANCE_NAME` varchar(200) NOT NULL,
  `FIRED_TIME` bigint(13) NOT NULL,
  `SCHED_TIME` bigint(13) NOT NULL,
  `PRIORITY` int(11) NOT NULL,
  `STATE` varchar(16) NOT NULL,
  `JOB_NAME` varchar(200) DEFAULT NULL,
  `JOB_GROUP` varchar(200) DEFAULT NULL,
  `IS_NONCONCURRENT` varchar(1) DEFAULT NULL,
  `REQUESTS_RECOVERY` varchar(1) DEFAULT NULL,
  PRIMARY KEY (`SCHED_NAME`,`ENTRY_ID`),
  KEY `IDX_QRTZ_FT_TRIG_INST_NAME` (`SCHED_NAME`,`INSTANCE_NAME`),
  KEY `IDX_QRTZ_FT_INST_JOB_REQ_RCVRY` (`SCHED_NAME`,`INSTANCE_NAME`,`REQUESTS_RECOVERY`),
  KEY `IDX_QRTZ_FT_J_G` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
  KEY `IDX_QRTZ_FT_JG` (`SCHED_NAME`,`JOB_GROUP`),
  KEY `IDX_QRTZ_FT_T_G` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  KEY `IDX_QRTZ_FT_TG` (`SCHED_NAME`,`TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_FIRED_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_JOB_DETAILS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_JOB_DETAILS`;
CREATE TABLE `QRTZ_JOB_DETAILS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `JOB_NAME` varchar(200) NOT NULL,
  `JOB_GROUP` varchar(200) NOT NULL,
  `DESCRIPTION` varchar(250) DEFAULT NULL,
  `JOB_CLASS_NAME` varchar(250) NOT NULL,
  `IS_DURABLE` varchar(1) NOT NULL,
  `IS_NONCONCURRENT` varchar(1) NOT NULL,
  `IS_UPDATE_DATA` varchar(1) NOT NULL,
  `REQUESTS_RECOVERY` varchar(1) NOT NULL,
  `JOB_DATA` blob,
  PRIMARY KEY (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
  KEY `IDX_QRTZ_J_REQ_RECOVERY` (`SCHED_NAME`,`REQUESTS_RECOVERY`),
  KEY `IDX_QRTZ_J_GRP` (`SCHED_NAME`,`JOB_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_JOB_DETAILS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_LOCKS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_LOCKS`;
CREATE TABLE `QRTZ_LOCKS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `LOCK_NAME` varchar(40) NOT NULL,
  PRIMARY KEY (`SCHED_NAME`,`LOCK_NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_LOCKS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_PAUSED_TRIGGER_GRPS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_PAUSED_TRIGGER_GRPS`;
CREATE TABLE `QRTZ_PAUSED_TRIGGER_GRPS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_PAUSED_TRIGGER_GRPS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_SCHEDULER_STATE
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_SCHEDULER_STATE`;
CREATE TABLE `QRTZ_SCHEDULER_STATE` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `INSTANCE_NAME` varchar(200) NOT NULL,
  `LAST_CHECKIN_TIME` bigint(13) NOT NULL,
  `CHECKIN_INTERVAL` bigint(13) NOT NULL,
  PRIMARY KEY (`SCHED_NAME`,`INSTANCE_NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_SCHEDULER_STATE
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_SIMPLE_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_SIMPLE_TRIGGERS`;
CREATE TABLE `QRTZ_SIMPLE_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `REPEAT_COUNT` bigint(7) NOT NULL,
  `REPEAT_INTERVAL` bigint(12) NOT NULL,
  `TIMES_TRIGGERED` bigint(10) NOT NULL,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  CONSTRAINT `QRTZ_SIMPLE_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_SIMPLE_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_SIMPROP_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_SIMPROP_TRIGGERS`;
CREATE TABLE `QRTZ_SIMPROP_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `STR_PROP_1` varchar(512) DEFAULT NULL,
  `STR_PROP_2` varchar(512) DEFAULT NULL,
  `STR_PROP_3` varchar(512) DEFAULT NULL,
  `INT_PROP_1` int(11) DEFAULT NULL,
  `INT_PROP_2` int(11) DEFAULT NULL,
  `LONG_PROP_1` bigint(20) DEFAULT NULL,
  `LONG_PROP_2` bigint(20) DEFAULT NULL,
  `DEC_PROP_1` decimal(13,4) DEFAULT NULL,
  `DEC_PROP_2` decimal(13,4) DEFAULT NULL,
  `BOOL_PROP_1` varchar(1) DEFAULT NULL,
  `BOOL_PROP_2` varchar(1) DEFAULT NULL,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  CONSTRAINT `QRTZ_SIMPROP_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_SIMPROP_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for QRTZ_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS `QRTZ_TRIGGERS`;
CREATE TABLE `QRTZ_TRIGGERS` (
  `SCHED_NAME` varchar(120) NOT NULL,
  `TRIGGER_NAME` varchar(200) NOT NULL,
  `TRIGGER_GROUP` varchar(200) NOT NULL,
  `JOB_NAME` varchar(200) NOT NULL,
  `JOB_GROUP` varchar(200) NOT NULL,
  `DESCRIPTION` varchar(250) DEFAULT NULL,
  `NEXT_FIRE_TIME` bigint(13) DEFAULT NULL,
  `PREV_FIRE_TIME` bigint(13) DEFAULT NULL,
  `PRIORITY` int(11) DEFAULT NULL,
  `TRIGGER_STATE` varchar(16) NOT NULL,
  `TRIGGER_TYPE` varchar(8) NOT NULL,
  `START_TIME` bigint(13) NOT NULL,
  `END_TIME` bigint(13) DEFAULT NULL,
  `CALENDAR_NAME` varchar(200) DEFAULT NULL,
  `MISFIRE_INSTR` smallint(2) DEFAULT NULL,
  `JOB_DATA` blob,
  PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`),
  KEY `IDX_QRTZ_T_J` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`),
  KEY `IDX_QRTZ_T_JG` (`SCHED_NAME`,`JOB_GROUP`),
  KEY `IDX_QRTZ_T_C` (`SCHED_NAME`,`CALENDAR_NAME`),
  KEY `IDX_QRTZ_T_G` (`SCHED_NAME`,`TRIGGER_GROUP`),
  KEY `IDX_QRTZ_T_STATE` (`SCHED_NAME`,`TRIGGER_STATE`),
  KEY `IDX_QRTZ_T_N_STATE` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
  KEY `IDX_QRTZ_T_N_G_STATE` (`SCHED_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
  KEY `IDX_QRTZ_T_NEXT_FIRE_TIME` (`SCHED_NAME`,`NEXT_FIRE_TIME`),
  KEY `IDX_QRTZ_T_NFT_ST` (`SCHED_NAME`,`TRIGGER_STATE`,`NEXT_FIRE_TIME`),
  KEY `IDX_QRTZ_T_NFT_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`),
  KEY `IDX_QRTZ_T_NFT_ST_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_STATE`),
  KEY `IDX_QRTZ_T_NFT_ST_MISFIRE_GRP` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_GROUP`,`TRIGGER_STATE`),
  CONSTRAINT `QRTZ_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) REFERENCES `QRTZ_JOB_DETAILS` (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of QRTZ_TRIGGERS
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_access_token
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_access_token`;
CREATE TABLE `t_ds_access_token` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `token` varchar(64) DEFAULT NULL COMMENT 'token',
  `expire_time` datetime DEFAULT NULL COMMENT 'end time of token ',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_access_token
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alert
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert`;
CREATE TABLE `t_ds_alert` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `title` varchar(64) DEFAULT NULL COMMENT 'title',
  `content` text COMMENT 'Message content (can be email, can be SMS. Mail is stored in JSON map, and SMS is string)',
  `alert_status` tinyint(4) DEFAULT '0' COMMENT '0:wait running,1:success,2:failed',
  `log` text COMMENT 'log',
  `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alert
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alertgroup
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alertgroup`;
CREATE TABLE `t_ds_alertgroup`(
  `id`             int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alert_instance_ids` varchar (255) DEFAULT NULL COMMENT 'alert instance ids',
  `create_user_id` int(11) DEFAULT NULL COMMENT 'create user id',
  `group_name`     varchar(255) DEFAULT NULL COMMENT 'group name',
  `description`    varchar(255) DEFAULT NULL,
  `create_time`    datetime     DEFAULT NULL COMMENT 'create time',
  `update_time`    datetime     DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_alertgroup_name_un` (`group_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_command`;
CREATE TABLE `t_ds_command` (
  `id`                        int(11)    NOT NULL AUTO_INCREMENT COMMENT 'key',
  `command_type`              tinyint(4) DEFAULT NULL COMMENT 'Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
  `process_definition_code`   bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id`       int(11) DEFAULT '0' COMMENT 'process instance id',
  `command_param`             text COMMENT 'json command parameters',
  `task_depend_type`          tinyint(4) DEFAULT NULL COMMENT 'Node dependency type: 0 current node, 1 forward, 2 backward',
  `failure_strategy`          tinyint(4) DEFAULT '0' COMMENT 'Failed policy: 0 end, 1 continue',
  `warning_type`              tinyint(4) DEFAULT '0' COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id`          int(11) DEFAULT NULL COMMENT 'warning group',
  `schedule_time`             datetime DEFAULT NULL COMMENT 'schedule time',
  `start_time`                datetime DEFAULT NULL COMMENT 'start time',
  `executor_id`               int(11) DEFAULT NULL COMMENT 'executor id',
  `update_time`               datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group`              varchar(64)  COMMENT 'worker group',
  `environment_code`          bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `dry_run`                   tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `priority_id_index` (`process_instance_priority`,`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_datasource
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_datasource`;
CREATE TABLE `t_ds_datasource` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(64) NOT NULL COMMENT 'data source name',
  `note` varchar(255) DEFAULT NULL COMMENT 'description',
  `type` tinyint(4) NOT NULL COMMENT 'data source type: 0:mysql,1:postgresql,2:hive,3:spark',
  `user_id` int(11) NOT NULL COMMENT 'the creator id',
  `connection_params` text NOT NULL COMMENT 'json connection params',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_datasource_name_un` (`name`, `type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_datasource
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_error_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_error_command`;
CREATE TABLE `t_ds_error_command` (
  `id` int(11) NOT NULL COMMENT 'key',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id: 0',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'scheduler time',
  `start_time` datetime DEFAULT NULL COMMENT 'start time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority, 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64)  COMMENT 'worker group',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `message` text COMMENT 'message',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

-- ----------------------------
-- Records of t_ds_error_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition`;
CREATE TABLE `t_ds_process_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(255) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out, unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`),
  UNIQUE KEY `process_unique` (`name`,`project_code`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_definition
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition_log`;
CREATE TABLE `t_ds_process_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out,unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition`;
CREATE TABLE `t_ds_task_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
  `resource_ids` text COMMENT 'resource id, separated by comma',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition_log`;
CREATE TABLE `t_ds_task_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
  `resource_ids` text DEFAULT NULL COMMENT 'resource id, separated by comma',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `idx_code_version` (`code`,`version`),
  KEY `idx_project_code` (`project_code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation`;
CREATE TABLE `t_ds_process_task_relation` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `idx_code` (`project_code`,`process_definition_code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation_log`;
CREATE TABLE `t_ds_process_task_relation_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `idx_process_code_version` (`process_definition_code`,`process_definition_version`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_instance`;
CREATE TABLE `t_ds_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'process instance name',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `state` tinyint(4) DEFAULT NULL COMMENT 'process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `recovery` tinyint(4) DEFAULT NULL COMMENT 'process instance failover flag:0:normal,1:failover instance',
  `start_time` datetime DEFAULT NULL COMMENT 'process instance start time',
  `end_time` datetime DEFAULT NULL COMMENT 'process instance end time',
  `run_times` int(11) DEFAULT NULL COMMENT 'process instance run times',
  `host` varchar(135) DEFAULT NULL COMMENT 'process instance host',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type. 0: only current node,1:before the node,2:later nodes',
  `max_try_times` tinyint(4) DEFAULT '0' COMMENT 'max try times',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
  `command_start_time` datetime DEFAULT NULL COMMENT 'command start time',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT 'flag',
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `is_sub_process` int(11) DEFAULT '0' COMMENT 'flag, whether the process is sub process',
  `executor_id` int(11) NOT NULL COMMENT 'executor id',
  `history_cmd` text COMMENT 'history commands of process instance operation',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `var_pool` longtext COMMENT 'var_pool',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  `next_process_instance_id` int(11) DEFAULT '0' COMMENT 'serial queue next processInstanceId',
  `restart_time` datetime DEFAULT NULL COMMENT 'process instance restart time',
  PRIMARY KEY (`id`),
  KEY `process_instance_index` (`process_definition_code`,`id`) USING BTREE,
  KEY `start_time_index` (`start_time`,`end_time`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_project
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_project`;
CREATE TABLE `t_ds_project` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) DEFAULT NULL COMMENT 'project name',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `description` varchar(200) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_project
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_queue
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_queue`;
CREATE TABLE `t_ds_queue` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `queue_name` varchar(64) DEFAULT NULL COMMENT 'queue name',
  `queue` varchar(64) DEFAULT NULL COMMENT 'yarn queue name',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_queue
-- ----------------------------
INSERT INTO `t_ds_queue` VALUES ('1', 'default', 'default', null, null);

-- ----------------------------
-- Table structure for t_ds_relation_datasource_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_datasource_user`;
CREATE TABLE `t_ds_relation_datasource_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `datasource_id` int(11) DEFAULT NULL COMMENT 'data source id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_datasource_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_process_instance`;
CREATE TABLE `t_ds_relation_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `parent_process_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
  `parent_task_instance_id` int(11) DEFAULT NULL COMMENT 'parent task instance id',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'child process instance id',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_project_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_project_user`;
CREATE TABLE `t_ds_relation_project_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `project_id` int(11) DEFAULT NULL COMMENT 'project id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_project_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_resources_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_resources_user`;
CREATE TABLE `t_ds_relation_resources_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `resources_id` int(11) DEFAULT NULL COMMENT 'resource id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_resources_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_udfs_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_udfs_user`;
CREATE TABLE `t_ds_relation_udfs_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'userid',
  `udf_id` int(11) DEFAULT NULL COMMENT 'udf id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_resources
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_resources`;
CREATE TABLE `t_ds_resources` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alias` varchar(64) DEFAULT NULL COMMENT 'alias',
  `file_name` varchar(64) DEFAULT NULL COMMENT 'file name',
  `description` varchar(255) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `type` tinyint(4) DEFAULT NULL COMMENT 'resource type,0:FILE,1:UDF',
  `size` bigint(20) DEFAULT NULL COMMENT 'resource size',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `pid` int(11) DEFAULT NULL,
  `full_name` varchar(128) DEFAULT NULL,
  `is_directory` tinyint(4) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_resources_un` (`full_name`,`type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_resources
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_schedules
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_schedules`;
CREATE TABLE `t_ds_schedules` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `start_time` datetime NOT NULL COMMENT 'start time',
  `end_time` datetime NOT NULL COMMENT 'end time',
  `timezone_id` varchar(40) DEFAULT NULL COMMENT 'schedule timezone id',
  `crontab` varchar(255) NOT NULL COMMENT 'crontab description',
  `failure_strategy` tinyint(4) NOT NULL COMMENT 'failure strategy. 0:end,1:continue',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `release_state` tinyint(4) NOT NULL COMMENT 'release state. 0:offline,1:online ',
  `warning_type` tinyint(4) NOT NULL COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT '' COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_schedules
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_session
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_session`;
CREATE TABLE `t_ds_session` (
  `id` varchar(64) NOT NULL COMMENT 'key',
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `ip` varchar(45) DEFAULT NULL COMMENT 'ip',
  `last_login_time` datetime DEFAULT NULL COMMENT 'last login time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_session
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_task_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_instance`;
CREATE TABLE `t_ds_task_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'task name',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_code` bigint(20) NOT NULL COMMENT 'task definition code',
  `task_definition_version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'process instance id',
  `state` tinyint(4) DEFAULT NULL COMMENT 'Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `submit_time` datetime DEFAULT NULL COMMENT 'task submit time',
  `start_time` datetime DEFAULT NULL COMMENT 'task start time',
  `end_time` datetime DEFAULT NULL COMMENT 'task end time',
  `host` varchar(135) DEFAULT NULL COMMENT 'host of task running on',
  `execute_path` varchar(200) DEFAULT NULL COMMENT 'task execute path in the host',
  `log_path` varchar(200) DEFAULT NULL COMMENT 'task log path',
  `alert_flag` tinyint(4) DEFAULT NULL COMMENT 'whether alert',
  `retry_times` int(4) DEFAULT '0' COMMENT 'task retry times',
  `pid` int(4) DEFAULT NULL COMMENT 'pid of task',
  `app_link` longtext COMMENT 'yarn app id',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `retry_interval` int(4) DEFAULT NULL COMMENT 'retry interval when task failed ',
  `max_retry_times` int(2) DEFAULT NULL COMMENT 'max retry times',
  `task_instance_priority` int(11) DEFAULT NULL COMMENT 'task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `environment_config` text COMMENT 'this config contains many environment variables config',
  `executor_id` int(11) DEFAULT NULL,
  `first_submit_time` datetime DEFAULT NULL COMMENT 'task first submit time',
  `delay_time` int(4) DEFAULT '0' COMMENT 'task delay execution time',
  `var_pool` longtext COMMENT 'var_pool',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `process_instance_id` (`process_instance_id`) USING BTREE,
  KEY `idx_code_version` (`task_code`, `task_definition_version`) USING BTREE,
  CONSTRAINT `foreign_key_instance_id` FOREIGN KEY (`process_instance_id`) REFERENCES `t_ds_process_instance` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_task_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_tenant
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_tenant`;
CREATE TABLE `t_ds_tenant` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `tenant_code` varchar(64) DEFAULT NULL COMMENT 'tenant code',
  `description` varchar(255) DEFAULT NULL,
  `queue_id` int(11) DEFAULT NULL COMMENT 'queue id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_tenant
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_udfs
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_udfs`;
CREATE TABLE `t_ds_udfs` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `func_name` varchar(100) NOT NULL COMMENT 'UDF function name',
  `class_name` varchar(255) NOT NULL COMMENT 'class of udf',
  `type` tinyint(4) NOT NULL COMMENT 'Udf function type',
  `arg_types` varchar(255) DEFAULT NULL COMMENT 'arguments types',
  `database` varchar(255) DEFAULT NULL COMMENT 'database',
  `description` varchar(255) DEFAULT NULL,
  `resource_id` int(11) NOT NULL COMMENT 'resource id',
  `resource_name` varchar(255) NOT NULL COMMENT 'resource name',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_udfs
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_user`;
CREATE TABLE `t_ds_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user id',
  `user_name` varchar(64) DEFAULT NULL COMMENT 'user name',
  `user_password` varchar(64) DEFAULT NULL COMMENT 'user password',
  `user_type` tinyint(4) DEFAULT NULL COMMENT 'user type, 0:administrator,1:ordinary user',
  `email` varchar(64) DEFAULT NULL COMMENT 'email',
  `phone` varchar(11) DEFAULT NULL COMMENT 'phone',
  `tenant_id` int(11) DEFAULT NULL COMMENT 'tenant id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `queue` varchar(64) DEFAULT NULL COMMENT 'queue',
  `state` tinyint(4) DEFAULT '1' COMMENT 'state 0:disable 1:enable',
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_name_unique` (`user_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_worker_group
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_worker_group`;
CREATE TABLE `t_ds_worker_group` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(255) NOT NULL COMMENT 'worker group name',
  `addr_list` text NULL DEFAULT NULL COMMENT 'worker addr list. split by [,]',
  `create_time` datetime NULL DEFAULT NULL COMMENT 'create time',
  `update_time` datetime NULL DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `name_unique` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_worker_group
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_version
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_version`;
CREATE TABLE `t_ds_version` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `version` varchar(200) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `version_UNIQUE` (`version`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COMMENT='version';

-- ----------------------------
-- Records of t_ds_version
-- ----------------------------
INSERT INTO `t_ds_version` VALUES ('1', '2.0.6');


-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------
INSERT INTO `t_ds_alertgroup`(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
VALUES ("1,2", 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------
INSERT INTO `t_ds_user`
VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);

-- ----------------------------
-- Table structure for t_ds_plugin_define
-- ----------------------------
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
DROP TABLE IF EXISTS `t_ds_plugin_define`;
CREATE TABLE `t_ds_plugin_define` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_name` varchar(100) NOT NULL COMMENT 'the name of plugin eg: email',
  `plugin_type` varchar(100) NOT NULL COMMENT 'plugin type . alert=alert plugin, job=job plugin',
  `plugin_params` text COMMENT 'plugin params',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_plugin_define_UN` (`plugin_name`,`plugin_type`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_alert_plugin_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert_plugin_instance`;
CREATE TABLE `t_ds_alert_plugin_instance` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_define_id` int NOT NULL,
  `plugin_instance_params` text COMMENT 'plugin instance params. Also contain the params value which user input in web ui.',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `instance_name` varchar(200) DEFAULT NULL COMMENT 'alert instance name',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment`;
CREATE TABLE `t_ds_environment` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `code` bigint(20)  DEFAULT NULL COMMENT 'encoding',
  `name` varchar(100) NOT NULL COMMENT 'environment name',
  `config` text NULL DEFAULT NULL COMMENT 'this config contains many environment variables config',
  `description` text NULL DEFAULT NULL COMMENT 'the details',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_name_unique` (`name`),
  UNIQUE KEY `environment_code_unique` (`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment_worker_group_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment_worker_group_relation`;
CREATE TABLE `t_ds_environment_worker_group_relation` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `environment_code` bigint(20) NOT NULL COMMENT 'environment code',
  `worker_group` varchar(255) NOT NULL COMMENT 'worker group id',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_worker_group_unique` (`environment_code`,`worker_group`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
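
Before starting the services, load the schema above into MySQL. A minimal sketch, assuming the SQL is saved locally as dolphinscheduler_mysql.sql, the MySQL pod in the bigdata namespace is named mysql-0, and the dolphinscheduler/dolphinscheduler account owns a database called dolphinscheduler; adjust the names to your environment:
# copy the schema file into the MySQL pod and import it (file, pod and database names are assumptions)
kubectl cp dolphinscheduler_mysql.sql bigdata/mysql-0:/tmp/dolphinscheduler_mysql.sql
kubectl -n bigdata exec mysql-0 -- sh -c \
  'mysql -udolphinscheduler -pdolphinscheduler dolphinscheduler < /tmp/dolphinscheduler_mysql.sql'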

Start DolphinScheduler

kubectl apply -f dolphinscheduler-master.yaml
kubectl apply -f dolphinscheduler-alert.yaml
kubectl apply -f dolphinscheduler-worker.yaml
kubectl apply -f dolphinscheduler-api.yaml
kubectl apply -f dolphinscheduler-ingress.yaml
After all pods are Running, visit dolphinscheduler.org/dolphinscheduler. If you changed the ingress, use the host you configured there instead; if you have no DNS server or DNS record for that host, add a local hosts entry, as shown in the sketch below.
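
A minimal verification sketch; the namespace matches the yaml files above, while the IP in the hosts entry is a placeholder for your own ingress entry point:
kubectl -n bigdata get pods -o wide      # wait until every pod is Running
kubectl -n bigdata get ingress           # confirm the host configured in the ingress
# no DNS record? add a local hosts entry (replace the IP with your ingress address)
echo '192.168.1.100  dolphinscheduler.org' | sudo tee -a /etc/hosts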

Default username/password: admin/dolphinscheduler123
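
Besides logging in through the browser, you can hit the login API as a quick health check. A sketch assuming the ingress forwards /dolphinscheduler to the API service and the default credentials above are unchanged:
curl -X POST 'http://dolphinscheduler.org/dolphinscheduler/login' \
  -d 'userName=admin&userPassword=dolphinscheduler123'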
