spark on k8s native

Summary: running Spark natively on Kubernetes (Spark on K8s native).


Prerequisites

Official reference: https://spark.apache.org/docs/3.2.2/running-on-kubernetes.html#pod-template
Test environment: Ubuntu 11.2.0
JDK 1.8.0_162
Kubernetes server 1.15 needs roughly version 4.6 of the fabric8 kubernetes-client, and Spark only upgraded to that client version in 2.4.7.
If the Kubernetes server is 1.19 or newer, Spark 3.0+ is required.

Version compatibility is critical:

1. Version compatibility among the Kubernetes client, the Kubernetes server, Minikube, and Spark (a quick check is sketched below).
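
A quick sanity check of the versions on both sides (a sketch; $SPARK_HOME is assumed to point at the unpacked Spark distribution):

# Kubernetes client and server versions
kubectl version
# fabric8 kubernetes-client jar bundled with this Spark distribution
ls "$SPARK_HOME/jars" | grep kubernetes-client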

 


Challenges of Spark on Kubernetes


1. Version compatibility
2. Permissions and trust: the most complicated part of running Spark on Kubernetes


Download spark


wget https://dlcdn.apache.org/spark/spark-3.2.2/spark-3.2.2-bin-hadoop3.2.tgz
tar -xvzf spark-3.2.2-bin-hadoop3.2.tgz



Build the image


1. When building the image, the Spark/Hadoop configuration needs to be baked in.

Dockerfile

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
ARG java_image_tag=11-jre-slim
FROM openjdk:${java_image_tag}
ARG spark_uid=185
# Before building the docker image, first build and make a Spark distribution following
# the instructions in http://spark.apache.org/docs/latest/building-spark.html.
# If this docker file is being used in the context of building your images from a Spark
# distribution, the docker build command should be invoked from the top level directory
# of the Spark distribution. E.g.:
# docker build -t spark:latest -f kubernetes/dockerfiles/spark/Dockerfile .
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y sudo
RUN set -ex && \
    sed -i 's/http:\/\/deb.\(.*\)/https:\/\/deb.\1/g' /etc/apt/sources.list && \
    apt-get update && \
    ln -s /lib /lib64 && \
    apt install -y bash tini libc6 libpam-modules krb5-user libnss3 procps && \
    mkdir -p /opt/spark && \
    mkdir -p /opt/spark/examples && mkdir -p /opt/spark/conf && \
    mkdir -p /usr/local/jdk1.8.0_162 && \
    mkdir -p /opt/spark/work-dir && \
    touch /opt/spark/RELEASE && \
    rm /bin/sh && \
    ln -sv /bin/bash /bin/sh && \
    echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \
    chgrp root /etc/passwd && chmod ug+rw /etc/passwd && \
    rm -rf /var/cache/apt/*
COPY jars /opt/spark/jars
COPY bin /opt/spark/bin
COPY sbin /opt/spark/sbin
COPY kubernetes/dockerfiles/spark/entrypoint.sh /opt/
COPY kubernetes/dockerfiles/spark/decom.sh /opt/
COPY examples /opt/spark/examples
COPY kubernetes/tests /opt/spark/tests
COPY data /opt/spark/data
COPY jdk1.8.0_162 /usr/local/jdk1.8.0_162/
COPY hosts /opt/spark/
COPY conf /opt/spark/conf
ENV SPARK_HOME /opt/spark
WORKDIR /opt/spark/work-dir
RUN chmod g+w /opt/spark/work-dir
RUN chmod a+x /opt/decom.sh
ENTRYPOINT [ "/opt/entrypoint.sh" ]
# Specify the User that the actual main process will run as
#USER ${spark_uid}
USER root

2. Extend entrypoint.sh with a hosts step that appends the host entry of every cluster node (a sketch follows).
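
A minimal sketch of that change, assuming the hosts file copied to /opt/spark/hosts in the Dockerfile above contains one "IP hostname" entry per cluster node:

# Added near the top of /opt/entrypoint.sh (not part of the stock script):
# append the cluster node entries to the pod's /etc/hosts
if [ -f /opt/spark/hosts ]; then
    cat /opt/spark/hosts >> /etc/hosts
fi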


3. Spark ships the bin/docker-image-tool.sh utility for building the image.

The tool picks up the Dockerfiles under kubernetes/dockerfiles and, following them, builds the required Spark commands, tools, libraries, jars, Java, examples, entrypoint.sh, and so on into the image.

Spark 2.3 supports only Java/Scala; Python and R support arrived in 2.4, so there are three Dockerfiles producing three images, and the Python and R images are built on top of the Java/Scala one (-r specifies the image repository).

sudo ./bin/docker-image-tool.sh -r lfspace/thpub -t thspark3.2.2 build

If an error like the following appears:

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
ERROR: unsatisfiable constraints:
  bash (missing):
    required by: world[bash]

This is a network problem: edit ./bin/docker-image-tool.sh and add --network=host to the docker build command inside it so the build uses the host network (make sure the host network itself works); see the sketch below.
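
A sketch of the adjusted command inside the script (image tag and Dockerfile path taken from this article; the exact shell variables in docker-image-tool.sh differ between Spark versions):

docker build --network=host \
  -t lfspace/thpub/spark:thspark3.2.2 \
  -f kubernetes/dockerfiles/spark/Dockerfile .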

4. Push the image

docker tag lfspace/thpub/spark:thspark3.2.2 lfspace/thpub:spark3.2.2
docker push lfspace/thpub:spark3.2.2

5. Submit a job

bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    local:///path/to/examples.jar

By default this submission runs as the default service account; if that account lacks sufficient permissions, the job will eventually fail (a quick way to diagnose it is sketched below).
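
A quick way to see why it failed (the driver pod name carries a generated timestamp suffix, so list the pods first; the placeholder below is hypothetical):

sudo kubectl get pods | grep driver
# check the Events section and the driver log for the "forbidden" RBAC error
sudo kubectl describe pod <driver-pod-name>
sudo kubectl logs <driver-pod-name>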

 


ServiceAccount permissions


The Spark driver needs permission to create, run, and watch executor pods, so the corresponding service account permissions must be configured.

1. Prepare a role.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: spark-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io

For reference:

https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/manifest/spark-rbac.yaml

Apply it:

sudo kubectl apply -f role.yaml

Check the result:

sudo kubectl get role
sudo kubectl get role spark-role -o yaml
sudo kubectl get rolebinding
sudo kubectl get rolebinding spark-role-binding -o yaml

2. Create the service account and binding imperatively (an alternative to role.yaml):

kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
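
To double-check what was created (names as used above):

kubectl get serviceaccount spark -o yaml
kubectl get clusterrolebinding spark-role -o yaml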



Submitting jobs from inside a container



Define a Deployment, making sure serviceAccountName refers to the spark service account created earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark
      component: client
  template:
    metadata:
      labels:
        app: spark
        component: client
    spec:
      containers:
      - name: sparkclient
        image: lfspace/thpub:spark3.2.2
        workingDir: /opt/spark
        command: ["/bin/bash", "-c", "while true;do echo hello;sleep 6000;done"]
      serviceAccountName: spark

Deploy it:

sudo kubectl create -f client-deployment.yaml

Find the pod and open a shell in it:

sudo kubectl exec -t -i spark-client-6479b76776-l5bzw /bin/bash
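
The pod name suffix is generated by the Deployment, so it can be looked up first with the labels defined above:

sudo kubectl get pods -l app=spark,component=client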

Running env inside the container shows that the Kubernetes API server address is already defined.
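
For example (KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT_HTTPS are the variables Kubernetes injects into every pod):

env | grep KUBERNETES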

The container also holds the corresponding service account token and CA certificate, which can be used to call the API server:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     -s https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}/api/v1/namespaces/default/pods

However, submitting a job with spark-submit from inside the pod fails with an error saying there is no permission to get configMaps; the required permissions are apparently different from submitting on the host.

Change the spark role so that it is allowed to operate on all resources, then re-apply it with kubectl create:

- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]

Resubmit the job, and it now starts and runs successfully.

# the second wordcount.py is the input argument to the job
bin/spark-submit \
    --master k8s://https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS} \
    --deploy-mode cluster \
    --name spark-test \
    --conf spark.executor.instances=3 \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    --conf spark.kubernetes.container.image=lfspace/thpub:spark3.2.2 \
    /opt/spark/examples/src/main/python/wordcount.py \
    /opt/spark/examples/src/main/python/wordcount.py


Trust and authentication


 

The quickest solution is given at the end of this section.

In my view, the most complicated problem when running Spark on Kubernetes is trust/authentication. The Spark 2.4 official documentation covers this poorly: http://spark.apache.org/docs/2.4.1/running-on-kubernetes.html

Spark exposes many parameters for it, such as spark.kubernetes.authenticate.submission.caCertFile and spark.kubernetes.authenticate.submission.clientKeyFile, but configuring these certificates is convoluted.

  • Exception:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

This happens because the self-signed certificate is not trusted: ca.pem has to be imported into the JVM keystore (a sketch follows). Reference: https://blog.csdn.net/ljskr/article/details/84570573
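
A minimal sketch of that import, assuming the cluster CA was saved as /root/ca.pem and $JAVA_HOME points at the JDK 8 installation used by spark-submit (default keystore password changeit):

# import the self-signed cluster CA into the JVM trust store used by spark-submit
sudo keytool -importcert -noprompt \
  -alias k8s-ca \
  -file /root/ca.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit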

  • Exception:
2020/10/15 11:09:49.147 WARN WatchConnectionManager : Exec Failure: HTTP 403, Status: 403 - pods "spark-pi-1602731387162-driver" is forbidden: User "system:anonymous" cannot watch resource "pods" in API group "" in the namespace "default"

This means the client itself is not trusted. The fix is to add these parameters to spark-submit:

--conf spark.kubernetes.authenticate.submission.clientKeyFile=/root/admin-key.pem
--conf spark.kubernetes.authenticate.submission.clientCertFile=/root/admin.pem

Or, for test environments only, grant the anonymous user cluster-admin:

kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous


A single certificate misconfiguration here will break things.

So the best setup is:

Copy the .kube directory into the $HOME of the user running spark-submit (a sketch follows).
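
A minimal sketch, assuming the kubeconfig lives on the master node (knode1 here) under /root/.kube; adjust hosts and paths to your cluster:

mkdir -p "$HOME/.kube"
scp root@knode1:/root/.kube/config "$HOME/.kube/config"
chmod 600 "$HOME/.kube/config"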

The reason:

Spark uses the io.fabric8 Kubernetes client; even though Spark exposes a pile of parameters, that library still falls back to ~/.kube/config by default.

The relevant code paths:

https://github.com/apache/spark/blob/branch-2.4/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala#L66

https://github.com/fabric8io/kubernetes-client/blob/74cc63df9b6333d083ee24a6ff3455eaad0a6da8/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Config.java#L538

Note that this authentication only applies at spark-submit time. Once the command has been submitted and the driver pod created, the pem files no longer play any role.


Driver-stage authorization:

Roughly speaking, in Spark on Kubernetes the driver pod creates and destroys the executor pods, so the driver pod needs fairly broad permissions. This is configured via RBAC; see http://spark.apache.org/docs/2.4.1/running-on-kubernetes.html#rbac

Once RBAC is in place, the driver must be told which identity to run as, so this parameter is also required:

--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark

Finally

Kubernetes client/server version compatibility is poor.

Kubernetes server 1.15 needs roughly fabric8 client 4.6, a version Spark only picked up in 2.4.7.

Reference: https://www.waitingforcode.com/apache-spark/setting-up-apache-spark-kubernetes-microk8s/read#unknownhostexception_kubernetes_default_svc_try_again

So pay close attention to client and server versions when testing.

The final submit command is as follows:

/Users/zhouwenyang/Desktop/tmp/spark/bin/spark-submit \
--master k8s://https://knode1:6443 --deploy-mode cluster \
--name spark-pi --class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=vm2173:5000/spark:2.4.7 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.submission.waitAppCompletion=false \
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.7.jar

Official Spark RBAC documentation: https://spark.apache.org/docs/3.0.0/running-on-kubernetes.html#rbac
