Cloud Native | Kubernetes | Kubernetes 1.18 Binary Installation Tutorial, Single Master (Other Versions Are Largely the Same) (Part 2)

Summary: Cloud Native | Kubernetes | Kubernetes 1.18 binary installation tutorial, single master (other versions are largely the same).

Generate the certificates:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

In total there will be four certificate files, all with the .pem suffix. Copy these four files into the /opt/kubernetes/ssl directory:

cp server*.pem ca*.pem /opt/kubernetes/ssl/
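Before moving on, you can sanity-check the freshly signed server certificate. A minimal sketch, assuming openssl is installed and you run it from the directory where server.pem was generated:

```shell
# Print the subject and validity window of the signed server certificate.
# Skips quietly if server.pem is not present (e.g. when run elsewhere).
if [ -f server.pem ]; then
  openssl x509 -in server.pem -noout -subject -dates
fi
```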

That wraps up certificate generation. Next, enable the TLS Bootstrapping auto-signing mechanism.

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

The token here can be generated with the command below and then substituted in:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
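Putting the two steps together, a small sketch that generates a fresh token and composes the matching token.csv record (the standard group name is system:node-bootstrapper):

```shell
# Generate a random 32-hex-character token and build the token.csv record.
# Columns: token, user name, user UID, group.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
LINE="${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
echo "$LINE"
```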

The kube-apiserver startup script (systemd unit):

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

If the service status shows green (active), it is running normally:

[root@master ssl]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-08-26 15:33:19 CST; 6h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3009 (kube-apiserver)
   Memory: 365.3M
   CGroup: /system.slice/kube-apiserver.service
           └─3009 /opt/kubernetes/bin/kube-apiserver --v=2 --logtostderr=false --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.217.16:2379,https://1...
Aug 26 15:33:19 master systemd[1]: Started Kubernetes API Server.
Aug 26 15:33:19 master systemd[1]: Starting Kubernetes API Server...
Aug 26 15:33:28 master kube-apiserver[3009]: E0826 15:33:28.034854    3009 controller.go:152] Unable to remove old endpoints from kubernetes service: Sto...ErrorMsg:
Hint: Some lines were ellipsized, use -l to show in full.

If an error keeps the service from starting, check the system log at /var/log/messages.
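A quick way to pull just the apiserver error lines out of that log — a sketch assuming the CentOS syslog path used in this guide:

```shell
# Show the 20 most recent kube-apiserver error lines from the syslog.
# klog prefixes: E<MMDD> = error, W<MMDD> = warning, I<MMDD> = info.
LOG=/var/log/messages
if [ -r "$LOG" ]; then
  grep 'kube-apiserver' "$LOG" | grep -E ' E[0-9]{4} ' | tail -n 20
fi
```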

Observing /var/log/messages reveals that, on the apiserver's very first startup, a large number of cluster roles are created; these roles correspond to the various resources inside Kubernetes. For example:

Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.321342    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.335178    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.346905    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.359675    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.370449    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.381805    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.395624    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.406568    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.415029    6822 healthz.go:200] [+]ping ok
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.516294    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.525808    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.535778    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.545944    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.558356    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.567806    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.577033    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.585929    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.596499    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.605861    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:ku
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.614996    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.624625    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.635380    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.644132    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.653821    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.663108    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.672682    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.685326    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.694401    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.703354    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.713226    6822 healthz.go:200] [+]ping ok
Aug 30 10:01:23 master kube-apiserver: [+]log ok
Aug 30 10:01:23 master kube-apiserver: [+]etcd ok
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.123145    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.132424    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.149014    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.160210    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.169018    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.178514    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.187484    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.201137    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.213896    6822 healthz.go:200] [+]ping ok

(9) Deploying kube-controller-manager

The service's configuration file:

vim /opt/kubernetes/cfg/kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/16 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

Configuration notes:

--master: connect to the apiserver via the local insecure port 8080.
--leader-elect: when multiple instances of this component run, elect a leader automatically (HA).
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates to kubelets; it must match the apiserver's, i.e. the two services share the same CA certificate.

The service's startup script (systemd unit):

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

(10) Deploying kube-scheduler

This is the scheduling service: it is responsible for placing the various workloads onto nodes. It talks to the apiserver (which in turn persists cluster state in etcd) to watch for unscheduled Pods and bind each of them to a suitable node.

Configuration file:

vim /opt/kubernetes/cfg/kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"

Configuration notes:

1. --master: connect to the apiserver via the local insecure port 8080.
2. --leader-elect: when multiple instances of this component run, elect a leader automatically (HA).

Startup script:

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

(11) Checking cluster health

With these three services deployed, you can now check the health of the cluster:

[root@master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

If any of these services is down or unhealthy, this command will show it. For example, after stopping one etcd member, the same command reports an error:

[root@master cfg]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                             ERROR
scheduler            Healthy     ok                                                                                                  
etcd-1               Unhealthy   Get https://192.168.217.17:2379/health: dial tcp 192.168.217.17:2379: connect: connection refused   
controller-manager   Healthy     ok                                                                                                  
etcd-0               Healthy     {"health":"true"}                                                                                   
etcd-2               Healthy     {"health":"true"}   

(12) Installing kubelet on the worker nodes

kubelet is one of the most important services on a worker node, and it is also fairly tricky to configure.

kubelet's configuration file:

vim /opt/kubernetes/cfg/kubelet.conf

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"

Import the corresponding image tarball, named registry.cn-hangzhou.aliyuncs.com_google_containers_pause_3.2.tar, on all three nodes.
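The import itself is a single docker load per node (docker is the runtime assumed throughout this guide); a guarded sketch:

```shell
# Load the pause image tarball referenced by --pod-infra-container-image.
# Skips quietly if docker or the tarball is not present on this host.
TARBALL=registry.cn-hangzhou.aliyuncs.com_google_containers_pause_3.2.tar
if command -v docker >/dev/null 2>&1 && [ -f "$TARBALL" ]; then
  docker load -i "$TARBALL"
fi
```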

Configuration notes:

--hostname-override: the node's display name; must be unique within the cluster.
--network-plugin: enable CNI.
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver.
--bootstrap-kubeconfig: used on first startup to request a certificate from the apiserver.
--config: the parameter configuration file.
--cert-dir: the directory where kubelet certificates are generated.
--pod-infra-container-image: the image for the container that manages the Pod network.

One tricky point deserves attention: --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig refers to a file that is generated automatically when the service starts, but even a small mistake will prevent it from being generated. For instance, if the file below contains any error, kubelet.kubeconfig will not be created.

vim /opt/kubernetes/cfg/kubelet-config.yml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
  - 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
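A YAML syntax slip in the file above is the most common reason kubelet.kubeconfig never appears. A quick pre-flight check — a sketch assuming python3 with PyYAML is available:

```shell
# Parse kubelet-config.yml before starting kubelet; any indentation or
# syntax error is reported here instead of failing silently at startup.
CFG=/opt/kubernetes/cfg/kubelet-config.yml
if [ -f "$CFG" ]; then
  python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('yaml ok')" "$CFG"
fi
```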

Generate the bootstrap.kubeconfig file:

KUBE_APISERVER="https://192.168.217.16:6443"
TOKEN="c47ffb939f5ca36231d9e3121a252940"
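From these two variables, bootstrap.kubeconfig is typically assembled with the usual kubectl config sequence. A hedged sketch, assuming kubectl is on the PATH and the CA file sits where this guide placed it:

```shell
# Repeated from above so the snippet is self-contained.
KUBE_APISERVER="https://192.168.217.16:6443"
TOKEN="c47ffb939f5ca36231d9e3121a252940"

# Cluster entry: apiserver endpoint plus the shared CA certificate.
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="${KUBE_APISERVER}" \
  --kubeconfig=bootstrap.kubeconfig
# Credentials: the bootstrap token from token.csv.
kubectl config set-credentials kubelet-bootstrap \
  --token="${TOKEN}" \
  --kubeconfig=bootstrap.kubeconfig
# Context tying the cluster and the bootstrap user together.
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```

After copying the resulting file to /opt/kubernetes/cfg/, the kubelet-bootstrap user also needs a binding to the system:node-bootstrapper cluster role before kubelets can request certificates.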

