Installing a Kubernetes Cluster on Ubuntu 16.04 to Prepare for the CKA

Summary: installing a Kubernetes cluster on Ubuntu 16.04 to prepare for the CKA exam.

Registration

CKA registration: https://training.linuxfoundation.cn/


The exam time is very tight.

Exam notes

Keep your head within the camera's field of view so the proctor can see you.

Keep your desktop clean and uncluttered.

Only a transparent cup of water is allowed.

Bathroom breaks are allowed, but keep an eye on the time.

Exam workflow

You connect to a jumpserver over its public IP, and from there reach the other machines over their private IPs.


Basic environment configuration (all nodes)

swapoff -a
cat <<EOF >> /etc/hosts
192.168.211.40 master
192.168.211.41 node1
192.168.211.42 node2
EOF
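
Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, also comment out the swap entry in /etc/fstab. A minimal sketch — the pattern below simply comments out any fstab line that mentions swap, which is fine for a lab box:

# keep swap disabled after reboot: comment out any fstab line mentioning swap
sed -i '/swap/ s/^#*/#/' /etc/fstab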

Install Docker (all nodes)

If the default Ubuntu repositories feel slow, you can switch to the Aliyun mirrors. Since this host runs Ubuntu 16.04 (xenial), the entries should point at xenial:

$ vim /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
$ apt-get update

Manual installation (all nodes)

https://www.runoob.com/docker/ubuntu-docker-install.html

Note: pin a stable Docker version when installing; otherwise you may run into compatibility problems. We are after stability, not the newest release.

$ apt-get install -y docker-ce=5:19.03.4~3-0~ubuntu-xenial docker-ce-cli=5:19.03.4~3-0~ubuntu-xenial containerd.io=1.2.10-3
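
To keep apt from silently upgrading these pinned packages later, you can additionally hold them:

$ apt-mark hold docker-ce docker-ce-cli containerd.io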

Problem:

Failed to fetch cdrom://Ubuntu-Server 16.04.6 LTS _Xenial Xerus_ - Release amd64 (20190226)/dists/xenial/main/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

Solution:

Comment out the deb cdrom line:

$ vim /etc/apt/sources.list
#deb cdrom:[Ubuntu-Server 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.3)]/ xenial main restricted

Add a Docker configuration file. If you are not being strict you can skip this step, but if you are going to pull images it is best to also configure a domestic registry mirror (see the sketch after the block below).

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl status docker
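
For faster pulls from inside China, a minimal sketch of adding a registry mirror to the same daemon.json — registry-mirrors is a standard dockerd option, but the mirror URL below is only a placeholder for your own accelerator address:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://<your-mirror>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker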

Install Kubernetes (all nodes)

Configure the Google Kubernetes apt repository

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Add the apt key

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Update and install

apt-get update
apt-get install -y kubelet kubeadm kubectl
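
If you would rather pin the packages to a specific release instead of installing the latest, pass explicit versions (the 1.18.5-00 version string below is only illustrative):

apt-get install -y kubelet=1.18.5-00 kubeadm=1.18.5-00 kubectl=1.18.5-00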

kubectl autocompletion

$ source <(kubectl completion bash) # setup autocomplete in bash, bash-completion package should be installed first.
$ source <(kubectl completion zsh)  # setup autocomplete in zsh
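
To make the bash completion persist across shell sessions (this assumes the bash-completion package is installed, as noted above):

echo "source <(kubectl completion bash)" >> ~/.bashrc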

If you cannot reach Google, consider using the Aliyun mirror instead.

Configure the Aliyun Kubernetes apt repository

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

If apt-get update complains about a missing public key, import it manually:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6A030B21BA07F4FB


Add the apt key

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

Install the tools

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Creating the cluster master node with kubeadm

Running the initialization directly fails to pull the images (master)

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40

The image pulls fail:

failed to pull image k8s.gcr.io/kube-apiserver:v1.12.2
failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.2
failed to pull image k8s.gcr.io/kube-scheduler:v1.12.2
failed to pull image k8s.gcr.io/kube-proxy:v1.12.2
failed to pull image k8s.gcr.io/pause:3.1
failed to pull image k8s.gcr.io/etcd:3.2.24
failed to pull image k8s.gcr.io/coredns:1.2.2

List the images that need to be pulled (all nodes)

$  kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.5
k8s.gcr.io/kube-controller-manager:v1.18.5
k8s.gcr.io/kube-scheduler:v1.18.5
k8s.gcr.io/kube-proxy:v1.18.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Note: this list shows the image names with their tags; starting with Kubernetes v1.12, image names no longer carry architecture suffixes such as amd64, arm, arm64, or ppc64le.

Generate the default kubeadm.conf file (all nodes)

$ kubeadm config print init-defaults > kubeadm.conf

Downloading the images from a domestic mirror (all nodes)

Note that this configuration file pulls images from Google's registry k8s.gcr.io by default; without a way around the firewall the downloads will fail. So we point it at a domestic registry instead, for example Aliyun's:

sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf

Pin the Kubernetes version kubeadm installs

sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.18.1/g" kubeadm.conf

Download the required images (all nodes)

Once kubeadm.conf has been edited, run the following command to pull all the required images from the domestic registry automatically:

$ kubeadm config images pull --config kubeadm.conf
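
To confirm that the images actually landed locally, a quick sanity check:

$ docker images | grep registry.aliyuncs.com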

Re-initialize the master node

$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40
......
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp \
    --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c

Save the join command printed at the end.

Run on the master: (master)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run on each node to join the cluster: (node)

Error:

$ kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp     --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c
W0804 09:39:51.878220    8270 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
  [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
  [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

$ kubeadm reset
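
kubeadm reset does not clean up iptables rules or the CNI configuration by itself; if a later join still misbehaves, the following optional cleanup (along the lines of the hints kubeadm's own reset output gives, apply with care) may help:

$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ rm -rf /etc/cni/net.d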

Run the join command again to join the cluster:

$ kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp \
    --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c 
W0804 09:47:01.922091   13137 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The default token is valid for 24 hours; once it expires it can no longer be used and a new one has to be created:

$ kubeadm token create --print-join-command
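
It prints a complete join command for new nodes, roughly of this shape (the token and hash below are placeholders):

kubeadm join 192.168.211.40:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>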

Run on the master (the nodes report NotReady and coredns stays Pending because no pod network has been installed yet; the Calico step below takes care of that):

root@master:~# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   19m     v1.18.6
node1    NotReady   <none>   6m55s   v1.18.6
node2    NotReady   <none>   5m16s   v1.18.6
root@master:~# kubectl get pods
No resources found in default namespace.
root@master:~# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2xmnt         0/1     Pending   0          21m
kube-system   coredns-66bff467f8-ghj2s         0/1     Pending   0          21m
kube-system   etcd-master                      1/1     Running   0          22m
kube-system   kube-apiserver-master            1/1     Running   0          22m
kube-system   kube-controller-manager-master   1/1     Running   0          22m
kube-system   kube-proxy-dh46z                 1/1     Running   0          7m35s
kube-system   kube-proxy-jq6cb                 1/1     Running   0          21m
kube-system   kube-proxy-z6prp                 1/1     Running   0          9m14s
kube-system   kube-scheduler-master            1/1     Running   0          22m

Run on a node:

root@node2:~# kubectl get nodes
(hangs — the node has no kubeconfig yet, so copy it over from the master)
root@node2:~# mkdir -p $HOME/.kube
root@node2:~# scp root@192.168.211.40:/root/.kube/config /root/.kube/
root@node2:~# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2xmnt         0/1     Pending   0          29m
kube-system   coredns-66bff467f8-ghj2s         0/1     Pending   0          29m
kube-system   etcd-master                      1/1     Running   0          29m
kube-system   kube-apiserver-master            1/1     Running   0          29m
kube-system   kube-controller-manager-master   1/1     Running   0          29m
kube-system   kube-proxy-dh46z                 1/1     Running   0          14m
kube-system   kube-proxy-jq6cb                 1/1     Running   0          29m
kube-system   kube-proxy-z6prp                 1/1     Running   0          16m
kube-system   kube-scheduler-master            1/1     Running   0          29m

Install the Calico network plugin

$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

If the pod creation fails, delete the manifest and apply it once more:

$ kubectl delete -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

It may be an image-pull failure or a DNS problem; inspect the pod with kubectl describe:

kubectl describe pods <pod_name> -n <namespace>

If the image pull failed, try pulling it by hand:

docker pull <image>

For a DNS problem, switch the nameserver:

$ cat /etc/resolv.conf
nameserver 114.114.114.114  # domestic DNS

Finally, check whether the pods came up successfully:

kubectl get pods -A
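
Once Calico is running, the nodes should move from NotReady to Ready and the coredns pods should leave Pending; a quick final check (output omitted):

kubectl get nodes
kubectl get pods -n kube-system | grep -E 'calico|coredns'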