Installing a Kubernetes Cluster on Ubuntu 16.04 to Prepare for the CKA Exam


Registration

CKA registration: https://training.linuxfoundation.cn/

Time during the exam is very tight.

Exam notes

Keep your head within the camera's view so the proctor can see you.

Keep your screen uncluttered.

Only a transparent cup of water is allowed.

You may use the restroom, but keep an eye on the time.

Exam workflow

You remote into a jump server via its public IP, and from there reach the other machines via their private IPs.
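For example, from your own machine you would hop in two steps (the user name and public IP below are hypothetical placeholders):

$ ssh user@<jumpserver-public-ip>   # first hop onto the jump server
$ ssh user@192.168.211.40           # then reach a cluster machine by its private IP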

Base environment configuration (all nodes)

swapoff -a    # the kubelet requires swap to be off
cat <<EOF >> /etc/hosts   # append, so the existing localhost entries are kept
192.168.211.40 master
192.168.211.41 node1
192.168.211.42 node2
EOF
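Note that swapoff -a only disables swap until the next reboot. A minimal sketch to also disable it permanently, by commenting out the swap entry in /etc/fstab (double-check the file afterwards):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out any swap line so it stays off after reboot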

Install Docker (all nodes)

If the default Ubuntu sources feel slow, you can switch to the Aliyun mirrors. Note that the codename must match your release; for Ubuntu 16.04 that is xenial:

$ vim /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
$ apt-get update

Manual installation (all nodes)

https://www.runoob.com/docker/ubuntu-docker-install.html

Note: when installing Docker, always pin a known-stable version; otherwise you may run into compatibility problems. We want stability, not the newest release.

$ apt-get install -y docker-ce=5:19.03.4~3-0~ubuntu-xenial docker-ce-cli=5:19.03.4~3-0~ubuntu-xenial containerd.io=1.2.10-3
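If that exact version string is not available in your repository, you can list the versions that are and pick a matching one:

$ apt-cache madison docker-ce
$ apt-cache madison docker-ce-cli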

Problem:

Failed to fetch cdrom://Ubuntu-Server 16.04.6 LTS _Xenial Xerus_ - Release amd64 (20190226)/dists/xenial/main/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

Solution:

Comment out the deb cdrom line:

$ vim /etc/apt/sources.list
#deb cdrom:[Ubuntu-Server 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.3)]/ xenial main restricted

Add a Docker daemon config file. Strictly speaking this step is optional, but if you will be pulling images from inside China, it is best to also configure a domestic registry mirror (see the sketch after this block).

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker    # restart (not just start) so daemon.json takes effect
systemctl status docker
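To also use a domestic registry mirror, as suggested above, daemon.json accepts a registry-mirrors key. A sketch — the mirror URL is a placeholder you must replace with one you actually have access to:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "registry-mirrors": ["https://<your-mirror-id>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker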

Install Kubernetes (all nodes)

Configure the Google Kubernetes apt source

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Configure the key

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Update and install

apt-get update
apt-get install -y kubelet kubeadm kubectl

kubectl auto-completion

$ source <(kubectl completion bash) # setup autocomplete in bash, bash-completion package should be installed first.
$ source <(kubectl completion zsh)  # setup autocomplete in zsh
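To make completion persist across shells, install the bash-completion package and append the source line to your profile:

$ apt-get install -y bash-completion
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc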

If we cannot reach Google, consider using the Aliyun mirror instead.

Configure the Aliyun Kubernetes apt source

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

If apt-get update complains about a missing GPG key (NO_PUBKEY), import it from a keyserver:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6A030B21BA07F4FB


Configure the key

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

Install the tools

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
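As a quick sanity check that the pinned tools are in place (and to see which version kubeadm will install by default):

$ kubeadm version
$ kubectl version --client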

Create the cluster with kubeadm: the master node

Running the initialization directly fails to pull the images (master):

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40

The image pulls fail:

failed to pull image k8s.gcr.io/kube-apiserver:v1.12.2
failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.2
failed to pull image k8s.gcr.io/kube-scheduler:v1.12.2
failed to pull image k8s.gcr.io/kube-proxy:v1.12.2
failed to pull image k8s.gcr.io/pause:3.1
failed to pull image k8s.gcr.io/etcd:3.2.24
failed to pull image k8s.gcr.io/coredns:1.2.2

List the images that need to be pulled (all nodes)

$  kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.5
k8s.gcr.io/kube-controller-manager:v1.18.5
k8s.gcr.io/kube-scheduler:v1.18.5
k8s.gcr.io/kube-proxy:v1.18.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Note: this list shows the image tags and version numbers. Starting with Kubernetes v1.12, image names no longer carry architecture suffixes such as amd64, arm, arm64, or ppc64le.

Generate the default kubeadm.conf file (all nodes)

$ kubeadm config print init-defaults > kubeadm.conf

6.3 Bypassing the firewall to download the images (all nodes)

Note that by default this config file downloads images from Google's registry, k8s.gcr.io, which you cannot reach without a proxy. So we use the method below to point it at a domestic registry instead, for example Aliyun's:

sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf

6.4 Specify the Kubernetes version for kubeadm to install

Make sure this matches the kubeadm/kubelet version you actually installed (check with kubeadm version):

sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.18.1/g" kubeadm.conf

6.5 Pull the required images (all nodes)

Once kubeadm.conf has been edited, the following command pulls everything needed from the domestic mirror:

$ kubeadm config images pull --config kubeadm.conf
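Alternatively, if you prefer not to edit a config file, kubeadm can take the repository and version directly on the command line; a sketch using the same Aliyun mirror:

$ kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1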

Re-initialize the master node

(If init still tries to pull from k8s.gcr.io, you may need to run it with --config kubeadm.conf so it uses the mirror configured above; kubeadm does not allow mixing --config with the flags below, in which case set podSubnet and the advertise address inside the file instead.)

$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40
 mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp \
    --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c 

Save the kubeadm join command printed at the end.

Run the following on the master: (master)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
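With the kubeconfig in place, verify that the control plane responds:

$ kubectl cluster-info
$ kubectl get nodes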

Join the cluster from each node: (node)

This errors out:

$ kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp     --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c
W0804 09:39:51.878220    8270 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
  [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
  [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

These files are left over from an earlier kubeadm run on this node; reset it to clean them up:

$ kubeadm reset

Run the join again:

$ kubeadm join 192.168.211.40:6443 --token yuqokz.7989i4qx5770obvp \
    --discovery-token-ca-cert-hash sha256:167d0176ccd1c90b7373917940620fb7a48b245913eb25a05726345902f6213c 
W0804 09:47:01.922091   13137 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

By default a token is valid for 24 hours; once it expires it can no longer be used and a new one must be created, like so:

$ kubeadm token create --print-join-command
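You can also list the existing tokens and their expiry times before creating a new one:

$ kubeadm token list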

On the master:

root@master:~# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   19m     v1.18.6
node1    NotReady   <none>   6m55s   v1.18.6
node2    NotReady   <none>   5m16s   v1.18.6
root@master:~# kubectl get pods
No resources found in default namespace.
root@master:~# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2xmnt         0/1     Pending   0          21m
kube-system   coredns-66bff467f8-ghj2s         0/1     Pending   0          21m
kube-system   etcd-master                      1/1     Running   0          22m
kube-system   kube-apiserver-master            1/1     Running   0          22m
kube-system   kube-controller-manager-master   1/1     Running   0          22m
kube-system   kube-proxy-dh46z                 1/1     Running   0          7m35s
kube-system   kube-proxy-jq6cb                 1/1     Running   0          21m
kube-system   kube-proxy-z6prp                 1/1     Running   0          9m14s
kube-system   kube-scheduler-master            1/1     Running   0          22m

On a node:

root@node2:~# kubectl get nodes
(hangs: the node has no kubeconfig yet, so copy it over from the master)
root@node2:~# mkdir -p $HOME/.kube
root@node2:~# scp root@192.168.211.40:/root/.kube/config /root/.kube/
root@node2:~# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2xmnt         0/1     Pending   0          29m
kube-system   coredns-66bff467f8-ghj2s         0/1     Pending   0          29m
kube-system   etcd-master                      1/1     Running   0          29m
kube-system   kube-apiserver-master            1/1     Running   0          29m
kube-system   kube-controller-manager-master   1/1     Running   0          29m
kube-system   kube-proxy-dh46z                 1/1     Running   0          14m
kube-system   kube-proxy-jq6cb                 1/1     Running   0          29m
kube-system   kube-proxy-z6prp                 1/1     Running   0          16m
kube-system   kube-scheduler-master            1/1     Running   0          29m

Install the Calico network plugin

$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

If the pods fail to come up, delete the manifests and apply them once more:

$ kubectl delete -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

The cause may be a failed image pull or a DNS problem; inspect the pod with kubectl describe:

kubectl describe pods <pod_name> -n <namespace>

If an image pull failed, try pulling it manually:

docker pull xxx
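For Calico v3.10 the images live on Docker Hub under the calico organization. The tags below are illustrative — confirm the exact image names and tags against the image: fields in calico.yaml first:

$ docker pull calico/node:v3.10.4
$ docker pull calico/cni:v3.10.4
$ docker pull calico/kube-controllers:v3.10.4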

For a DNS problem, point the host at a reachable nameserver (note the file is /etc/resolv.conf, not resolve.conf):

$ cat /etc/resolv.conf
nameserver 114.114.114.114  # a public DNS server in mainland China
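To confirm that the new nameserver actually resolves the manifest host (nslookup is in the dnsutils package):

$ nslookup docs.projectcalico.org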

Finally, check whether all the pods were created successfully:

kubectl get pods -A
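Once the Calico pods are Running, the coredns pods should leave Pending and every node should report Ready:

$ kubectl get nodes
$ kubectl get pods -n kube-system -o wide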