Study notes for the Developer Academy course "Kubernetes Quick Start: K8S Cluster Deployment_Cluster Initialization", closely tied to the course so that users can learn the material quickly.
Course URL: https://developer.aliyun.com/learning/course/658/detail/10883
K8S Cluster Deployment_Cluster Initialization
Contents:
I. Cluster initialization
II. The initialization command
III. Operations after initialization
IV. Hands-on operations on the three hosts
I. Cluster initialization
Note that this is performed on the master node, which means that whichever node you run the operation on becomes the master node: if you run it on the first host, the first host is the master node.
II. The initialization command
[root@master1 ~]# kubeadm init --kubernetes-version=v1.17.2 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.216.100
Here --kubernetes-version is the Kubernetes version, --pod-network-cidr (note the spelling: cidr) is the pod network range, and --apiserver-advertise-address is the IP address of the current host.
Running this command starts the initialization.
The output is as follows:
[init] Using Kubernetes version: v1.17.2 (the version is reported during initialization)
[preflight] Running pre-flight checks (a series of checks based on this version)
[preflight] Pulling images required for setting up a Kubernetes cluster (this wastes a lot of time if the images were not downloaded in advance)
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet (start the kubelet)
[certs] Using certificateDir folder "/etc/kubernetes/pki" (the directory used for certificates)
(the following certificates are generated)
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.216.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.216.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.216.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file (the admin interface file)
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0208 16:36:47.632464    8298 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0208 16:36:47.634408    8298 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503874 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uahnj7.2wzl7nvx8obxhfnh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
Once all of these files are in place, this final message says that the control plane of the whole cluster has been initialized.
In Kubernetes the master node manages the whole cluster, while the worker nodes run the user pods; the master role and the worker role together provide control over the entire cluster. This management layer is called the control plane, and besides the control plane there is also the data plane.
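The preflight messages above point out that pulling images can take a long time if they are not downloaded in advance. A minimal sketch of pre-pulling them before running kubeadm init (the version flag is an assumption here, chosen to match the version used in this course; the step is optional when the images have already been prepared locally, as in this course):
[root@master1 ~]# kubeadm config images pull --kubernetes-version=v1.17.2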
III. Operations after initialization
Step 1:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube (create the directory)
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config (copy the admin config file)
sudo chown $(id -u):$(id -g) $HOME/.kube/config (set the appropriate owner and group)
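These three commands copy the admin kubeconfig into the current user's home directory so that kubectl can reach the cluster. When working directly as root, a common alternative (not shown in the course) is to point KUBECONFIG at the admin file instead:
[root@master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf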
Step 2:
You should now deploy a pod network to the cluster. (deploy a pod network for the cluster in the form of a network add-on)
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
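Until a pod network add-on is applied, the nodes stay in the NotReady state, which you can observe on the master (a quick check, not part of the course output; the add-on used here is Calico, applied in section IV below):
[root@master1 ~]# kubectl get nodes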
Step 3:
Add any number of worker nodes.
Then you can join any number of worker nodes by running the following on each as root: (copy the following onto every worker node to join it to the cluster)
kubeadm join 192.168.216.100:6443 --token uahnj7.2wzl7nvx8obxhfnh \
    --discovery-token-ca-cert-hash sha256:48486b2cd572881fa28184177d331353146f2692744d2cc6bfa21132f01c4478
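The bootstrap token in this join command expires after 24 hours by default. If it is lost or has expired, a fresh join command can be generated on the master node (a sketch, not part of the course):
[root@master1 ~]# kubeadm token create --print-join-command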
IV. Hands-on operations on the three hosts
1. Open the files
Open the calico-39 files on worker1, worker2 and worker3; they can simply be dragged onto each host.
[root@master1 ~]# ls
anaconda-ks.cfg  calico-39  image.list  kube-p.tar  p.tar
Import the corresponding images:
[root@master1 ~]# cd calico-39/
[root@master1 calico-39]# ls
calico-cni.tar  calico.yml  pod2daemon-flexvol.tar
calico-node.tar  kube-controllers.tar
(the first image)
[root@master1 calico-39]# docker load -i calico-cni.tar
1c95c77433e8: Loading layer 72.47MB/72.47MB
f919277f01fb: Loading layer 90.76MB/90.76MB
0094c919faf3: Loading layer 10.24kB/10.24kB
9e1263ee4198: Loading layer 2.56kB/2.56kB
Loaded image: calico/cni:v3.9.0
(the second image)
[root@master1 calico-39]# docker load -i calico-node.tar
538afb24c98b: Loading layer 33.76MB/33.76MB
85b8bbfa3535: Loading layer 3.584kB/3.584kB
7a653a5cb14b: Loading layer 3.584kB/3.584kB
97cc86557fed: Loading layer 21.86MB/21.86MB
3abae82a71aa: Loading layer 11.26kB/11.26kB
7c85b99e7c27: Loading layer 11.26kB/11.26kB
0e20735d7144: Loading layer 6.55MB/6.55MB
2e3dede6195a: Loading layer 2.975MB/2.975MB
f85ff1d9077d: Loading layer 55.87MB/55.87MB
9d55754fd45b: Loading layer 1.14MB/1.14MB
Loaded image: calico/node:v3.9.0
(the third image)
[root@master1 calico-39]# docker load -i kube-controllers.tar
fd6ffbcdb09f: Loading layer 47.35MB/47.35MB
9c4005f3e0bc: Loading layer 3.104MB/3.104MB
Loaded image: calico/kube-controllers:v3.9.0
(the last image)
[root@master1 calico-39]# docker load -i pod2daemon-flexvol.tar
3fc64803ca2d: Loading layer 4.463MB/4.463MB
3aff8caf48a7: Loading layer 5.12kB/5.12kB
89effeea5ce5: Loading layer 5.572MB/5.572MB
Loaded image: calico/pod2daemon-flexvol:v3.9.0
The above was done on the first host; the second and third hosts need to perform the same operations.
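Since the same four tar archives have to be loaded on every host, here is a small sketch of a loop that loads every .tar file in the calico-39 directory (assuming the directory has already been copied onto the host):
[root@worker1 calico-39]# for f in *.tar; do docker load -i "$f"; done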
2. Modify the calico yml file on the master node
The yml file is a declarative file that describes the resource objects to be deployed; a few items in it need to be modified.
(1) Because calico's own network auto-detection mechanism is unreliable, the physical NIC that calico should use must be specified explicitly; add lines 607 and 608.
602   - name: CLUSTER_TYPE
603     value: "k8s,bgp"
604   # Auto-detect the BGP IP address.
605   - name: IP
606     value: "autodetect"
607   - name: IP_AUTODETECTION_METHOD
608     value: "interface=eth.*"
(2) Change the value to the pod-network-cidr set during initialization.
619   - name: CALICO_IPV4POOL_CIDR
620     value: "172.16.0.0/16"
The actual edits are:
607   - name: IP_AUTODETECTION_METHOD
608     value: "interface=ens.*"
622     value: "172.16.0.0/16"
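Before applying the file you can double-check the edits (a sketch; the exact line numbers vary between calico.yml versions, so searching by name is more reliable than the numbers above):
[root@master1 calico-39]# grep -n -A 1 "IP_AUTODETECTION_METHOD" calico.yml
[root@master1 calico-39]# grep -n -A 1 "CALICO_IPV4POOL_CIDR" calico.yml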
3. Applying the configuration
[root@master1 calico-39]# kubectl apply -f calico.yml
configmap/calico-config created
On each worker node, run the kubeadm join command copied earlier; part of its output is shown below:
> --discovery-token-ca-cert-hash sha256:48486b2cd572881fa28184177d331353146f2692744d2cc6bfa21132f01c4478
W0208 16:56:30.872267   23286 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
How to tell whether a node has joined the cluster:
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 20m v1.17.2
worker1 Ready <none> 45s v1.17.2
worker2 Ready <none> 56s v1.17.2
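Besides kubectl get nodes, you can also confirm that the Calico and CoreDNS pods have come up; all pods in the kube-system namespace should eventually reach the Running state (a quick check, not part of the course output):
[root@master1 ~]# kubectl get pods -n kube-system -o wide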