5 Configure and start kubelet (run on all nodes)
5.1 Set a shell variable
DOCKER_CGROUPS=`docker info |grep 'Cgroup' | awk ' NR==1 {print $3}'`
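This grabs the third field of the first line of docker info that mentions "Cgroup" (newer Docker releases print both "Cgroup Driver" and "Cgroup Version", which is why NR==1 matters). A quick sanity check, assuming the variable was just set in the current shell:

# Should print cgroupfs or systemd
echo $DOCKER_CGROUPS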
5.2 Configure kubelet's cgroup driver
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.2"
EOF
5.3 Enable kubelet at boot, reload systemd, and restart it
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
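At this point it is normal for kubelet to keep restarting and failing, because it has no cluster configuration until kubeadm init runs. To confirm the unit is at least enabled:

# Expect "activating (auto-restart)" until the cluster is initialized
systemctl status kubelet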
Run all of the above on all three servers. Next comes the key part: on the master node, the cluster must be initialized with kubeadm, otherwise kubelet cannot start successfully.
5.4 Initialize the cluster with kubeadm (master only)
kubeadm init --kubernetes-version=v1.20.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.123.150 \
  --ignore-preflight-errors=Swap
Note: --kubernetes-version is determined by the images you pulled. The default --pod-network-cidr range does not need to be changed (it matches the 10.244.0.0/16 network in the flannel manifest below). --apiserver-advertise-address does need to be changed: set it to the master node's IP.
During initialization the output prints three commands; just copy, paste, and run them. The command for joining nodes to the cluster appears in the last few lines of the output; copy it and save it for later. (Screenshot attached.)
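For reference, the three commands kubeadm prints typically look like this (standard kubeadm output; run them as the user who will operate kubectl):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config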
The other nodes do not need initializing; just copy the kubeadm join command from those last lines and run it to join the cluster.
6 Configure the network plugin (pods communicate with each other over flannel)
6.1 Download the flannel.yml file
(This may require a proxy to reach. It has always downloaded fine for me, but just in case, I am attaching my own copy of the file.)
curl -O https://ra
For some reason I couldn't upload the YAML file here, so I can only paste it into a code block:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
6.2 Change the flannel image version in the YAML file
The flannel image I pulled earlier is v0.14.0, so just change the image tag in the two places that reference the flannel image (the install-cni init container and the kube-flannel container), as sketched below.
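To locate and patch those lines quickly, here is a sketch, assuming the file is saved as ~/kube-flannel.yml and your downloaded copy referenced some other tag such as v0.19.0 (check what your copy actually says first):

# Show every image line with its line number
grep -n 'image:' ~/kube-flannel.yml
# Rewrite the flannel image tag in place
sed -i 's#flannel:v0.19.0#flannel:v0.14.0#g' ~/kube-flannel.yml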
(All of step 6 is done on the master node only; the worker nodes just need to join the cluster later.)
7 Apply the modified flannel file
I put the flannel file under /root; mind your own path.
kubectl apply -f ~/kube-flannel.yml
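On success kubectl prints one "created" line per object in the manifest, roughly:

namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created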
Wait a while, then check the pod info:
kubectl get pods --namespace kube-flannel
Note: the manifest above deploys flannel into its own kube-flannel namespace, so that is where the flannel pods appear (older flannel manifests used kube-system). The control-plane pods themselves can be checked with:
kubectl get pods --namespace kube-system
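When flannel is healthy there should be one Running kube-flannel-ds pod per node; the listing looks roughly like this (pod name suffixes are random):

NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-xxxxx   1/1     Running   0          3m
kube-flannel-ds-yyyyy   1/1     Running   0          3m
kube-flannel-ds-zzzzz   1/1     Running   0          3m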
8 Join the other nodes to the cluster
Take the join command you copied for later from the last lines of the terminal output in step 5.4 and run it on each of the other nodes:
kubeadm join 192.168.123.150:6443 --token n4mhv4.u2i0we9jumwvyvnp --discovery-token-ca-cert-hash sha256:3a06212d370ab4fed86975841502c063c468ce4e63eadac1fcddffdb56aa7114
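The token and hash above are specific to my cluster; use the ones from your own init output. Tokens expire after 24 hours by default; if yours has expired, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command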
Check on the master node whether the other nodes joined successfully:
kubectl get nodes
The result is as follows:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   126m   v1.20.2
k8s-node1    Ready    <none>                 111m   v1.20.2
k8s-node2    Ready    <none>                 111m   v1.20.2
Seeing STATUS Ready on every node means the cluster was set up successfully.
If pulling this many images feels like too much to keep track of, you can install kubeadm first and run kubeadm config images list, which lists the containers needed to deploy k8s:
kubeadm config images list
The result is as follows:
Note: run on its own, this command shows the images for the newest version kubeadm supports (a useful trick if you forgot which images to pull before deploying). This experiment uses version 1.20.2, so if you installed following my steps, the command should show the versions we already installed (see screenshot).
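For v1.20.2 the list should look roughly like this (tags recalled from the standard v1.20 component set; trust your own output over this):

k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0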
The end, confetti time~