Environment:
master 192.168.2.18
node1 192.168.2.19
node2 192.168.2.20
CentOS 7.5
Docker 19.03.13
2+ CPU cores, 2 GB+ memory per host
Error messages:
1. The following error appeared when running the initialization on k8s_master1
[root@k8s_master1 ~]# kubeadm init --kubernetes-version=v1.22.1 --apiserver-advertise-address=192.168.1.18 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Hostname]: hostname "k8s-master" could not be reached
......
nodeRegistration.name: Invalid value: "k8s_master1": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
To see the stack trace of this error execute with --v=5 or higher
2. The same error also appeared when joining the cluster from k8s_node1/2
[root@k8s_node1 ~]# kubeadm join 192.168.1.18:6443 --token 9t2nu9.00ieyfqmc50dgub6 \
> --discovery-token-ca-cert-hash sha256:183b6c95b4e49f0bd4074c61aeefc56d70215240fbeb7a633afe3526006c4dc9
nodeRegistration.name: Invalid value: "k8s_node1": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
To see the stack trace of this error execute with --v=5 or higher
Problem analysis:
From the identical error on k8s_master1 and k8s_node1/2 we can tell that the cause is the naming of our three hosts: a node name must be a valid lowercase RFC 1123 subdomain, so an underscore "_" is not allowed in the hostname.
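To see why the underscore is rejected, you can test a candidate name against the same regular expression quoted in the error message. This is just a quick local check with example names (run in any shell on the hosts):

# Test candidate node names against the RFC 1123 subdomain regex from the error message.
# Expected output: "k8s_master1: invalid" and "k8s-master1: valid".
for name in k8s_master1 k8s-master1; do
    if echo "$name" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'; then
        echo "$name: valid"
    else
        echo "$name: invalid"
    fi
done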
Solution:
Rename the three hosts so that the underscore "_" is replaced with a hyphen "-".
[root@k8s_node1 ~]# vim /etc/hostname        // permanently change the hostname
k8s-node1
[root@k8s_node1 ~]# hostname k8s-node1       // temporarily change the hostname
After changing the hostname, press Ctrl+D to exit the current terminal; log back in and the new hostname will be shown.
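On CentOS 7 the rename can also be done in a single step with hostnamectl, which updates /etc/hostname and the running hostname together. A minimal sketch, assuming the final names used later in this post (run each command on the corresponding host):

hostnamectl set-hostname k8s-master    # on the former k8s_master1
hostnamectl set-hostname k8s-node1     # on the former k8s_node1
hostnamectl set-hostname k8s-node2     # on the former k8s_node2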
Verification:
After renaming the hosts, verify on k8s-node1 by rejoining the cluster
[root@k8s-node1 ~]# kubeadm join 192.168.1.18:6443 --token 9t2nu9.00ieyfqmc50dgub6 --discovery-token-ca-cert-hash sha256:183b6c95b4e49f0b
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Hostname]: hostname "k8s-node1" could not be reached
	[WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.1.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
As shown above, k8s-node1 has now successfully joined the cluster, and the problem is solved!
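As a side note, the remaining "[WARNING Hostname] ... no such host" messages only mean that the new names cannot be resolved through DNS; they can be silenced by adding the names to /etc/hosts on every host. A sketch, assuming the addresses from the environment section above (adjust them if your hosts actually use the 192.168.1.x addresses shown in the command output):

# Append static name resolution for the three nodes; run on each host.
cat >> /etc/hosts << EOF
192.168.2.18 k8s-master
192.168.2.19 k8s-node1
192.168.2.20 k8s-node2
EOF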
Other errors that may appear:
Whenever an error occurs while installing and deploying Kubernetes, after attempting to fix it, it is best to run kubeadm reset -f once to clear kubeadm's state before verifying whether the previous error has been resolved. Otherwise the earlier error may still be present, and you may also run into errors like the following, or new ones.
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.22.1 --apiserver-advertise-address=192.168.1.18 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Hostname]: hostname "k8s-master" could not be reached
	[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.1.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-6443]: Port 6443 is in use
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ~]# kubeadm reset -f        // clear / reset kubeadm state
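Note that kubeadm reset does not remove CNI configuration or flush iptables/IPVS rules (its own output reminds you to clean these up manually). If you want a completely clean slate before re-running kubeadm init, a typical follow-up looks like the sketch below; it is only needed if a CNI plugin or kubectl config had already been set up:

rm -rf /etc/cni/net.d                   # remove leftover CNI configuration
rm -f  $HOME/.kube/config               # remove the old kubectl config for this cluster
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # flush iptables rules
ipvsadm --clear                         # clear IPVS tables (only if kube-proxy ran in IPVS mode)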