I initialized the master node and joined the worker node to the cluster with kubeadm. According to the logs, the worker node joined the cluster successfully.
However, when I list the nodes on the master with kubectl get nodes, the worker node is missing. What is wrong?
[vagrant@localhost ~]$ kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   12m   v1.13.1
Here is the kubeadm log:
PLAY [Alusta kubernetes masterit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm reset -f",
    "delta": "0:00:01.078073",
    "end": "2019-01-05 07:06:59.079748",
    "rc": 0,
    "start": "2019-01-05 07:06:58.001675",
    "stderr": "",
    "stderr_lines": [],
    ...
}

TASK [kubeadm init] ************************************************************
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
    "delta": "0:01:05.163377",
    "end": "2019-01-05 07:08:06.229286",
    "rc": 0,
    "start": "2019-01-05 07:07:01.065909",
    "stderr": "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
    "stderr_lines": [
        "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
    ],
    "stdout": "[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane]
By default, a node (the kubelet) identifies itself by its hostname. It looks like the hostnames of your VMs are not set: in your log the master registered as localhost.localdomain, and a worker reporting the same default hostname cannot show up as a separate Node object.
In the Vagrantfile, set hostname to a different value for each VM, as sketched below: https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname
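For example, a minimal multi-machine Vagrantfile sketch. The box, the worker name k8s-n2, and its IP 10.0.0.102 are illustrative assumptions; only k8s-n1 and 10.0.0.101 appear in your log.

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"  # assumed box; use whatever you already run

  # Give every VM a unique hostname so each kubelet registers
  # under its own Node name instead of localhost.localdomain.
  config.vm.define "k8s-n1" do |master|
    master.vm.hostname = "k8s-n1"
    master.vm.network "private_network", ip: "10.0.0.101"
  end

  config.vm.define "k8s-n2" do |worker|  # hypothetical worker VM
    worker.vm.hostname = "k8s-n2"
    worker.vm.network "private_network", ip: "10.0.0.102"  # assumed IP
  end
end

After reloading or recreating the VMs and re-running kubeadm reset/init/join, each node should appear under its own name in kubectl get nodes. Alternatively, kubeadm init and kubeadm join accept a --node-name flag to override the name a node registers under.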