
Kubernetes not showing nodes

I initialized the master node and joined a worker node to the cluster with kubeadm. According to the logs, the worker node joined the cluster successfully.

However, when I list the nodes on the master with kubectl get nodes, the worker node is missing. What's wrong?

[vagrant@localhost ~]$ kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   12m   v1.13.1
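Only one node is registered, and its name, localhost.localdomain, is the stock hostname of a default Vagrant box. A minimal diagnostic sketch for this situation, assuming shell access to both VMs (the commands are standard kubectl/systemd tooling, not taken from the post):

# On the master: list registered nodes with their internal IPs
kubectl get nodes -o wide

# On the worker: check which name the kubelet registers under
# (by default the Node API object is named after the machine's hostname)
hostname

# On the worker: look for registration errors in the kubelet logs
sudo journalctl -u kubelet --no-pager | tail -n 50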
Here is the kubeadm log:

PLAY [Alusta kubernetes masterit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm reset -f",
    "delta": "0:00:01.078073",
    "end": "2019-01-05 07:06:59.079748",
    "rc": 0,
    "start": "2019-01-05 07:06:58.001675",
    "stderr": "",
    "stderr_lines": [],
    ...
}

TASK [kubeadm init] ************************************************************
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
    "delta": "0:01:05.163377",
    "end": "2019-01-05 07:08:06.229286",
    "rc": 0,
    "start": "2019-01-05 07:07:01.065909",
    "stderr": "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
    "stderr_lines": [
        "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
    ],
    "stdout": "[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] ...

k8s小能手 2019-01-09 13:43:45