Kubernetes not showing nodes

Asked 2019-01-09 13:43:45 · 2984 views · 1 answer

I initialized the master node and joined a worker node to the cluster with kubeadm. According to the logs, the worker node joined the cluster successfully.
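For context, the usual kubeadm init/join sequence looks roughly like this (a sketch: the token and hash are placeholders, and the init flags mirror the kubeadm log further down):

# On the master (same flags as in the kubeadm log below):
sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8

# Print a ready-made join command for the workers:
sudo kubeadm token create --print-join-command

# On each worker, run the printed command, for example:
sudo kubeadm join 10.0.0.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>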

However, when I list the nodes on the master with kubectl get nodes, the worker node is missing. What went wrong?

[vagrant@localhost ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready master 12m v1.13.1
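A few commands that can help narrow down where the registration fails (a troubleshooting sketch, not output from this cluster):

# On the master: list nodes with addresses and runtime details
kubectl get nodes -o wide

# On the worker: confirm the kubelet is running and inspect its recent logs
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50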
Here is the kubeadm log:

PLAY [Alusta kubernetes masterit] **

TASK [Gathering Facts] *
ok: [k8s-n1]

TASK [kubeadm reset] *
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm reset -f",
    "delta": "0:00:01.078073",
    "end": "2019-01-05 07:06:59.079748",
    "rc": 0,
    "start": "2019-01-05 07:06:58.001675",
    "stderr": "",
    "stderr_lines": [],
    ...
}

TASK [kubeadm init] **
changed: [k8s-n1] => {
    "changed": true,
    "cmd": "kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
    "delta": "0:01:05.163377",
    "end": "2019-01-05 07:08:06.229286",
    "rc": 0,
    "start": "2019-01-05 07:07:01.065909",
    "stderr": "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
    "stderr_lines": [
        "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
    ],
    "stdout": "
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.504023 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane]
