Deploying Kubernetes 1.8.5 with kubeadm

Introduction

Assuming you already have a working way to reach Google-hosted services (for example via a proxy), we will use kubeadm to deploy Kubernetes through an HTTP proxy.

Environment preparation (run on all nodes)

hostname     IP              role
k8s-master   172.16.100.50   master/etcd
k8s-node1    172.16.100.51   node
k8s-node2    172.16.100.52   node

Disable swap

Kubernetes 1.8 and later require swap to be disabled; otherwise the kubelet fails with the following error:
running with swap on is not supported. Please disable swap

# swapoff -a
# sed -i '/swap/d' /etc/fstab
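To confirm that swap is fully off, check /proc/swaps: after `swapoff -a` only the header line should remain. A minimal check (distro-independent on Linux):

```shell
# Count active swap entries in /proc/swaps (the first line is the header).
active_swaps=$(tail -n +2 /proc/swaps | wc -l)
echo "active swap devices: $active_swaps"
# After `swapoff -a` this should report 0.
```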

Configure an HTTP proxy

kubeadm init needs to reach Google-hosted sites; without a working proxy you will see an error like:
unable to get URL "https://dl.k8s.io/release/stable-1.8.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.8.txt: dial tcp 172.217.160.112:443: i/o timeout

# vi ~/.profile
export http_proxy="http://k8s-master:8118"
export https_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,172.16.100.50"

Note: if 172.16.100.50 is not added to no_proxy, kubeadm emits the warning [preflight] WARNING: Connection to "https://172.16.100.50:6443" uses proxy "http://172.16.100.50:8118". If that is not intended, adjust your proxy settings.
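Variables added to ~/.profile only take effect in new login shells. To apply them to the current session, source the file and confirm the variables are set:

```shell
# Load the proxy variables into the current shell (new login shells
# pick them up automatically) and confirm they are exported.
. ~/.profile
env | grep -i _proxy
```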

Configure an HTTP proxy for Docker

# mkdir /etc/systemd/system/docker.service.d/
# cd /etc/systemd/system/docker.service.d/
# vi http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://k8s-master:8118/"
Environment="HTTPS_PROXY=http://k8s-master:8118/"
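systemd only reads drop-in files when units are (re)loaded, so after writing http-proxy.conf, reload the daemon and restart Docker; otherwise the proxy settings are not picked up. Standard systemd steps:

```shell
# Reload unit files so the new drop-in is seen, then restart Docker.
systemctl daemon-reload
systemctl restart docker
# Verify the environment was applied to the service:
systemctl show --property=Environment docker
```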

Add the Kubernetes apt repository

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Install the required packages

# apt-get update
# apt-get install -y docker.io kubelet=1.8.5-00 kubeadm=1.8.5-00 kubectl=1.8.5-00
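Since the cluster is pinned to 1.8.5, it can help to hold these packages so a routine apt-get upgrade does not pull in a newer, incompatible version. apt-mark is the standard tool for this:

```shell
# Prevent apt from upgrading the version-pinned Kubernetes packages.
apt-mark hold kubelet kubeadm kubectl
# Later, to allow upgrades again:
# apt-mark unhold kubelet kubeadm kubectl
```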

Deploy the master node

Initialize with kubeadm

root@k8s-master:~# kubeadm init --apiserver-advertise-address 172.16.100.50 --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.100.50]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 615.502170 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 3d52f3.9899527f02a75122
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3d52f3.9899527f02a75122 172.16.100.50:6443 --discovery-token-ca-cert-hash sha256:c04f230146d11fd87932bb589b0a6ccc897bd15f99bda74f009a69919de5a205

The initialization process performs the following steps:

  1. [preflight]: kubeadm runs pre-flight checks before initializing.
  2. [certificates]: generates the bootstrap token and certificates.
  3. [kubeconfig]~[etcd]: writes the related configuration files.
  4. [init]~[bootstraptoken]: installs the master components, pulling their Docker images from Google's registry; this step can take a while, depending mainly on network quality.
  5. [addons]: installs the essential add-ons kube-dns and kube-proxy.
  6. The Kubernetes master has been initialized successfully.
  7. Shows how to configure kubectl (using a regular user is recommended).
  8. Shows how to install a pod network (see http://kubernetes.io/docs/admin/addons/).
  9. Shows how to join other nodes to the cluster (record this command).
  • Managing Kubernetes as a regular user:
    vnimos@k8s-master:~$ mkdir -p $HOME/.kube
    vnimos@k8s-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    vnimos@k8s-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the Kubernetes status

Since no pod network has been deployed yet, kube-dns remains in the Pending state.
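To see why a pod is stuck in Pending, kubectl describe shows the scheduler's events (the k8s-app label selector below is the one the standard kube-dns deployment uses):

```shell
# Inspect scheduling events for the pending kube-dns pod.
kubectl -n kube-system describe pod -l k8s-app=kube-dns | tail -n 20
```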

vnimos@k8s-master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2017-12-21 15:33:24 CST; 14min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 7941 (kubelet)
    Tasks: 16
   Memory: 42.7M
      CPU: 15.932s
   CGroup: /system.slice/kubelet.service
           └─7941 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests -
vnimos@k8s-master:~$ sudo docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-apiserver-amd64            v1.8.5              ff90510bd7a8        13 days ago         194 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.8.5              b3710be972a6        13 days ago         129 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.8.5              b7977f445d3b        13 days ago         55 MB
gcr.io/google_containers/etcd-amd64                      3.0.17              243830dae7dd        10 months ago       169 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        19 months ago       747 kB
vnimos@k8s-master:~$ kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    2m        v1.8.5
vnimos@k8s-master:~$ kubectl get pod -n kube-system -o wide
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-master                      1/1       Running   0          3s        172.16.100.50   k8s-master
kube-apiserver-k8s-master            1/1       Running   0          3s        172.16.100.50   k8s-master
kube-controller-manager-k8s-master   1/1       Running   0          3s        172.16.100.50   k8s-master
kube-dns-545bc4bfd4-d299p            0/3       Pending   0          19m       <none>          <none>
kube-proxy-9bnnx                     1/1       Running   0          19m       172.16.100.50   k8s-master
kube-scheduler-k8s-master            1/1       Running   0          3s        172.16.100.50   k8s-master

Deploy the pod network (Flannel)

vnimos@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
vnimos@k8s-master:~$ kubectl get pod -n kube-system -o wide
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-master                      1/1       Running   0          1m        172.16.100.50   k8s-master
kube-apiserver-k8s-master            1/1       Running   0          1m        172.16.100.50   k8s-master
kube-controller-manager-k8s-master   1/1       Running   0          1m        172.16.100.50   k8s-master
kube-dns-545bc4bfd4-d299p            3/3       Running   0          31m       10.244.0.2      k8s-master
kube-flannel-ds-fw56r                1/1       Running   0          2m        172.16.100.50   k8s-master
kube-proxy-9bnnx                     1/1       Running   0          31m       172.16.100.50   k8s-master
kube-scheduler-k8s-master            1/1       Running   0          1m        172.16.100.50   k8s-master
vnimos@k8s-master:~$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    31m       v1.8.5

Deploy the worker nodes

Join the Kubernetes cluster

If you forgot to record the token after deploying the master, you can retrieve it with kubeadm token list.
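The --discovery-token-ca-cert-hash value can likewise be recomputed at any time from the master's CA certificate. A sketch, wrapped in a small function so the certificate path (the kubeadm default is /etc/kubernetes/pki/ca.crt on the master) stays explicit:

```shell
# Print the sha256 hash of a CA certificate's public key, in the format
# expected by kubeadm join's --discovery-token-ca-cert-hash sha256:<hash>.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.* //'
}

# On the master:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```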

root@k8s-node1:~# kubeadm join --token 3d52f3.9899527f02a75122 172.16.100.50:6443 --discovery-token-ca-cert-hash sha256:c04f230146d11fd87932bb589b0a6ccc897bd15f99bda74f009a69919de5a205
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.16.100.50:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.100.50:6443"
[discovery] Requesting info from "https://172.16.100.50:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.100.50:6443"
[discovery] Successfully established connection with API Server "172.16.100.50:6443"
[bootstrap] Detected server version: v1.8.5
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Check the Kubernetes status

vnimos@k8s-master:~$ kubectl get pod -o wide -n kube-system
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-master                      1/1       Running   0          18m       172.16.100.50   k8s-master
kube-apiserver-k8s-master            1/1       Running   0          17m       172.16.100.50   k8s-master
kube-controller-manager-k8s-master   1/1       Running   0          18m       172.16.100.50   k8s-master
kube-dns-545bc4bfd4-frlb5            3/3       Running   0          17m       10.244.0.2      k8s-master
kube-flannel-ds-68xvq                1/1       Running   0          16m       172.16.100.50   k8s-master
kube-flannel-ds-hp5ck                1/1       Running   0          15m       172.16.100.51   k8s-node1
kube-flannel-ds-j67hh                1/1       Running   3          4m        172.16.100.52   k8s-node2
kube-proxy-lck5q                     1/1       Running   0          4m        172.16.100.52   k8s-node2
kube-proxy-rtrxh                     1/1       Running   0          17m       172.16.100.50   k8s-master
kube-proxy-trlt7                     1/1       Running   0          15m       172.16.100.51   k8s-node1
kube-scheduler-k8s-master            1/1       Running   0          18m       172.16.100.50   k8s-master
vnimos@k8s-master:~$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    18m       v1.8.5
k8s-node1    Ready     <none>    15m       v1.8.5
k8s-node2    Ready     <none>    4m        v1.8.5




This article was reposted from Vnimos's 51CTO blog. Original link: http://blog.51cto.com/vnimos/2053217. Please contact the original author for reprint permission.
