Cloud Native | Kubernetes | A Complete Guide to Deploying and Installing minikube (Revised Edition)


Preface:

To learn a new platform, you first need a working instance of it, and deploying Kubernetes undeniably raises that bar: whether you install from binaries or with kubeadm, a fair amount of ops skill is required, and a learning setup also needs quite a lot of hardware, at least three servers to complete the deployment.

Tools such as kind and minikube can stand up a learning platform quickly. They are simple and easy to use, a single server is enough (though that one machine needs more memory; at least 8 GB is recommended), the automation level is high (almost everything is configured for you), and they support the common virtualization engines such as Docker, containerd, and KVM. The downside is that there is essentially no room for customization.

Virtualization engines supported by minikube: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, and VMware Fusion/Workstation.

Most of the material in this tutorial is lifted from the official docs: Welcome! | minikube (https://minikube.sigs.k8s.io/docs/)




Related installation and deployment files (extract conntrack.tar.gz and install the dependency RPMs with rpm -ivh *; minikube-images.tar.gz is the image bundle, which you extract and load into Docker; the three executables go into /root/.minikube/cache/linux/amd64/v1.18.8/):

Link: https://pan.baidu.com/s/14-r59VfpZRpfiVGj4IadxA?pwd=k8ss
Extraction code: k8ss
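
Putting those three steps together, a minimal offline-preparation sketch (it assumes the two archives and the three binaries sit in the current directory; adjust paths as needed):

tar zxf conntrack.tar.gz && cd conntrack && rpm -ivh * && cd ..   # install the dependency RPMs
tar zxf minikube-images.tar.gz                                    # unpack the image bundle
for i in minikube-images/*; do docker load < "$i"; done           # load every image into Docker
mkdir -p /root/.minikube/cache/linux/amd64/v1.18.8/               # pre-seed the binary cache
cp kubeadm kubectl kubelet /root/.minikube/cache/linux/amd64/v1.18.8/
chmod a+x /root/.minikube/cache/linux/amd64/v1.18.8/kube*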

 

Part 1: Getting started with minikube

Prerequisites

At least 2 CPUs, 2 GB of free memory, 20 GB of free disk space, internet access, and one virtualization engine from among Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation. Docker is the easiest to install, so that is what we will use here; the operating system is CentOS. Quoting the official docs:

What you’ll need
2 CPUs or more
2GB of free memory
20GB of free disk space
Internet connection
Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation

For installing Docker offline, see the post "docker的离线安装以及本地化配置" (zsk_john's blog on CSDN); follow it and make sure the Docker environment is properly installed before continuing.

The Docker version must be between 18.09 and 20.10.
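
A quick way to confirm the daemon version before moving on:

docker version --format '{{.Server.Version}}'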

Part 2: Installing minikube

Download the minikube executable:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
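
A quick sanity check that the binary landed on PATH:

minikube version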

Part 3: Importing the images

When minikube installs Kubernetes, it pulls its images from registries outside China, which cannot be reached from the mainland; hence this offline image bundle.

[root@slave3 ~]# tar zxf minikube-images.tar.gz 
[root@slave3 ~]# cd minikube-images
[root@slave3 minikube-images]# for i in `ls ./*`;do docker load <$i;done
dfccba63d0cc: Loading layer [==================================================>]  80.82MB/80.82MB
Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB
...(output truncated)

Part 4: Initializing the Kubernetes cluster

A quick rundown: the config set line pins the driver to 'none'; --image-repository makes minikube pull its images from the Aliyun registry; --cni=flannel selects flannel as the network plugin (delete that line if you would rather not use it). Nothing else needs special attention. The command to initialize the cluster:

minikube config set driver none
minikube start --extra-config=kubeadm.pod-network-cidr='10.244.0.0/16' \
    --extra-config=kubelet.pod-cidr=10.244.0.0/16 \
    --network-plugin=cni \
    --image-repository='registry.aliyuncs.com/google_containers' \
    --cni=flannel \
    --apiserver-ips=192.168.217.23 \
    --kubernetes-version=1.18.8 \
    --vm-driver=none

The log from starting the cluster:

[root@slave3 conntrack]# minikube start --driver=none --kubernetes-version=1.18.8
* minikube v1.26.1 on Centos 7.4.1708
* Using the none driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=4, Memory=7983MB, Disk=51175MB) ...
* OS release is CentOS Linux 7 (Core)
E0911 11:23:25.121495   14039 docker.go:148] "Failed to enable" err=<
  sudo systemctl enable docker.socket: exit status 1
  stdout:
  stderr:
  Failed to execute operation: No such file or directory
 > service="docker.socket"
! This bare metal machine is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
    > kubectl.sha256:  65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet:  108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s                                                                                                                             
  - Generating certificates and keys ...
  - Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
  Unfortunately, an error has occurred:
    timed out waiting for the condition
  This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
  If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'
  Additionally, a control plane component may have crashed or exited when started by the container runtime.
  To troubleshoot, list all containers using your preferred container runtimes CLI.
  Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
stderr:
W0911 11:26:38.783101   14450 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING Swap]: running with swap on is not supported. Please disable swap
  [WARNING FileExisting-socat]: socat not found in system path
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0911 11:26:48.464749   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0911 11:26:48.466754   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring local host environment ...
* 
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* 
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
* 
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
* 
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Stopping and deleting minikube:

Stopping the cluster is as simple as it gets:

minikube stop
The output is as follows:
* Stopping "minikube" in none ...
* Node "minikube" stopped.

If the server is rebooted, just swap the argument to start and minikube comes back up. Deleting minikube is equally simple: swap the argument to delete. Deletion also removes the configuration files and the like, provided minikube created them itself; files it did not create are left alone.
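
The lifecycle at a glance (commands only, output omitted):

minikube stop      # stop the cluster but keep its state
minikube start     # after a reboot, restart using the existing profile
minikube delete    # tear the cluster down and remove minikube-created files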

Output of start:

[root@node3 manifests]# minikube start
* minikube v1.12.0 on Centos 7.4.1708
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.18.8 on Docker 19.03.9 ...
* Configuring local host environment ...
* 
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* 
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
* 
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
* 
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"

 

The output above shows that the single-node Kubernetes cluster was installed successfully, but a few warnings still need handling:

(1)

Caching the kubeadm, kubelet, and kubectl binaries

    > kubectl.sha256:  65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet:  108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s   

These binaries are downloaded into /root/.minikube/cache/linux/amd64/v1.18.8/, so to speed things up and make the deployment fully offline, do the following:

Create the directory:

mkdir -p /root/.minikube/cache/linux/amd64/v1.18.8/

Make the files executable and copy them into that directory:

chmod a+x kube*   # make the binaries executable
cp kubeadm kubectl kubelet /root/.minikube/cache/linux/amd64/v1.18.8/
[root@node3 v1.18.8]# pwd
/root/.minikube/cache/linux/amd64/v1.18.8
[root@slave3 v1.18.8]# ll
total 192544
-rwxr-xr-x 1 root root  39821312 Sep 11 11:24 kubeadm
-rwxr-xr-x 1 root root  44040192 Sep 11 11:24 kubectl
-rwxr-xr-x 1 root root 113300248 Sep 11 11:26 kubelet

(2)

Fixing the cluster health-check errors

[root@slave3 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                               

Solution:

Remove the --port=0 flag from both /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml.
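
A one-liner sketch of that edit (it assumes the default static-pod manifest layout; the kubelet notices the change and restarts both pods on its own):

sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml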

Wait a moment, query again, and everything is healthy:

[root@slave3 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

Part 5: Installing the dashboard

[root@slave3 ~]# minikube dashboard
* Enabling dashboard ...
  - Using image kubernetesui/metrics-scraper:v1.0.8
  - Using image kubernetesui/dashboard:v2.6.0
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

Set up a proxy:

[root@slave3 v1.18.8]# kubectl proxy --port=45396 --address='0.0.0.0' --disable-filter=true --accept-hosts='^.*' 
W0911 12:49:38.664081    8709 proxy.go:167] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:45396

URL to open in the browser

The host IP here is 192.168.217.11; splice it and the proxy port together with the path from http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ above to get:

http://192.168.217.11:45396/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

At this point, the minikube installation is complete.

Appendix:

About addons

A StorageClass is installed, but many addons are not yet:

[root@slave3 v1.18.8]# minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| auto-pause                  | minikube | disabled     | Google                         |
| csi-hostpath-driver         | minikube | disabled     | Kubernetes                     |
| dashboard                   | minikube | enabled ✅   | Kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | Kubernetes                     |
| efk                         | minikube | disabled     | 3rd party (Elastic)            |
| freshpod                    | minikube | disabled     | Google                         |
| gcp-auth                    | minikube | disabled     | Google                         |
| gvisor                      | minikube | disabled     | Google                         |
| headlamp                    | minikube | disabled     | 3rd party (kinvolk.io)         |
| helm-tiller                 | minikube | disabled     | 3rd party (Helm)               |
| inaccel                     | minikube | disabled     | 3rd party (InAccel             |
|                             |          |              | [info@inaccel.com])            |
| ingress                     | minikube | disabled     | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | Google                         |
| istio                       | minikube | disabled     | 3rd party (Istio)              |
| istio-provisioner           | minikube | disabled     | 3rd party (Istio)              |
| kong                        | minikube | disabled     | 3rd party (Kong HQ)            |
| kubevirt                    | minikube | disabled     | 3rd party (KubeVirt)           |
| logviewer                   | minikube | disabled     | 3rd party (unknown)            |
| metallb                     | minikube | disabled     | 3rd party (MetalLB)            |
| metrics-server              | minikube | disabled     | Kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | Google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | 3rd party (Nvidia)             |
| olm                         | minikube | disabled     | 3rd party (Operator Framework) |
| pod-security-policy         | minikube | disabled     | 3rd party (unknown)            |
| portainer                   | minikube | disabled     | 3rd party (Portainer.io)       |
| registry                    | minikube | disabled     | Google                         |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | minikube | enabled ✅   | Google                         |
| storage-provisioner-gluster | minikube | disabled     | 3rd party (Gluster)            |
| volumesnapshots             | minikube | disabled     | Kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

Take installing ingress as an example (printing the error log while installing):

[root@slave3 v1.18.8]# minikube addons enable ingress --alsologtostderr
I0911 13:09:08.559523   14428 out.go:296] Setting OutFile to fd 1 ...
I0911 13:09:08.572541   14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0911 13:09:08.572593   14428 out.go:309] Setting ErrFile to fd 2...
I0911 13:09:08.572609   14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0911 13:09:08.572908   14428 root.go:333] Updating PATH: /root/.minikube/bin
I0911 13:09:08.577988   14428 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
I0911 13:09:08.580137   14428 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.18.8
I0911 13:09:08.580198   14428 addons.go:65] Setting ingress=true in profile "minikube"
I0911 13:09:08.580243   14428 addons.go:153] Setting addon ingress=true in "minikube"
I0911 13:09:08.580572   14428 host.go:66] Checking if "minikube" exists ...
I0911 13:09:08.581080   14428 exec_runner.go:51] Run: systemctl --version
I0911 13:09:08.584877   14428 kubeconfig.go:92] found "minikube" server: "https://192.168.217.136:8443"
I0911 13:09:08.584942   14428 api_server.go:165] Checking apiserver status ...
I0911 13:09:08.584982   14428 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0911 13:09:08.611630   14428 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15576/cgroup
I0911 13:09:08.626851   14428 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124"
I0911 13:09:08.626952   14428 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124/freezer.state
I0911 13:09:08.638188   14428 api_server.go:203] freezer state: "THAWED"
I0911 13:09:08.638329   14428 api_server.go:240] Checking apiserver healthz at https://192.168.217.136:8443/healthz ...
I0911 13:09:08.649018   14428 api_server.go:266] https://192.168.217.136:8443/healthz returned 200:
ok
I0911 13:09:08.650082   14428 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
I0911 13:09:08.652268   14428 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0911 13:09:08.653129   14428 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0911 13:09:08.654440   14428 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:08.654528   14428 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
I0911 13:09:08.654720   14428 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4099945938 /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:08.668351   14428 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:09.748481   14428 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.080019138s)
I0911 13:09:09.748552   14428 addons.go:383] Verifying addon ingress=true in "minikube"
I0911 13:09:09.751805   14428 out.go:177] * Verifying ingress addon...

As the log shows, the manifest used for the installation is:

sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml

The file is very long, and because it references images hosted on registries outside China, the installation generally will not succeed as-is.

The solution is to find the images it references and replace them with mirrors that can be pulled from inside China.
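
A hedged sketch of that substitution (the Aliyun mirror path below is illustrative; verify the repository and tag actually exist before applying. Also drop the pinned @sha256 digests, since digests differ between registries and digest-pinned references will not match images loaded locally with docker load):

sed -i 's#k8s.gcr.io/ingress-nginx/controller:v0.49.3@sha256:[0-9a-f]*#registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v0.49.3#' /etc/kubernetes/addons/ingress-deploy.yaml
sed -i 's#\(docker.io/jettech/kube-webhook-certgen:v1.5.1\)@sha256:[0-9a-f]*#\1#' /etc/kubernetes/addons/ingress-deploy.yaml
kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml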

There is also a permission problem that may show up in the controller's log:

F0911 05:24:52.171825       6 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied

The solution:

Edit the same file again and change the value of runAsUser to 33.

Then re-apply the file:

kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
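
For reference, the fragment of the controller Deployment after that edit (note the full manifest reproduced below still shows the original runAsUser: 101):

        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33   # changed from 101 to work around the PEM permission error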

[root@slave3 v1.18.8]# cat /etc/kubernetes/addons/ingress-deploy.yaml
# Copyright 2021 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ref: https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/kind/deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  # see https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md for all possible options and their description
  hsts: "false"
  # see https://github.com/kubernetes/minikube/pull/12702#discussion_r727519180: 'allow-snippet-annotations' should be used only if strictly required by another part of the deployment
#  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: k8s.gcr.io/ingress-nginx/controller:v0.49.3@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        minikube.k8s.io/primary: "true"
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Equal
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        minikube.k8s.io/primary: "true"
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        minikube.k8s.io/primary: "true"
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1beta1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

Once the installation finishes, you can see:

[root@slave3 v1.18.8]# kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-n5hc5        0/1     Completed   0          28m
pod/ingress-nginx-admission-patch-cgzl9         0/1     Completed   0          28m
pod/ingress-nginx-controller-54b856d6d7-7fr7q   1/1     Running     0          9m54s
NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.107.186.74   <none>        80:31411/TCP,443:32683/TCP   28m
service/ingress-nginx-controller-admission   ClusterIP   10.106.184.40   <none>        443/TCP                      28m
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           28m
NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-54b856d6d7   1         1         1       9m54s
replicaset.apps/ingress-nginx-controller-7689b8b4f9   0         0         0       17m
replicaset.apps/ingress-nginx-controller-77cc874b76   0         0         0       28m
NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           21s        28m
job.batch/ingress-nginx-admission-patch    1/1           22s        28m
[root@slave3 v1.18.8]# 

And with that, the ingress addon is installed.
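
For an end-to-end smoke test of the controller, a hedged sketch (the deployment name and host below are made up for illustration; the node IP comes from the logs above):

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1beta1   # the Ingress API version served by Kubernetes 1.18
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.local
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
EOF
# The controller binds hostPort 80 on the node, so point the Host header at the node IP:
curl -H 'Host: web.example.local' http://192.168.217.136/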
