Deploying Kubernetes v1.23.8 with Minikube v1.25.2 on CentOS 7.9 (Part 2)


8. Common operations

8.1 SSH into the cluster node

minikube ssh
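minikube ssh also accepts a one-off command, so you can inspect the node without opening an interactive shell. For example (the docker command assumes the default Docker runtime):

minikube ssh -- docker ps
minikube ssh -- free -m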

8.2 Stop the cluster

minikube stop

8.3 Start the cluster

minikube start

8.4 Delete the cluster

minikube delete
minikube delete --all
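--all deletes the clusters of every profile. minikube also supports a --purge flag that additionally wipes the local state directory (~/.minikube):

minikube delete --all --purge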

8.5 Pause the cluster

Pause Kubernetes without affecting deployed applications:

minikube pause

8.6 Unpause the cluster

minikube unpause

8.7 Change the default memory limit

minikube config set memory 9001
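Config values apply when a cluster is created, so the new memory limit takes effect on the next minikube delete / minikube start cycle. You can verify the stored value with:

minikube config get memory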

9. Deploying Ingress

Enable the ingress addon:

$ minikube addons enable ingress
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled
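You can confirm the addon's status at any time:

minikube addons list | grep ingress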

Check the pods:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-6hf47        0/1     Completed   0          91s
ingress-nginx-admission-patch-5dpqz         0/1     Completed   0          91s
ingress-nginx-controller-6cfb67d797-gqj98   1/1     Running     0          91s

10. Managing the dashboard

Dashboard is a web-based Kubernetes user interface. You can use it to:

Deploy containerized applications to a Kubernetes cluster

Troubleshoot your containerized applications

Manage cluster resources

Get an overview of the applications running on your cluster

Create or modify individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, and so on)

For example, you can scale a Deployment, initiate a rolling update, restart a pod, or deploy a new application using a deploy wizard.

10.1 Launch the dashboard

minikube dashboard

Output:

🔌  Enabling dashboard ...
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
💡  Some dashboard features require the metrics-server addon. To enable all features please run:
        minikube addons enable metrics-server
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:43995/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

Of course, we can specify a port of our choosing:

$ minikube dashboard --port 8081
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:8081/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

This enables the dashboard addon and opens a proxy in the default web browser.

To stop the proxy (the dashboard itself keeps running), abort the launched process (Ctrl+C).

Check that the dashboard started properly:

$ kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-57d8d5b8b8-zhtjq   1/1     Running   0          126m
pod/kubernetes-dashboard-6f75b5c656-dxr87        1/1     Running   0          126m
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.97.77.170     <none>        8000/TCP   126m
service/kubernetes-dashboard        ClusterIP   10.101.172.254   <none>        80/TCP     126m
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           126m
deployment.apps/kubernetes-dashboard        1/1     1            1           126m
NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-57d8d5b8b8   1         1         1       126m
replicaset.apps/kubernetes-dashboard-6f75b5c656        1         1         1       126m

Get the dashboard URL:

$ minikube dashboard --url
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:43995/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

10.2 Accessing the API

The proxy also forwards the Kubernetes API; list the available API paths:

$ curl http://127.0.0.1:43995/
{
  "paths": [
    "/.well-known/openid-configuration",
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2",
    "/apis/autoscaling/v2beta1",
    "/apis/autoscaling/v2beta2",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1",
    "/apis/coordination.k8s.io",
    "/apis/coordination.k8s.io/v1",
    "/apis/discovery.k8s.io",
    "/apis/discovery.k8s.io/v1",
    "/apis/discovery.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1",
    "/apis/events.k8s.io/v1beta1",
    "/apis/flowcontrol.apiserver.k8s.io",
    "/apis/flowcontrol.apiserver.k8s.io/v1beta1",
    "/apis/flowcontrol.apiserver.k8s.io/v1beta2",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/node.k8s.io",
    "/apis/node.k8s.io/v1",
    "/apis/node.k8s.io/v1beta1",
    "/apis/policy",
    "/apis/policy/v1",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/aggregator-reload-proxy-client-cert",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/crd-informer-synced",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/priority-and-fairness-config-consumer",
    "/healthz/poststarthook/priority-and-fairness-config-producer",
    "/healthz/poststarthook/priority-and-fairness-filter",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-cluster-authentication-info-controller",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/livez",
    "/livez/autoregister-completion",
    "/livez/etcd",
    "/livez/log",
    "/livez/ping",
    "/livez/poststarthook/aggregator-reload-proxy-client-cert",
    "/livez/poststarthook/apiservice-openapi-controller",
    "/livez/poststarthook/apiservice-registration-controller",
    "/livez/poststarthook/apiservice-status-available-controller",
    "/livez/poststarthook/bootstrap-controller",
    "/livez/poststarthook/crd-informer-synced",
    "/livez/poststarthook/generic-apiserver-start-informers",
    "/livez/poststarthook/kube-apiserver-autoregistration",
    "/livez/poststarthook/priority-and-fairness-config-consumer",
    "/livez/poststarthook/priority-and-fairness-config-producer",
    "/livez/poststarthook/priority-and-fairness-filter",
    "/livez/poststarthook/rbac/bootstrap-roles",
    "/livez/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/livez/poststarthook/start-apiextensions-controllers",
    "/livez/poststarthook/start-apiextensions-informers",
    "/livez/poststarthook/start-cluster-authentication-info-controller",
    "/livez/poststarthook/start-kube-aggregator-informers",
    "/livez/poststarthook/start-kube-apiserver-admission-initializer",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/openid/v1/jwks",
    "/readyz",
    "/readyz/autoregister-completion",
    "/readyz/etcd",
    "/readyz/informer-sync",
    "/readyz/log",
    "/readyz/ping",
    "/readyz/poststarthook/aggregator-reload-proxy-client-cert",
    "/readyz/poststarthook/apiservice-openapi-controller",
    "/readyz/poststarthook/apiservice-registration-controller",
    "/readyz/poststarthook/apiservice-status-available-controller",
    "/readyz/poststarthook/bootstrap-controller",
    "/readyz/poststarthook/crd-informer-synced",
    "/readyz/poststarthook/generic-apiserver-start-informers",
    "/readyz/poststarthook/kube-apiserver-autoregistration",
    "/readyz/poststarthook/priority-and-fairness-config-consumer",
    "/readyz/poststarthook/priority-and-fairness-config-producer",
    "/readyz/poststarthook/priority-and-fairness-filter",
    "/readyz/poststarthook/rbac/bootstrap-roles",
    "/readyz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/readyz/poststarthook/start-apiextensions-controllers",
    "/readyz/poststarthook/start-apiextensions-informers",
    "/readyz/poststarthook/start-cluster-authentication-info-controller",
    "/readyz/poststarthook/start-kube-aggregator-informers",
    "/readyz/poststarthook/start-kube-apiserver-admission-initializer",
    "/readyz/shutdown",
    "/version"
  ]
}

For example, check whether the cluster is healthy (using the same proxy port as above):

$ curl http://127.0.0.1:43995/healthz
ok
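The /readyz and /livez endpoints listed above work the same way, and a verbose query breaks the result down by individual check:

curl 'http://127.0.0.1:43995/readyz?verbose'
curl http://127.0.0.1:43995/version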

10.3 Domain access

Next, I will abort the running proxy process (Ctrl+C) and switch to accessing kubernetes-dashboard through a domain name. Since the ingress controller is already deployed, all that is needed is an Ingress YAML file.

  • dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: kubernetes-dashboard
              port:
                number: 80

Create the dashboard-ingress (k below is an alias for kubectl):

$ k apply -f dashboard-ingress.yaml
ingress.networking.k8s.io/dashboard-ingress created

Note: the ADDRESS column stays empty for a while before the hostname is bound to the host address.

$ k get -n kubernetes-dashboard ingress
NAME                CLASS   HOSTS           ADDRESS   PORTS   AGE
dashboard-ingress   nginx   dashboard.com             80      23s

Once it shows up:

$ k get -n kubernetes-dashboard ingress --watch
NAME                CLASS   HOSTS           ADDRESS         PORTS   AGE
dashboard-ingress   nginx   dashboard.com   192.168.10.25   80      11m

Add this mapping to /etc/hosts on the host machine:

cat <<EOF >> /etc/hosts
192.168.10.25   dashboard.com
EOF

Windows: add 192.168.10.25 dashboard.com to C:\Windows\System32\drivers\etc\hosts.
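To sanity-check the ingress before (or instead of) editing hosts files, you can pass the Host header explicitly; this separates DNS problems from ingress problems:

curl -H 'Host: dashboard.com' http://192.168.10.25/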

Visit dashboard.com in a browser; the dashboard UI should load.


11. Deploying applications

11.1 Create a NodePort-type service

kubectl create deployment hello-minikube --image=docker.io/nginx:1.23
kubectl expose deployment hello-minikube --type=NodePort --port=80
$ kubectl get services hello-minikube
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-minikube   NodePort   10.96.236.93   <none>        80:30578/TCP   60m
$  minikube service hello-minikube
|-----------|----------------|-------------|----------------------------|
| NAMESPACE |      NAME      | TARGET PORT |            URL             |
|-----------|----------------|-------------|----------------------------|
| default   | hello-minikube |          80 | http://192.168.10.25:30578 |
|-----------|----------------|-------------|----------------------------|
🎉  Opening service default/hello-minikube in default browser...
👉  http://192.168.10.25:30578

Open the URL in a browser; the nginx welcome page should appear.


Query the URL:

$ minikube service hello-minikube --url
http://192.168.10.25:30578

Alternatively, use kubectl to forward a port:

$ kubectl port-forward service/hello-minikube 7080:80
Forwarding from 127.0.0.1:7080 -> 80
Forwarding from [::1]:7080 -> 80

Open a new terminal:

$ curl 127.0.0.1:7080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

11.2 Create a LoadBalancer-type service

To make an application reachable from outside the cluster, expose the deployment with a LoadBalancer-type service:

kubectl create deployment hello-minikube1 --image=docker.io/nginx:1.23
kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080

Check the service:

$ k get svc hello-minikube1
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube1   LoadBalancer   10.101.92.170   <pending>     8080:31412/TCP   113s

The EXTERNAL-IP is stuck at pending; so how do we get an external IP?

minikube tunnel runs as a process and creates a network route on the host to the cluster's service CIDR, using the cluster's IP address as a gateway. The tunnel command exposes the external IP directly to any program running on the host operating system.

$ minikube tunnel
Status:
        machine: minikube
        pid: 15915
        route: 10.96.0.0/12 -> 192.168.10.25
        minikube: Running
        services: [hello-minikube1]
    errors:
                minikube: no errors
                router: no errors
                loadbalancer emulator: no errors

Open a new terminal:

$ kubectl get svc hello-minikube1
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-minikube1   LoadBalancer   10.101.92.170   10.101.92.170   8080:31412/TCP   5m48s
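The external IP stays reachable only while minikube tunnel keeps running. The route it adds can be inspected on the host, matching the route: 10.96.0.0/12 -> 192.168.10.25 line printed above:

ip route | grep 10.96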

Open it in a browser (make sure no proxy is configured):

Visit: http://REPLACE_WITH_EXTERNAL_IP:8080


Although we obtained an EXTERNAL-IP, the access test did not get through; something in this setup is off.
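A plausible explanation (my assumption; the original leaves the cause open): kubectl expose sets the target port equal to --port when --target-port is omitted, so this service forwards to container port 8080 while the nginx container listens on 80. Recreating the service with an explicit target port should make the tunnel IP respond (the external IP may differ after recreating):

kubectl delete svc hello-minikube1
kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080 --target-port=80
curl http://10.101.92.170:8080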

Related discussions:

  • Minikube - External IP not match host’s public IP
  • Unable to access application through minikube tunnel

11.3 TLS domain access

Create a certificate:

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
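Run as written, openssl prompts for the subject fields interactively, and the resulting certificate will not necessarily name the host used below. A non-interactive variant that pins the CN and SAN to minikube.nginx.com (note that -addext requires OpenSSL 1.1.1+, newer than the 1.0.2 that stock CentOS 7 ships, so run it where a newer openssl is available):

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes \
  -subj "/CN=minikube.nginx.com" -addext "subjectAltName=DNS:minikube.nginx.com"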

Generate a secret from the certificate:

kubectl -n default create secret tls mkcert --key key.pem --cert cert.pem

Create the app:

kubectl create deployment hello-minikube1 --image=docker.io/nginx:1.23
kubectl expose deployment hello-minikube1  --port=80

Check the service:

$ k get svc hello-minikube1
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
hello-minikube1   ClusterIP   10.99.155.128   <none>        80/TCP    8m10s

Write the tls-ingress-nginx file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress-hello
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
      - minikube.nginx.com
    secretName: mkcert
  rules:
  - host: minikube.nginx.com
    http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-minikube1
            port:
              number: 80
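
Apply the manifest (assuming it was saved as tls-ingress-nginx.yaml, matching the heading above):

kubectl apply -f tls-ingress-nginx.yaml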

Watch the ingress until the domain picks up an address:

$ k get ingress --watch
NAME                   CLASS   HOSTS                ADDRESS         PORTS     AGE
secure-ingress-hello   nginx   minikube.nginx.com   192.168.10.25   80, 443   9m43s
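As with dashboard.com earlier, the hostname has to resolve to the node address, so add a hosts entry first; and because the certificate is self-signed, curl needs -k to skip verification:

cat <<EOF >> /etc/hosts
192.168.10.25   minikube.nginx.com
EOF
curl -k https://minikube.nginx.com/hello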

Visit https://minikube.nginx.com/hello in a browser; the nginx welcome page should be served over TLS.

