How to Deploy a Highly Available Kubernetes Cluster in Production with kubeadm (Part 2)


Part 1 of this series: https://developer.aliyun.com/article/1495644

(Tail of the control-plane join output from Part 1:) Run 'kubectl get nodes' to see this node join the cluster.

Joining the worker nodes:

kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac
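If the CA cert hash is ever lost, it can be recomputed on any master from the cluster CA. This is the standard openssl recipe from the kubeadm documentation; /etc/kubernetes/pki/ca.crt is the kubeadm default path:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'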

[root@node01 ~ ]# kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

Certificate signing request was sent to apiserver and a response was received.

The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster from a master node:

[root@master02 ~ ]# kubectl get nodes

NAME       STATUS     ROLES                  AGE     VERSION
master01   NotReady   control-plane,master   60m     v1.21.14
master02   NotReady   control-plane,master   25m     v1.21.14
master03   NotReady   control-plane,master   7m40s   v1.21.14
node01     NotReady   <none>                 2m49s   v1.21.14
node02     NotReady   <none>                 33s     v1.21.14

Naming the worker-node role:

# node01 and node02 show an empty ROLES column (<none>), which marks them as worker nodes.

# You can set the ROLES of node01 and node02 to worker as follows:

[root@master01 ~ ]# kubectl label node node01 node-role.kubernetes.io/worker=true

node/node01 labeled

[root@master01 ~ ]# kubectl label node node02 node-role.kubernetes.io/worker=true

node/node02 labeled

[root@master01 ~ ]# kubectl get nodes

NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h40m   v1.21.14
node01     Ready    worker                 5h31m   v1.21.14
node02     Ready    worker                 159m    v1.21.14

[root@master01 ~ ]# kubectl get pod -n kube-system

NAME                               READY   STATUS    RESTARTS   AGE
coredns-59d64cd4d4-2b6tt           0/1     Pending   0          92m
coredns-59d64cd4d4-7jlws           0/1     Pending   0          92m
etcd-master01                      1/1     Running   0          92m
etcd-master02                      1/1     Running   0          57m
etcd-master03                      1/1     Running   0          38m
kube-apiserver-master01            1/1     Running   0          92m
kube-apiserver-master02            1/1     Running   0          57m
kube-apiserver-master03            1/1     Running   0          38m
kube-controller-manager-master01   1/1     Running   1          92m
kube-controller-manager-master02   1/1     Running   0          57m
kube-controller-manager-master03   1/1     Running   0          38m
kube-proxy-69dmp                   1/1     Running   0          92m
kube-proxy-pswl9                   1/1     Running   0          57m
kube-proxy-t567z                   1/1     Running   0          34m
kube-proxy-wgh9t                   1/1     Running   0          38m
kube-proxy-z4mk9                   1/1     Running   0          31m
kube-scheduler-master01            1/1     Running   1          92m
kube-scheduler-master02            1/1     Running   0          57m
kube-scheduler-master03            1/1     Running   0          38m

The coredns Pods stay Pending and the nodes are NotReady; check the kubelet logs:

tail -fn300 /var/log/messages

Jul 8 15:55:25 master03 kubelet: I0708 15:55:25.680936 22892 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"

Jul 8 15:55:26 master03 kubelet: E0708 15:55:26.743448 22892 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

This is because no CNI network plugin has been installed yet.

5. Installing Calico

Calico:https://www.projectcalico.org/

https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

curl https://docs.projectcalico.org/manifests/calico.yaml -O   # old docs URL

Click Manifest to download:

curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O   # new docs URL

Click requirements to see which Kubernetes versions Calico supports.

Edit the following block in calico.yaml: uncomment it and set the value to the podSubnet CIDR you configured for the cluster.

- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/12"
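If you prefer to make the change non-interactively, a minimal sketch (assuming the stock manifest, where the pool is commented out with the default 192.168.0.0/16):

sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "172.16.0.0/12"|' calico.yaml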

kubectl apply -f calico.yaml

[root@master01 ~ ]# kubectl apply -f calico.yaml

configmap/calico-config created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

serviceaccount/calico-node created

deployment.apps/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

poddisruptionbudget.policy/calico-kube-controllers created

View the bootstrap token secret:

[root@master01 ~ ]# kubectl get secret -n kube-system

NAME                                  TYPE                                  DATA   AGE
attachdetach-controller-token-4s7jv   kubernetes.io/service-account-token   3      3h53m
bootstrap-signer-token-7tpd5          kubernetes.io/service-account-token   3      3h53m
bootstrap-token-abcdef                bootstrap.kubernetes.io/token         6      3h53m   <-- this one
calico-kube-controllers-token-rhbnf   kubernetes.io/service-account-token   3      37m

[root@master01 ~ ]# kubectl get secret -n kube-system bootstrap-token-abcdef -oyaml

apiVersion: v1
data:
  auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
  expiration: MjAyMi0wNy0wOVQxMzoxMDozMCswODowMA==   # token expiration time, base64-encoded; it can be decoded
  token-id: YWJjZGVm
  token-secret: MDEyMzQ1Njc4OWFiY2RlZg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: "2022-07-08T05:10:30Z"
  name: bootstrap-token-abcdef
  namespace: kube-system
  resourceVersion: "372"
  uid: 83b63111-373a-471b-9530-bfbe55763326
type: bootstrap.kubernetes.io/token

[root@master01 ~ ]# echo "MjAyMi0wNy0wOVQxMzoxMDozMCswODowMA=="|base64 -d

2022-07-09T13:10:30+08:00[root@master01 ~ ]#

The token expires after one day; once it has expired, you can regenerate it with kubeadm.

Assuming the token has expired, delete it first:

[root@master01 ~ ]# kubectl delete secret -n kube-system bootstrap-token-abcdef

secret "bootstrap-token-abcdef" deleted

Regenerate a join token for worker nodes:

[root@master01 ~ ]# kubeadm token create --print-join-command

kubeadm join 10.10.0.10:7443 --token hbfk39.j5aj5rcejq401rhb --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac

Check the new token's expiration time:

[root@master01 ~ ]# kubectl get secret bootstrap-token-hbfk39 -n kube-system -oyaml

apiVersion: v1
data:
  auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
  expiration: MjAyMi0wNy0wOVQxNzoyMDo0NiswODowMA==
  token-id: aGJmazM5
  token-secret: ajVhajVyY2VqcTQwMXJoYg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: "2022-07-08T09:20:48Z"
  name: bootstrap-token-hbfk39
  namespace: kube-system
  resourceVersion: "23993"
  uid: 2904221d-7da7-4656-8fc8-30c32cade8d1
type: bootstrap.kubernetes.io/token
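Tokens created this way also default to a 24-hour TTL; kubeadm lets you control this with standard flags:

# List all current bootstrap tokens and their expirations
kubeadm token list

# Create a token with a custom TTL (0 = never expires; avoid that in production)
kubeadm token create --ttl 48h --print-join-command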

Generate the --certificate-key for joining additional masters:

[root@master01 ~ ]# kubeadm init phase upload-certs --upload-certs

I0708 17:29:52.123627 105799 version.go:254] remote version is much newer: v1.24.2; falling back to: stable-1.21

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

[upload-certs] Using certificate key:

35652b0c344704df60bdab4e4d2386b93d12f31d719263cddccbbcf32bb8764c

A new master can then join by passing this key via --certificate-key:

kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac \
    --control-plane --certificate-key 35652b0c344704df60bdab4e4d2386b93d12f31d719263cddccbbcf32bb8764c
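A sketch for producing a complete, fresh control-plane join command in one go (standard kubeadm subcommands; the tail -1 assumes the key is the last line of output, as in the transcript above):

TOKEN_CMD=$(kubeadm token create --print-join-command)
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -1)
echo "${TOKEN_CMD} --control-plane --certificate-key ${CERT_KEY}"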

6. Deploying Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

Older versions used Heapster, which has since been deprecated.

Download the package:

git clone https://github.com/dotbalo/k8s-ha-install.git

Install metrics-server

· Official site: https://github.com/kubernetes-sigs/metrics-server

Download the images and resource files, and import the images into your private registry.

The metrics-server manifests:

—— rbac.yaml RBAC authorization

—— pdb.yaml PodDisruptionBudget

—— deployment.yaml the metrics-server main process

—— service.yaml Service backed by the metrics-server main process

—— apiservice.yaml registers the metrics API with the cluster


[root@master01 metrics-server-3.6.1 ]# pwd

/root/k8s-ha-install/metrics-server-3.6.1

Install:

[root@master01 metrics-server-3.6.1 ]# kubectl apply -f .

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created

rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

serviceaccount/metrics-server created

deployment.apps/metrics-server created

service/metrics-server created

clusterrole.rbac.authorization.k8s.io/system:metrics-server created

clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Note: if the installation fails, delete the metrics-server resources:

kubectl delete -f metrics-server.yaml

Alternatively, download and install metrics-server from the official GitHub releases:

https://github.com/kubernetes-sigs/metrics-server/releases

https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.0/components.yaml

First modify two places in components.yaml:

containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls                                                    # add this flag
  image: registry.cn-hangzhou.aliyuncs.com/zailushang/metrics-server:v0.6.0   # for clusters in mainland China, replace the image with this one
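The same two edits as a non-interactive sketch (hypothetical one-liners; they assume the upstream manifest references k8s.gcr.io/metrics-server/metrics-server:v0.6.0 and contains the --metric-resolution=15s line shown above):

# Append the insecure-TLS flag after the resolution flag
sed -i '/--metric-resolution=15s/a\        - --kubelet-insecure-tls' components.yaml

# Swap in the mirror image
sed -i 's|k8s.gcr.io/metrics-server/metrics-server:v0.6.0|registry.cn-hangzhou.aliyuncs.com/zailushang/metrics-server:v0.6.0|' components.yaml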

[root@master01 ~ ]# kubectl apply -f components.yaml

serviceaccount/metrics-server created

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

clusterrole.rbac.authorization.k8s.io/system:metrics-server created

rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created

clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

service/metrics-server created

deployment.apps/metrics-server created

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status:

[root@master01 ~ ]# kubectl get pod -n kube-system -l k8s-app=metrics-server

NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-7f44b488b9-gljj2   1/1     Running   0          11m

kubectl top displays resource usage for nodes and Pods; it relies on the cluster's resource metrics API to collect the data.

It has two subcommands, node and pod, which show the resource usage of Node and Pod objects respectively.

[root@master01 ~ ]# kubectl top nodes --use-protocol-buffers

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   282m         14%    1161Mi          62%
master02   333m         16%    1256Mi          67%
master03   296m         14%    1237Mi          66%
node01     134m         3%     1044Mi          55%

1. Check cluster-wide resource usage:

[root@master01 ~ ]# kubectl top pod --use-protocol-buffers -A

NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)
kube-system            calico-kube-controllers-69f595f8f8-9tgr4     2m           34Mi
kube-system            calico-node-7htlt                            31m          135Mi
kube-system            calico-node-8j7zg                            55m          80Mi
kube-system            calico-node-rgwsf                            32m          135Mi
kube-system            calico-node-tgwjx                            36m          143Mi
kube-system            coredns-59d64cd4d4-2b6tt                     3m           25Mi
kube-system            coredns-59d64cd4d4-7jlws                     5m           17Mi
kube-system            etcd-master01                                43m          89Mi
kube-system            etcd-master02                                59m          90Mi
kube-system            etcd-master03                                47m          88Mi
kube-system            kube-apiserver-master01                      72m          331Mi
kube-system            kube-apiserver-master02                      82m          384Mi
kube-system            kube-apiserver-master03                      88m          366Mi
kube-system            kube-controller-manager-master01             2m           32Mi
kube-system            kube-controller-manager-master02             20m          90Mi
kube-system            kube-controller-manager-master03             3m           39Mi
kube-system            kube-proxy-69dmp                             1m           34Mi
kube-system            kube-proxy-pswl9                             1m           32Mi
kube-system            kube-proxy-wgh9t                             1m           30Mi
kube-system            kube-proxy-wj5nm                             1m           18Mi
kube-system            kube-scheduler-master01                      6m           30Mi
kube-system            kube-scheduler-master02                      4m           41Mi
kube-system            kube-scheduler-master03                      3m           35Mi
kube-system            metrics-server-7f44b488b9-gljj2              5m           25Mi
kubernetes-dashboard   dashboard-metrics-scraper-7c857855d9-rqwm5   1m           14Mi
kubernetes-dashboard   kubernetes-dashboard-bcf9d8968-5wh7b         1m           25Mi

To find the Pods using the most memory, sort by memory usage:

[root@k8s-master shell ]# kubectl top pod --use-protocol-buffers -A|sed 1d|awk -F "M" '{print $1}'|sort -k4nr

kube-logging        es-cluster-0                         91m   1148
kube-logging        es-cluster-1                         99m   1093
tools               tools-7c45b75fd7-chxrx               1m    927
kube-logging        es-cluster-2                         86m   922
storage             tfies-0                              2m    887
monitor-sa          prometheus-server-7474845b45-tvlck   38m   882
storage             tfies-1                              2m    822
cattle-prometheus   prometheus-cluster-monitoring-0      21m   678
gateway             gateway-69bfb5df5c-gm8zv             7m    559
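An alternative without the awk field hack (a sketch; it assumes every MEMORY value is reported in Mi, so a plain numeric sort on that column works):

kubectl top pod -A --use-protocol-buffers --no-headers | sort -k4 -n -r | head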

View detailed node information:

kubectl describe node master01

7. Deploying the Dashboard

Official GitHub: https://github.com/kubernetes/dashboard

To work around Chrome blocking access to the Dashboard over its self-signed certificate, add the following flags at the end of the Target field in the Chrome shortcut properties (see Figure 1-1, Chrome shortcut properties):

--test-type --ignore-certificate-errors

Install the latest version:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

[root@master01 ~ ]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

If the manifest cannot be applied directly from the URL, open the file in a browser, copy the contents into a local yaml file, and apply that instead.

Check the Dashboard Pods:

[root@master01 ~ ]# kubectl get po -n kubernetes-dashboard

NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7c857855d9-tqhjk   1/1     Running   0          4m6s
kubernetes-dashboard-bcf9d8968-b9td5         1/1     Running   0          4m6s

Change the Dashboard Service type to NodePort, which opens a port on every host:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
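If you prefer not to edit interactively, the same change as a one-liner (standard kubectl patch):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'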

[root@master01 ~ ]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   192.168.20.101   <none>        443:31459/TCP   2d20h

Using the NodePort assigned to your instance, the Dashboard is reachable through the IP:port of any host running kube-proxy, or through the VIP:

https://10.10.0.224:31459

Check whether an admin user already exists:

[root@master01 ~ ]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Create the admin user (vim admin.yaml):

[root@master01 ~ ]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system


Apply the manifest and then look up the token value:

[root@master01 ~ ]# kubectl apply -f admin.yaml -n kube-system
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created


[root@master01 ~ ]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-6dpml
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: a871d4cf-28a4-4d36-b3a7-917efbbace02

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlVFMHY0TjJuY3BzRmVPejRtZUFNSlp3TWs4bExNYjhpZWR0dVNDdEt3ZjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZkcG1sIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhODcxZDRjZi0yOGE0LTRkMzYtYjNhNy05MTdlZmJiYWNlMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.zdtLkxuyB3LpWbx7G24cbRMjlcS9LO1xnIkQrhUcPutqH_IDNMOD7a5KL-5QLVZoJyyf5_w6oAfsdK24fXcKspSrNiFbl_N79BP8Ktqur8NbP06giU3bFH4rhchjNaGJNkpkpl9j99_8tSIUPvH_BCQsLryynLwi9S7bsojEwG-iMLvtUbfuwudEaQv4oN-OfNcFNlqlSAMTAto65WtLhTn4YV6XAjSAFebrbhpnUs06_JRUdnpZooSuhkCquAW7cuKJbCTTkVly5jBuvdQA67cQsbb9V82k2S97NK6ov5WhXH2nY1GrMDPa5xLJD_kkUvOvVZXCfF2YelAwtr3oZQ

https://10.10.0.224:31459

On the login page, choose Token and paste the token value to sign in.

List all Pods in the cluster:

[root@master01 ~ ]# kubectl get pod --all-namespaces

Cluster verification:

The kubernetes Service IP is the first address of the Service CIDR:

[root@master01 ~ ]# kubectl get svc

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   2d21h

The DNS Service takes the tenth address of the Service CIDR:

[root@master01 ~ ]# kubectl get svc -n kube-system

NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   192.168.0.10     <none>        53/UDP,53/TCP,9153/TCP   2d21h
metrics-server   ClusterIP   192.168.233.65   <none>        443/TCP                  2d16h
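A common in-cluster DNS check (a sketch; busybox:1.28 is chosen because its nslookup behaves well against kube-dns, and the node must be able to pull the image):

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default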

[root@master01 ~ ]# kubectl get pod --all-namespaces -owide

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-69f595f8f8-9tgr4     1/1     Running            7          2d18h   172.20.59.200   master02   <none>           <none>
kube-system            calico-node-7htlt                            1/1     Running            2          2d18h   10.10.0.221     master02   <none>           <none>
kube-system            calico-node-8j6gc                            1/1     Running            2          2d18h   10.10.0.224     node01     <none>           <none>
kube-system            calico-node-rgwsf                            1/1     Running            5          2d18h   10.10.0.220     master01   <none>           <none>
kube-system            calico-node-sxczq                            0/1     Running            0          2d18h   10.10.0.225     node02     <none>           <none>
kube-system            calico-node-tgwjx                            1/1     Running            2          2d18h   10.10.0.223     master03   <none>           <none>
kube-system            coredns-59d64cd4d4-2b6tt                     1/1     Running            2          2d22h   172.20.59.199   master02   <none>           <none>
kube-system            coredns-59d64cd4d4-7jlws                     1/1     Running            2          2d22h   172.20.59.201   master02   <none>           <none>
kube-system            etcd-master01                                1/1     Running            9          2d22h   10.10.0.220     master01   <none>           <none>
kube-system            etcd-master02                                1/1     Running            6          2d21h   10.10.0.221     master02   <none>           <none>
kube-system            etcd-master03                                1/1     Running            6          2d21h   10.10.0.223     master03   <none>           <none>
kube-system            kube-apiserver-master01                      1/1     Running            12         2d22h   10.10.0.220     master01   <none>           <none>
kube-system            kube-apiserver-master02                      1/1     Running            10         2d21h   10.10.0.221     master02   <none>           <none>
kube-system            kube-apiserver-master03                      1/1     Running            14         2d21h   10.10.0.223     master03   <none>           <none>
kube-system            kube-controller-manager-master01             1/1     Running            11         2d22h   10.10.0.220     master01   <none>           <none>
kube-system            kube-controller-manager-master02             1/1     Running            6          2d21h   10.10.0.221     master02   <none>           <none>
kube-system            kube-controller-manager-master03             1/1     Running            4          2d21h   10.10.0.223     master03   <none>           <none>
kube-system            kube-proxy-69dmp                             1/1     Running            4          2d22h   10.10.0.220     master01   <none>           <none>
kube-system            kube-proxy-pswl9                             1/1     Running            2          2d21h   10.10.0.221     master02   <none>           <none>
kube-system            kube-proxy-t567z                             1/1     Running            2          2d21h   10.10.0.224     node01     <none>           <none>
kube-system            kube-proxy-wgh9t                             1/1     Running            2          2d21h   10.10.0.223     master03   <none>           <none>
kube-system            kube-proxy-z4mk9                             1/1     Running            1          2d21h   10.10.0.225     node02     <none>           <none>
kube-system            kube-scheduler-master01                      1/1     Running            11         2d22h   10.10.0.220     master01   <none>           <none>
kube-system            kube-scheduler-master02                      1/1     Running            5          2d21h   10.10.0.221     master02   <none>           <none>
kube-system            kube-scheduler-master03                      1/1     Running            5          2d21h   10.10.0.223     master03   <none>           <none>
kube-system            metrics-server-769bd9c6f4-hvtlq              0/1     CrashLoopBackOff   41         2d17h   172.29.55.9     node01     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-7c857855d9-tqhjk   1/1     Running            2          2d17h   172.29.55.7     node01     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-bcf9d8968-b9td5         1/1     Running            3          2d17h   172.29.55.8     node01     <none>           <none>

Exec into a container and run commands:

[root@master01 ~ ]# kubectl exec -it calico-node-8j6gc -n kube-system -- sh

Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init)

sh-4.4#

View logs:

[root@master01 ~ ]# kubectl logs -n kube-system metrics-server-769bd9c6f4-hvtlq

Delete a Pod:

[root@master01 metrics-server-3.6.1 ]# kubectl delete pod metrics-server-769bd9c6f4-hvtlq -n kube-system

pod "metrics-server-769bd9c6f4-hvtlq" deleted

8. Required configuration changes

Switch kube-proxy to ipvs mode. The ipvs configuration was commented out when the cluster was initialized, so it has to be changed manually:

Run on master01:

kubectl edit cm kube-proxy -n kube-system

mode: "ipvs"
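Editing only changes the ConfigMap; restart the kube-proxy Pods so they pick up the new mode (standard kubectl, a sketch):

kubectl -n kube-system rollout restart daemonset kube-proxy

# Each restarted Pod should log that the ipvs proxier is in use
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20 | grep -i ipvs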

Notes

Note: in a cluster installed with kubeadm, the certificates are valid for one year by default.
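You can check and renew them with kubeadm's built-in subcommands (available in v1.21):

kubeadm certs check-expiration

kubeadm certs renew all   # run on each master, then restart the control-plane static Pods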

On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers.

You can see them with kubectl get po -n kube-system.

Unlike a binary installation, the kubelet configuration file lives at /etc/sysconfig/kubelet.

The manifests for the other components live under /etc/kubernetes/manifests, e.g. kube-apiserver.yaml. When one of these yaml files is changed, kubelet automatically reloads the configuration, i.e. the Pod is restarted. Do not create such a file a second time.
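On a kubeadm master the directory looks like this (default layout):

ls /etc/kubernetes/manifests/

etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml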

The Dashboard is awkward to use: edits and additions in the browser are all done as raw yaml.

Kuboard (https://kuboard.cn/), developed in China, is easier to work with:

https://kuboard.cn/install/v3/install-in-k8s.html#%E6%96%B9%E6%B3%95%E4%B8%80-%E4%BD%BF%E7%94%A8-hostpath-%E6%8F%90%E4%BE%9B%E6%8C%81%E4%B9%85%E5%8C%96

Choose the online installation:

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

Check the nodes:

[root@master01 ~ ]# kubectl get nodes

NAME       STATUS     ROLES                  AGE    VERSION
master01   Ready      control-plane,master   3d3h   v1.21.14
master02   Ready      control-plane,master   3d3h   v1.21.14
master03   Ready      control-plane,master   3d2h   v1.21.14
node01     Ready      <none>                 167m   v1.21.14
node02     NotReady   <none>                 3d2h   v1.21.14

Check the Service:

[root@master01 ~ ]# kubectl get svc -n kuboard

NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
kuboard-v3   NodePort   192.168.16.221   <none>        80:30080/TCP,10081:30081/TCP,10081:30081/UDP   23m

Access Kuboard:

Open http://your-node-ip-address:30080 in a browser and log in with the initial credentials:

Username: admin

Password: Kuboard123

Then import the kuboard-agent in the browser.

Removing a node:

[root@master01 cfg ]# kubectl delete node master01
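kubectl delete alone does not evict workloads first. A safer removal flow (standard kubectl/kubeadm steps; flag names as of v1.21):

kubectl drain master01 --ignore-daemonsets --delete-emptydir-data   # evict workloads first

kubectl delete node master01

# then, on the removed machine itself:
kubeadm reset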
