3.4 Initialize the k8s cluster on the master1 node; run the following on master1
echo """ apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration kubernetesVersion: v1.17.3 controlPlaneEndpoint:"192.168.124.199:6443" apiServer: certSANs: -192.168.124.16 -192.168.124.26 -192.168.124.36 -192.168.124.56 - 192.168.124.199 networking: podSubnet: 10.244.0.0/16 --- apiVersion:kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: ipvs """ > kubeadm-config.yaml
kubeadm init --config kubeadm-config.yaml
If the output looks like the following, the initialization succeeded:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6
Note: remember this kubeadm join ... command. We will need it below to join master2, master3, and node1 to the cluster. The token and hash are different every time kubeadm init runs, so record the command from your own output; it will be used later.
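If you lose the command or the token expires (tokens are valid for 24 hours by default), you can generate a fresh join command on master1 at any time:

kubeadm token create --print-join-command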
3.5 Run the following on master1 so that kubectl has permission to operate on k8s resources
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On master1, run kubectl get nodes.
The output looks like this:
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   2m13s   v1.17.3
kubectl get pods -n kube-system
You can see that coredns is also in the Pending state:
coredns-6955765f44-cj8cv   0/1   Pending   0   3m16s
coredns-6955765f44-lxt6f   0/1   Pending   0   3m16s
The node STATUS above is NotReady and coredns is Pending because no network plugin has been installed yet. You need to install Calico or Flannel; here we install the Calico network plugin on master1.
Run the following on master1:
kubectl apply -f calico.yaml
cat calico.yaml
# Calico Version v3.5.3
# https://docs.projectcalico.org/v3.5/releases#v3.5.3
# This manifest includes the following component versions:
#   calico/node:v3.5.3
#   calico/cni:v3.5.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.5.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.5.3
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.124.56"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---

# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---

# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---
After installing Calico, run kubectl get nodes on master1 again.
The output below shows STATUS is now Ready, and kubectl get pods -n kube-system shows coredns in the Running state, which means the Calico installation on master1 is complete.
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   2m13s   v1.17.3
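If the calico and coredns pods are still starting up, you can watch them until they reach Running (an optional check; the label below comes from the calico.yaml shown above):

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -w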
3.6 Copy the certificates from the master1 node to master2 and master3
(1) Create the certificate directories on master2 and master3
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
(2) Copy the certificates from master1 to master2 and master3; run the following on master1
scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
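The same copies can also be done with a short loop (just a convenience sketch; it assumes root SSH access from master1 to master2 and master3 is already configured):

for host in master2 master3; do
  # the shared cluster CA, service-account keys and front-proxy CA
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    scp /etc/kubernetes/pki/$f $host:/etc/kubernetes/pki/
  done
  # the etcd CA
  scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/
done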
After the certificates are copied, run the following on master2 and master3 to join them to the cluster as control-plane nodes:
kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6 \
    --control-plane
--control-plane: this flag tells kubeadm that the node being joined is a master (control-plane) node.
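As an aside, kubeadm can also distribute these certificates for you instead of the manual scp step above; a rough sketch of that alternative (not the procedure used in this article):

# On master1: upload the control-plane certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs
# On master2/master3: add --certificate-key <key printed above> to the kubeadm join ... --control-plane command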
Then run the following on master2 and master3:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes now shows:
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   39m     v1.17.3
master2   Ready    master   5m9s    v1.17.3
master3   Ready    master   2m33s   v1.17.3
3.7 Join node1 to the k8s cluster; run the following on the node1 node
kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6
Note: this join command is the one that was generated when we initialized the cluster in section 3.4.
3.8 Check the cluster node status on master1
kubectl get nodes shows:
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m48s   v1.17.3
master2   Ready    master   3m48s   v1.17.3
master3   Ready    master   3m48s   v1.17.3
node1     Ready    <none>   3m48s   v1.17.3
This shows that node1 has also joined the cluster. With that, the multi-master highly available k8s cluster is complete.
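As an optional sanity check (not part of the original steps), you can confirm that each master is running its own apiserver and etcd pods:

kubectl get pods -n kube-system -o wide | grep -E 'kube-apiserver|etcd'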
4. Install the Kubernetes dashboard (the Kubernetes web UI)
Run the following on the k8s master node
kubectl apply -f kubernetes-dashboard.yaml
cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-watcher
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - 'get'
  - 'list'
- nonResourceURLs:
  - '*'
  verbs:
  - 'get'
  - 'list'
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
---
# ------------------- Dashboard Deployment ------------------- #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.multi.io
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
Check whether the dashboard was installed successfully:
kubectl get pods -n kube-system
Output like the following means the dashboard was installed successfully:
kubernetes-dashboard-7898456f45-7h2q6 1/1 Running 0 61s
Check the dashboard's front-end service:
kubectl get svc -n kube-system
The output looks like this:
kubernetes-dashboard NodePort 10.106.68.182 <none> 443:30982/TCP 12m
As shown above, the service type is NodePort, so the Kubernetes dashboard can be reached at port 30982 on a master node's IP. In my environment I open that address in the browser,
and the dashboard UI appears.
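To log in, dashboard v1.10.1 asks for a kubeconfig or a token. Since the manifest above binds the kubernetes-dashboard-admin service account to cluster-admin, you can use that account's token (a sketch; the actual secret name has a random suffix generated by Kubernetes):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}')

Copy the token value from the output and paste it into the dashboard login page.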
5. Install the metrics monitoring component
Run the following on the k8s master node
kubectl apply -f metrics.yaml
cat metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=300m
          - --extra-cpu=20m
          - --memory=200Mi
          - --extra-memory=10Mi
          - --threshold=5
          - --deployment=metrics-server
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
After all of the components above are installed, run kubectl get pods -n kube-system -o wide to check that they are healthy; if the STATUS of each pod is Running, the components are working correctly, as shown below.
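Once the metrics-server pod is Running, you can also confirm that the metrics API is serving data (an optional check; it may take a minute or two after startup before results appear):

kubectl top nodes
kubectl top pods -n kube-system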