Deploying Highly Available Kubernetes 1.25.2 on Docker (Part 2)

Summary: Deploying highly available Kubernetes 1.25.2 on Docker (Part 2)

5. Starting the Installation

Install kubeadm, kubectl, and kubelet on all k8s nodes

You can follow the Alibaba Cloud installation guide.

root@master1:~# apt-get update && apt-get install -y apt-transport-https  # install dependencies
root@master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -  # import the repository signing key
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list  # add the Aliyun Kubernetes apt repository
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@master1:~# for i in master2 master3 node1 node2 node3;do scp /etc/apt/sources.list.d/kubernetes.list $i:/etc/apt/sources.list.d/;done  # copy to the other nodes
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt update;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt -y install kubelet kubeadm kubectl;done # installs the latest version

To install a specific version instead, do the following:

root@master1:~# apt-cache madison kubeadm|head  # show the first 10 entries; you can also use apt-cache show kubeadm |grep 1.25
   kubeadm |  1.25.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.25.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.25.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
root@master1:~# apt install -y  kubeadm=1.25.2-00 kubelet=1.25.2-00 kubectl=1.25.2-00
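
To keep a later routine apt upgrade from pulling these packages forward unexpectedly, you can optionally put them on hold; a minimal sketch:

root@master1:~# apt-mark hold kubelet kubeadm kubectl   # prevents unattended upgrades; run apt-mark unhold before a planned upgrade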

Install cri-dockerd on all k8s nodes

Kubernetes removed dockershim support in v1.24, and Docker Engine does not implement the CRI specification by default, so the two can no longer be integrated directly. To bridge this gap, Mirantis and Docker jointly created the cri-dockerd project, which provides a CRI-compliant shim for Docker Engine and thus lets Kubernetes control Docker through the CRI.

root@master1:~#  wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb # the GitHub release offers plain binaries as well as deb and rpm packages; for convenience I download the deb here
root@master1:~# for i in master2 master3 node1 node2 node3;do scp cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb $i:/root/;done  # copy to the other nodes
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i dpkg -i cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb;done

Modify the cri-dockerd service file or the service will fail to start; if you would rather not edit it yourself, just copy the file below.

root@master1:~# cat /lib/systemd/system/cri-docker.service 
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infracontainer-image registry.aliyuncs.com/google_containers/pause:3.7
# The pause-image repository below must be changed or the service will not start; pointing it at the Aliyun mirror also speeds up the pull. Keep this note on its own line, since systemd does not allow trailing comments.
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
root@master1:~# for i in master2 master3 node1 node2 node3;do scp /lib/systemd/system/cri-docker.service $i:/lib/systemd/system/cri-docker.service;done # copy to the other nodes
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i systemctl daemon-reload;ssh $i systemctl enable --now cri-docker.service;done
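
Before moving on, it is worth checking that the shim is actually running on every node; a small sanity-check sketch (the --version flag is assumed to be supported by this cri-dockerd build):

root@master1:~# systemctl is-active cri-docker.service   # local check, should report "active"
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i systemctl is-active cri-docker.service;done
root@master1:~# cri-dockerd --version   # should print the installed 0.2.6 release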

Initialize the cluster on master1

root@master1:~#  kubeadm config print init-defaults > init.yaml     # generate the default init configuration so the parameters are easy to edit
root@master1:~# cat init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.21.178    # use the nginx IP here; with a different reverse proxy, use that proxy's address instead
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock  # because we are using docker this must be changed, otherwise kubeadm init will fail
  imagePullPolicy: IfNotPresent
  name: master1             # must match the node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  external:             # the default here is local; since we use an external etcd cluster it must be changed
    endpoints:
      - https://10.10.21.170:2379
      - https://10.10.21.172:2379
      - https://10.10.21.175:2379
    # CA certificate generated when the etcd cluster was built
    caFile: /etc/etcd/pki/ca.pem
    # client certificate generated when the etcd cluster was built
    certFile: /etc/etcd/pki/client.pem
    # client key generated when the etcd cluster was built
    keyFile: /etc/etcd/pki/client-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.2   # must match the installed version exactly
networking:
  dnsDomain: cluster.local
  podSubnet: 10.200.0.0/16      # pod CIDR; must not overlap with any other network
  serviceSubnet: 10.100.0.0/16    # service CIDR; must not overlap with any other network
scheduler: {} 
root@master1:~# kubeadm init --config=init.yaml # initialize using the edited configuration file

Because the images are pulled from the Aliyun mirror, this step does not take long. Output similar to the following means the initialization is complete.

(screenshot: kubeadm init completion output, including the kubeconfig instructions and the kubeadm join command)

root@master1:~# export KUBECONFIG=/etc/kubernetes/admin.conf # as root this single line is enough; a regular user needs the commands kubeadm printed above instead. If you also scp admin.conf to other nodes, kubectl works from there too
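
For reference, the per-user steps that kubeadm prints for a non-root account look roughly like this (a sketch of the usual kubeadm output, run as that user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config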

Install the flannel CNI plugin

root@master1:~#  wget  https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
root@master1:~#  cat kube-flannel.yml 
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
# If you cannot download the manifest, copy mine, but remember to change the CIDR to match the pod subnet defined earlier, otherwise the coredns pods will fail to start later
root@master1:~# kubectl apply -f kube-flannel.yml
root@master1:~# kubectl get pod -n kube-system  # check that coredns is up; once it is Running you can continue, otherwise the node stays NotReady
root@master1:~# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   6d4h   v1.25.2
# the node list should look like this at this point
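
You can also confirm that the flannel DaemonSet itself is healthy; a quick check (the kube-flannel namespace comes from the manifest above):

root@master1:~# kubectl get pod -n kube-flannel -o wide   # expect one Running kube-flannel-ds pod per node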

Adding worker nodes

A successful initialization prints the kubeadm join command at the end. If you forgot to save it, or the token has expired, you can generate a new one:

root@master1:~# kubeadm token create --print-join-command 
kubeadm join 10.10.21.178:6443 --token x4mb27.hidqr7ao758eafcx --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9

You could then simply run that command on each worker node in turn, but because our CRI endpoint is cri-dockerd rather than containerd, the real command has to be:

kubeadm join 10.10.21.178:6443 --token bnxg1m.6y89w1fsz73ztc34 --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9 --cri-socket unix:///run/cri-dockerd.sock

Be sure to append --cri-socket unix:///run/cri-dockerd.sock at the end.
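
Since password-free SSH from master1 was already set up for the earlier loops, you could also push the join to all workers in one go; a sketch reusing the token and hash above (substitute your own values):

root@master1:~# for i in node1 node2 node3;do ssh $i "kubeadm join 10.10.21.178:6443 --token bnxg1m.6y89w1fsz73ztc34 --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9 --cri-socket unix:///run/cri-dockerd.sock";done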

Once this has been done on every worker, query the node list again:

root@master1:~# kubectl get node
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   6d4h    v1.25.2
node1     Ready    <none>          5d12h   v1.25.2
node2     Ready    <none>          5d12h   v1.25.2
node3     Ready    <none>          5d12h   v1.25.2

Adding control-plane nodes

Adding a control-plane node requires uploading the certificates first. Normally the command would be:

root@master1:~# kubeadm init phase upload-certs --upload-certs 
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

As you can see, it complains about multiple CRI endpoints, so I tried adding the socket flag:

root@master1:~# kubeadm init phase upload-certs --upload-certs --cri-socket  unix:///var/run/cri-dockerd.sock
unknown flag: --cri-socket
To see the stack trace of this error execute with --v=5 or higher

Still an error.

It turned out that the real cause is the external etcd cluster. In that case, upload the certificates as shown below by passing the kubeadm configuration file (init.yaml is the same file used during initialization). This had me stuck for quite a while, so if you run into the same problem, give it a try.

root@master1:~# kubeadm init phase upload-certs --upload-certs --config init.yaml 
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a618b3a050dcc91d7304ca7f3ddf2f02f2825528d9801f9d1bd8d3e742898744

The rest is straightforward: combine the token generated earlier with the certificate key generated here.

root@master2:~# kubeadm join 10.10.21.178:6443 --token q5k1so.r9r6vg7lsj3zzec1 --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9  --control-plane --certificate-key 509228b6886d23b3f5d7e64d8d6d2b74429e9bf136494838e296a4f1b0e89c46 --cri-socket unix:///run/cri-dockerd.sock
# make sure the token and the key match your own cluster; mine may not correspond exactly because I copied them several times
# also remember to append --cri-socket unix:///run/cri-dockerd.sock to specify the CRI endpoint, otherwise you will get an error
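
As a side note, recent kubeadm releases can also print the complete control-plane join command in one step; if your kubeadm supports the flag, something like the following sketch combines a fresh token with the certificate key generated above (the printed command still needs --cri-socket appended, as before):

root@master1:~# kubeadm token create --print-join-command --certificate-key a618b3a050dcc91d7304ca7f3ddf2f02f2825528d9801f9d1bd8d3e742898744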

If you see an error like this:

unable to add a new control plane instance to a cluster that doesn’t have a stable controlPlaneEndpoint address

it means the kubeadm-config ConfigMap does not declare a controlPlaneEndpoint; fix it as follows:

kubectl edit cm kubeadm-config -n kube-system

(screenshot: the ClusterConfiguration block of the kubeadm-config ConfigMap)

Add a controlPlaneEndpoint entry in this block, set to your VIP (proxy) address, as in the sketch below.
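
A minimal sketch of what the edited ClusterConfiguration block should contain (10.10.21.178:6443 is the proxy address used throughout this deployment; substitute your own VIP):

    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: "10.10.21.178:6443"   # add this line anywhere at the top level of the block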

Run the join command on the other two master nodes in turn and wait for them to finish; the control plane is then fully scaled out.

root@master2:~# kubectl get nodes,cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                  STATUS   ROLES           AGE     VERSION
node/master1   Ready    control-plane   6d4h    v1.25.2
node/master2   Ready    control-plane   5d2h    v1.25.2
node/master3   Ready    control-plane   5d2h    v1.25.2
node/node1     Ready    <none>          5d12h   v1.25.2
node/node2     Ready    <none>          5d12h   v1.25.2
node/node3     Ready    <none>          5d12h   v1.25.2
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
root@master2:~# kubectl cluster-info 
Kubernetes control plane is running at https://10.10.21.178:6443
CoreDNS is running at https://10.10.21.178:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

At this point the three-master, three-worker k8s cluster is fully deployed. If you also need the dashboard, you can refer to my earlier blog post.

Changing the NodePort range of Kubernetes Services

In a Kubernetes cluster the default NodePort range is 30000-32767, but it can be changed:

root@master2:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.10.21.172:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.10.21.172
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/etcd/pki/ca.pem
    - --etcd-certfile=/etc/etcd/pki/client.pem
    - --etcd-keyfile=/etc/etcd/pki/client-key.pem
    - --etcd-servers=https://10.10.21.170:2379,https://10.10.21.172:2379,https://10.10.21.175:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.100.0.0/16
    - --service-node-port-range=30000-50000     # this flag defines the port range; add it if it is not already present
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.10.21.172
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 10.10.21.172
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 10.10.21.172
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/etcd/pki
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/etcd/pki
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}

After editing, copy this file to the other master nodes and delete the old apiserver pods so they are recreated with the new flag.

root@master1:~# for i in  master2 master3;do scp /etc/kubernetes/manifests/kube-apiserver.yaml  $i:/etc/kubernetes/manifests/kube-apiserver.yaml;done # copy to the other nodes
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master2
pod "kube-apiserver-master2" deleted
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master3.hu.org
pod "kube-apiserver-master3" deleted
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master1.hu.org
pod "kube-apiserver-master1" deleted
root@master1:~# kubectl -n kube-system describe pod kube-apiserver-master1.hu.org |grep -i service
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.100.0.0/16
      --service-node-port-range=30000-50000
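
To confirm the wider range really took effect, you can try creating a throwaway Service with a nodePort above the old 32767 ceiling; a sketch using a hypothetical test service name:

root@master1:~# kubectl create service nodeport nodeport-test --tcp=80:80 --node-port=40000   # only accepted if 40000 is inside the configured range
root@master1:~# kubectl get svc nodeport-test
root@master1:~# kubectl delete svc nodeport-test   # clean up the test service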

Configuring k8s to use IPVS for traffic forwarding

kube-proxy supports both iptables and ipvs modes; the default is iptables. You can check the currently active mode with the sketch below.
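
Assuming kube-proxy's metrics endpoint is on its default bind address of 127.0.0.1:10249, a quick check run on any node:

root@master1:~# curl -s 127.0.0.1:10249/proxyMode   # prints iptables before the change and ipvs afterwards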

root@master1:~# apt-get update
root@master1:~# apt-get install -y ipvsadm ipset sysstat conntrack libseccomp-dev # install the ipvs tooling
root@master1:~# uname -r
5.4.0-126-generic
root@master1:~# modprobe -- ip_vs
root@master1:~# modprobe -- ip_vs_rr
root@master1:~# modprobe -- ip_vs_wrr
root@master1:~# modprobe -- ip_vs_sh
root@master1:~# modprobe -- nf_conntrack
root@master1:~# cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack # on kernels older than 4.18, change this line to nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
root@master1:~# systemctl enable --now systemd-modules-load.service
The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Alias= settings in the [Install] section, and DefaultInstance= for template
units). This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
  .wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
  a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
  D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
  instance name specified.
root@master1:~# systemctl status systemd-modules-load.service
● systemd-modules-load.service - Load Kernel Modules
     Loaded: loaded (/lib/systemd/system/systemd-modules-load.service; static; vendor preset: enabled)
     Active: active (exited) since Wed 2022-10-12 03:53:34 UTC; 5 days ago
       Docs: man:systemd-modules-load.service(8)
             man:modules-load.d(5)
   Main PID: 377 (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 4640)
     Memory: 0B
     CGroup: /system.slice/systemd-modules-load.service
root@master1:~# lsmod |grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  10
ip_vs                 155648  16 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  6 nf_conntrack,nf_nat,btrfs,xfs,raid456,ip_vs
# confirms the modules are loaded

Modify the kube-proxy configuration

root@master1:~# kubectl edit configmaps kube-proxy -n kube-system
# find the mode: line and set its value to ipvs, i.e. mode: "ipvs"

Then delete the kube-proxy pods so they are recreated (for example as sketched below); once they are back, check the logs to verify the change:
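
For example, you could bounce every kube-proxy pod at once and then look for IPVS rules (kubeadm labels these pods with k8s-app=kube-proxy):

root@master1:~# kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # the DaemonSet recreates the pods immediately
root@master1:~# ipvsadm -Ln | head   # virtual servers for the service network should now be listed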

root@master1:~# kubectl -n kube-system logs kube-proxy-xkkhp 
I1013 03:09:02.586507       1 node.go:163] Successfully retrieved node IP: 10.10.21.173
I1013 03:09:02.586554       1 server_others.go:138] "Detected node IP" address="10.10.21.173"
I1013 03:09:02.602042       1 server_others.go:269] "Using ipvs Proxier"   # this line confirms ipvs is in use
I1013 03:09:02.602175       1 server_others.go:271] "Creating dualStackProxier for ipvs"
I1013 03:09:02.602192       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1013 03:09:02.602873       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603004       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603023       1 ipset.go:113] "Ipset name truncated" ipSetName="KUBE-6-LOAD-BALANCER-SOURCE-CIDR" truncatedName="KUBE-6-LOAD-BALANCER-SOURCE-CID"
I1013 03:09:02.603030       1 ipset.go:113] "Ipset name truncated" ipSetName="KUBE-6-NODE-PORT-LOCAL-SCTP-HASH" truncatedName="KUBE-6-NODE-PORT-LOCAL-SCTP-HAS"
I1013 03:09:02.603166       1 server.go:661] "Version info" version="v1.25.2"
I1013 03:09:02.603177       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1013 03:09:02.608838       1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I1013 03:09:02.609416       1 config.go:317] "Starting service config controller"
I1013 03:09:02.609429       1 shared_informer.go:255] Waiting for caches to sync for service config
I1013 03:09:02.609461       1 config.go:444] "Starting node config controller"
I1013 03:09:02.609464       1 shared_informer.go:255] Waiting for caches to sync for node config
I1013 03:09:02.609479       1 config.go:226] "Starting endpoint slice config controller"
I1013 03:09:02.609482       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1013 03:09:02.709868       1 shared_informer.go:262] Caches are synced for endpoint slice config
I1013 03:09:02.709923       1 shared_informer.go:262] Caches are synced for node config
I1013 03:09:02.711053       1 shared_informer.go:262] Caches are synced for service config
I1017 15:06:11.940342       1 graceful_termination.go:102] "Removed real server from graceful delete real server list" realServer="10.100.0.1:443/TCP/10.10.21.172:6443"
root@master1:~# kubectl -n kube-system logs kube-proxy-xkkhp  |grep -i ipvs
I1013 03:09:02.602042       1 server_others.go:269] "Using ipvs Proxier"
I1013 03:09:02.602175       1 server_others.go:271] "Creating dualStackProxier for ipvs"
I1013 03:09:02.602873       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603004       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"

