Installing a Kubernetes Cluster with KubeKey

Overview: Install a highly available Kubernetes cluster using the open-source KubeKey.

1. Key Updates in Kubernetes 1.20.x

1. kubectl debug: start an ephemeral debug container in a running pod (see the example after this list)
2. Sidecar containers
3. Volume: change directory permissions via fsGroup
4. ConfigMap and Secret improvements
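
As a quick illustration of the ephemeral-container feature, a minimal sketch (the pod and container names below are placeholders, and on 1.20 the EphemeralContainers feature gate must be enabled on the cluster):

# Attach a temporary busybox container to a running pod for debugging
kubectl debug -it mypod --image=busybox:1.28 --target=mycontainer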

Kubernetes official docs: https://kubernetes.io/docs/setup/
High-availability installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Installing Kubernetes 1.20.x

2.1 Cluster Planning

Hostname        IP address        Role
k8s-worker6     172.26.119.238    master node
k8s-worker7     172.26.119.239    worker01 node
k8s-worker8     172.26.119.240    worker02 node
# Check the CentOS version
cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)

Figure: Kubernetes high-availability architecture

Set the hostnames

# master node
hostnamectl set-hostname k8s-worker6
# node1
hostnamectl set-hostname k8s-worker7
# node2
hostnamectl set-hostname k8s-worker8

Configure /etc/hosts on all nodes as follows:

cat /etc/hosts 

::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
127.0.0.1 localhost  localhost

172.26.119.240 k8s-worker8  k8s-worker8
172.26.119.238 k8s-worker6  k8s-worker6
172.26.119.239 k8s-worker7  k8s-worker7

2.2 Update the Configuration (run on all nodes)

# Run on all nodes
# Configure the yum repository for CentOS 7 as follows:
# Switch to the Aliyun (China) yum mirror
[root@k8s-worker6 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && yum install -y yum-utils device-mapper-persistent-data lvm2 && yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-worker6 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-worker6 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the required tools

[root@k8s-worker6 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable the firewall, SELinux, dnsmasq, and swap on all nodes. Configure the servers as follows:

[root@k8s-worker6 ~]# systemctl disable --now firewalld && systemctl disable --now NetworkManager
[root@k8s-worker6 ~]# systemctl disable --now dnsmasq

[root@k8s-worker6 ~]# setenforce 0
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux &&  sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition (all nodes)

[root@k8s-worker6 ~]# swapoff -a && sysctl -w vm.swappiness=0 && sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Time synchronization

# Install ntpdate for clock synchronization
[root@k8s-worker6 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm && yum install ntpdate -y

# Synchronize the time on all nodes. The time sync configuration is as follows:
[root@k8s-worker6 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone && ntpdate time2.aliyun.com

# Add to crontab to synchronize every 5 minutes
[root@k8s-worker6 ~]# crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Configure limits

[root@k8s-worker6 ~]# ulimit -SHn 655350
[root@k8s-worker6 ~]# vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
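
As a simple sanity check (not part of the original steps), the new limits can be confirmed from a fresh login session:

# Verify the open-file and process limits in a new shell
ulimit -n
ulimit -u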

Configure passwordless SSH login

# Allow the master node to log in to the other nodes without a password:
[root@k8s-worker6 ~]# ssh-keygen -t rsa
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.239
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.240

Upgrade and reboot all nodes

[root@k8s-worker6 ~]# yum update -y  && reboot 

2.3 Upgrade the Linux Kernel (all nodes)

CentOS 7 requires upgrading the kernel to 4.18+. See https://www.kernel.org/ and https://elrepo.org/linux/kernel/el7/x86_64/
Note: on CentOS 7, dnf may not be able to install the kernel this way:
[root@k8s-worker6 ~]# dnf --disablerepo=\* --enablerepo=elrepo -y install kernel-ml kernel-ml-devel
[root@k8s-worker6 ~]# grubby --default-kernel

# Check the current kernel version
[root@k8s-worker6 ~]# uname -a
# Install the latest kernel with the following commands
# Import the public key of the ELRepo repository
[root@k8s-worker6 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm && yum --disablerepo="*" --enablerepo="elrepo-kernel" list available && yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y && reboot
# Change the default kernel boot order
[root@k8s-worker6 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
# After rebooting, check the kernel version
[root@k8s-worker6 ~]# uname -a

Install ipvsadm

# Install ipvsadm on all nodes
[root@k8s-worker6 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Configure the ipvs modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack.
[root@k8s-worker6 ~]# vim /etc/modules-load.d/ipvs.conf
# Add the following modules to the file:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Then enable the service so the modules are loaded on boot
[root@k8s-worker6 ~]# systemctl enable --now systemd-modules-load.service
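
To confirm the modules were actually loaded (a verification step added here as a suggestion):

# The ip_vs and nf_conntrack modules should appear in the output
lsmod | grep -e ip_vs -e nf_conntrack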

Enable the kernel parameters required by a Kubernetes cluster; configure them on all nodes:

[root@k8s-worker6 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384

net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Load and apply the kernel parameters
[root@k8s-worker6 ~]# sysctl --system
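
Note that net.bridge.bridge-nf-call-iptables only takes effect when the br_netfilter module is loaded. If sysctl --system reports that this key does not exist, loading the module first should help (an assumption based on standard kernel behavior, not an explicit step in this guide):

# Load br_netfilter now, make it persistent, then re-apply the sysctl settings
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system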

3. Install Basic Kubernetes Components (all nodes)

3.1 Install docker-ce

[root@k8s-worker8 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm && yum install -y docker-ce-cli-20.10.12-3.el7.x86_64 docker-ce-20.10.12-3.el7.x86_64 && service docker start && chkconfig docker on && systemctl daemon-reload && systemctl enable --now docker && systemctl status docker

# Install docker-ce version 20.10

# Check the installed Docker packages
[root@k8s-worker8 ~]# rpm -qa|grep docker

Note: if the following setting is not changed, kubelet will fail to start under kubeadm. Since newer versions of kubelet recommend systemd, change Docker's CgroupDriver to systemd. (Important)

[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart Docker
[root@k8s-worker8 ~]# systemctl restart docker
# Check the Docker configuration; the key item is Cgroup Driver: systemd
[root@k8s-worker7 ~]# docker info
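
To check only the relevant line instead of scanning the full output (a convenience filter, not from the original):

# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i "cgroup driver"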


3.2 Install the Kubernetes Cluster with KubeKey

3.2.1 Download KubeKey

# First, run the following command to make sure KubeKey is downloaded from the correct region.
export KKZONE=cn
# Run the following command to download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh - 
chmod +x kk

For the Kubernetes versions that KubeKey can install together with KubeSphere v3.1.1, see the supported versions below.

KubeSphere version    Supported Kubernetes versions
v3.1.1                v1.17.0, v1.17.4, v1.17.5, v1.17.6, v1.17.7, v1.17.8, v1.17.9, v1.18.3, v1.18.5, v1.18.6, v1.18.8, v1.19.0, v1.19.8, v1.19.9, v1.20.4, v1.20.6
# You can also run the following command to see all Kubernetes versions that KubeKey can install.
[root@k8s-worker6 yaml]# ./kk version --show-supported-k8s
# The Kubernetes versions that KubeKey can install differ from those supported by KubeSphere v3.0.0. If you want to install KubeSphere v3.1.1 on an existing Kubernetes cluster, your Kubernetes version must be v1.17.x, v1.18.x, v1.19.x, or v1.20.x.

3.2.2 Create a Multi-Node Kubernetes Cluster

First, create the sample configuration file by running:

[root@k8s-worker6 yaml]# ./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.1.1 -f config-sample.yaml

Below is an example config-sample.yaml file:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-worker6, address: *******, internalAddress: *******, user: root, password: *******}
  - {name: k8s-worker7, address: *******, internalAddress: *******, user: *******, password: *******}
  - {name: k8s-worker8, address: *******, internalAddress: *******, user: root, password: *******}
  roleGroups:
    etcd:
    - k8s-worker6
    master: 
    - k8s-worker6
    worker:
    - k8s-worker7
    - k8s-worker8
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: true
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: true
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:    
    enabled: true  
  kubeedge:
    enabled: true
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Create the cluster with the following command:

[root@k8s-worker6 ~]# ./kk create cluster -f config-sample.yaml
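
The installation takes a while. A common way to follow its progress is to tail the ks-installer logs; a sketch assuming the standard ks-installer deployment and its app=ks-install label:

# Follow the KubeSphere installer logs until the welcome banner appears
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f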

3.2.3 Verify the Installation

After the installation completes, you should see output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################

You can now access the KubeSphere web console at <NodeIP>:30880 using the default account and password (admin/P@88w0rd).
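
As the welcome notes suggest, confirm that all component pods are up before relying on the console (a generic check):

# All pods should eventually be Running or Completed
kubectl get pod --all-namespaces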

3.2.4 Check the Cluster Status

[root@k8s-worker6 yaml]# kubectl get nodes

3.2.5 Check the Component Status (cs)

[root@k8s-worker6 ~]# kubectl get cs

If any component's status is unhealthy, do the following:

[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
## Comment out the line "- --port=0" in both files
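
If you prefer a one-liner over editing by hand, something like the following should work (a sketch; the kubelet picks up changes to static pod manifests automatically):

# Comment out the --port=0 line in both static pod manifests
sed -i '/- --port=0/s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml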

3.3 Install the flannel Plugin (master node) (KubeSphere already installs Calico as the network plugin, so this step is not required)

[root@k8s-worker6 ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download times out, copy the file below directly. The full content of kube-flannel.yml is:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Install the flannel plugin with kubectl

# Apply the flannel manifest
[root@k8s-worker6 yaml]# kubectl apply -f kube-flannel.yml

# Check the node status again; all nodes should now be Ready
[root@k8s-worker6 yaml]# kubectl get nodes

If the flannel pods stay in CrashLoopBackOff after deployment and the logs report that no pod CIDR has been allocated, troubleshoot as follows:

# Check the status of all pods
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
# Inspect the logs of the failing pod to find the cause
[root@k8s-worker6 yaml]# kubectl logs kube-flannel-ds-2qhdt -n kube-system

The fix is to edit /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node:

[root@k8s-worker6 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Add the following arguments:
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
# Restart kubelet
[root@k8s-worker6 ~]# systemctl restart kubelet
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
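
To confirm that the controller manager is now allocating pod CIDRs (a verification step added as a suggestion):

# Each node should show a podCIDR such as 10.244.x.0/24
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'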

3.4 Add the kubectl Auto-Completion Script on the Master Node

[root@k8s-worker6 ~]# yum install -y bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Install ingress-controller and default-http-backend

# The relevant YAML files are as follows
# ingress-controller.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        kubernetes.io/os: linux
        vanje/ingress-controller-ready: "true"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

#---
# This block is optional and can be omitted
#apiVersion: v1
#kind: LimitRange
#metadata:
#  name: ingress-nginx
#  namespace: ingress-nginx
#  labels:
#    app.kubernetes.io/name: ingress-nginx
#    app.kubernetes.io/part-of: ingress-nginx
#spec:
#  limits:
#  - min:
#      memory: 90Mi
#      cpu: 100m
#    type: Container
# default-backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: fungitive/defaultbackend-amd64
          # docker pull fungitive/defaultbackend-amd64
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---
# Deploy a sample nginx service to test that the Ingress works
##################################################
# nginx-aliyun Deployment
##################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
##################################################
# nginx-aliyun Service
##################################################
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
##################################################
# nginx-aliyun Ingress
##################################################
apiVersion: networking.k8s.io/v1 
kind: Ingress
metadata:
  name: nginx-ing
  namespace: default
spec:
 # ingressClassName: nginx 
  rules:
  - host: www.hinata.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: nginx-service
                port:
                  number: 80
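
A possible way to apply and smoke-test the manifests above (the file names and <worker-node-ip> are placeholders; the DaemonSet's nodeSelector expects worker nodes labeled vanje/ingress-controller-ready=true):

# Label the worker nodes so the ingress controller DaemonSet can schedule onto them
kubectl label node k8s-worker7 k8s-worker8 vanje/ingress-controller-ready=true
# Apply the controller, default backend, and sample nginx manifests
kubectl apply -f ingress-controller.yaml
kubectl apply -f default-backend.yaml
kubectl apply -f nginx-test.yaml
# Verify the pods, then test the Ingress rule with a Host header
kubectl get pods -n ingress-nginx -o wide
curl -H "Host: www.hinata.com" http://<worker-node-ip>/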