Installing a Kubernetes Cluster with KubeKey

Overview: install a highly available Kubernetes cluster with the open-source KubeKey.

1. Key updates in K8s 1.20.x

1. kubectl debug: launch an ephemeral container for troubleshooting (see the example below)
2. Sidecar containers
3. Volumes: change directory permissions with fsGroup
4. ConfigMap and Secret
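As a quick illustration of item 1, the sketch below attaches an ephemeral debug container to a running pod; the pod name mypod and the target container name app are placeholders, not part of this guide:

# Attach a temporary busybox container to an existing pod for troubleshooting
kubectl debug -it mypod --image=busybox:1.28 --target=app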

K8s official docs: https://kubernetes.io/docs/setup/
Latest high-availability installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Installing K8s 1.20.x

2.1 Cluster planning

Hostname      IP address        Role
k8s-worker6   172.26.119.238    master node
k8s-worker7   172.26.119.239    worker01 node
k8s-worker8   172.26.119.240    worker02 node

# Check the CentOS version
cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

K8s high-availability architecture diagram

Set the hostnames

# master node
hostnamectl set-hostname k8s-worker6
# node1 (worker01)
hostnamectl set-hostname k8s-worker7
# node2 (worker02)
hostnamectl set-hostname k8s-worker8

Configure hosts on all nodes; edit /etc/hosts as follows:

cat /etc/hosts 

::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
127.0.0.1 localhost  localhost

172.26.119.240 k8s-worker8  k8s-worker8
172.26.119.238 k8s-worker6  k8s-worker6
172.26.119.239 k8s-worker7  k8s-worker7
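
An optional sanity check from the master node that the entries above resolve and the nodes respond:

# Optional: confirm hostname resolution and connectivity
for h in k8s-worker6 k8s-worker7 k8s-worker8; do ping -c 1 $h; done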

2.2 Update configuration (run on all nodes)

# Run on all nodes
# Configure the yum sources for CentOS 7
# Switch to the Aliyun mirrors (for hosts in China)
[root@k8s-worker6 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && yum install -y yum-utils device-mapper-persistent-data lvm2 && yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-worker6 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-worker6 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install required tools

[root@k8s-worker6 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable the firewall, SELinux, dnsmasq, and swap on all nodes. Configure the servers as follows:

[root@k8s-worker6 ~]# systemctl disable --now firewalld && systemctl disable --now NetworkManager
[root@k8s-worker6 ~]# systemctl disable --now dnsmasq

[root@k8s-worker6 ~]# setenforce 0
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux &&  sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition (all nodes)

[root@k8s-worker6 ~]# swapoff -a && sysctl -w vm.swappiness=0 && sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Time synchronization

# Install ntpdate for clock synchronization
[root@k8s-worker6 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm && yum install ntpdate -y

# Synchronize the time on all nodes. Time sync configuration:
[root@k8s-worker6 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone && ntpdate time2.aliyun.com

# Add to crontab to sync every 5 minutes
[root@k8s-worker6 ~]# crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Configure resource limits

[root@k8s-worker6 ~]# ulimit -SHn 655350
[root@k8s-worker6 ~]# vim /etc/security/limits.conf
# Append the following at the end of the file (the hard nofile limit must not be lower than the soft limit)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
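
The limits only apply to new login sessions; after re-logging in they can be verified (expected values assume the settings above):

# Verify in a new shell
ulimit -n    # open files, expect 655360
ulimit -u    # max user processes, expect 655350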

Configure passwordless SSH login

# Set up key-based (passwordless) SSH from the master node to the other nodes:
[root@k8s-worker6 ~]# ssh-keygen -t rsa
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.239
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.240
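
A quick check that key-based login works; each command should print the worker's hostname without prompting for a password:

[root@k8s-worker6 ~]# ssh root@172.26.119.239 hostname
[root@k8s-worker6 ~]# ssh root@172.26.119.240 hostname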

Upgrade and reboot all nodes

[root@k8s-worker6 ~]# yum update -y  && reboot 

2.3 Upgrade the Linux kernel (all nodes)

CentOS 7 needs the kernel upgraded to 4.18+. See https://www.kernel.org/ and https://elrepo.org/linux/kernel/el7/x86_64/
# dnf may not be able to install the kernel on CentOS 7; if the command below fails, use the yum-based commands that follow
[root@k8s-worker6 ~]# dnf --disablerepo=\* --enablerepo=elrepo -y install kernel-ml kernel-ml-devel
[root@k8s-worker6 ~]# grubby --default-kernel

# Check the current kernel version
[root@k8s-worker6 ~]# uname -a
# Install the latest mainline kernel
# Import the ELRepo public key, enable the elrepo-kernel repo, install kernel-ml, and reboot
[root@k8s-worker6 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm && yum --disablerepo="*" --enablerepo="elrepo-kernel" list available && yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y && reboot
# Set the new kernel first in the boot order
[root@k8s-worker6 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
# After the reboot, confirm the kernel version
[root@k8s-worker6 ~]# uname -a

Install ipvsadm

# Install ipvsadm on all nodes
[root@k8s-worker6 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Configure the IPVS modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack.
[root@k8s-worker6 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Load the modules now and on every boot
[root@k8s-worker6 ~]# systemctl enable --now systemd-modules-load.service
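
After the service has run, the modules should be loaded; verify with:

# Confirm the IPVS and conntrack modules are loaded
[root@k8s-worker6 ~]# lsmod | grep -e ip_vs -e nf_conntrack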

Enable the kernel parameters required by a Kubernetes cluster; configure them on all nodes:

[root@k8s-worker6 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384

net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Load and apply the kernel parameters
[root@k8s-worker6 ~]# sysctl --system
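
Note that the net.bridge.* keys require the br_netfilter module; if sysctl reports them as unknown, load the module first. A quick spot check:

# Load br_netfilter if needed, then spot-check a few values
[root@k8s-worker6 ~]# modprobe br_netfilter
[root@k8s-worker6 ~]# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables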

3. Install the basic K8s components on all nodes

3.1 Install docker-ce

# Install docker-ce 20.10 along with containerd.io, then start Docker and enable it on boot
[root@k8s-worker8 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm && yum install -y docker-ce-cli-20.10.12-3.el7.x86_64 docker-ce-20.10.12-3.el7.x86_64 && service docker start && chkconfig docker on && systemctl daemon-reload && systemctl enable --now docker && systemctl status docker

# Check the installed Docker packages
[root@k8s-worker8 ~]# rpm -qa|grep docker

Important note: newer kubelet versions expect the systemd cgroup driver; if Docker's CgroupDriver is not changed to systemd, the kubelet deployed by kubeadm will fail to start.

[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart Docker to apply the change
[root@k8s-worker8 ~]# systemctl restart docker
# Check the Docker configuration; the key line is "Cgroup Driver: systemd"
[root@k8s-worker7 ~]# docker info
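
To print just the relevant line:

# Show only the cgroup driver line
[root@k8s-worker7 ~]# docker info 2>/dev/null | grep -i "cgroup driver"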


3.2 Install the K8s cluster with KubeKey

3.2.1 Download KubeKey

# First, make sure KubeKey is downloaded from the correct region:
export KKZONE=cn
# Download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh - 
chmod +x kk

For the Kubernetes versions that KubeKey can install together with KubeSphere v3.1.1, see the table of supported Kubernetes versions below.

KubeSphere version   Supported Kubernetes versions
v3.1.1               v1.17.0, v1.17.4, v1.17.5, v1.17.6, v1.17.7, v1.17.8, v1.17.9, v1.18.3, v1.18.5, v1.18.6, v1.18.8, v1.19.0, v1.19.8, v1.19.9, v1.20.4, v1.20.6
# You can also list all Kubernetes versions that KubeKey can install:
[root@k8s-worker6 yaml]# ./kk version --show-supported-k8s
# The Kubernetes versions installable with KubeKey differ from those supported by KubeSphere v3.1.1. If you want to install KubeSphere v3.1.1 on an existing Kubernetes cluster, your Kubernetes version must be v1.17.x, v1.18.x, v1.19.x, or v1.20.x.

3.2.2 Create a multi-node Kubernetes cluster

First, generate a sample configuration file by running:

[root@k8s-worker6 yaml]# ./kk create config --with-kubernetes 1.21.5 --with-kubesphere 3.1.1 -f config-sample.yaml

An example config-sample.yaml file:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-worker6, address: *******, internalAddress: *******, user: root, password: *******}
  - {name: k8s-worker7, address: *******, internalAddress: *******, user: *******, password: *******}
  - {name: k8s-worker8, address: *******, internalAddress: *******, user: root, password: *******}
  roleGroups:
    etcd:
    - k8s-worker6
    master: 
    - k8s-worker6
    worker:
    - k8s-worker7
    - k8s-worker8
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: true
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: true
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:    
    enabled: true  
  kubeedge:
    enabled: true
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Create the cluster with the following command:

[root@k8s-worker6 ~]# ./kk create cluster -f config-sample.yaml
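
The installation takes a while. One way to follow its progress, assuming the standard kubesphere-system namespace and the app=ks-install label used by the installer, is to tail the ks-installer logs until the welcome banner below appears:

# Follow the KubeSphere installer logs
[root@k8s-worker6 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f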

3.2.3 Verify the installation

When the installation completes, you should see output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################

You can now access the KubeSphere web console at <NodeIP>:30880 with the default account and password (admin/P@88w0rd).

3.2.4 Check the cluster status

[root@k8s-worker6 yaml]# kubectl get nodes

3.2.5 Check component status (cs)

[root@k8s-worker6 ~]# kubectl get cs

If any component reports an unhealthy status, do the following:

[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
## Comment out the line - --port=0 in both files
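
A minimal sed sketch of the same edit (kubelet restarts the static pods automatically; re-check with kubectl get cs afterwards):

# Comment out "- --port=0" in both manifests, then re-check component status
[root@k8s-worker6 ~]# sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@k8s-worker6 ~]# kubectl get cs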

3.3 Install the flannel plugin (master node) (KubeSphere has already installed Calico as the network plugin, so this step is not required)

[root@k8s-worker6 ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download times out, copy the contents below; the full kube-flannel.yml file is:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Install the flannel plugin with kubectl

# Install the flannel plugin
[root@k8s-worker6 yaml]# kubectl apply -f kube-flannel.yml

# Check the node status again; all nodes should now be Ready
[root@k8s-worker6 yaml]# kubectl get nodes

3.3.1 The flannel pods stay in CrashLoopBackOff after deploying the flannel network plugin; the logs show that no CIDR has been allocated

# Check the pod status
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
# Inspect the logs of the failing pod to find the cause
[root@k8s-worker6 yaml]# kubectl logs kube-flannel-ds-2qhdt -n kube-system

Fix: on the master node, edit /etc/kubernetes/manifests/kube-controller-manager.yaml:

[root@k8s-worker6 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
Add the following parameters:
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
Restart kubelet:
[root@k8s-worker6 ~]# systemctl restart kubelet
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces

3.4 Add the kubectl auto-completion script on the master node

[root@k8s-worker6 ~]# yum install -y bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Install ingress-controller and default-http-backend

# The related YAML files are listed below
# ingress-controller.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        vanje/ingress-controller-ready: "true"
        kubernetes.io/os: linux
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

#---
# This block is optional
#apiVersion: v1
#kind: LimitRange
#metadata:
#  name: ingress-nginx
#  namespace: ingress-nginx
#  labels:
#    app.kubernetes.io/name: ingress-nginx
#    app.kubernetes.io/part-of: ingress-nginx
#spec:
#  limits:
#  - min:
#      memory: 90Mi
#      cpu: 100m
#    type: Container
# default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: fungitive/defaultbackend-amd64
          # docker pull fungitive/defaultbackend-amd64
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---
# Deploy a sample nginx service to test that the ingress works
##################################################
# nginx-aliyun Deployment
##################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
##################################################
# nginx-aliyun Service
##################################################
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
##################################################
# nginx-aliyun Ingress
##################################################
apiVersion: networking.k8s.io/v1 
kind: Ingress
metadata:
  name: nginx-ing
  namespace: default
spec:
 # ingressClassName: nginx 
  rules:
  - host: www.hinata.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: nginx-service
              port:
                number: 80
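
A hedged usage sketch for the manifests above. The controller DaemonSet selects nodes labeled vanje/ingress-controller-ready=true, so label at least one node first; the file names, the node IP, and the test host www.hinata.com are assumptions taken from the comments and examples above:

# Label a node so the ingress controller DaemonSet can schedule onto it
kubectl label node k8s-worker7 vanje/ingress-controller-ready=true
# Apply the manifests (file names assumed to match the comments above)
kubectl apply -f ingress-controller.yaml
kubectl apply -f default-backend.yaml
kubectl get pods -n ingress-nginx -o wide
# Resolve the test host to the node running the controller, then test through the ingress
echo "172.26.119.239 www.hinata.com" >> /etc/hosts
curl http://www.hinata.com/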