Kubernetes Study Notes: Environment Setup

Summary: Kubernetes study notes on setting up the environment.

Environment Preparation

  • 3 virtual machines
  • Hardware: at least 2 GB of RAM, at least 2 CPUs, and at least 30 GB of disk
  • The machines can reach each other over the network
  • Internet access is available
  • Swap is disabled
    | Hostname | IP address |
    | --- | --- |
    | k8smaster1 | 192.168.56.101 |
    | k8smaster2 | 192.168.56.102 |
    | k8smaster3 | 192.168.56.103 |

Installing CentOS 8 on VirtualBox

  1. Create a folder to hold the VMs. I use D:\develop_soft\image, with three subfolders inside it: k8smaster1, k8smaster2, and k8smaster3.
  2. Open VirtualBox and click New to create a virtual machine.

  3. Name the VM k8smaster1, set its folder to the k8smaster1 folder under image, set the type to Linux, and the version to Other Linux (64-bit).

  4. Allocate memory: at least 2 GB (2048 MB).

  5. Choose to create a virtual hard disk now.

  6. Choose VDI, dynamically allocated.

  7. A disk size of 40 GB is recommended.

Configuring the Virtual Machine

  1. Right-click the newly created VM and choose Settings.

  2. General -> Advanced: set Shared Clipboard and Drag'n'Drop to Bidirectional.

  3. System -> Processor: allocate 2 CPUs.

  4. Storage: select the boot disk. If one has already been created, pick it here; otherwise see the "Creating the Boot Disk" section.

  5. Adapter 1: select NAT.

  6. Adapter 2: select Host-only Adapter.

Creating the Boot Disk

  1. Open the virtual optical disk selection dialog.

  2. Click Add to register the ISO image.

  3. Browse to the location of the CentOS ISO file and click OK.

Installing CentOS 8

  1. Select Install CentOS ...

  2. Language selection: English is fine.

  3. Click Installation Destination; the default is fine, just click Done.

  4. Click Network & Host Name and enable both network adapters.

  5. Software Selection: the default is fine unless you have special requirements.

  6. Set the root password.

  7. Once everything is set, click Begin Installation.

Setting the IP Address

We configure the second NIC, because after a default installation it is not brought up. Use ifconfig to check the NIC name; on my machines it is enp0s8.

vim /etc/sysconfig/network-scripts/ifcfg-enp0s8 # note: the suffix is the NIC name

The key changes:

BOOTPROTO=static      # use a static IP
ONBOOT=yes            # bring the NIC up at boot
IPADDR=192.168.56.101 # the NIC's IP address

The full configuration:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=9355bd28-6b3c-40ea-b323-5beaee6b78d9
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.101
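
The change can be applied without a reboot. A quick sketch, assuming NetworkManager manages the interface (the CentOS 8 default) and the connection is named enp0s8:

nmcli connection reload        # re-read the ifcfg files
nmcli connection up enp0s8     # bring the connection up with the new settings
ip addr show enp0s8            # confirm the static IP is assigned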

Base Environment Configuration

Install base packages

yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git lrzsz -y

Configure domestic mirror repositories (Tencent mirror for the base repo, Aliyun mirror for docker-ce)

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos8_base.repo
yum clean all
yum makecache
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure /etc/hosts

vim /etc/hosts

Add the following entries:

192.168.56.101 master1
192.168.56.102 master2
192.168.56.103 master3

Set the DNS resolver

vim /etc/resolv.conf

Set it as follows:

nameserver 114.114.114.114

Disable the firewall

On all nodes, disable firewalld, dnsmasq, NetworkManager, and SELinux:

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
setenforce 0

Disable swap

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
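
To confirm swap is fully off, both of these checks should come back empty or zero:

free -h          # the Swap line should show 0B
swapon --show    # no output means no active swap devices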

Synchronize time

CentOS 7:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

CentOS 8 uses chrony rather than ntpdate.

  1. Edit the configuration file:

vim /etc/chrony.conf

Change pool 2.centos.pool.ntp.org iburst to pool ntp.aliyun.com iburst. The full configuration then looks like this:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool 2.centos.pool.ntp.org iburst
pool ntp.aliyun.com iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

  2. Restart the service to apply the configuration:

systemctl restart chronyd.service

  3. Check the synchronization sources:

chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2   6     7     0    -30us[ +289us] +/-   26ms

  4. Check the date:

date
Sun Mar 13 15:44:31 CST 2022

Passwordless SSH from master1 to the other nodes

ssh-keygen -t rsa
for i in master1 master2 master3; do ssh-copy-id -i .ssh/id_rsa.pub $i; done
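
A quick loop to confirm passwordless login works (each node should print its hostname without asking for a password):

for i in master1 master2 master3; do ssh $i hostname; done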

Kernel Upgrade

Import the public key for the ELRepo repository:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the ELRepo yum repository:

yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm

List the available kernel packages:

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Install the latest mainline kernel:

yum --enablerepo=elrepo-kernel install kernel-ml

Boot from the new kernel
Index 0 is the most recently installed kernel; setting the default to 0 boots the new version:

grub2-set-default 0

Regenerate the GRUB configuration and reboot:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

Verify the new kernel:

uname -r

List the kernels on the system:

rpm -qa | grep kernel

Remove the old kernel:

yum remove kernel-core-4.18.0 kernel-devel-4.18.0 kernel-tools-libs-4.18.0 kernel-headers-4.18.0

Check the installed kernels again:

rpm -qa | grep kernel

Installing IPVS

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the modules to load at boot. Note: on kernels 4.18 and earlier use nf_conntrack_ipv4; on 4.19 and later use nf_conntrack.

vim /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Reload the configuration:

systemctl enable --now systemd-modules-load.service

Configure Kubernetes kernel parameters on all nodes

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

Reboot, then confirm the modules are loaded:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

Installing the Base Components

Install containerd

List the available versions, then install:

yum list docker-ce --showduplicates | sort -r
yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd.io -y

Configure the kernel modules containerd requires

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the kernel modules:

modprobe -- overlay
modprobe -- br_netfilter

Configure the kernel parameters

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the sysctl settings:

sysctl --system

Generate the default configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Edit the configuration file

vim /etc/containerd/config.toml

Make the following changes:

Find containerd.runtimes.runc.options and set SystemdCgroup = true.
Change every sandbox_image to registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
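
The two edits can also be scripted. A sketch using sed; the patterns assume the stock config.toml generated above, so verify the result afterwards:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml   # check both edits landed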

Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
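
With containerd running and crictl pointed at its socket, a couple of quick checks confirm the runtime is answering:

systemctl is-active containerd   # should print: active
crictl info | head               # runtime status as JSON; an error here means a socket/config problem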

A note on cgroup drivers: what is the difference between systemd and cgroupfs?

  1. With the systemd cgroup driver, systemd itself provides the cgroup management. Every cgroup operation must go through the systemd interface, and the cgroup files cannot be modified by hand.
  2. The cgroupfs driver is more direct: to cap memory or set a CPU share, you write the pid into the appropriate cgroup file, and write the limits into the corresponding memory and CPU cgroup files.

So systemd is the safer choice, since the cgroup files cannot be changed by hand, and it is the driver we recommend for managing cgroups.
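
To illustrate how direct the cgroupfs approach is, here is a minimal cgroup v1 sketch (assumes cgroup v1 is mounted at /sys/fs/cgroup; for illustration only, not something to run on a kubelet-managed node):

mkdir /sys/fs/cgroup/memory/demo                                             # create a memory cgroup
echo $((256*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes   # cap it at 256 MB
echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs                            # move the current shell into it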

Installing the Kubernetes Components

Download Kubernetes 1.23.4: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1234

wget https://storage.googleapis.com/kubernetes-release/release/v1.23.4/kubernetes-server-linux-amd64.tar.gz

Download etcd 3.5.1 from https://github.com/etcd-io/etcd/releases:

wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

Extract the Kubernetes binaries

tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Extract the etcd binaries

tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}

Copy the binaries to the other nodes

MasterNodes='master2 master3'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

Download the CNI plugins: https://github.com/containernetworking/plugins/releases/tag/v1.1.1

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

Create /opt/cni/bin on all nodes

mkdir -p /opt/cni/bin

Extract the CNI plugins and copy them to all nodes

tar -zxf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin
for NODE in $MasterNodes; do ssh $NODE 'mkdir -p /opt/cni/bin'; scp /opt/cni/bin/* $NODE:/opt/cni/bin/; done

Generating Certificates

On master1, download the certificate tools from https://github.com/cloudflare/cfssl:

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 -O /usr/local/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/*
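
Confirm the tools are installed and executable:

cfssl version          # prints the cfssl release
command -v cfssljson   # confirms cfssljson is on the PATH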

Create the etcd certificate directory on all nodes

mkdir -p /etc/etcd/ssl

Create the Kubernetes PKI directory on all nodes

mkdir -p /etc/kubernetes/pki

Generating the etcd certificates

Generate the etcd certificates on master1.
First create the self-signed certificate authority (CA) configuration:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

Initialize the CA from the CSR (certificate signing request) file:

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

Issue the etcd certificate:

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,master1,master2,master3,192.168.56.101,192.168.56.102,192.168.56.103 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
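
Before distributing the certificate, it is worth checking that every node name and IP made it into the SANs:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'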

Copy the etcd certificates to the other nodes

for NODE in $MasterNodes; do
     ssh $NODE "mkdir -p /etc/etcd/ssl"
     for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
       scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
     done
done

Generating the Kubernetes certificates

cd /etc/kubernetes/pki

Generate the CA certificate

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

Generate the apiserver certificate

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
cat > apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=ca-config.json \
    -hostname=10.96.0.1,192.168.56.88,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.56.101,192.168.56.102,192.168.56.103 \
    -profile=kubernetes \
    apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

Generate the apiserver aggregation (front-proxy) certificates

cat > front-proxy-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF
cat > front-proxy-client-csr.json << EOF
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert \
    -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
    -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

Generate the controller-manager certificate

cat > manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

Configure a cluster entry:

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.56.88:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Set a context:

kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Set a user entry (set-credentials):

kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Use this context as the default:

kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Generate the scheduler certificate

cat > scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

Configure a cluster entry:

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.56.88:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Set a context:

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Set a user entry (set-credentials):

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Use this context as the default:

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Generate the admin certificate

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

Configure a cluster entry:

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.56.88:8443 \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

Set a context:

kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

Set a user entry (set-credentials):

kubectl config set-credentials kubernetes-admin \
    --client-certificate=/etc/kubernetes/pki/admin.pem \
    --client-key=/etc/kubernetes/pki/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

Use this context as the default:

kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

Generate the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
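
A quick sanity check on the generated key pair:

openssl rsa -in /etc/kubernetes/pki/sa.key -check -noout   # should print: RSA key ok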

Copy the certificates and kubeconfigs to the other nodes

for NODE in master2 master3; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done

Kubernetes Component Configuration

etcd configuration

master1

cat > /etc/etcd/etcd.config.yml << EOF
name: 'master1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.56.101:2380'
listen-client-urls: 'https://192.168.56.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.56.101:2380'
advertise-client-urls: 'https://192.168.56.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.56.101:2380,master2=https://192.168.56.102:2380,master3=https://192.168.56.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-output: default
force-new-cluster: false
EOF

master2

cat > /etc/etcd/etcd.config.yml << EOF
name: 'master2'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.56.102:2380'
listen-client-urls: 'https://192.168.56.102:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.56.102:2380'
advertise-client-urls: 'https://192.168.56.102:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.56.101:2380,master2=https://192.168.56.102:2380,master3=https://192.168.56.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-output: default
force-new-cluster: false
EOF

master3

cat > /etc/etcd/etcd.config.yml << EOF
name: 'master3'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.56.103:2380'
listen-client-urls: 'https://192.168.56.103:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.56.103:2380'
advertise-client-urls: 'https://192.168.56.103:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.56.101:2380,master2=https://192.168.56.102:2380,master3=https://192.168.56.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-output: default
force-new-cluster: false
EOF

Create the etcd systemd service file

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
export  ETCDCTL_API=3
etcdctl --endpoints="192.168.56.101:2379,192.168.56.102:2379,192.168.56.103:2379" \
    --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
    --cert=/etc/kubernetes/pki/etcd/etcd.pem \
    --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
    endpoint status --write-out=table
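
Besides the status table, endpoint health reports per-member reachability; with all three members up, each line should end in "is healthy":

etcdctl --endpoints="192.168.56.101:2379,192.168.56.102:2379,192.168.56.103:2379" \
    --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
    --cert=/etc/kubernetes/pki/etcd/etcd.pem \
    --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
    endpoint health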

High-Availability Configuration

Install keepalived and haproxy on all nodes

yum install keepalived haproxy -y

haproxy configuration

cat > /etc/haproxy/haproxy.cfg  << EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master1    192.168.56.101:6443  check
  server master2    192.168.56.102:6443  check
  server master3    192.168.56.103:6443  check
EOF
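
haproxy can validate the file before the service is started:

haproxy -c -f /etc/haproxy/haproxy.cfg   # prints "Configuration file is valid" on success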

keepalived configuration for master1. Note that the IP and NIC differ on each node. (As written, all three nodes use state MASTER with priority 100; conventionally the standby nodes would use state BACKUP and a lower priority.)

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    mcast_src_ip 192.168.56.101
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.56.88
    }
    track_script {
        chk_apiserver
    }
}
EOF

master2

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    mcast_src_ip 192.168.56.102
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.56.88
    }
    track_script {
        chk_apiserver
    }
}
EOF

master3

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    mcast_src_ip 192.168.56.103
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.56.88
    }
    track_script {
        chk_apiserver
    }
}
EOF

Health-check script. Note the quoted 'EOF' below: it keeps the shell from expanding the $ variables and $( ) substitutions while the file is being written.

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_apiserver.sh
systemctl enable --now haproxy
systemctl enable --now keepalived
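
Once both services are up, the VIP should be bound on exactly one node (whichever won the VRRP election) and reachable from the others:

ip addr show enp0s8 | grep 192.168.56.88   # run on each node; only one should match
ping -c 2 192.168.56.88                    # from any node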

Component Configuration

Create the required directories

mkdir -p /etc/kubernetes/manifests/ \
    /etc/systemd/system/kubelet.service.d \
    /var/lib/kubelet \
    /var/log/kubernetes

ApiServer

vim /usr/lib/systemd/system/kube-apiserver.service

master1

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.56.101 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.56.101:2379,https://192.168.56.102:2379,https://192.168.56.103:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User  \
      --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

master2

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.56.102 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.56.101:2379,https://192.168.56.102:2379,https://192.168.56.103:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User  \
      --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

master3

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.56.103 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.56.101:2379,https://192.168.56.102:2379,https://192.168.56.103:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User  \
      --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Create the bootstrap token file:

vim /etc/kubernetes/token.csv
d7d356746b508a1a478e49968fba7947,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Start the apiserver

systemctl daemon-reload && systemctl enable --now kube-apiserver
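
To confirm the apiserver is listening, probe the secure port; because RBAC is enabled, an HTTP 401/403 response still proves the server itself is up:

curl -k https://192.168.56.101:6443/healthz   # directly against this node
curl -k https://192.168.56.88:8443/healthz    # through the haproxy/keepalived VIP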

To be continued.

Common Problems and Fixes

VM fails to start with: Failed to open/create the internal network HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter

In the Windows network adapter settings, disable the VirtualBox Host-Only Ethernet Adapter and re-enable it, tick "VirtualBox NDIS6 Bridged Networking Driver", and repeat the disable/enable cycle if needed.

haproxy fails to start with: Starting proxy stats: cannot bind socket

setsebool -P haproxy_connect_any=1
