Building a Highly Available Rancher Cluster on K3s

Summary: Building a highly available Rancher cluster on K3s

About K3s:


K3s (lightweight Kubernetes): like RKE, K3s is a certified Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with all components shipped in a single binary of less than 100 MB. Starting with Rancher v2.4, Rancher can be installed on a K3s cluster.


For details, see: https://rancher2.docs.rancher.cn/docs/installation/_index

About Rancher:


Rancher is a container management platform built for organizations that use containers. Rancher simplifies the adoption of Kubernetes, letting developers run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.


For details, see: https://rancher2.docs.rancher.cn/docs/overview/_index


Environment:


OS                          Hostname      IP Address      Role                  Specs
CentOS 7 1810               nginx-master  192.168.111.21  Nginx primary server  2C4G
CentOS 7 1810               nginx-backup  192.168.111.22  Nginx backup server   2C4G
ubuntu-18.04.3-live-server  k3s-node1     192.168.111.50  K3s node 1            4C8G
ubuntu-18.04.3-live-server  k3s-node2     192.168.111.51  K3s node 2            4C8G
CentOS 7 1810               k3s-mysql     192.168.111.52  MySQL server          4C8G


Pre-deployment system preparation:


Disable the firewall and SELinux


To prevent the cluster setup from failing because of blocked ports, disable the firewall and SELinux on all nodes in advance.


  • CentOS:


systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config


  • Ubuntu:


sudo ufw disable


  • Node and Docker tuning


https://rancher2.docs.rancher.cn/docs/best-practices/optimize/os/_index


Configure the hosts file:


192.168.111.21 nginx-master
192.168.111.22 nginx-backup
192.168.111.50 k3s-node1
192.168.111.51 k3s-node2
192.168.111.52 k3s-mysql


  • Add these entries to the hosts file and make sure every machine can reach the others by hostname (a quick connectivity check is sketched below).
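To confirm the entries work, you can ping each peer by hostname from every node; a minimal check loop (using the hostnames from the table above) might look like this:

# run on each node: every peer should resolve and respond
for h in nginx-master nginx-backup k3s-node1 k3s-node2 k3s-mysql; do
    ping -c 1 -W 1 "$h" >/dev/null && echo "$h ok" || echo "$h FAILED"
done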


Required tools:


This installation requires the following CLI tools. Make sure they are installed and available in your $PATH.


Install the CLI tools on the K3s nodes.


  • kubectl - the Kubernetes command-line tool.


  • helm - the package manager for Kubernetes.
    See the Helm version requirements to choose the Helm version used to install Rancher.


Starting the deployment:


Install kubectl:


  • Installation follows the official Kubernetes documentation; for certain reasons we install kubectl via snap here.


sudo apt-get install snapd
sudo snap install kubectl --classic # this install can be slow, please be patient
# verify the installation
kubectl help


Install Helm:


  • Installation follows the official Helm documentation. Helm is the package manager for Kubernetes; Helm v3 or later is required.


# download the release tarball
wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
# unpack it
tar zxvf helm-v3.2.1-linux-amd64.tar.gz
# move the binary to /usr/local/bin/
sudo mv linux-amd64/helm /usr/local/bin/helm
# verify the installation
helm help


Set up the Nginx + Keepalived cluster:


Perform these steps on the CentOS nodes.


  • Install Nginx


# download the Nginx source tarball
wget http://nginx.org/download/nginx-1.17.10.tar.gz
# unpack it
tar zxvf nginx-1.17.10.tar.gz
# install the packages required for compilation
yum install -y gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel libnl3-devel
# enter the nginx directory; the layer-4 load balancing configured below relies on the stream module, so configure with --with-stream
cd nginx-1.17.10
mkdir -p /usr/local/nginx
./configure --prefix=/usr/local/nginx --with-stream
# compile and install nginx
make && make install
# create a symlink for the nginx command
ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
# verify the installation
nginx -V
# start nginx
nginx


  • Install Keepalived


# download the source tarball
wget https://www.keepalived.org/software/keepalived-2.0.20.tar.gz
# unpack it
tar zxvf keepalived-2.0.20.tar.gz
# compile and install keepalived
cd keepalived-2.0.20
mkdir /usr/local/keepalived
./configure --prefix=/usr/local/keepalived/
make && make install
# register keepalived as a system service
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived
touch /etc/init.d/keepalived
chmod +x /etc/init.d/keepalived # the contents of this init script are shown below
vim /etc/init.d/keepalived
# configure keepalived
mkdir /etc/keepalived/
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
vim /etc/keepalived/keepalived.conf # the contents of keepalived.conf are shown below
# start keepalived
systemctl start keepalived
systemctl enable keepalived
# verify
systemctl status keepalived
# keepalived should now be running on both nodes, one as MASTER and one as BACKUP; running `ip addr` on the master should show the virtual IP, and the backup should not have it
# visit https://192.168.111.20 to verify the configuration


# Contents of /etc/init.d/keepalived
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
. /etc/rc.d/init.d/functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
RETVAL=0
prog="keepalived"
start() {
    echo -n $"Starting $prog: "
    daemon keepalived ${KEEPALIVED_OPTIONS}
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}
reload() {
    echo -n $"Reloading $prog: "
    killproc keepalived -1
    RETVAL=$?
    echo
}
# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f /var/lock/subsys/$prog ]; then
            stop
            start
        fi
        ;;
    status)
        status keepalived
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
        RETVAL=1
esac
exit $RETVAL


# Contents of /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 192.168.111.21 # this id must be unique on the network
}
vrrp_script chk_nginx {     # health-check script that monitors the nginx service
    script "/usr/local/keepalived/check_ng.sh"
    interval 3
}
vrrp_instance VI_1 {
    state MASTER    # this node is the master; set BACKUP on the backup node
    interface ens33    # network interface to bind to
    virtual_router_id 51    # VRRP group; must be identical on master and backup
    priority 120    # priority; the node with the higher value becomes master
    advert_int 1    # advertisement interval
    authentication { # authentication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress { # virtual IP
        192.168.111.20
    }
    track_script {    # track the health-check script
        chk_nginx
    }
}
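The file above is for nginx-master. On nginx-backup the configuration is nearly identical; as a sketch (values taken from the comments above), only the following fields change:

# /etc/keepalived/keepalived.conf on nginx-backup -- differing fields only
global_defs {
   router_id 192.168.111.22   # must be unique per node
}
vrrp_instance VI_1 {
    state BACKUP               # backup role instead of MASTER
    priority 100               # lower than the master's 120
    # interface, virtual_router_id, authentication, virtual_ipaddress
    # and track_script stay the same as on the master
}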


# Contents of /usr/local/keepalived/check_ng.sh
#!/bin/bash
# health-check script used by keepalived: if nginx is down, try to restart it;
# if nginx still cannot start, stop keepalived so the VIP fails over to the backup
d=`date --date today +%Y%m%d_%H:%M:%S`
n=`ps -C nginx --no-heading|wc -l`
if [ $n -eq "0" ]; then
        # no nginx process found, try to start it
        nginx
        n2=`ps -C nginx --no-heading|wc -l`
        if [ $n2 -eq "0"  ]; then
                # nginx could not be restarted: log it and stop keepalived
                echo "$d nginx down,keepalived will stop" >> /var/log/check_ng.log
                systemctl stop keepalived
        fi
fi
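Keepalived can only execute the health-check script if it is executable; the steps above do not show this explicitly, so set the permission on both Nginx nodes:

chmod +x /usr/local/keepalived/check_ng.sh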


Install docker-ce:


Perform these steps on the K3s nodes.


# remove old Docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# install prerequisite packages
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
# add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the stable apt repository
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# install docker-ce
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# verify the installation
docker info
# add the current user to the "docker" group; this account is used later during the installation, and the SSH user used for node access must be a member of the docker group on the node
sudo usermod -aG docker $USER
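The group change only applies to new login sessions; after logging out and back in (or switching the group in the current shell), the user should be able to talk to the Docker daemon without sudo:

newgrp docker      # or log out and log back in
docker ps          # should now succeed without sudo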


Configure layer-4 load balancing


Perform these steps on the Nginx cluster nodes.


# update the nginx configuration file
# vim /usr/local/nginx/conf/nginx.conf
#user  nobody;
worker_processes  4;
worker_rlimit_nofile 40000;
events {
    worker_connections  8192;
}
stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.111.50:80 max_fails=3 fail_timeout=5s;
        server 192.168.111.51:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server 192.168.111.50:443 max_fails=3 fail_timeout=5s;
        server 192.168.111.51:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     443;
        proxy_pass rancher_servers_https;
    }
}
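After updating nginx.conf, validate the syntax and reload the running instance so the stream configuration takes effect (standard nginx commands):

# check the configuration syntax
nginx -t
# reload nginx without downtime
nginx -s reload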


Deploy MySQL 5.7


# Download: https://dev.mysql.com/downloads/mysql/5.7.html#downloads
# create the user and group that will run MySQL
groupadd -r mysql
useradd -r -g mysql mysql
# unpack the tarball, adjust ownership, and create the data directory
tar zxvf mysql-5.7.30-linux-glibc2.12-x86_64.tar.gz
mkdir -p /app/mysql/data
mv mysql-5.7.30-linux-glibc2.12-x86_64/* /app/mysql/
chown -R mysql:mysql /app/mysql
# initialize the database
cd /app/mysql
./bin/mysqld --initialize \
--user=mysql --basedir=/app/mysql/ \
--datadir=/app/mysql/data/
# !! note the temporary root password printed on the last line of the output, e.g.:
7Jlhi:gg?rE0
# generate the RSA/SSL key material
./bin/mysql_ssl_rsa_setup --datadir=/app/mysql/data/
# add MySQL to system startup and set basedir and datadir in /etc/init.d/mysqld
cp support-files/mysql.server /etc/init.d/mysqld
basedir=/app/mysql
datadir=/app/mysql/data
chkconfig mysqld on
# update the environment variables
vim /etc/profile
# add the following line
export PATH=/app/mysql/bin:$PATH
# apply the change
source /etc/profile
# back up the system /etc/my.cnf, create a new my.cnf under /app/mysql/, and set its ownership to mysql:mysql
mv /etc/my.cnf /etc/my.cnf.bak
touch /app/mysql/my.cnf    # contents shown below
# start mysql
/etc/init.d/mysqld start
# create a symlink for mysql.sock
ln -s /app/mysql/mysql.sock /tmp/mysql.sock
# log in with the temporary password
mysql -uroot -p
# after logging in, change the password
alter user 'root'@'localhost' identified by "12345678";
flush privileges;
# allow remote root login
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '12345678' WITH GRANT OPTION;
flush privileges;
# verification omitted


# my.cnf
[mysqld]
character-set-server=utf8
datadir=/app/mysql/data
socket=/app/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
[client]
default-character-set=utf8
socket=/app/mysql/mysql.sock
[mysql]
default-character-set=utf8
socket=/app/mysql/mysql.sock
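K3s will normally create the database named in its datastore endpoint (k3s in the next section) if it does not already exist, but you can also create it explicitly; a sketch using the root password set above:

mysql -uroot -p12345678 -e "CREATE DATABASE IF NOT EXISTS k3s DEFAULT CHARACTER SET utf8;"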


Deploy K3s:


# start the K3s server
# ! Note: run this command on every K3s node
curl -sfL https://docs.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -s - server \
--datastore-endpoint="mysql://root:12345678@tcp(192.168.111.52:3306)/k3s"
# verify
sudo k3s kubectl get nodes
# installing K3s on each Rancher Server node creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml; it contains credentials for full access to the cluster
# copy k3s.yaml to ~/.kube/config
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
# verify kubectl
sudo kubectl get pods --all-namespaces
kube-system   coredns-8655855d6-c26h8                  1/1     Running     0          11m
kube-system   metrics-server-7566d596c8-v65fd          1/1     Running     0          11m
kube-system   helm-install-traefik-ttrfg               0/1     Completed   0          11m
kube-system   svclb-traefik-hxmzw                      2/2     Running     0          8m16s
kube-system   svclb-traefik-zxmg2                      2/2     Running     0          8m16s
kube-system   traefik-758cd5fc85-xsxbm                 1/1     Running     0          8m16s
kube-system   local-path-provisioner-6d59f47c7-497rl   1/1     Running     0          11m
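If you prefer to run kubectl and helm without sudo, an alternative to copying the file is pointing KUBECONFIG at the K3s kubeconfig; this is a convenience sketch for a lab environment (relaxing the file permissions is an assumption, not a hardening recommendation):

# make the kubeconfig readable by the current user (lab setup only)
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes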


Deploy Rancher:


  • Add the Helm chart repository


helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
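After adding the repository, refreshing the local chart index makes sure the latest stable Rancher chart is visible:

helm repo update
helm search repo rancher-stable/rancher   # optional: confirm the chart can be found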


  • Create a namespace for Rancher


sudo kubectl create namespace cattle-system


  • Generate the certificates


mkdir certs
cd certs
touch ~/.rnd
cp /usr/lib/ssl/openssl.cnf ./ # openssl.cnf is modified; the changed sections are shown below
vim openssl.cnf
openssl genrsa -out cakey.pem 2048
openssl req -x509 -new -nodes -key cakey.pem \
-days 36500 \
-out cacerts.pem \
-extensions v3_ca \
-subj "/CN=rancher.local.com" \
-config ./openssl.cnf
openssl genrsa -out server.key 2048
openssl req -new -key server.key \
-out server.csr \
-subj "/CN=rancher.local.com" \
-config ./openssl.cnf
openssl x509 -req -in server.csr \
-CA cacerts.pem \
-CAkey cakey.pem \
-CAcreateserial -out server.crt \
-days  36500 -extensions v3_req \
-extfile ./openssl.cnf
openssl x509 -noout -in server.crt -text | grep DNS
cp server.crt tls.crt
cp server.key tls.key


  • Modified sections of openssl.cnf


[req]
distinguished_name      = req_distinguished_name
req_extensions          = v3_req
x509_extensions         = v3_ca
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = rancher.local.com
[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = critical,CA:true
subjectAltName = @alt_names
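Before loading the certificates into Kubernetes, a quick sanity check confirms that the server certificate chains back to the self-signed CA and carries the expected SAN:

# verify the server certificate against the CA
openssl verify -CAfile cacerts.pem server.crt
# confirm the SAN contains rancher.local.com
openssl x509 -noout -in server.crt -text | grep -A1 'Subject Alternative Name'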


  • Create the certificate secrets


sudo kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=./tls.crt --key=./tls.key
sudo kubectl -n cattle-system create secret generic tls-ca \
--from-file=cacerts.pem
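You can confirm that both secrets exist in the cattle-system namespace before installing the chart:

sudo kubectl -n cattle-system get secret tls-rancher-ingress tls-ca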


  • Deploy the Rancher cluster


sudo helm install rancher rancher-stable/rancher \
 --namespace cattle-system \
 --set hostname=rancher.local.com \
 --set ingress.tls.source=secret \
 --set privateCA=true


  • Wait for the Rancher deployment to roll out


sudo kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out


  • If you see the error error: deployment "rancher" exceeded its progress deadline, you can check the status of the deployment by running the following command


sudo kubectl -n cattle-system get deploy rancher
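If the deployment is stuck, the usual next step is to inspect the Rancher pods and their logs; these are generic kubectl commands (the app=rancher label is assumed from the chart defaults, and the pod name is a placeholder):

# list the rancher pods and their states
sudo kubectl -n cattle-system get pods -l app=rancher
# inspect events for a pending or crashing pod (replace the pod name)
sudo kubectl -n cattle-system describe pod <rancher-pod-name>
# read the pod logs
sudo kubectl -n cattle-system logs <rancher-pod-name>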



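Once the rollout has finished, the Rancher UI is reached through the Nginx/Keepalived virtual IP. On the workstation you use to access the UI, point the Rancher hostname at the VIP (a hosts-file entry is the simplest option in this lab setup) and open https://rancher.local.com in a browser; since the certificate is self-signed, the browser warning has to be accepted.

# hosts entry on the workstation used to access the Rancher UI
192.168.111.20 rancher.local.com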



