Setting up a Kubernetes Cluster on CentOS 7
1. Environment Preparation
Versions: Kubernetes 1.23.5, CentOS 7.9
1.1. Machine Preparation
Prepare three VMs with a 2C/4G/40G configuration (2 CPU cores, 4 GB RAM, 40 GB disk) that can reach the public internet, running CentOS 7 (7.9). If you are not sure how to create the VMs and configure their network, see the tutorial at https://mp.weixin.qq.com/s/_1X_pgZ85Qehvb1RCvEjvQ (make sure the host machine has at least 8 GB of RAM, otherwise all three VMs may not be able to run at the same time).
Node | IP | Hostname |
---|---|---|
master | 192.168.2.2 | k8s-master |
node | 192.168.2.3 | k8s-node1 |
node | 192.168.2.4 | k8s-node2 |
1.2. Environment Settings
Note: unless stated otherwise, perform the following settings on all three servers.
1.2.1. Stop and Disable the Firewall
Run the following commands to stop and disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
1.2.2. Disable SELinux
Run the following command:
sed -i 's/enforcing/disabled/' /etc/selinux/config
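The sed change above only takes effect after a reboot; to switch SELinux to permissive mode for the current session as well, you can additionally run:
setenforce 0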
1.2.3. Turn Off Swap and Comment Out the Swap Partition
Run the following command:
swapoff -a
#Edit /etc/fstab and comment out the swap line: /dev/mapper/cl-swap swap swap defaults 0 0
The modified file is shown in the figure below.
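If you prefer not to edit the file by hand, a one-line sketch that comments out any line containing "swap" in /etc/fstab (double-check the file afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab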
1.2.4. Configure Kernel Parameters to Pass Bridged IPv4 Traffic to iptables Chains
Copy the following commands into the terminal and run them (you can also edit the file by hand):
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
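If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a minimal fix (assuming your kernel ships the module) is:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf  #load the module on boot as well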
1.2.5. Configure Hostnames
Run the following command in the terminal:
cat >> /etc/hosts << EOF
192.168.2.2 k8s-master
192.168.2.3 k8s-node1
192.168.2.4 k8s-node2
EOF
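Note that /etc/hosts entries only provide name resolution; they do not change each machine's own hostname. If your VMs still carry a default hostname, set it on each node, for example:
hostnamectl set-hostname k8s-master  #run on the master; use k8s-node1 / k8s-node2 on the worker nodes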
1.2.6. Synchronize Time and Time Zone on Each Server
Run the following commands in the terminal:
yum install ntpdate -y
ntpdate time.windows.com
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
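ntpdate performs a one-off synchronization; to keep the clocks in sync over time, one option is a cron entry (a sketch, adjust the interval and NTP server as needed):
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -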
1.2.7. Install Common Packages
Run the following command in the terminal:
yum install vim bash-completion net-tools gcc -y
1.3. Docker Installation and Configuration
1.3.1. Install Docker
Install docker-ce from the Aliyun repository by running the following commands in the terminal. By default the latest version is installed; the version installed here is 20.10.14.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
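If you want to reproduce the exact version used here rather than whatever is currently the latest, you can list the available versions and pin one (a sketch; the exact version string in the repository may differ):
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-20.10.14 docker-ce-cli-20.10.14 containerd.io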
1.3.2. Configure Docker
Configure a registry mirror (here the Aliyun accelerator) by running the following commands in the terminal. Note: on a systemd-based host, systemd already acts as a cgroup manager that assigns cgroups to processes, while Docker's default cgroup driver is cgroupfs; running two different cgroup managers side by side can become unstable when the system is under resource pressure, so we switch Docker to the systemd cgroup driver here. If you skip this, kubeadm init will print a warning.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://fhzep8jf.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
1.3.3. Restart the Docker Service
systemctl daemon-reload && systemctl restart docker
1.3.4. Enable Docker to Start on Boot
systemctl enable docker.service
1.3.5. Verify the Docker Service
service docker status
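To confirm that the cgroup driver change from 1.3.2 took effect, you can also check:
docker info | grep -i "cgroup driver"  #should print: Cgroup Driver: systemd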
2. Install the Kubernetes Cluster
Prerequisite: the environment configuration above has been completed on all three servers. The network plugin used here is Calico. Note: unless stated otherwise, the steps below are identical on all three servers.
2.1. Install kubectl, kubelet, and kubeadm
2.1.1. Add the Aliyun Kubernetes Repository
Run the following command in the terminal. Many online tutorials set gpgcheck and repo_gpgcheck to 1, which sometimes causes GPG verification errors; if that happens to you, set both values to 0.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.1.2. Install
Install kubectl, kubelet, and kubeadm by running the following commands in the terminal:
yum install kubectl kubelet kubeadm
systemctl enable kubelet
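Since this walkthrough targets Kubernetes 1.23.5, you may want to pin the package versions explicitly rather than installing whatever is latest in the repository (a sketch):
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5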
(Figures: installation completed on k8s-master, k8s-node1, and k8s-node2)
2.1.3. Initialize the Kubernetes Cluster
Run the following command on the k8s-master node only to initialize the cluster. The pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.
This step is critical: by default kubeadm pulls the required images from k8s.gcr.io, which is not reachable from mainland China, so --image-repository is used to point it at the Aliyun mirror registry.
kubeadm init --kubernetes-version=1.23.5 \
--apiserver-advertise-address=192.168.2.2 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
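Optionally, the required images can be pre-pulled before running init (the init log below also hints at this); a sketch using the same mirror and version:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.5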
Cluster initialization prints the log below. This step can be slow depending on your bandwidth, so please be patient. When it finishes, pay attention to the lines after "Your Kubernetes control-plane has initialized successfully!", which are needed for the following steps:
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.2.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.2.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.2.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.005579 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1ew4s4.llgieb9ythwn6toe
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.2:6443 --token 1ew4s4.llgieb9ythwn6toe \
--discovery-token-ca-cert-hash sha256:42a5f14b8ec99545f01d260d422d8bd6c821c2360dedd51b876267f2ad8223c1
Save the last part of this output: the final kubeadm join command is what the other nodes will run to join the cluster, and the lines above it show how to set up kubectl.
2.1.4. Configure k8s-master
Following the hint at the end of the init output, copy the generated admin kubeconfig into kubectl's default working directory so that kubectl can connect to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the config to an environment variable and make it take effect. Note: the worker nodes do not have this config file, so kubectl commands cannot be used there; if you want kubectl on a worker node, copy the config file to it first and then run the same commands.
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
Test with the command below; k8s-master shows NotReady because the CoreDNS pods are not running yet, as no network plugin has been installed.
kubectl get node
Check the status of all pods:
kubectl get pods --all-namespaces
2.1.5. Install the Pod Network Plugin
Perform this step on k8s-master only. As the init output suggests, the list of available network add-ons is documented at https://kubernetes.io/docs/concepts/cluster-administration/addons/; feel free to browse it. Here we install the Calico network plugin.
Install Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Run the following command to check the pods and nodes; the CoreDNS and Calico pods should all be running, as shown in the figure below:
kubectl get pod --all-namespaces
2.2. Join the Worker Nodes to the Cluster
The join command is the one printed at the end of cluster initialization (see the figure above), but the default token is short-lived, so here we generate a token that never expires and substitute the new token and certificate hash into that command.
2.2.1. Create a Token and Obtain the Certificate Hash
Commonly used token commands on the k8s-master node:
Command | Description | Notes |
---|---|---|
kubeadm token list | List the existing tokens | |
kubeadm token create | Create a new token | |
kubeadm token create --ttl 0 | Create a token that never expires | |
List the tokens:
kubeadm token list
Here we create a non-expiring token, so run the following command on the k8s-master node and note the token it prints:
kubeadm token create --ttl 0
On the k8s-master machine, run the following to get the SHA-256 hash of the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
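Alternatively, kubeadm can print a complete join command (token plus hash) in one step, which saves assembling it by hand:
kubeadm token create --ttl 0 --print-join-command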
2.2.2. Join the Nodes to the Cluster
Substitute the token and hash obtained above into the join command, then run it on the k8s-node1 and k8s-node2 nodes. Note: this step pulls the required component images and can be slow depending on your network, so be patient and do not assume something is wrong just because the output does not change for a while. The command looks like this:
kubeadm join 192.168.2.2:6443 --token 4nulxl.8t3tmub2yn4gyvvi \
--discovery-token-ca-cert-hash sha256:9abc907ab631eadd582eac81d789dc7c17ffdf726a3fd333b4fac54c64a95005
(Figures: k8s-node1 and k8s-node2 joining the cluster)
The k8s-master node also pulls its images automatically; the command below is only to inspect them, and normally no manual intervention is needed:
docker images
The images needed on k8s-node1 and k8s-node2 are likewise pulled automatically and normally need no manual intervention, as shown in the figure below.
At this point the cluster installation is complete; check that every node reports the Ready status:
kubectl get nodes
Check that all pods are up:
kubectl get pods --all-namespaces
3. Install kubernetes-dashboard
Note: perform these steps on the master node only. The official dashboard manifest does not expose the service via NodePort, so we download the yaml file locally and add a NodePort to the Service, which lets us access the dashboard via host IP + port. Here we install the latest version, v2.5.1.
3.1. Get the yaml File
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
3.1.1. Add a NodePort
Open the downloaded recommended.yaml with vim, find the kubernetes-dashboard Service, and change it to the following:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  #add the NodePort type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  #the exposed port
  selector:
    k8s-app: kubernetes-dashboard
3.1.2. Adjust the Docker Registry Mirror
Pulling kubernetesui/dashboard through the default registry is very slow, so on the k8s-master node switch the Docker registry mirror to the USTC (education network) mirror, otherwise the kubernetesui/dashboard image pull may fail. Change the Docker config file /etc/docker/daemon.json to the following:
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
"exec-opts": [ "native.cgroupdriver=systemd" ]
}
Then restart the Docker service:
systemctl daemon-reload
systemctl restart docker
3.1.3. Deploy kubernetes-dashboard
Run the following command to deploy it:
kubectl create -f recommended.yaml
Check the deployment status:
kubectl get pod --all-namespaces
(Figures: containers being created; deployment completed)
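You can also confirm that the Service carries the NodePort configured in 3.1.1:
kubectl get svc -n kubernetes-dashboard  #the kubernetes-dashboard service should show 443:30001/TCP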
3.1.4. Access the Dashboard
Access it at https://<master IP>:30001.
Note: the URL must use HTTPS. Some versions of Chrome provide no way to trust the self-signed certificate and cannot open the page; in that case use Firefox or another browser. We log in with a token here.
3.1.5. Configure Account Permissions
Run the following commands to create a service account and bind it to the cluster-admin cluster role, then log in with the token generated for it:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
View the token:
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Copy the token, paste it into the token field on the login page, and click Sign in; after a successful login you will see the dashboard as shown below.
(Figure: cluster node information in the dashboard)
That completes the Kubernetes cluster installation.