Prerequisites
Use your own, or rent, one or more Linux cloud servers.
Install Docker first; see:
Installing Docker and Docker Compose on Ubuntu
Installing Docker and Docker Compose on CentOS
Install dependencies
Install the required packages
Ubuntu:
sudo apt install socat conntrack
CentOS:
sudo yum install socat conntrack
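Optionally, confirm both tools are installed before moving on (a quick sanity check):
socat -V | head -n 1
conntrack --version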
Install the CNI plugins
Run the following commands to install them:
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz" | sudo tar -C /opt/cni/bin -xz
After installation the directory should contain the following plugins:
ls /opt/cni/bin
bandwidth bridge dhcp dummy firewall host-device host-local ipvlan loopback macvlan portmap ptp sbr static tap tuning vlan vrf
Install crictl
Run the following commands to install it:
sudo mkdir -p /usr/local/bin
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz" | sudo tar -C /usr/local/bin -xz
After installation, confirm that the crictl binary is in place:
ls /usr/local/bin/crictl
/usr/local/bin/crictl
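Optionally, point crictl at the cri-dockerd socket set up in the following steps, so that commands such as crictl ps work once the runtime is running (the socket path matches the cri-dockerd configuration below):
cat <<'EOF' | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10
EOF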
Install the container runtime
Download and install cri-dockerd:
curl -L https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz | tar -C . -xz
sudo install -o root -g root -m 0755 cri-dockerd/cri-dockerd /usr/local/bin/cri-dockerd
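A quick check that the binary is installed and on PATH (assuming the release supports the usual --version flag):
cri-dockerd --version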
Create cri-docker.service (note the quoted 'EOF', which keeps the shell from expanding $MAINPID inside the unit file):
cat > cri-docker.service <<'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Create cri-docker.socket
cat > cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
Copy both unit files into the systemd directory:
sudo cp cri-docker.* /etc/systemd/system
Reload systemd, then enable and start cri-dockerd (both the service and the socket):
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
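Confirm that the socket is active and that the endpoint kubeadm will use actually exists:
systemctl is-active cri-docker.socket
ls -l /var/run/cri-dockerd.sock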
Install kubeadm, kubelet, and kubectl
Download the kubeadm, kubelet, and kubectl binaries:
wget https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/{kubeadm,kubelet,kubectl}
Make the binaries executable:
chmod +x {kubeadm,kubelet,kubectl}
Move them into a directory on PATH:
sudo mv kube* /usr/local/bin/
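Verify that all three binaries are on PATH and report the expected version:
kubeadm version
kubelet --version
kubectl version --client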
Configure kubelet as a systemd service:
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/v0.15.1/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/usr/local/bin:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/v0.15.1/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:/usr/local/bin:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Enable and start kubelet:
sudo systemctl enable --now kubelet
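At this point kubelet will keep restarting because it has no cluster configuration yet; this is expected and resolves itself after kubeadm init. You can watch its logs if you want to confirm:
sudo journalctl -u kubelet -f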
Pre-pull the images:
kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.27.3
Set the pause image: re-tag the mirrored pause image as registry.k8s.io/pause:3.6, so that cri-dockerd finds the sandbox (pause) image it expects by default:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.6
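Optionally, verify that the images are now available locally (the grep pattern is just an example):
docker images | grep -E 'pause|kube|etcd|coredns'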
Initialize the cluster
I have two machines, VM-2-7-ubuntu and VM-2-6-ubuntu; here VM-2-7-ubuntu is chosen as the control-plane node.
Note that the --control-plane-endpoint argument can be the machine's LAN IP, its public IP, or a domain name pointing to it; my host's IP is 10.0.2.7.
sudo kubeadm init --control-plane-endpoint 10.0.2.7 --pod-network-cidr 10.166.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.27.3
The following output indicates that the cluster was initialized successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown `id -u`:`id -g` $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.2.7:6443 --token wi02yz.rze9ui5xylrhgyl4 \
--discovery-token-ca-cert-hash sha256:3d64f26a3c41d5b0eca410652a8068f000e8ac8f59d0f2b22a403313a27d4d92 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.7:6443 --token wi02yz.rze9ui5xylrhgyl4 \
--discovery-token-ca-cert-hash sha256:3d64f26a3c41d5b0eca410652a8068f000e8ac8f59d0f2b22a403313a27d4d92
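As an optional alternative, the same init flags can be captured in a kubeadm configuration file and replayed with --config, which is handy for repeatable installs. A minimal sketch (the file name kubeadm-config.yaml is my own choice; adjust the IP and CIDR to your environment):
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.3
controlPlaneEndpoint: "10.0.2.7"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.166.0.0/16
EOF
# sudo kubeadm init --config kubeadm-config.yaml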
Add credentials for connecting to the cluster (replace ubuntu:ubuntu below with your own user and group if different):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown ubuntu:ubuntu $HOME/.kube/config
Check the pods in the cluster:
kubectl get po -n kube-system
Notice that the coredns pods are still Pending; that's because no network plugin has been deployed yet.
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-65dcc469f7-tmhdt                0/1     Pending   0          5m34s
coredns-65dcc469f7-wzf29                0/1     Pending   0          5m34s
etcd-vm-2-7-ubuntu                      1/1     Running   0          5m48s
kube-apiserver-vm-2-7-ubuntu            1/1     Running   0          5m48s
kube-controller-manager-vm-2-7-ubuntu   1/1     Running   0          5m49s
kube-proxy-d8r9c                        1/1     Running   0          5m34s
kube-scheduler-vm-2-7-ubuntu            1/1     Running   0          5m48s
Deploy the network plugin
Deploy Flannel with the following command:
kubectl apply -f https://gitee.com/flextime/kubernetes-install/raw/v1.27.3/kube-flannel.yml
You should see output like the following:
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
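Optionally, wait for the Flannel DaemonSet to finish rolling out before checking the pods (the DaemonSet name kube-flannel-ds matches the pod name shown below):
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds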
Check the pods in the cluster again:
kubectl get po -A
All pods are now Running:
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-pddq8                   1/1     Running   0          74s
kube-system    coredns-65dcc469f7-tmhdt                1/1     Running   0          9m27s
kube-system    coredns-65dcc469f7-wzf29                1/1     Running   0          9m27s
kube-system    etcd-vm-2-7-ubuntu                      1/1     Running   0          9m41s
kube-system    kube-apiserver-vm-2-7-ubuntu            1/1     Running   0          9m41s
kube-system    kube-controller-manager-vm-2-7-ubuntu   1/1     Running   0          9m42s
kube-system    kube-proxy-d8r9c                        1/1     Running   0          9m27s
kube-system    kube-scheduler-vm-2-7-ubuntu            1/1     Running   0          9m41s
Check the nodes in the cluster:
kubectl get no
For now the control-plane node is the only node:
NAME STATUS ROLES AGE VERSION
vm-2-7-ubuntu Ready control-plane 12m v1.27.3
By default the control-plane node is tainted so that ordinary pods are not scheduled onto it. For a test setup you can remove the taint and allow the control-plane node to schedule pods.
Run the following command, replacing the node name with your own control-plane node (mine is vm-2-7-ubuntu):
kubectl taint nodes vm-2-7-ubuntu node-role.kubernetes.io/control-plane:NoSchedule-
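To confirm that pods can now land on the control-plane node, you can start and then remove a throwaway pod (the name test-nginx and the nginx image are just examples):
kubectl run test-nginx --image=nginx
kubectl get po -o wide
kubectl delete pod test-nginx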
At this point a single-node cluster is up and running.
Add a node to the cluster
I have two machines, VM-2-7-ubuntu and VM-2-6-ubuntu. Since VM-2-7-ubuntu is already the control-plane node, the other machine, VM-2-6-ubuntu, joins the cluster as a worker node.
The join command comes from the kubeadm init output in the previous step; note that you must append --cri-socket unix:///var/run/cri-dockerd.sock:
kubeadm join 10.0.2.7:6443 --token wi02yz.rze9ui5xylrhgyl4 \
--discovery-token-ca-cert-hash sha256:3d64f26a3c41d5b0eca410652a8068f000e8ac8f59d0f2b22a403313a27d4d92 --cri-socket unix:///var/run/cri-dockerd.sock
On success you should see output like the following:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
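If the token from kubeadm init has expired (by default tokens are valid for 24 hours), generate a fresh join command on the control-plane node and remember to append the --cri-socket flag yourself:
kubeadm token create --print-join-command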
Back on the control-plane node, check the nodes in the cluster:
kubectl get no
The newly joined node now shows up:
NAME STATUS ROLES AGE VERSION
vm-2-6-ubuntu Ready <none> 38s v1.27.3
vm-2-7-ubuntu Ready control-plane 2m34s v1.27.3
At this point, a Kubernetes cluster with one control-plane node and one worker node is up and running.