Environment
Master: 192.168.20.93
Node1: 192.168.20.94
Node2: 192.168.20.95
Installation is done with kubeadm.
kubeadm is the automated deployment tool officially recommended by Kubernetes. It runs the Kubernetes components as pods on the master and node machines and handles certificate generation and related setup automatically.
By default kubeadm pulls its images from Google's registry, which is currently unreachable from mainland China, so the images have been downloaded in advance; you only need to import the offline package's images on each node.
Installation
----On all nodes:----
Download:
Link: https://pan.baidu.com/s/1pMdK0Td  Password: zjja
[root@master ~]# md5sum k8s_images.tar.bz2
b60ad6a638eda472b8ddcfa9006315ee k8s_images.tar.bz2
Extract the offline package:
[root@master ~]# tar -jxvf k8s_images.tar.bz2
tar (child): bzip2: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
Cause: the bzip2 decompressor is missing. Install it and extract again:
[root@master ~]# yum -y install bzip2
[root@master ~]# tar -jxvf k8s_images.tar.bz2
Install and start docker, and disable selinux and firewalld. Then configure the kernel bridge parameters so kubeadm does not report routing warnings:
echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p
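To confirm the settings actually took effect (a quick check; it assumes the bridge netfilter module is loaded, which docker normally triggers):

```shell
# Both keys should print "= 1" after sysctl -p
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```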
Import the images:
[root@master /]# cd k8s_images/docker_images/
[root@master docker_images]# for i in `ls`;do docker load < $i ;done
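A slightly safer version of that loop iterates only over the image tarballs and reports each file as it loads (a sketch; it assumes the offline images are .tar files, so adjust the glob to match the package contents):

```shell
# Load every image tarball in the current directory into the local docker store
for f in *.tar; do
    echo "loading $f"
    docker load < "$f"
done
# Confirm the images are now present
docker images
```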
Install the kubelet, kubeadm and kubectl packages:
[root@master docker_images]# cd ..
rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm
rpm -ivh kubeadm-1.9.0-0.x86_64.rpm
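The rpm packages do not start kubelet by themselves. Before running kubeadm, enable the service so it also survives reboots (the standard step for kubeadm installs; kubeadm init/join supplies the kubelet's configuration):

```shell
# Enable and start kubelet; it will crash-loop until kubeadm configures it, which is expected
systemctl enable kubelet
systemctl start kubelet
```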
On the master node:
Initialize the master:
[root@master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
Kubernetes supports several network plugins, such as flannel, weave and calico. flannel is used here, so the --pod-network-cidr flag must be set. 10.244.0.0/16 is the default network configured in kube-flannel.yml; to use a different network, change both the --pod-network-cidr value passed to kubeadm init and the network in kube-flannel.yml to the same CIDR.
The init step may fail here. The cause is a cgroup driver mismatch: the kubelet's --cgroup-driver must match the driver docker is actually using, and by default the two disagree (docker ships with cgroupfs, while the kubelet unit file here uses systemd). Change the kubelet's setting to match docker's, via "Environment=\"KUBELET_CGROUP_ARGS=--cgroup-driver=systemd\"" in this setup:
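Before editing the unit file, it helps to check which cgroup driver docker is actually running with (the grep pattern is an assumption about the docker info output format, which prints a "Cgroup Driver:" line):

```shell
# Print docker's active cgroup driver; the kubelet's --cgroup-driver must match it
docker info 2>/dev/null | grep -i 'cgroup driver'
```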
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
Reload systemd and restart kubelet:
systemctl daemon-reload && systemctl restart kubelet
Remember to reset the environment at this point:
kubeadm reset
Then run the init again:
kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
When it completes, save the printed kubeadm join xxx command; it is needed shortly to join the node machines to the cluster.
If you lose it, you can retrieve the token on the master with kubeadm token list.
At this point root still cannot use kubectl to control the cluster; configure the environment variables as the init output suggests.
For a non-root user:
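If the full join command is lost, the token can be listed as above and the --discovery-token-ca-cert-hash value recomputed from the master's CA certificate. This openssl pipeline is the one documented for kubeadm; the certificate path assumes a default kubeadm install:

```shell
# List existing bootstrap tokens on the master
kubeadm token list
# Recompute the sha256 hash of the cluster CA public key for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```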
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
For root:
export KUBECONFIG=/etc/kubernetes/admin.conf
This can also go straight into ~/.bash_profile:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Then source it:
source ~/.bash_profile
Test with kubectl version:
[root@master k8s_images]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Install the pod network. flannel, calico, weave or macvlan could be used; flannel is used here.
Download the manifest:
wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
or use the copy in the offline package.
To change the pod network, the CIDR here must match the --pod-network-cidr passed to kubeadm init:
vim kube-flannel.yml
and edit the Network entry:
"Network": "10.244.0.0/16",
Apply it:
kubectl create -f kube-flannel.yml
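After applying the manifest, it is worth watching for the flannel pods to reach Running before joining any nodes (a quick check; the app=flannel label is the one used in the kube-flannel.yml of this era):

```shell
# One flannel pod should appear per node as nodes join
kubectl get daemonset kube-flannel-ds -n kube-system
kubectl get pods -n kube-system -l app=flannel
```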
On the node machines
Run the kubeadm join command saved from kubeadm init:
[root@node2 ~]# kubeadm join --token d508f9.bf00f1b8182fdc3f 192.168.20.93:6443 --discovery-token-ca-cert-hash sha256:3477ff532256a3ffe1915b3a504cd75a10989a49848cc0321cba0277830c2ac3
If joining fails repeatedly, check the /var/log/messages log.
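Besides /var/log/messages, the kubelet's own journal on the failing node is often more direct (standard systemd tooling, not specific to this setup):

```shell
# Follow kubelet logs on the node that fails to join
journalctl -u kubelet -f
```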
Once the joins succeed, check on the master:
[root@master k8s_images]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.flora.com Ready master 19h v1.9.1
node1.flora.com Ready <none> 19h v1.9.0
node2.flora.com Ready <none> 19h v1.9.0
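The nginx-deployment pods that appear in the pod listing below came from a smoke test of the cluster; it would have been created with something like the following (the deployment name and replica count are assumptions chosen to match that output, and in 1.9 kubectl run creates a Deployment by default):

```shell
# Create a two-replica nginx deployment to exercise scheduling across the nodes
kubectl run nginx-deployment --image=nginx --replicas=2
kubectl get pods -o wide
```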
Kubernetes runs a flannel and a kube-proxy pod on every node:
[root@master k8s_images]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-d5655dd9d-gc7c9 1/1 Running 0 17h
default nginx-deployment-d5655dd9d-pjq5k 1/1 Running 0 17h
kube-system etcd-master.flora.com 1/1 Running 3 19h
kube-system kube-apiserver-master.flora.com 1/1 Running 13 19h
kube-system kube-controller-manager-master.flora.com 1/1 Running 9 19h
kube-system kube-dns-6f4fd4bdf-ds2lf 3/3 Running 23 19h
kube-system kube-flannel-ds-5lhmm 1/1 Running 0 19h
kube-system kube-flannel-ds-cdhmr 1/1 Running 1 19h
kube-system kube-flannel-ds-l5w9b 1/1 Running 0 19h
kube-system kube-proxy-9794w 1/1 Running 0 19h
kube-system kube-proxy-986n2 1/1 Running 0 19h
kube-system kube-proxy-gmncl 1/1 Running 1 19h
kube-system kube-scheduler-master.flora.com 1/1 Running 8 19h
[root@master k8s_images]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-master.flora.com 1/1 Running 3 19h 192.168.20.93 master.flora.com
kube-apiserver-master.flora.com 1/1 Running 13 19h 192.168.20.93 master.flora.com
kube-controller-manager-master.flora.com 1/1 Running 9 19h 192.168.20.93 master.flora.com
kube-dns-6f4fd4bdf-ds2lf 3/3 Running 23 19h 10.244.0.4 master.flora.com
kube-flannel-ds-5lhmm 1/1 Running 0 19h 192.168.20.94 node1.flora.com
kube-flannel-ds-cdhmr 1/1 Running 1 19h 192.168.20.93 master.flora.com
kube-flannel-ds-l5w9b 1/1 Running 0 19h 192.168.20.95 node2.flora.com
kube-proxy-9794w 1/1 Running 0 19h 192.168.20.94 node1.flora.com
kube-proxy-986n2 1/1 Running 0 19h 192.168.20.95 node2.flora.com
kube-proxy-gmncl 1/1 Running 1 19h 192.168.20.93 master.flora.com
kube-scheduler-master.flora.com 1/1 Running 8 19h 192.168.20.93 master.flora.com
This post was reposted from the 51CTO blog of cstsncv ("不要超过24个字符"); original link: http://blog.51cto.com/cstsncv/2061943. Contact the original author for reprint permission.