I. Environment
The environment uses one master node plus two worker nodes:
master 172.16.20.11
node1 172.16.20.12
node2 172.16.20.13
OS version:
CentOS Linux release 7.5.1804 (Core)
II. Download the offline package
Link: https://pan.baidu.com/s/1sPSpccsv93iz3Lgmb_n3pg   Password: zgwf
III. Steps common to all nodes
1. Upload the offline package k8s-offline-install.zip to every node and unpack it:
# unzip k8s-offline-install.zip
2. Install docker-ce 17.03 (docker-ce 17.03 is the highest version supported by kubeadm v1.9):
# yum localinstall docker-ce*
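After the install, a quick sanity check (not in the original steps) that the pinned 17.03 package is what actually landed, assuming the package is named docker-ce:
# rpm -q docker-ce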
3. Point Docker at the domestic DaoCloud registry mirror:
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://a58c8480.m.daocloud.io
4. Start docker-ce and enable it at boot:
# systemctl start docker
# systemctl enable docker
5. Disable SELinux and the firewall:
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# systemctl disable firewalld.service
# systemctl stop firewalld.service
6. Set each node's hostname; for example, on node2:
# hostnamectl set-hostname node2
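The other nodes get their own names the same way, matching the hosts file configured in the next step:
On master (172.16.20.11): # hostnamectl set-hostname master
On node1 (172.16.20.12): # hostnamectl set-hostname node1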
7. Edit the hosts file so hostnames resolve locally:
cat <<"EOF" > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.20.11 master
172.16.20.12 node1
172.16.20.13 node2
EOF
8. Configure the bridge netfilter kernel parameters so kubeadm does not report routing warnings:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
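On a stock CentOS 7 kernel these two keys only exist once the br_netfilter module is loaded; if sysctl --system reports them as missing, loading the module first (an extra step, not in the original write-up) usually resolves it:
# modprobe br_netfilter
# sysctl --system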
9. Turn off swap:
# swapoff -a
# vi /etc/fstab
Comment out the swap entry, e.g.:
#/dev/mapper/cl-swap swap swap defaults 0 0
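If you would rather not edit the file by hand, a one-line sketch that comments out any active swap entry (double-check the result afterwards):
# sed -i '/\sswap\s/s/^/#/' /etc/fstab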
10. Load the images:
docker load < /root/k8s_images/docker_images/etcd-amd64_v3.1.10.tar
docker load < /root/k8s_images/docker_images/flannel:v0.9.1-amd64.tar
docker load < /root/k8s_images/docker_images/k8s-dns-dnsmasq-nanny-amd64_v1.14.7.tar
docker load < /root/k8s_images/docker_images/k8s-dns-kube-dns-amd64_1.14.7.tar
docker load < /root/k8s_images/docker_images/k8s-dns-sidecar-amd64_1.14.7.tar
docker load < /root/k8s_images/docker_images/kube-apiserver-amd64_v1.9.0.tar
docker load < /root/k8s_images/docker_images/kube-controller-manager-amd64_v1.9.0.tar
docker load < /root/k8s_images/docker_images/kube-scheduler-amd64_v1.9.0.tar
docker load < /root/k8s_images/docker_images/kube-proxy-amd64_v1.9.0.tar
docker load < /root/k8s_images/docker_images/pause-amd64_3.0.tar
docker load < /root/k8s_images/docker_images/kubernetes-dashboard_v1.8.1.tar
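Equivalently, every tarball can be loaded in one loop (a sketch that assumes everything under that directory is an image archive):
# for img in /root/k8s_images/docker_images/*.tar; do docker load < "$img"; done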
11. Install the kubelet, kubeadm, and kubectl packages:
# cd /root/k8s_images
# yum localinstall socat-1.7.3.2-2.el7.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm kubeadm-1.9.0-0.x86_64.rpm
12. Enable kubelet at boot; only enable is needed here (do not start it yet):
# systemctl enable kubelet
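A quick check (not part of the original steps) that the packages landed at the expected versions:
# rpm -qa | grep -E 'kubelet|kubeadm|kubectl|kubernetes-cni'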
IV. Steps on the master node
1. Set up SSH trust between the master and the nodes:
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id node1
[root@master ~]# ssh-copy-id node2
[root@master ~]# ssh-copy-id master
2. The kubelet's default cgroup driver differs from Docker's: Docker defaults to cgroupfs, while the kubelet here defaults to systemd. Switch the kubelet to cgroupfs:
# cp -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf_bak
# sed -i "s/systemd/cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
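To double-check which driver Docker is actually using before making the change (the exact output wording can vary by Docker version):
# docker info 2>/dev/null | grep -i 'cgroup driver'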
3. Reload systemd:
# systemctl daemon-reload
4. Reset the environment:
# kubeadm reset
5. Then run the init:
# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
Kubernetes supports multiple network plugins such as flannel, weave, and calico. Since flannel is used here, the --pod-network-cidr flag must be set. 10.244.0.0/16 is the default network configured in kube-flannel.yml; to use a different subnet, change the --pod-network-cidr argument of kubeadm init and the network in kube-flannel.yml to the same range.
Note: save the kubeadm join xxx line printed at the end; the worker nodes will need it shortly.
6. Configure the environment variables.
For non-root users:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
For root:
# export KUBECONFIG=/etc/kubernetes/admin.conf
It can also go straight into ~/.bash_profile:
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Then source it:
# source ~/.bash_profile
7. Test:
# kubectl version
8. Install the network add-on. flannel, calico, weave, or macvlan would all work; flannel is used here.
If you change the subnet in kube-flannel.yml, the --pod-network-cidr= passed to kubeadm must be kept in sync with it.
Modify the Network entry:
"Network": "10.244.0.0/16",
Then apply it:
# kubectl create -f kube-flannel.yml
9. Install the dashboard:
# kubectl apply -f kubernetes-dashboard.yaml
Check the deployment status with kubectl get pods:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kubernetes-dashboard-7d5dcdb6d9-mf6l2 1/1 Running 0 9m
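The dashboard URL used later (https://master:32666) comes from the NodePort defined in kubernetes-dashboard.yaml; to confirm which port was actually assigned (the output depends on your yaml):
# kubectl -n kube-system get svc kubernetes-dashboard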
10. Create a user.
1) Create a service account
First create a service account named admin-user in the kube-system namespace:
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
Run the kubectl create command:
# kubectl create -f admin-user.yaml
2) Bind the role
By default, kubeadm already created the admin role when it built the cluster, so we can bind to it directly:
# admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Run the kubectl create command:
# kubectl create -f admin-user-role-binding.yaml
3) Get the token
Now find the token of the newly created user, which is needed to log in to the dashboard:
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
The output is similar to:
Name: admin-user-token-qrj82
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=6cd60673-4d13-11e8-a548-00155d000529
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFyajgyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2Y2Q2MDY3My00ZDEzLTExZTgtYTU0OC0wMDE1NWQwMDA1MjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.C5mjsa2uqJwjscWQ9x4mEsWALUTJu3OSfLYecqpS1niYXxp328mgx0t-QY8A7GQvAr5fWoIhhC_NOHkSkn2ubn0U22VGh2msU6zAbz9sZZ7BMXG4DLMq3AaXTXY8LzS3PQyEOCaLieyEDe-tuTZz4pbqoZQJ6V6zaKJtE9u6-zMBC2_iFujBwhBViaAP9KBbE5WfREEc0SQR9siN8W8gLSc8ZL4snndv527Pe9SxojpDGw6qP_8R-i51bP2nZGlpPadEPXj-lQqz4g5pgGziQqnsInSMpctJmHbfAh7s9lIMoBFW7GVE8AQNSoLHuuevbLArJ7sHriQtDB76_j4fmA
ca.crt: 1025 bytes
namespace: 11 bytes
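Equivalently, just the token itself can be extracted in one line (a sketch that assumes the secret name still begins with admin-user-token):
# kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d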
Then paste the token into the Token box on the dashboard login page and log in.
Log in with Firefox at:
https://master:32666
V. Steps on the worker nodes
Modify the kubelet configuration file just as on the master above, changing the cgroup driver from systemd to cgroupfs (Docker defaults to cgroupfs while the kubelet defaults to systemd):
# cp -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf_bak
# sed -i "s/systemd/cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# systemctl daemon-reload
Then run the kubeadm join --xxx command saved from kubeadm init earlier:
# kubeadm join --token 361c68.fbafaa96a5381651 master:6443 --discovery-token-ca-cert-hash sha256:e5e392f4ce66117635431f76512d96824b88816dfdf0178dc497972cf8631a98
VI. Test the cluster
On the master node:
List the nodes:
[root@master k8s_images]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h v1.9.0
node1 Ready <none> 3h v1.9.0
node2 Ready <none> 3h v1.9.0
List the system pods:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 3 3h
kube-system kube-apiserver 0/1 CrashLoopBackOff 40 3h
kube-system kube-apiserver-master 1/1 Running 0 3h
kube-system kube-controller-manager-master 1/1 Running 6 3h
kube-system kube-dns-6f4fd4bdf-wlnwk 3/3 Running 9 3h
kube-system kube-flannel-ds-9vhmd 1/1 Running 3 3h
kube-system kube-flannel-ds-q5gc2 1/1 Running 0 3h
kube-system kube-flannel-ds-vp4dj 1/1 Running 0 3h
kube-system kube-proxy-gmgxk 1/1 Running 0 3h
kube-system kube-proxy-pvlcz 1/1 Running 3 3h
kube-system kube-proxy-tjlpk 1/1 Running 0 3h
kube-system kube-scheduler-master 1/1 Running 6 3h
Create nginx containers:
# kubectl run test-nginx --image=nginx --replicas=2 --port=80
Check the result:
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-nginx-959dbd6b6-dkplf 1/1 Running 0 14m 10.244.1.2 node01.srv.world
test-nginx-959dbd6b6-qhz55 1/1 Running 0 14m 10.244.2.2 node02.srv.world
# kubectl expose deployment test-nginx
service "test-nginx" exposed
# kubectl describe service test-nginx
Name: test-nginx
Namespace: default
Labels: run=test-nginx
Annotations: <none>
Selector: run=test-nginx
Type: ClusterIP
IP: 10.101.252.243
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.2:80,10.244.2.2:80
Session Affinity: None
Events: <none>
# curl 10.101.252.243
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.....
.....
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
VII. The default token expires after 24 hours; machines that join the cluster later need a freshly generated token
Generate a new token:
# kubeadm token create
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --ttl 0)
aa78f6.8b4cafc8ed26c34f
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
aa78f6.8b4cafc8ed26c34f 23h 2017-12-26T16:36:29+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
Get the SHA-256 hash of the CA certificate:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538
Join the node to the cluster. The format is: kubeadm join --token xxx master_ip:6443
# kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 172.16.6.79:6443 --skip-preflight-checks
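On kubeadm v1.9 and later, the token and hash steps above can usually be collapsed into a single command that prints a ready-to-run join line; the flag may be missing on the earliest builds, so verify with kubeadm token create --help first:
# kubeadm token create --print-join-command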
VIII. Integrate Heapster
Heapster is a monitoring and performance-analysis tool for container clusters, with native support for Kubernetes and CoreOS.
Heapster supports several storage backends; this example uses InfluxDB. Simply run the following commands:
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/grafana.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/heapster.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/rbac/heapster-rbac.yaml
The yaml files used above were copied from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb , with k8s.gcr.io replaced by a domestic mirror.
Then check the pod status:
raining@raining-ubuntu:~/k8s/heapster$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
...
heapster-5869b599bd-kxltn 1/1 Running 0 5m
monitoring-grafana-679f6b46cb-xxsr4 1/1 Running 0 5m
monitoring-influxdb-6f875dc468-7s4xz 1/1 Running 0 6m
…
Wait until the status becomes Running, then refresh the browser to see the result.
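To find where the Grafana UI is exposed (the service name monitoring-grafana matches the grafana.yaml above, but the port depends on how that yaml exposes the service):
# kubectl -n kube-system get svc monitoring-grafana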
This article is reposted from CSDN: offline installation of k8s.