Building a Kubernetes 1.12.3 Cluster

Introduction: How do you build a Kubernetes 1.12.3 cluster? The steps below walk through the whole process.

This installation is based on CentOS 7, uses Alibaba Cloud's package mirrors, and targets Kubernetes 1.12.3.


Add the Kubernetes yum repository


cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

exclude=kube*

EOF


Install dependency packages


yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository


yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


Install Docker 18.06.1.ce


yum makecache fast && yum -y install docker-ce-18.06.1.ce


Create the /etc/docker directory.


mkdir /etc/docker


mkdir -p /etc/systemd/system/docker.service.d
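The /etc/docker directory created above is where Docker's daemon.json lives. A registry mirror is commonly configured there to speed up image pulls; this is a sketch only, and the mirror URL is a placeholder — substitute your own accelerator address:

```shell
# Optional: configure a registry mirror for faster pulls.
# "<your-id>" below is a placeholder, not a real endpoint.
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
```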


Enable and restart Docker.


systemctl daemon-reload


systemctl enable docker


systemctl restart docker


systemctl status docker


Disable the firewall


systemctl disable firewalld


systemctl stop firewalld


Disable SELinux and swap


setenforce 0


sed -i 's/^SELINUX=.*$/SELINUX=permissive/' /etc/selinux/config

swapoff -a
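Note that `swapoff -a` only lasts until the next reboot; commenting out the swap entry in fstab makes it permanent. The sketch below demonstrates the edit on a sample file (the device names are made up) — on a real node, set FSTAB=/etc/fstab before running it:

```shell
# Comment out swap entries so swap stays disabled across reboots.
# Demonstrated on a sample file; use FSTAB=/etc/fstab on a real node.
FSTAB=${FSTAB:-/tmp/fstab.sample}
[ -f "$FSTAB" ] || printf '%s\n' \
  'UUID=abcd / ext4 defaults 0 1' \
  '/dev/sda2 swap swap defaults 0 0' > "$FSTAB"
# Prefix '#' to any uncommented line that mentions swap.
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```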


Install the Kubernetes components


yum install -y kubelet-1.12.3 kubeadm-1.12.3 kubectl-1.12.3 --disableexcludes=kubernetes


systemctl enable kubelet && systemctl start kubelet


cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


sysctl --system
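The bridge sysctls above only take effect when the br_netfilter kernel module is loaded. A sketch for loading it now and registering it to load on boot (the modules-load.d path assumes a systemd-based system such as CentOS 7):

```shell
# Load the bridge netfilter module immediately...
modprobe br_netfilter
# ...and make it load automatically at boot.
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```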


Pull the images required by Kubernetes 1.12.3 from the Alibaba Cloud mirror and re-tag them as k8s.gcr.io (the registry kubeadm expects but which may not be directly reachable)


echo 'docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.3 k8s.gcr.io/coredns:1.2.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.4 k8s.gcr.io/coredns:1.2.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.3 k8s.gcr.io/kube-proxy:v1.12.3' > ~/down-images.sh


chmod +x ~/down-images.sh

sh ~/down-images.sh
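The pull-and-tag list in down-images.sh can also be generated with a loop, since every k8s.gcr.io name is just the mirror name with the "-amd64" architecture suffix dropped. This sketch only prints the commands; remove the "echo" to execute them on a node with Docker installed:

```shell
# Generate the pull/tag commands for the core 1.12.3 images.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver-amd64:v1.12.3 kube-controller-manager-amd64:v1.12.3 \
           kube-scheduler-amd64:v1.12.3 kube-proxy-amd64:v1.12.3 \
           etcd-amd64:3.2.24 pause:3.1 coredns:1.2.2; do
  # k8s.gcr.io names drop the "-amd64" suffix.
  target="k8s.gcr.io/$(echo "$img" | sed 's/-amd64//')"
  echo "docker pull $MIRROR/$img"
  echo "docker tag $MIRROR/$img $target"
done
```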


All of the steps above must be executed on every node.


Initialize the cluster


This step is only executed on the master node.


sysctl net.bridge.bridge-nf-call-iptables=1

kubeadm init --kubernetes-version=1.12.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.9.246


Replace 192.168.9.246 here with the IP address of your own master node.


Follow the hints printed after a successful init and run the steps below. If anything goes wrong, run kubeadm reset and start over.


mkdir -p $HOME/.kube


sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config


sudo chown $(id -u):$(id -g) $HOME/.kube/config


Run the following on each slave node:


kubeadm join 192.168.9.246:6443 --token vldt2a.tuo0oe0z6n6lal5m --discovery-token-ca-cert-hash sha256:981c9a9d29a921d0519c3e800e2d16cb40760678ab0783bdf6a7e9d7405a50bf

This joins the node to the cluster. The exact command is printed at the end of a successful kubeadm init; simply copy it to each slave node and run it.


If a node joins later, the original token may have expired (tokens have a limited lifetime) and a new one must be generated.

Do this as follows:


kubeadm token create



openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'



Substitute the new token and the new hash into the corresponding parts of the original join command.
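Putting the pieces together, the regenerated join command is assembled like this — a sketch with placeholder values; on the master, take TOKEN from `kubeadm token create` and HASH from the openssl pipeline shown above:

```shell
# Reassemble the join command for a late-joining node.
# TOKEN and HASH below are example values, not real credentials.
MASTER=192.168.9.246:6443
TOKEN=vldt2a.tuo0oe0z6n6lal5m
HASH=981c9a9d29a921d0519c3e800e2d16cb40760678ab0783bdf6a7e9d7405a50bf
JOIN_CMD="kubeadm join $MASTER --token $TOKEN --discovery-token-ca-cert-hash sha256:$HASH"
echo "$JOIN_CMD"
```

On recent kubeadm versions, `kubeadm token create --print-join-command` should produce this whole line in one step.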


# Install the Kubernetes dashboard add-on


kubectl apply -f ~/kubernetes-dashboard.yaml

kubectl apply -f ~/kubernetes-dashboard-admin.rbac.yaml


# Afterwards, wait for the dashboard pod to be created and started


# Check pod status


kubectl get pods -n kube-system


# Check service status


kubectl get service -n kube-system


# Open a browser at https://localhost:30001 and log in with a token; retrieve the token as follows


#kubectl -n kube-system get secret


#kubectl -n kube-system describe secret kubernetes-dashboard-admin-token-skhfh


#{replace kubernetes-dashboard-admin-token-skhfh above with the matching key copied from the previous command's output}


# Copy the token value into the login box to sign in
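Picking the secret name out of `kubectl -n kube-system get secret` by eye is error-prone; the awk filter below extracts it automatically. It is demonstrated here on captured sample output (the secret names are examples) — on a real cluster, pipe the kubectl command in instead:

```shell
# Extract the dashboard-admin token secret name from sample output.
sample='NAME                                     TYPE                                  DATA  AGE
default-token-x7f2q                      kubernetes.io/service-account-token   3     1h
kubernetes-dashboard-admin-token-skhfh   kubernetes.io/service-account-token   3     1h'
name=$(printf '%s\n' "$sample" | awk '/kubernetes-dashboard-admin-token/{print $1}')
# Print the describe command you would run next.
echo "kubectl -n kube-system describe secret $name"
```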


Enter https://IP:30001 in a browser to reach the dashboard login page.



Select "Token", paste the token obtained above, and click "Sign in".



Contents of kubernetes-dashboard.yaml:



# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard



Contents of kubernetes-dashboard-admin.rbac.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
---


