k8s Offline Installation and Deployment Tutorial
Component | Version | Architecture |
---|---|---|
Docker | 20.10.9 | x86 |
Kubernetes | v1.22.4 | x86 |
Kuboard | v3 | x86 |
6. Configure IPVS mode
For traffic to flow across the whole k8s cluster, kube-proxy defaults to iptables mode, which degrades performance (kube-proxy keeps syncing iptables rules between nodes).
Every pod is assigned an IP, and kube-proxy on each node syncs the pod IPs of the other nodes so that the iptables rules stay consistent and every node can reach every pod. This constant iptables syncing seriously hurts performance.
# 1. Check which mode kube-proxy is currently using
kubectl get pod -A|grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxx
# 2. Edit the kube-proxy ConfigMap and change mode to ipvs. The default is iptables, which becomes slow once the cluster grows
kubectl edit cm kube-proxy -n kube-system
Change it as follows:
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
### After changing the kube-proxy config, the old kube-proxy pods must be killed for it to take effect
kubectl get pod -A|grep kube-proxy
kubectl delete pod kube-proxy-xxxx -n kube-system
### Deleting the pods restarts kube-proxy, which makes the change take effect
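To confirm the recreated pods actually switched to IPVS, a quick check can help; a minimal sketch (the second command assumes ipvsadm is installed on the node):
# Grep the fresh kube-proxy logs for the proxier in use; "Using ipvs Proxier" confirms IPVS
kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | head -n 1) | grep -i proxier
# Optionally, list the IPVS virtual servers on the node (requires the ipvsadm tool)
ipvsadm -Ln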
7. Install Kuboard
Kuboard version: v3
Online install:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
Offline install:
First download the kuboard-v3.yaml file.
Applying kuboard-v3.yaml pulls several images; on a server with no internet access, first download the images on a machine that can reach the internet.
cat kuboard-v3.yaml | grep image: | awk '{print $2}'
eipwork/etcd-host:3.4.16-1
eipwork/kuboard:v3
The two images below are not listed by the command above; per the official Kuboard docs they also need to be pulled:
eipwork/kuboard-agent:v3
questdb/questdb:6.0.4
# Pull all images
cat kuboard-v3.yaml \
| grep image: \
| awk '{print "docker pull " $2}' \
| sh
# Pull the other two images
docker pull eipwork/kuboard-agent:v3
docker pull questdb/questdb:6.0.4
# Save the images as tarballs in the current directory
docker save -o kuboard-v3.tar eipwork/kuboard:v3
docker save -o etcd-host-3.4.16-1.tar eipwork/etcd-host:3.4.16-1
docker save -o kuboard-agent-v3.tar eipwork/kuboard-agent:v3
docker save -o questdb-6.0.4.tar questdb/questdb:6.0.4
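Between saving and loading, the tarballs have to reach the offline server; a minimal transfer sketch (the host name and target path are hypothetical):
# Copy all image tarballs from the internet-connected machine to the offline server
scp ./*.tar root@offline-server:/opt/images/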
# Load the tarballs into the Docker environment on the offline server
docker load -i kuboard-v3.tar
docker load -i etcd-host-3.4.16-1.tar
docker load -i kuboard-agent-v3.tar
docker load -i questdb-6.0.4.tar
# Install Kuboard
kubectl apply -f kuboard-v3.yaml
# To uninstall Kuboard
kubectl delete -f kuboard-v3.yaml
Note: in kuboard-v3.yaml, imagePullPolicy must be changed from Always to IfNotPresent.
image: 'eipwork/etcd-host:3.4.16-1'
# Change Always to IfNotPresent (use the local image when present instead of pulling)
imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
# ...
          image: 'eipwork/kuboard:v3'
          # Change Always to IfNotPresent (use the local image when present instead of pulling)
          imagePullPolicy: IfNotPresent
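Since imagePullPolicy appears in several workloads in kuboard-v3.yaml, the whole file can be patched in one pass; a sketch, assuming the field is written exactly as imagePullPolicy: Always in the file:
# Replace every Always pull policy with IfNotPresent, editing the file in place
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' kuboard-v3.yaml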
Startup looks like this:
# Start kuboard-v3
kubectl apply -f kuboard-v3.yaml
# Check whether Kuboard started successfully:
kubectl get pods -n kuboard
If only three pods are listed and no kuboard-agent-xxx containers have started, continue with the steps below:
8. Access Kuboard
- Open this URL in your browser:
http://your-node-ip-address:30080
Enter the initial username and password and log in.
- Username:
admin
- Password:
Kuboard123
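If the page does not open, it is worth confirming that the NodePort is really exposed; a quick check, assuming the Service is named kuboard-v3 as in the official manifest:
# The PORT(S) column should map the web port to NodePort 30080
kubectl get svc -n kuboard kuboard-v3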
On the home page, the cluster shows as importing by default (while the agent is not yet started). Click into the default cluster.
Export the kuboard-agent.yaml file.
Note: in this kuboard-agent.yaml file, the image pull policy must also be changed: set imagePullPolicy from Always to IfNotPresent.
kubectl apply -f ./kuboard-agent.yaml
Final result:
9. Install metrics-server
Installing metrics-server makes server resource usage visible and monitorable from Kuboard.
metrics-server.yaml, version 0.5.0
Select the metrics-server installation here, follow the steps one by one, and at the end preview the generated yaml and save it as a metrics-server.yaml file.
Offline install: the metrics-server.yaml file
Applying metrics-server.yaml requires the image below; on a server with no internet access, first download it on a machine that can reach the internet.
swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0
# Pull the image
docker pull swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0
# Save the image as a tarball in the current directory
docker save -o metrics-server-v0.5.0.tar swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0
# Load it into the Docker environment on the offline server
docker load -i metrics-server-v0.5.0.tar
# Install metrics-server
kubectl apply -f metrics-server.yaml
# To uninstall metrics-server
kubectl delete -f metrics-server.yaml
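After applying, the Deployment below runs two replicas; a quick readiness check using the k8s-app label from the manifest:
# Both metrics-server pods should reach the Running/Ready state
kubectl get pods -n kube-system -l k8s-app=metrics-server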
The metrics-server.yaml file is as follows:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    k8s-app: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: 'true'
    rbac.authorization.k8s.io/aggregate-to-edit: 'true'
    rbac.authorization.k8s.io/aggregate-to-view: 'true'
  name: 'system:aggregated-metrics-reader'
  namespace: kube-system
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: 'system:metrics-server'
  namespace: kube-system
rules:
  - apiGroups:
      - ''
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: 'metrics-server:system:auth-delegator'
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:auth-delegator'
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: 'system:metrics-server'
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:metrics-server'
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
  namespace: kube-system
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  k8s-app: metrics-server
              namespaces:
                - kube-system
              topologyKey: kubernetes.io/hostname
      containers:
        - args:
            - '--cert-dir=/tmp'
            - '--secure-port=443'
            - '--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname'
            - '--kubelet-use-node-status-port'
            - '--kubelet-insecure-tls=true'
            - '--authorization-always-allow-paths=/livez,/readyz'
            - '--metric-resolution=15s'
          image: >-
            swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            initialDelaySeconds: 20
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      tolerations:
        - effect: ''
          key: node-role.kubernetes.io/master
          operator: Exists
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
Startup looks like this:
# Start metrics-server
kubectl apply -f metrics-server.yaml
Server memory, CPU, and other resource metrics are now visible.
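Once the metrics API is available, the same data can also be pulled from the command line; a quick check:
# Per-node CPU/memory usage served by metrics-server
kubectl top nodes
# Per-pod usage across all namespaces
kubectl top pods -A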
10. NodePort port range
By default, NodePort Services may only use ports 30000-32767. To expose other ports outside the cluster, the node port range has to be changed; here it is widened to 1-65535.
vi /etc/kubernetes/manifests/kube-apiserver.yaml
Then add the following flag under the command section:
- --service-node-port-range=1-65535
Save with :wq. kube-apiserver then restarts automatically; this takes a while.
Wait until kubectl get all -n kube-system shows every item in the Running state.
(The web UI will still display 30000-32767; ignore that, a port in the new range can still be saved successfully.)
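To double-check that the restarted apiserver picked up the flag, a quick inspection sketch (the component=kube-apiserver label is what kubeadm puts on the static pod):
# The flag should appear in the pod's command line
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-node-port-range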
11. Remove the taint from the master
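To let workloads schedule onto the master node, remove the default NoSchedule taint that kubeadm sets; a minimal sketch using the standard v1.22 taint key:
# The trailing "-" removes the taint; --all applies it to every node that carries it
kubectl taint nodes --all node-role.kubernetes.io/master-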
12. NodePort not accessible
Running setenforce 0
to disable SELinux is required so that containers can access the host filesystem; pod networking, for example, needs host filesystem access. SELinux has to stay disabled until kubelet gains SELinux support.
iptables --flush
and iptables -t nat --flush
clear the iptables rules.
setenforce 0
iptables --flush
iptables -t nat --flush
service docker restart
iptables -P FORWARD ACCEPT
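Note that setenforce 0 only lasts until the next reboot; to persist the SELinux change, the config file has to be edited as well (a sketch, assuming the default SELINUX=enforcing line):
# Switch SELinux to permissive permanently (takes effect on subsequent reboots)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config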
13. scheduler shows Unhealthy
After deploying the master node, running kubectl get cs to check component status reports the following error.
Cause analysis
This happens because kube-scheduler.yaml under /etc/kubernetes/manifests/ sets the default port to 0. The fix is to comment out the corresponding port flag, as follows:
vi /etc/kubernetes/manifests/kube-scheduler.yaml
# Comment out the line "- --port=0", then save.
After saving, nothing else needs to be done; just wait for kubelet to finish re-detecting the component.
Likewise, if controller-manager also shows Unhealthy, fix it the same way: edit kube-controller-manager.yaml under /etc/kubernetes/manifests/ and comment out its port flag.
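After kubelet has reloaded the static pods, the components can be re-checked; the healthy output should look roughly like this:
kubectl get cs
# NAME                 STATUS    MESSAGE   ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health":"true"}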
14. Install ingress-nginx
Why do we need Ingress?
- A Service can expose a port outside the cluster via NodePort, but that performs poorly and is not secure
- It lacks a unified Layer 7 entry point that can do load balancing, rate limiting, and so on
- Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside it; traffic routing is controlled by rules defined on the Ingress resource
- We use Ingress as the unified entry point for the whole cluster and configure Ingress rules that route to the corresponding Services
ingress-nginx
This is built by the Kubernetes project itself and adapted for nginx. New features land in it promptly, its performance is high, and it is widely adopted.
Online install: via kubectl apply
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
Offline install:
Download the deploy.yaml file and rename it to ingress-nginx.yaml.
Applying ingress-nginx.yaml pulls two images; on a server with no internet access, first download the images on a machine that can reach the internet.
cat ingress-nginx.yaml | grep image: | awk '{print $2}'
k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
# Pull all images
cat ingress-nginx.yaml \
| grep image: \
| awk '{print "docker pull " $2}' \
| sh
# You can also pull them one by one
# Save the images as tarballs in the current directory
docker save -o ingress-nginx-controller-v1.1.0.tar k8s.gcr.io/ingress-nginx/controller:v1.1.0
docker save -o ingress-kube-webhook-certgen-v1.1.1.tar k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
# Load them into the Docker environment on the offline server
docker load -i ingress-nginx-controller-v1.1.0.tar
docker load -i ingress-kube-webhook-certgen-v1.1.1.tar
# Install ingress-nginx
kubectl apply -f ingress-nginx.yaml
# To uninstall ingress-nginx
kubectl delete -f ingress-nginx.yaml
Note: the image names inside ingress-nginx.yaml need to be changed.
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
#Change to the following (this image appears in two places):
image: ingress/kube-webhook-certgen:v1.1.1
image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
#Change to the following:
image: ingress/ingress-nginx-controller:v1.1.0
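After docker load, the images on the offline host still carry their original k8s.gcr.io names, so they must be retagged to match the names written into the manifest; a sketch, assuming the tags from the save step above:
# Retag the loaded images so the renamed manifest can find them locally
docker tag k8s.gcr.io/ingress-nginx/controller:v1.1.0 ingress/ingress-nginx-controller:v1.1.0
docker tag k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 ingress/kube-webhook-certgen:v1.1.1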
Installed successfully:
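A quick way to confirm is to watch the pods in the namespace the manifest creates:
# The controller pod should be Running; the admission jobs should show Completed
kubectl get pods -n ingress-nginx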
- Ingress configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    k8s.kuboard.cn/layer: svc
    k8s.kuboard.cn/name: ureport
  name: ureport
  namespace: jxbp
  resourceVersion: '1212113'
spec:
  ingressClassName: nginx
  rules:
    - host: www.jxbp.com
      http:
        paths:
          - backend:
              service:
                name: ureport
                port:
                  number: 7302
            path: /ureport
            pathType: Prefix
Test (the Host header must match the host defined in the Ingress rule above):
curl -H "Host: www.jxbp.com" clusterIp
curl -H "Host: www.jxbp.com" 10.96.45.119/ureport