K8s Offline Installation, v1.23.1

Summary: To prepare a Kubernetes (k8s) cluster deployment, the following steps were performed: 1. Stop and disable the firewall (`systemctl stop firewalld && systemctl disable firewalld`), and disable SELinux and swap. 2. Add host entries to `/etc/hosts` and synchronize time with `ntpdate time.windows.com`. 3. Configure the k8s yum repository and install the required tools, e.g. `yum install yum-utils -y`. 4. Install Docker and download the k8s packages. 5. Build the offline bundle: save images with `docker save`, then load them on the target machine with `docker load`.

Disable the firewall
[root@gcv01 ~]# systemctl stop firewalld
[root@gcv01 ~]# systemctl disable firewalld

Disable SELinux and swap
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
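What the sed command above does can be seen on a sample fstab line (sample data only; the real command edits /etc/fstab in place):

```shell
# The pattern matches any line containing "swap" and prefixes it with '#',
# so the swap partition is no longer mounted on reboot.
line='/dev/mapper/centos-swap swap swap defaults 0 0'
echo "$line" | sed -r 's/.*swap.*/#&/'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```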

Add host entries

cat >> /etc/hosts << EOF
192.168.10.1 node1
192.168.10.2 node0
EOF

Synchronize time

yum install ntpdate -y
ntpdate time.windows.com

K8s deployment

Repository mirror

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install tools

[root@gcv01 ~]# yum install yum-utils -y

If the following error is reported, either 1) enable external network access, or 2) switch to a different mirror URL:
http://yum.tbsite.net/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Install Docker
yum install docker-ce -y

Download the k8s packages
yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1

Enable on boot
systemctl enable kubelet

Disable the swap partition
swapoff -a

Build the offline bundle

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
docker pull registry.aliyuncs.com/google_containers/pause:3.6
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6


docker save -o kube-apiserver.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
docker save -o kube-controller-manager.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
docker save -o kube-scheduler.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
docker save -o kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
docker save -o pause.tar registry.aliyuncs.com/google_containers/pause:3.6
docker save -o etcd.tar registry.aliyuncs.com/google_containers/etcd:3.5.1-0
docker save -o coredns.tar registry.aliyuncs.com/google_containers/coredns:v1.8.6


docker load -i kube-apiserver.tar 
docker load -i kube-controller-manager.tar 
docker load -i kube-scheduler.tar 
docker load -i kube-proxy.tar 
docker load -i pause.tar
docker load -i etcd.tar
docker load -i coredns.tar
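The per-image save commands above can also be sketched as a loop; tar file names are derived from the image names. This assumes the images listed earlier have already been pulled on the online machine:

```shell
# Save each image to <name>.tar, stripping the registry path and the tag.
for img in \
    registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1 \
    registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1 \
    registry.aliyuncs.com/google_containers/pause:3.6; do
  name=${img##*/}    # e.g. kube-apiserver:v1.23.1
  name=${name%%:*}   # e.g. kube-apiserver
  docker save -o "${name}.tar" "$img"
done
# On the offline machine, load everything back:
# for f in *.tar; do docker load -i "$f"; done
```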

Initialize the master

kubeadm init --kubernetes-version=1.23.1 \
--apiserver-advertise-address=10.xx.xx.xx \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=SystemVerification \
--ignore-preflight-errors=Swap
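After a successful init, configure kubectl for the current user. These are the steps kubeadm itself prints at the end of `kubeadm init`:

```shell
# Copy the admin kubeconfig into the user's home so kubectl can reach the API server.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```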

Check that the components are healthy

kubectl get componentstatuses

If the installation fails

The cgroup driver used by the kubelet and the one used by Docker may not match.
Change Docker's daemon configuration:

/etc/docker/daemon.json
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
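After writing the config above to /etc/docker/daemon.json, restart Docker and confirm the driver took effect (back up any existing daemon.json first):

```shell
# Write the daemon config shown above.
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: systemd
```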

Reinstall the master
sudo kubeadm reset -f

Check the kubelet status
journalctl -u kubelet

On the master, generate the join command
[root@gcv01 ~]# kubeadm token create --print-join-command
kubeadm join 10.251.95.25:6443 --token jgxiv1.kwjb17a74jj8rln4 --discovery-token-ca-cert-hash sha256:028c9db9273470f0fd28f3ccce8d241539a82818550a319ad31814cc9169ddf9

Run the command above on each node to join it to the cluster. Before a node has joined, `systemctl status kubelet` shows the service as not active and /var/lib/kubelet/config.yaml is empty — that is expected.

[root@gcv05 ~]# kubeadm join 10.251.95.25:6443 --token jgxiv1.kwjb17a74jj8rln4 --discovery-token-ca-cert-hash sha256:028c9db9273470f0fd28f3ccce8d241539a82818550a319ad31814cc9169ddf9
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Label the node

kubectl label node node1 node-role.kubernetes.io/worker=worker

Download the network (CNI) manifest
wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

Install
kubectl apply -f calico.yaml
If the default images cannot be pulled, change every image address in the manifest to a reachable mirror, e.g. quay.io/flysangel/calico/kube-controllers:v3.25.0
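One way to bulk-rewrite the image addresses is a sed over the manifest. The source prefix (docker.io/calico/) depends on the Calico version, so check with grep first, and treat the mirror prefix below as an example to adjust to a registry you can actually pull from:

```shell
# Inspect the image references, rewrite the registry prefix, then confirm.
grep 'image:' calico.yaml
sed -i 's#docker.io/calico/#quay.io/flysangel/calico/#g' calico.yaml
grep 'image:' calico.yaml
```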
Install the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Change the specific image addresses, as in the manifest below:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: quay.io/azimuth/docker.io/kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: quay.io/platform9/dashboard-metrics-scraper:v1.0.8-pmk-2925185
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Port settings

Service type
By default the Dashboard is reachable only from inside the cluster; change the Service type to NodePort to expose it externally.

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2024-06-25T09:17:35Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "15816"
  uid: c58cc193-b05c-4587-8d59-c027906a1e50
spec:
  clusterIP: 10.97.190.52
  clusterIPs:
  - 10.97.190.52
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30050
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Change the type field to NodePort.
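As a non-interactive alternative to kubectl edit, the same change can be sketched with kubectl patch:

```shell
# Switch the dashboard Service to NodePort in one command.
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
```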

Check the assigned port
kubectl get svc -A |grep kubernetes-dashboard

Apply the manifest (`k` here is an alias for kubectl)
[root@gcv01 yaml]# k apply -f dash.yaml

Get the login token

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

eyJhbGciOiJSUzI1NiIsImtpZCI6IkNURG92MzZhaldvMTdjeFc3RC1ENzdPenhnNVJGYzRLTGRVMERpZ0RzY0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWw1bnF6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlYThlY2EzMS0wMGNmLTRhNzMtOWYxZS01ZjRiNDE5ZTljNjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.CTX0-7obNu11jP1ehU3L_18jjaAZnOKfz-_AXfgNKbfZuNrhJpdO_QBZuETu7Q-z6lHb_7FS6RrXDuKFWSBKpvtybieCIX5y3VPFif4_UN-OKGLk54vjD4gdd263sb2--zqIrpo05qx-ZWvZPoyGmugYEk8MiCfbOdOCiHC9AYEhEmOX4NrymSM3AyR5y9OJmTfsYqrlwbaJvD1uHuH3j0aGHnLZUHZ2SNqB9YPtI5umtjdX00_q9Tq0-xMM7Irhes6YOAAECRjCJV1lONOrEBO7KKJ1I5V8fh2ALhMNibki5nhFceS5ugZGCwKLNnDPKnt2VUl9R_SSOGD8g7LMLA
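Note that the token command above assumes an `admin-user` ServiceAccount, which the dashboard manifest above does not create. A minimal manifest to create it and bind it to cluster-admin (names follow the common dashboard access guide):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Apply it with kubectl apply -f before running the token command.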