All-in-One Installation of KubeSphere on Alibaba Cloud

Overview: installing KubeSphere on Alibaba Cloud.

Official site: https://kubesphere.com.cn/

KubeSphere is QingCloud's open-source container platform. It is built on Kubernetes and wraps day-to-day cluster operations in a simple graphical console.

1. Installation

1.1 Installing KubeSphere on an Existing Kubernetes Cluster

Machine specs:

master 4c8g

node1 8c16g

node2 8c16g

The relatively large specs are deliberate: this walkthrough enables the full KubeSphere feature set, which needs a lot of resources.

Official deployment docs: https://kubesphere.com.cn/docs/installing-on-kubernetes/

KubeSphere can be deployed on any of the major public clouds.

1.1.1 Install Kubernetes

To install Kubernetes itself, refer to the earlier article.

After the cluster is up, NFS is needed as well (see the earlier Kubernetes operations article); a minimal server-side sketch follows.
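This is only a sketch, assuming a CentOS-family host and the export path /nfs/data that the provisioner manifest below expects:

# On the machine that will act as the NFS server (the master works for a lab setup)
yum install -y nfs-utils
mkdir -p /nfs/data
# Export /nfs/data read-write to all clients
echo "/nfs/data *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Start the RPC bind and NFS services and enable them on boot
systemctl enable --now rpcbind nfs-server
# Reload and list the exports to confirm
exportfs -r
exportfs

On each worker node only the client package (nfs-utils) is needed.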

Then create a default StorageClass:

## Create a StorageClass (save this manifest as sc.yml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## Whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.4   ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data    ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.4  # change to your own IP
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# Create the StorageClass
kubectl apply -f sc.yml
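Before moving on, it is worth checking that dynamic provisioning actually works. A quick test with a throwaway PVC (the name test-pvc is just an example):

# nfs-storage should be listed with the "(default)" marker
kubectl get sc

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF

# STATUS should switch to Bound within a few seconds
kubectl get pvc test-pvc
kubectl delete pvc test-pvc   # clean up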


1.1.2 Install the KubeSphere Prerequisites

Install metrics-server. Save the manifest below as metrics-server.yml; note that the image points at an Alibaba Cloud mirror rather than the upstream registry.

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100


kubectl apply -f metrics-server.yml
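A quick sanity check that the Metrics API is serving (it can take a minute after the pod becomes Ready):

# The metrics-server pod should be Running in kube-system
kubectl get pods -n kube-system -l k8s-app=metrics-server
# Both commands should print CPU/memory figures instead of an error
kubectl top nodes
kubectl top pods -A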

1.1.3 Install KubeSphere

See the official quick-start guide: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/

wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/cluster-configuration.yaml


Since metrics-server is already installed (the stock configuration pulls its image from the official registry, which can be slow — that is why the manifest above uses an Alibaba Cloud mirror), the cluster configuration file cluster-configuration.yaml has to be edited.

Note: cluster-configuration.yaml is modified for two purposes:

  1. Change the etcd endpoint IP from localhost to the master node's private IP.
  2. Toggle the pluggable components on or off; see https://kubesphere.com.cn/docs/pluggable-components/

The configuration file below enables nearly all of KubeSphere's pluggable components.


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""   # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""      # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""   # Add your private registry address if it is needed.
  # dev_tag: ""        # Add your kubesphere image tag you want to install, by default it's same as ks-install release version.
  etcd:
    monitoring: true          # Changed to true. Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: xx.xx.xx.xx  # Change this IP to the master node IP. etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379                # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort
    # apiserver:        # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false    # Change to true to enable Redis.
      volumeSize: 2Gi   # Redis PVC size.
    openldap:
      enabled: false    # Change to true to enable OpenLDAP.
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi  # Minio PVC size.
    monitoring:
      # type: external  # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090  # Prometheus endpoint to get metrics data.
      GPUMonitoring:    # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
        - resourceName: "nvidia.com/gpu"
          resourceType: "GPU"
          default: true
    es:                 # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      #   replicas: 1       # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7          # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash   # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:             # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true       # Enable alerting. Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:             # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true       # Enable auditing. Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:               # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true       # Enable DevOps. Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:               # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true       # Enable events. Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:              # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true       # Enable logging. Enable or disable the KubeSphere Logging System.
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:       # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false      # Left disabled here because we installed metrics-server manually above. Enable or disable metrics-server.
  monitoring:
    storageClass: ""    # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1       # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1       # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                # GPU monitoring-related plugins installation.
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none   # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy:      # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). Make sure that the CNI network plugin used by the cluster supports NetworkPolicy; Calico, Cilium, Kube-router, Romana and Weave Net all do.
      enabled: true     # Enable network policies. Enable or disable network policies.
    ippool:             # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none        # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology:           # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none        # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix:           # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: false    # Enable or disable the KubeSphere App Store.
  servicemesh:          # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true       # Enable Service Mesh. Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:             # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true       # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:  # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""             # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
# After modifying the configuration file, start the installation
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

# Watch the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# The installer reported a missing etcd monitoring certificate; create the Secret like so
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
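When the installer log ends with the welcome banner, the console is served on NodePort 30880 (the console.port value set above); the documented default account is admin / P@88w0rd, which you are prompted to change on first login:

# All pods across the kubesphere-* namespaces should eventually be Running
kubectl get pods --all-namespaces
# Console: http://<any-node-ip>:30880  (log in as admin / P@88w0rd)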



1.2 Installing KubeSphere on a Single Linux Node (All-in-One)

See the official docs:

https://kubesphere.com.cn/docs/quick-start/all-in-one-on-linux/

1.2.1 Provision a Server

4c8g

1.2.2 Install KubeKey

export KKZONE=cn
# Download KubeKey
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
# Add execute permission
chmod +x kk
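A quick check that the binary was downloaded correctly (the exact output format may differ between releases):

# Print the KubeKey version
./kk version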


1.2.3 Install Kubernetes and KubeSphere

# Specify the Kubernetes and KubeSphere versions
./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.0

Wait for the installation to complete.

Note: the installer may report some missing packages along the way; a plain yum install of each is enough (a sketch follows).
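For reference, the packages listed as dependencies in the official docs can be installed up front to avoid the interruption:

# Dependencies named in the KubeSphere docs (yum on CentOS; use apt-get on Ubuntu)
yum install -y socat conntrack ebtables ipset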

1.2.4 Enable Pluggable Components

An all-in-one installation is minimal by default; every extra capability has to be switched on by hand.

Official docs: https://kubesphere.com.cn/docs/pluggable-components/

Take DevOps as an example; the procedure is sketched below.
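Per the pluggable-components docs, a component is enabled after installation by editing the ks-installer ClusterConfiguration and flipping its enabled flag:

# Open the installer's cluster configuration for editing
kubectl edit clusterconfiguration ks-installer -n kubesphere-system

# In the editor, locate the devops block and set:
#   devops:
#     enabled: true
# Saving the edit triggers ks-installer to reconcile and deploy the component.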


# Watch the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


1.3 Installing KubeSphere on Multiple Linux Nodes

Official docs: https://kubesphere.com.cn/docs/installing-on-linux/introduction/multioverview/

1.3.1 Provision Machines

Master 4c8g ×1

Worker 8c16g ×2

Set the hostname on each of the three machines:

master node1 node2

1.3.2 Install KubeKey on the Master

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
chmod +x kk



1.3.3 Create the Cluster Configuration File

# Creates config-sample.yaml in the current directory
./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.20.4

Then edit the file:

vim config-sample.yaml

spec:
  hosts:  # Set each node's hostname, addresses, user name and password
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master  # hostname of the master node
    master:
    - master  # hostname of the master node
    worker:
    - node1   # hostnames of the two worker nodes
    - node2


1.3.4 Create the Cluster

./kk create cluster -f config-sample.yaml
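Once kk reports success, a quick check that all three nodes joined, plus the same installer-log command as in the single-node setup:

# All three nodes should report STATUS Ready
kubectl get nodes
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f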


1.4 Installing on Alibaba Cloud (ACK/ECS)

The configuration is essentially the same as the Linux installs above. Official docs: https://kubesphere.com.cn/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs/
