KubeSphere Installation and Deployment, with Metrics Server Installation (Part 2)


IV. Deploy and set a default StorageClass


Deploying the provisioner and marking it as the default class were covered before, so I won't repeat them here to keep this post from getting too long; see my earlier post: kubernetes学习之持久化存储StorageClass(4)_zsk_john的博客-CSDN博客
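For quick reference, marking an existing StorageClass as the cluster default comes down to one annotation. The class name `nfs-client` below is an assumption (it matches the NFS provisioner seen in the pod list later); substitute your own:

```shell
# Mark the StorageClass "nfs-client" as the cluster-wide default
kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Verify: the default class is listed with "(default)" after its name
kubectl get storageclass
```

The `persistence.storageClass` field in the configuration below can then be left empty, because KubeSphere falls back to whichever class carries this annotation.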

V. Start the KubeSphere installation:


This file needs no changes:

[root@master media]# cat kubesphere-installer.yaml 
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.1.1
        imagePullPolicy: "IfNotPresent"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

The configuration file for cluster settings and feature selection:

[root@master media]# cat cluster-configuration.yaml 
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # Keep the default here; a default StorageClass already exists.
  authentication:
    jwtSecret: ""      
  local_registry: ""        # Add your private registry address if it is needed, i.e. the registry used for offline installation if you have an internal one.
  etcd:                    
    monitoring: true       # Changed to "true" to enable etcd monitoring.
    endpointIps: 192.168.217.16  # Change to your own master node IP address.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true         # Changed to "true" to enable Redis.
    openldap: 
      enabled: true         # Changed to "true" to enable OpenLDAP (Lightweight Directory Access Protocol).
    minioVolumeSize: 20Gi # Minio PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false          # Leave this as "false". It controls whether to authenticate to Elasticsearch with a username and password once monitoring is enabled, which is not needed here.
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Changed to "true" to enable alerting.
  auditing:                
    enabled: true         # Changed to "true" to enable auditing.
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Changed to "true" to enable DevOps.
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Changed to "true" to enable cluster events.
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true        # Changed to "true" to enable logging.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Leave this as "false": metrics-server was already installed earlier. If enabled here, the official upstream image would be pulled and the pull would fail.
  monitoring:
    storageClass: ""                 
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      enabled: true # Changed to "true" to enable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # If your network plugin is Calico, change this to "calico". I'm using Flannel here, so the default stays.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Changed to "true" to enable the App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Changed to "true" to enable service mesh (microservice governance).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false   # Left unchanged; this is the edge service and we have no edge devices.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Apply the two files above and wait; the installation takes quite a while, roughly an hour. Output like the following at the end indicates a successful installation:
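Applying the two manifests and following the installer's progress looks like this (file names as used in this post):

```shell
# Apply the CRD/installer manifest first, then the cluster configuration
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

# Tail the installer logs; the completion banner shown below appears here
kubectl logs -n kubesphere-system deploy/ks-installer -f
```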

Start installing servicemesh
**************************************************
Waiting for all tasks to be completed ...
task alerting status is successful  (1/10)
task multicluster status is successful  (2/10)
task network status is successful  (3/10)
task openpitrix status is successful  (4/10)
task auditing status is successful  (5/10)
task logging status is successful  (6/10)
task events status is successful  (7/10)
task devops status is successful  (8/10)
task monitoring status is successful  (9/10)
task servicemesh status is successful  (10/10)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.217.16:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.
#####################################################
https://kubesphere.io             2022-08-31 13:33:27
#####################################################
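The console address printed in the banner can be checked from the command line before opening it in a browser (IP and NodePort as in the banner above; the exact HTTP status may vary, e.g. a redirect to the login page):

```shell
# Confirm the ks-console NodePort is reachable and responding
curl -I http://192.168.217.16:30880
```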

Roughly the following components get installed:

[root@master ~]# k get po -A
NAMESPACE                      NAME                                               READY   STATUS              RESTARTS   AGE
default                        nginx1                                             1/1     Running             4          3d19h
istio-system                   istio-ingressgateway-76df6567c6-xf4k8              0/1     ContainerCreating   0          40m
istio-system                   jaeger-operator-587999bcb9-rx25x                   1/1     Running             0          22m
istio-system                   kiali-operator-855cc4486d-w2gz9                    0/1     ContainerCreating   0          22m
kube-system                    coredns-76648cbfc9-lb75g                           1/1     Running             4          3d16h
kube-system                    kube-flannel-ds-mhkdq                              1/1     Running             11         3d21h
kube-system                    kube-flannel-ds-mlb7l                              1/1     Running             9          3d21h
kube-system                    kube-flannel-ds-sl4qv                              1/1     Running             6          3d21h
kube-system                    metrics-server-7d594964f5-nf6nz                    1/1     Running             2          148m
kube-system                    nfs-client-provisioner-9c9f9bd86-ffc4c             0/1     CrashLoopBackOff    8          107m
kube-system                    snapshot-controller-0                              1/1     Running             0          51m
kubesphere-controls-system     default-http-backend-857d7b6856-mdtnz              1/1     Running             0          49m
kubesphere-controls-system     kubectl-admin-db9fc54f5-cxwqq                      1/1     Running             0          20m
kubesphere-devops-system       ks-jenkins-58fffc7489-h8cnz                        0/1     Init:0/1            0          40m
kubesphere-devops-system       s2ioperator-0                                      1/1     Running             0          44m
kubesphere-logging-system      elasticsearch-logging-data-0                       1/1     Running             0          50m
kubesphere-logging-system      elasticsearch-logging-data-1                       1/1     Running             4          44m
kubesphere-logging-system      elasticsearch-logging-discovery-0                  1/1     Running             1          50m
kubesphere-logging-system      fluentbit-operator-85cbc8c7b6-9dvn4                0/1     PodInitializing     0          49m
kubesphere-logging-system      ks-events-operator-8dbf7fccc-rkzvs                 0/1     ContainerCreating   0          43m
kubesphere-logging-system      kube-auditing-operator-697658f8d-cdh79             1/1     Running             0          45m
kubesphere-logging-system      kube-auditing-webhook-deploy-9484b5ff-8jjdr        1/1     Running             0          29m
kubesphere-logging-system      kube-auditing-webhook-deploy-9484b5ff-t4qpn        1/1     Running             0          29m
kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-jj4f6        0/2     ContainerCreating   0          44m
kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-sks5d        2/2     Running             0          44m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running             0          36m
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running             0          36m
kubesphere-monitoring-system   alertmanager-main-2                                0/2     ContainerCreating   0          36m
kubesphere-monitoring-system   kube-state-metrics-d6645c6b-ttv7g                  3/3     Running             0          39m
kubesphere-monitoring-system   node-exporter-4vz5v                                2/2     Running             0          39m
kubesphere-monitoring-system   node-exporter-fhwwl                                0/2     ContainerCreating   0          39m
kubesphere-monitoring-system   node-exporter-m84q5                                2/2     Running             0          39m
kubesphere-monitoring-system   notification-manager-deployment-674dddcbd9-4xgs5   1/1     Running             0          34m
kubesphere-monitoring-system   notification-manager-deployment-674dddcbd9-9xfx7   1/1     Running             0          34m
kubesphere-monitoring-system   notification-manager-operator-7877c6574f-6nj58     2/2     Running             3          35m
kubesphere-monitoring-system   prometheus-k8s-0                                   0/3     ContainerCreating   0          38m
kubesphere-monitoring-system   prometheus-k8s-1                                   0/3     ContainerCreating   0          38m
kubesphere-monitoring-system   prometheus-operator-7d7684fc68-wkhlr               2/2     Running             0          39m
kubesphere-monitoring-system   thanos-ruler-kubesphere-0                          2/2     Running             0          35m
kubesphere-monitoring-system   thanos-ruler-kubesphere-1                          2/2     Running             0          35m
kubesphere-system              ks-apiserver-5dd69b75b9-qgc2v                      1/1     Running             0          20m
kubesphere-system              ks-console-7494896c94-c4jkj                        1/1     Running             0          48m
kubesphere-system              ks-controller-manager-579b4c7847-xzgtw             1/1     Running             1          20m
kubesphere-system              ks-installer-7568684bbc-jsshg                      1/1     Running             0          54m
kubesphere-system              minio-597cb64f44-6wkdr                             1/1     Running             0          50m
kubesphere-system              openldap-0                                         1/1     Running             1          50m
kubesphere-system              openpitrix-import-job-qmk6r                        0/1     Completed           1          46m
kubesphere-system              redis-5566549765-k8kks                             1/1     Running             0          51m

As you can see, KubeSphere brings in a great many components: OpenLDAP, Prometheus, node-exporter, Elasticsearch, MinIO, and so on. KubeSphere can therefore be regarded as a PaaS platform.
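Since metrics-server was installed separately beforehand (rather than through the installer), a quick sanity check that it is serving metrics, and thus that HPA and the console's resource graphs have data, is:

```shell
# Both commands only return data when metrics-server is healthy
kubectl top nodes
kubectl top pods -n kubesphere-system
```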
