K8S Installation and Nginx Deployment Notes


K8S Deployment Notes

This document records the process of installing CentOS virtual machines with VirtualBox and Vagrant, setting up a K8S cluster inside those VMs, and deploying Nginx on it. It is provided for reference only.

Environment

No.   Name          Purpose
1     Virtual Box   Used to run the virtual machines
2     Vagrant       Used to install CentOS

Virtual Machine Information

Tip: the VMs reach the outside world and the host through NAT; the 192.168.33.x addresses below come from the private_network setting in the Vagrantfiles.

No.   hostname   IP              Configuration
1     master     192.168.33.10   cpu: 2, memory: 2048 MB
2     node1      192.168.33.11   cpu: 1, memory: 1024 MB

Installing the Base Environment

Installing VirtualBox and Vagrant, and using them to create the CentOS VMs, is not covered in detail here since there are plenty of tutorials online. Below are the Vagrant configurations I used for the two CentOS VMs.

master

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.cpus = 2
    vb.memory = "2048"
  end
end

node1

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.33.11"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.memory = "1024"
  end
end
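
Assuming each Vagrantfile is saved in its own directory (the paths ~/vms/master and ~/vms/node1 below are hypothetical), the two VMs can be brought up and logged into roughly like this, run on the host machine:

cd ~/vms/master && vagrant up && vagrant ssh   # boot and log in to master
cd ~/vms/node1  && vagrant up && vagrant ssh   # boot and log in to node1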

Updating the CentOS Hostname

Perform the following step on both master and node1.

[vagrant@vagrant ~]$ su root
Password:
[root@vagrant ~]# hostnamectl set-hostname master
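
The same needs to be done on node1 with its own name, i.e.:

[root@node1 ~]# hostnamectl set-hostname node1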

IP Mapping

Perform the following step on both master and node1.

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.33.10   master
192.168.33.11   node1
[root@master ~]# reboot

K8S Pre-installation Preparation

Perform the following steps on both master and node1.

1. Run the preENV.sh script

[root@master ~]# cat preENV.sh
#!/bin/bash
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a && sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
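
The listing above only shows the script; to actually apply it, run it once on each node, for example:

[root@master ~]# bash preENV.sh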

2. Create /etc/sysctl.d/k8s.conf

[root@master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

3. Run the following commands to make the changes take effect

[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf

4. Prerequisites for enabling ipvs in kube-proxy

[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
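
If kube-proxy is actually going to run in ipvs mode, the ipvs userspace tools are usually installed as well; this is not part of the original notes, but on CentOS it would be:

[root@master ~]# yum install -y ipset ipvsadm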

Installing Docker

Perform the following steps on both master and node1.

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum makecache fast
[root@master ~]# yum install -y docker-ce
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
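
The kubeadm init output later in these notes warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. An optional way to address that warning (not part of the original steps, and best done before kubeadm init so the kubelet is configured with the matching driver) is to set it in /etc/docker/daemon.json:

[root@master ~]# mkdir -p /etc/docker
[root@master ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master ~]# systemctl restart docker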

Installing K8S

Configure the package repository mirror

Perform the following steps on both master and node1.

[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]# yum makecache fast
[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@master ~]# systemctl enable kubelet
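
Note that yum install -y kubelet kubeadm kubectl installs the newest version available in the repository, while the image script and kubeadm init below assume v1.14.1. If the repository has moved past that version, the packages can be pinned to match, roughly:

[root@master ~]# yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1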

Master Node Setup

Pull and re-tag the master node images

[root@master ~]# vim k8sMasterImages.sh
#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})


for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
[root@master ~]# chmod +x k8sMasterImages.sh
[root@master ~]# ./k8sMasterImages.sh
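
Before initializing the master it is worth confirming that all of the re-tagged images are now present locally; a quick check:

[root@master ~]# docker images | grep k8s.gcr.io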

Master Node Initialization

Copy the output below to the host machine and save it; the kubeadm join command at the end will be needed later when node1 joins the cluster.

[root@master ~]# kubeadm init --kubernetes-version=v1.14.1 --apiserver-advertise-address=192.168.33.10 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.33.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.33.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.33.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.502789 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6ftxi9.gov5rsp9syw1fect
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.33.10:6443 --token 6ftxi9.gov5rsp9syw1fect \
    --discovery-token-ca-cert-hash sha256:e474c065e6c7c69c12ec37b24cedb5745f941231c532032921553766789a4a5e

Notes
  • --kubernetes-version=v1.14.1 : specifies the K8S version
  • --apiserver-advertise-address=192.168.33.10 : the address the apiserver advertises, usually the master node's address, i.e. the VM's IP
  • --pod-network-cidr=10.244.0.0/16 : the Pod address range in CIDR notation; the flannel network plugin will be used, and flannel's default network is 10.244.0.0/16, so that value is specified here

Run the following commands

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

If your output matches the above, everything is fine.

Check the pod status

[root@master ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-88hfr          0/1     Pending   0          6m33s   <none>          <none>   <none>           <none>
kube-system   coredns-fb8b8dccf-jmw7m          0/1     Pending   0          6m33s   <none>          <none>   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          5m42s   192.168.33.10   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          5m46s   192.168.33.10   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          5m47s   192.168.33.10   master   <none>           <none>
kube-system   kube-proxy-46wth                 1/1     Running   0          6m32s   192.168.33.10   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          5m40s   192.168.33.10   master   <none>           <none>

At this point you will notice that the coredns pods are not Ready while every other pod is Running. This is expected, because no network plugin has been installed yet; once node1 has joined the cluster and flannel has been installed on the master, they will become Ready. That comes later.

Allow the master node to also act as a worker node and take workloads

[root@master ~]# kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master-
node "master" untainted

node1 Node Setup

In the steps above we already disabled the firewall and SELinux, created the k8s.conf configuration file, enabled the ipvs prerequisites for kube-proxy, and installed Docker, kubelet, kubectl, and kubeadm on node1. Next we pull the images node1 needs and join it to the master, forming a two-node cluster.

Pull the images

[vagrant@node1 ~]$ cat k8sNodeImages.sh
#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy-amd64:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})


for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
[root@node1 ~]# chmod +x k8sNodeImages.sh
[root@node1 ~]# ./k8sNodeImages.sh

Run the join command generated by the master initialization to add node1 to the cluster

[root@node1 ~]# kubeadm join 192.168.33.10:6443 --token 6ftxi9.gov5rsp9syw1fect \
>     --discovery-token-ca-cert-hash sha256:e474c065e6c7c69c12ec37b24cedb5745f941231c532032921553766789a4a5e
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
[root@node1 ~]#

Back on the Master Node

Check the cluster information

Before this point the master node's status may also show as NotReady; this can be ignored, as it changes to Ready once flannel is installed.

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   58m   v1.14.1
node1    NotReady   <none>   13s   v1.14.1

node1, which has just joined the cluster, shows NotReady at first; after a few minutes it becomes Ready. You can monitor this with the following command:

[root@master ~]# watch kubectl get nodes
Every 2.0s: kubectl get node                                                                                                                            Sun May 12 02:48:00 2019

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   12h   v1.14.1
node1    Ready    <none>   11h   v1.14.1

Deploying flannel

It is recommended to wait until node1 is in the Ready state before installing flannel.
The flannel image referenced by the manifest lives on quay.io, which is difficult to reach from mainland China, so it can be pulled from a mirror (quay-mirror.qiniu.com is used here) and then re-tagged to the quay.io/coreos/flannel:v0.11.0-amd64 name that the manifest expects.

Pull the flannel image from the mirror and re-tag it
[root@master ~]# cat processFlannelImage.sh
#!/bin/bash
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
[root@master ~]# chmod +x processFlannelImage.sh
[root@master ~]# ./processFlannelImage.sh
Install flannel
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check pod status
[root@master ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-88hfr            1/1     Running   0          12h    10.244.0.3      master   <none>           <none>
kube-system   coredns-fb8b8dccf-jmw7m            1/1     Running   0          12h    10.244.0.2      master   <none>           <none>
kube-system   etcd-master                        1/1     Running   0          12h    192.168.33.10   master   <none>           <none>
kube-system   kube-apiserver-master              1/1     Running   0          12h    192.168.33.10   master   <none>           <none>
kube-system   kube-controller-manager-master     1/1     Running   0          12h    192.168.33.10   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-5jtfd        1/1     Running   0          11h    192.168.33.11   node1    <none>           <none>
kube-system   kube-flannel-ds-amd64-bgv9j        1/1     Running   0          11h    192.168.33.10   master   <none>           <none>
kube-system   kube-proxy-46wth                   1/1     Running   0          12h    192.168.33.10   master   <none>           <none>
kube-system   kube-proxy-r86jk                   1/1     Running   0          11h    192.168.33.11   node1    <none>           <none>
kube-system   kube-scheduler-master              1/1     Running   0          12h    192.168.33.10   master   <none>           <none>

Now all pods are up and running normally.

Deploying Nginx

Deploy the Nginx pods

[root@master ~]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
[root@master ~]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl get pods
NAME                               READY   STATUS              RESTARTS   AGE
nginx-deployment-6dd86d77d-fb9bp   0/1     ContainerCreating   0          101s
nginx-deployment-6dd86d77d-lzc56   1/1     Running             0          101s
nginx-deployment-6dd86d77d-p7pd8   1/1     Running             0          101s
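
To wait until all three replicas are available instead of polling kubectl get pods by hand, the rollout status command can be used (not part of the original session):

[root@master ~]# kubectl rollout status deployment/nginx-deployment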

Create a Service and expose it externally

[root@master ~]# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
[root@master ~]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@master ~]# kubectl get service/nginx-service
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.99.114.248   <none>        88:31345/TCP   14s

Service notes

To allow the host machine to access the nginx service provided by k8s, the Service type is set to NodePort in the spec section of nginx-service.yaml.

Access it in a browser

The access URL has the form MasterIP:NodePort. The NodePort is the 31345 shown in the 88:31345/TCP column of the kubectl get service/nginx-service output above, so in this case the URL is http://192.168.33.10:31345/.
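
Before opening a browser, the service can also be verified from the command line on the master (or from the host), assuming the NodePort assigned above:

[root@master ~]# curl -I http://192.168.33.10:31345/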


(Figure: k8sNginx — the Nginx page opened in the browser at http://192.168.33.10:31345/)

Summary

  • There is no need to configure SSH between the nodes; some guides list it as a requirement, but it is not necessary.
  • Make sure the environment is prepared (firewall, SELinux, swap, kernel parameters) before deploying.
  • Put the configuration steps into scripts wherever possible.
  • Make sure the parameters passed to kubeadm init on the master are correct and that you understand what they mean.
  • After initializing the master, do not deploy the network components right away; join the worker nodes first, wait until they show Ready, and then deploy the network-related components.