Notes on Deploying the Ansible AWX (awx-operator) Platform with Helm on K8s


Preface

  • Notes on deploying AWX on K8s via Helm, shared with fellow readers
  • The post covers the deployment process and the troubleshooting along the way
  • How to read this post:

    • You should be familiar with K8s
    • A K8s + Helm environment must already be provisioned
    • Unrestricted internet access is required (several images are hosted outside China)
  • Corrections for anything I have misunderstood are welcome

Well, hoping the pandemic ends soon ^_^


Some Background

A quick introduction to AWX: AWX provides a web-based user interface, a REST API, and a task engine built on top of Ansible. It is one of the upstream projects of the Red Hat Ansible Automation Platform, i.e. the open-source counterpart of Red Hat's subscription product, Ansible Tower.

On physical machines there are several architectures: standalone, standalone with a remote database, and a highly available cluster. Here we use AWX's Kubernetes-based deployment, awx-operator, installed via Helm for convenience. The default configuration is the standalone one, i.e. AWX and PostgreSQL run on the same node, which requires at least 4 GB of memory and at least 20 GB of storage.

To learn more about AWX, see the project repository: https://github.com/ansible/awx

For the subscription product, see Ansible Tower: https://docs.ansible.com/ansible-tower/index.html

To install AWX, see the installation guide.

About awx-operator: an Ansible AWX Operator for Kubernetes, built with the Operator SDK and Ansible. An Operator can be understood, simply, as the concrete implementation behind a CustomResourceDefinition, describing how AWX is deployed. Below are the custom resource objects generated after AWX is deployed:

┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get awxs,awxrestores,awxbackups
NAME                           AGE
awx.awx.ansible.com/awx-demo   14h
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl describe awx awx-demo
Name:         awx-demo
Namespace:    awx
Labels:       app.kubernetes.io/component=awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=awx-demo
              app.kubernetes.io/operator-version=0.30.0
              app.kubernetes.io/part-of=awx-demo
Annotations:  <none>
API Version:  awx.ansible.com/v1beta1
Kind:         AWX
Metadata:
  Creation Timestamp:  2022-10-15T02:49:58Z
  Generation:          1
  Managed Fields:
    API Version:  awx.ansible.com/v1beta1
    .........................
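An object like the awx-demo resource above is declared with a short manifest. The sketch below is only illustrative; the spec field is an assumption based on the awx-operator documentation, not taken from this deployment:

```yaml
# Minimal AWX custom resource sketch (field names per awx-operator docs;
# verify against your operator version). The operator reconciles this object
# into the Deployment, StatefulSet, Services, and Secrets shown later.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: awx
spec:
  service_type: nodeport   # expose the web UI as a NodePort Service
```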

About Helm: it can be loosely compared to roles in Ansible, or to package managers such as yum, maven, and npm. Helm is used to define, install, and upgrade complex applications on Kubernetes. It describes an application as a Chart, which makes complex software easy to create, version, share, and publish.

Requirements

A pre-provisioned K8s cluster is needed; version 1.22 is used here:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl  get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   301d   v1.22.2
vms82.liruilongs.github.io   Ready    <none>                 301d   v1.22.2
vms83.liruilongs.github.io   Ready    <none>                 301d   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

Helm must already be installed:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

Node information:

┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$hostnamectl
   Static hostname: vms81.liruilongs.github.io
         Icon name: computer-vm
           Chassis: vm
        Machine ID: a5d2de32a7d4411d9c12cd390b672d32
           Boot ID: 1fd2c0810f6d4058a224d1ff966c0e09
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.76.1.el7.x86_64
      Architecture: x86-64
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$

Helm Deployment

Add the awx-operator Helm repository:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
        failed to fetch https://apphub.aliyuncs.com/index.yaml : 504 Gateway Timeout
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Search for the awx-operator chart:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm search repo awx-operator
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
awx-operator/awx-operator       0.30.0          0.30.0          A Helm chart for the AWX Operator

To install with custom values: helm install my-awx-operator awx-operator/awx-operator -n awx --create-namespace -f myvalues.yaml

For a customized install, enable the relevant switches in myvalues.yaml; HTTPS, an external PostgreSQL database, a LoadBalancer, LDAP authentication, and more can be configured there. A template can be obtained by pulling the chart package and starting from the values.yaml inside it.
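As an illustrative sketch only — the top-level keys below are assumptions based on the chart's values.yaml and may differ between chart versions, and the secret name is hypothetical:

```yaml
# myvalues.yaml sketch -- verify every key against the values.yaml shipped
# with your chart version before using it.
AWX:
  enabled: true            # have the chart also create an AWX instance
  name: awx-demo
  spec:                    # passed through as the AWX custom resource spec
    service_type: nodeport
    # hypothetical Secret holding external PostgreSQL connection details
    postgres_configuration_secret: my-external-pg-secret
```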

Here we install with the default configuration, so no values file needs to be specified:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Mon Oct 10 16:29:24 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$

OK, the installation is done. However, many of the images have to be pulled from abroad, so some fixing up is needed. For convenience, switch the current namespace first:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl config set-context  $(kubectl config current-context) --namespace=awx
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$

Check the pod status:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl  get pod  -o wide
NAME                                               READY   STATUS         RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
awx-operator-controller-manager-79ff9599d8-mksmc   1/2     ErrImagePull   0          13m   10.244.171.167   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl  get pod
NAME                                               READY   STATUS             RESTARTS   AGE
awx-operator-controller-manager-79ff9599d8-mksmc   1/2     ImagePullBackOff   0          13m

The image pull failed; let's resolve the error:

┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl describe pod awx-operator-controller-manager-79ff9599d8-mksmc | grep -i event -A 30
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  14m                   default-scheduler  Successfully assigned awx/awx-operator-controller-manager-79ff9599d8-mksmc to vms82.liruilongs.github.io
  Normal   Pulling    14m                   kubelet            Pulling image "quay.io/ansible/awx-operator:0.30.0"
  Normal   Started    13m                   kubelet            Started container awx-manager
  Normal   Pulled     13m                   kubelet            Successfully pulled image "quay.io/ansible/awx-operator:0.30.0" in 20.52788571s
  Normal   Created    13m                   kubelet            Created container awx-manager
  Warning  Failed     13m (x3 over 14m)     kubelet            Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     13m (x3 over 14m)     kubelet            Error: ErrImagePull
  Warning  Failed     12m (x5 over 13m)     kubelet            Error: ImagePullBackOff
  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Warning  Failed     11m                   kubelet            Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": dial tcp 74.125.203.82:443: i/o timeout
  Normal   BackOff    4m23s (x35 over 13m)  kubelet            Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$

Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"

This image is hosted on gcr.io and needs unrestricted internet access. Download it elsewhere and import it locally; with a Google account it can be exported from Google Cloud.

Download steps:
Click "Run in Cloud Shell", then export the image there
Download the exported image

Upload it to the VM:

PS C:\Users\山河已无恙\Downloads> scp .\kube-rbac-proxy.tar root@192.168.26.81:~
root@192.168.26.81's password:
kube-rbac-proxy.tar                                                                                                                           100%   58MB 108.7MB/s   00:00
PS C:\Users\山河已无恙\Downloads>

Import the image on each node:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a 'dest=/root/ src=../kube-rbac-proxy.tar'
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker load -i /root/kube-rbac-proxy.tar"

OK, this pod is healthy now:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl  get pods -owide
NAME                                               READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
awx-operator-controller-manager-79ff9599d8-mksmc   2/2     Running   0          19h   10.244.171.167   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Check the events to confirm:

Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  Failed   41m (x187 over 19h)     kubelet  Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Pulling  36m (x214 over 19h)     kubelet  Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Normal   BackOff  6m31s (x4861 over 19h)  kubelet  Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Normal   Pulled   88s

Other resources have not been created yet (PostgreSQL and friends are still missing). Check the awx-manager container logs in the operator pod to investigate:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl logs awx-operator-controller-manager-79ff9599d8-mksmc -c awx-manager

The playbook run fails with: unable to retrieve the complete list of server APIs

--------------------------- Ansible Task StdOut -------------------------------

TASK [Verify imagePullSecrets
] *************************************************
task path: /opt/ansible/playbooks/awx.yml: 10

-------------------------------------------------------------------------------
I1015 11: 09: 32.772623       8 request.go: 601
] Waited for 1.048239742s due to client-side throttling, not priority and fairness, request: GET:https: //10.96.0.1:443/apis/autoscaling/v2beta2?timeout=32s
{
"level": "error",
"ts": 1665832173.374363,
"logger": "proxy",
"msg": "Unable to determine if virtual resource",
"gvk": "/v1, Kind=Secret",
"error": "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server (\"Internal Server Error: \\\"/apis/metrics.k8s.io/v1beta1?timeout=32s\\\": the server could not find the requested resource\") has prevented the request from succeeding",
"stacktrace": "github.com/operator-framework/operator-sdk/internal/ansible/proxy.(*cacheResponseHandler).ServeHTTP\n\t/workspace/internal/ansible/proxy/cache_response.go:97\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2916\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1966"
}

A solution to this problem was found in a related GitHub issue.

For the concrete steps, see: https://www.cnblogs.com/liruilong/p/16795064.html

After fixing it, run helm repo update again and redeploy. This step can normally be skipped; my network is unreliable, so I needed it:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
        failed to fetch https://apphub.aliyuncs.com/index.yaml : 504 Gateway Timeout
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Since we already ran install, an upgrade is sufficient:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$helm upgrade my-awx-operator awx-operator/awx-operator -n awx --create-namespace
Release "my-awx-operator" has been upgraded. Happy Helming!
NAME: my-awx-operator
LAST DEPLOYED: Sat Oct 15 21:16:28 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0

Check the logs again to confirm; no errors means we are good:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl logs awx-operator-controller-manager-79ff9599d8-2v5fn -c awx-manager

Check the pod status again:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl  get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             0/1     Pending   0          105s
awx-operator-controller-manager-79ff9599d8-2v5fn   2/2     Running   0          128m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl  get svc
NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
awx-demo-postgres-13                              ClusterIP   None            <none>        5432/TCP   5m48s
awx-operator-controller-manager-metrics-service   ClusterIP   10.107.17.167   <none>        8443/TCP   132m

The PG pod awx-demo-postgres-13-0 is Pending; look at its events:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pods awx-demo-postgres-13-0 | grep -i  -A 10 event
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  23s (x8 over 7m31s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Pending                                                     10m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe  pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  82s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
No resources found

OK, the Pending state is caused by the lack of a default StorageClass (SC).

For stateful applications, a default SC (dynamic volume provisioning) must exist before the StatefulSet is created: the SC dynamically handles PV and PVC creation, producing the PV used for PG's data storage. So we need to create an SC, and before that a provisioner; the provisioner determines which backend storage is used when PVs are created dynamically.

For convenience, local storage is used as the backend here. Generally a PV should be network storage that does not belong to any single node, so NFS is more common in practice. An SC names its provisioner via the provisioner field. Once the StorageClass is created, PVCs that do not specify a class are allocated storage through the default SC.

Provisioner and SC setup: https://github.com/rancher/local-path-provisioner
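The StorageClass that local-path-storage.yaml creates boils down to the manifest below, reconstructed from the `kubectl get sc` output further down, with the default-class annotation (which we patch in separately) folded in:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # marks this SC as the cluster default, so PVCs without an explicit
    # storageClassName are provisioned through it
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod is scheduled
```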

The YAML file could not be downloaded directly, so open it in a browser, copy the content, and apply it. My cluster had no SC at all; if yours already has one, just set it as the default.

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$wget  https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
--2022-10-15 21:45:02--  https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Connection refused.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$vim local-path-storage.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$ [New] 128L, 2932C written
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$mkdir -p /opt/local-path-provisioner
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply  -f local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$

Confirm it was created successfully:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m6s

Set it as the default SC: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             0/1     Pending   0          46m
awx-operator-controller-manager-79ff9599d8-2v5fn   2/2     Running   0          173m

The PVC was created before the default SC existed, so export it to YAML, delete it, and recreate it so that it gets provisioned through the new default SC:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc postgres-13-awx-demo-postgres-13-0  -o yaml > postgres-13-awx-demo-postgres-13-0.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl delete -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim "postgres-13-awx-demo-postgres-13-0" deleted
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0 created

Check the PVC status; it takes a moment. Bound means the claim has been bound:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Pending                                      local-path     3s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
  Type    Reason                 Age   From                                                                                                Message
  ----    ------                 ----  ----                                                                                                -------
  Normal  WaitForPodScheduled    42s   persistentvolume-controller                                                                         waiting for pod awx-demo-postgres-13-0 to be scheduled
  Normal  ExternalProvisioning   41s   persistentvolume-controller                                                                         waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
  Normal  Provisioning           41s   rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8  External provisioner is provisioning volume for claim "awx/postgres-13-awx-demo-postgres-13-0"
  Normal  ProvisioningSucceeded  39s   rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8  Successfully provisioned volume pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Bound    pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            local-path     53s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                    STORAGECLASS   REASON   AGE
pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            Delete           Bound    awx/postgres-13-awx-demo-postgres-13-0   local-path              54s

Check the pods again. The PG pod is now running, but none of the containers of awx-demo-65d9bf775b-hc58x have started; it is stuck on its init container, most likely because an image cannot be pulled.

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods -o wide
NAME                                               READY   STATUS     RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
awx-demo-65d9bf775b-hc58x                          0/4     Init:0/1   0          4m42s   <none>           vms82.liruilongs.github.io   <none>           <none>
awx-demo-postgres-13-0                             1/1     Running    0          68m     10.244.171.180   vms82.liruilongs.github.io   <none>           <none>
awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running    0          7m3s    10.244.171.178   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe  pod awx-demo-65d9bf775b-hc58x | grep -i -A 10 event
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m47s  default-scheduler  Successfully assigned awx/awx-demo-65d9bf775b-hc58x to vms82.liruilongs.github.io
  Normal  Pulling    4m46s  kubelet            Pulling image "quay.io/ansible/awx-ee:latest"

OK, then import the image the same way as before:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$cd /root/ansible/
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a 'dest=/root/ src=../awx-ee.tar'
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker load -i /root/awx-ee.tar"

Check which other images the pod uses:

┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl describe pods awx-demo-65d9bf775b-hc58x | grep -i image:
    Image:         quay.io/ansible/awx-ee:latest
    Image:         docker.io/redis:7
    Image:         quay.io/ansible/awx:21.7.0
    Image:         quay.io/ansible/awx:21.7.0
    Image:         quay.io/ansible/awx-ee:latest
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$

You can also pull the images manually on the worker node to confirm they are all available:

┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull quay.io/ansible/awx:21.7.0
21.7.0: Pulling from ansible/awx
Digest: sha256:bca920f96fc6a77b72c4442088b53a90b22162cfa90503d3dcda4577afee58f8
Status: Image is up to date for quay.io/ansible/awx:21.7.0
quay.io/ansible/awx:21.7.0
┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull docker.io/redis:7
7: Pulling from library/redis
Digest: sha256:c95835a74c37b3a784fb55f7b2c211bd20c650d5e55dae422c3caa9c01eb39fa
Status: Image is up to date for redis:7
docker.io/library/redis:7
┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull quay.io/ansible/awx-ee:latest
latest: Pulling from ansible/awx-ee
Digest: sha256:a300d6522c9e4292c9f19b04e4544289cbcf7926bde4001131582f254d191494
Status: Image is up to date for quay.io/ansible/awx-ee:latest
quay.io/ansible/awx-ee:latest
┌──[root@vms82.liruilongs.github.io]-[~]
└─$

After waiting a while, all the pods become healthy:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-65d9bf775b-hc58x                          4/4     Running   0          79m
awx-demo-postgres-13-0                             1/1     Running   0          143m
awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running   0          81m

Check the SVC and test access:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get svc
NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
awx-demo-postgres-13                              ClusterIP   None             <none>        5432/TCP       143m
awx-demo-service                                  NodePort    10.104.176.210   <none>        80:30066/TCP   79m
awx-operator-controller-manager-metrics-service   ClusterIP   10.108.71.67     <none>        8443/TCP       82m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$curl 192.168.26.82:30066
<!doctype html><html lang="en"><head><script nonce="cw6jhvbF7S5bfKJPsimyabathhaX35F5hIyR7emZNT0=" type="text/javascript">window.....
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Get the admin password:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get secrets
NAME                                          TYPE                                  DATA   AGE
awx-demo-admin-password                       Opaque                                1      146m
awx-demo-app-credentials                      Opaque                                3      82m
awx-demo-broadcast-websocket                  Opaque                                1      146m
awx-demo-postgres-configuration               Opaque                                6      146m
awx-demo-receptor-ca                          kubernetes.io/tls                     2      82m
awx-demo-receptor-work-signing                Opaque                                2      82m
awx-demo-secret-key                           Opaque                                1      146m
awx-demo-token-sc92t                          kubernetes.io/service-account-token   3      82m
awx-operator-controller-manager-token-tpv2m   kubernetes.io/service-account-token   3      84m
default-token-864fk                           kubernetes.io/service-account-token   3      4h32m
redhat-operators-pull-secret                  Opaque                                1      146m
sh.helm.release.v1.my-awx-operator.v1         helm.sh/release.v1                    1      84m
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$echo $(kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode)
tP59YoIWSS6NgCUJYQUG4cXXJIaIc7ci
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$

Access test

The service is published as a NodePort by default, so it can be reached from anywhere in the subnet via any node IP plus the port: http://192.168.26.82:30066/#/login

I did not expect a Chinese-language UI; the internationalization is done well.

With a dashboard tool you can take a quick look at the resources involved.

Some of the resources:

List all namespaced resources from the command line:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl api-resources -o name --verbs=list --namespaced | xargs -n 1 kubectl get --show-kind --ignore-not-found -n awx
NAME                                        DATA   AGE
configmap/awx-demo-awx-configmap            5      116m
configmap/awx-operator                      0      5h7m
configmap/awx-operator-awx-manager-config   1      119m
configmap/kube-root-ca.crt                  1      5h7m
NAME                                                        ENDPOINTS             AGE
endpoints/awx-demo-postgres-13                              10.244.171.180:5432   3h
endpoints/awx-demo-service                                  10.244.171.181:8052   116m
endpoints/awx-operator-controller-manager-metrics-service   10.244.171.178:8443   119m
LAST SEEN   TYPE     REASON    OBJECT                          MESSAGE
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Successfully pulled image "quay.io/ansible/awx-ee:latest" in 1h16m36.915786211s
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container init
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container init
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "docker.io/redis:7" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container redis
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container redis
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-web
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-web
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-task
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-task
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx-ee:latest" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-ee
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-ee
NAME                                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0   Bound    pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            local-path     117m
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/awx-demo-65d9bf775b-hc58x                          4/4     Running   0          116m
pod/awx-demo-postgres-13-0                             1/1     Running   0          3h
pod/awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running   0          119m
NAME                                                 TYPE                                  DATA   AGE
secret/awx-demo-admin-password                       Opaque                                1      3h
secret/awx-demo-app-credentials                      Opaque                                3      116m
secret/awx-demo-broadcast-websocket                  Opaque                                1      3h
secret/awx-demo-postgres-configuration               Opaque                                6      3h
secret/awx-demo-receptor-ca                          kubernetes.io/tls                     2      116m
secret/awx-demo-receptor-work-signing                Opaque                                2      116m
secret/awx-demo-secret-key                           Opaque                                1      3h
secret/awx-demo-token-sc92t                          kubernetes.io/service-account-token   3      116m
secret/awx-operator-controller-manager-token-tpv2m   kubernetes.io/service-account-token   3      119m
secret/default-token-864fk                           kubernetes.io/service-account-token   3      5h7m
secret/redhat-operators-pull-secret                  Opaque                                1      3h
secret/sh.helm.release.v1.my-awx-operator.v1         helm.sh/release.v1                    1      119m
NAME                                             SECRETS   AGE
serviceaccount/awx-demo                          1         116m
serviceaccount/awx-operator-controller-manager   1         119m
serviceaccount/default                           1         5h7m
NAME                                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/awx-demo-postgres-13                              ClusterIP   None             <none>        5432/TCP       3h
service/awx-demo-service                                  NodePort    10.104.176.210   <none>        80:30066/TCP   116m
service/awx-operator-controller-manager-metrics-service   ClusterIP   10.108.71.67     <none>        8443/TCP       119m
NAME                                                      CONTROLLER                              REVISION   AGE
controllerrevision.apps/awx-demo-postgres-13-85958bcbcd   statefulset.apps/awx-demo-postgres-13   1          3h
NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/awx-demo                          1/1     1            1           116m
deployment.apps/awx-operator-controller-manager   1/1     1            1           119m
NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/awx-demo-65d9bf775b                          1         1         1       116m
replicaset.apps/awx-operator-controller-manager-79ff9599d8   1         1         1       119m
NAME                                    READY   AGE
statefulset.apps/awx-demo-postgres-13   1/1     3h
NAME                           AGE
awx.awx.ansible.com/awx-demo   13h
NAME                                     HOLDER                                                                                  AGE
lease.coordination.k8s.io/awx-operator   awx-operator-controller-manager-79ff9599d8-m7t8k_7502aa73-eaad-4b61-868e-4af77edaf856   5d7h
NAME                                                                                   ADDRESSTYPE   PORTS   ENDPOINTS        AGE
endpointslice.discovery.k8s.io/awx-demo-postgres-13-4tc87                              IPv4          5432    10.244.171.180   3h
endpointslice.discovery.k8s.io/awx-demo-service-6gs4d                                  IPv4          8052    10.244.171.181   116m
endpointslice.discovery.k8s.io/awx-operator-controller-manager-metrics-service-7wtml   IPv4          8443    10.244.171.178   119m
LAST SEEN   TYPE     REASON    OBJECT                          MESSAGE
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Successfully pulled image "quay.io/ansible/awx-ee:latest" in 1h16m36.915786211s
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container init
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container init
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "docker.io/redis:7" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container redis
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container redis
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-web
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-web
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-task
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-task
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx-ee:latest" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-ee
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-ee
NAME                                                                             ROLE                                     AGE
rolebinding.rbac.authorization.k8s.io/awx-demo                                   Role/awx-demo                            116m
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding       Role/awx-operator-awx-manager-role       119m
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding   Role/awx-operator-leader-election-role   119m
NAME                                                               CREATED AT
role.rbac.authorization.k8s.io/awx-demo                            2022-10-15T14:19:31Z
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role       2022-10-15T14:17:13Z
role.rbac.authorization.k8s.io/awx-operator-leader-election-role   2022-10-15T14:17:13Z
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
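As the output above shows, the Web login password lives base64-encoded in `secret/awx-demo-admin-password`, and the UI is exposed through the NodePort of `service/awx-demo-service` (port 30066 here). A minimal sketch of retrieving the password — assuming the instance is named `awx-demo` in the `awx` namespace, as in this deployment:

```shell
# On a cluster node, the admin password can be read and decoded like this
# (assumes instance "awx-demo" in namespace "awx", per the output above):
#   kubectl get secret awx-demo-admin-password -n awx \
#     -o jsonpath='{.data.password}' | base64 --decode
#
# The decode step itself is just a base64 round-trip; demonstrated locally:
encoded=$(printf 'MyS3cretPass' | base64)
printf '%s' "$encoded" | base64 --decode
```

After decoding, log in as `admin` at `http://<node-ip>:30066`, the NodePort shown for `awx-demo-service`.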

Well, that's all for this share on installing AWX via Helm. Keep at it, friends ^_^

References

https://blog.csdn.net/m0_51691302/article/details/126288338

https://zenn.dev/asterisk9101/articles/kubernetes-1

https://www.youtube.com/watch?v=AYfqkTbCDAw

https://www.youtube.com/watch?v=gCqCtAEP6lc
