AWX Operator cannot create the AWX pods after upgrading Kubernetes from 1.21.3 to 1.22.0
This worked fine before:
Installed awx-operator on 1.21.3 and set up AWX. It ran well.
Upgraded to 1.22.0.
Killed and recreated the AWX deployment.
The awx-postgres pod comes up, but the AWX server pod does not.
Expected: AWX should be in a Running state.
Actual: only awx-postgres comes up.
NAME READY STATUS RESTARTS AGE
awx-itd-postgres-0 1/1 Running 0 8m29s
awx-operator-545497f7d5-k88wr 1/1 Running 1 (34m ago) 55m
nfs-client-provisioner-5c95d8f86-9tm6k 1/1 Running 5 (34m ago) 5d4h
The YAML file I used to create the pods:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-itd
spec:
  service_type: LoadBalancer
  loadbalancer_protocol: http
  loadbalancer_port: 80
  loadbalancer_annotations: |
    metallb.universe.tf/address-pool: bde-172-17
  hostname: awx.bde.lab
  replicas: 2
  projects_persistence: true
  projects_storage_class: managed-nfs-storage
  postgres_storage_class: managed-nfs-storage
  #adminUser: admin
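For reference, the custom resource above is applied like any other manifest. A minimal sketch that saves a trimmed-down copy of the spec and runs a couple of offline sanity checks before applying it (the grep checks are illustrative, not an exhaustive validation; the file name is arbitrary):

```shell
#!/bin/sh
# Sketch: save a trimmed AWX custom resource and sanity-check a few keys offline.
cat > awx-itd.yaml <<'EOF'
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-itd
spec:
  service_type: LoadBalancer
  postgres_storage_class: managed-nfs-storage
EOF

# Illustrative checks: the resource kind and one spec key the operator consumes.
grep -q '^kind: AWX$' awx-itd.yaml && echo "kind ok"
grep -q 'service_type: LoadBalancer' awx-itd.yaml && echo "service_type ok"
# Then, against a cluster with the operator installed:
#   kubectl apply -f awx-itd.yaml
```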
PLAY RECAP *********************************************************************
localhost : ok=29 changed=2 unreachable=0 failed=1 skipped=27 rescued=0 ignored=0
-------------------------------------------------------------------------------
{"level":"error","ts":1628693554.202394,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"awx-controller","request":"default/awx-itd","error":"event runner on failed","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
Observing the same.
After updating microk8s to 1.22 and deploying operator 0.13, the operator container itself was upgraded, but it appears to hit an error while trying to bring up the AWX pods:
"job":"6175742077372812453","name":"awx","namespace":"default","error":"exit status 2","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/ansible/runner.(*runner).Run.func1\n\tsrc/github.com/operator-framework/operator-sdk/pkg/ansible/runner/runner.go:239"}
{"level":"error","ts":1629084951.678016,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"awx-controller","request":"default/awx","error":"event runner on failed","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
The AWX YAML is very basic:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  task_privileged: true
Chatted with some folks on the operator-sdk team.
This appears to be because our operator is built on SDK version 0.19.
cc @Spredzy @rooftopcellist
Confirmed with the operator-sdk team that operators built on 0.x will not work on newer versions of Kubernetes. We have prioritized fixing this in the short term. Any updates will be posted here.
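Some context on why 0.x-based operators break: Kubernetes 1.22 stopped serving a number of beta APIs, among them apiextensions.k8s.io/v1beta1 CustomResourceDefinitions, which older operator-sdk scaffolding still ships. A minimal sketch of an offline check for the deprecated API version in an operator's CRD manifests (the heredoc below is a stand-in for the operator's real CRD file; the file name is hypothetical):

```shell
#!/bin/sh
# Sketch: scan a CRD manifest for the apiextensions.k8s.io/v1beta1 API,
# which Kubernetes 1.22 no longer serves. The sample manifest is a
# stand-in for the operator's actual CRD file.
cat > awx-crd-sample.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: awxs.awx.ansible.com
EOF

if grep -q 'apiextensions.k8s.io/v1beta1' awx-crd-sample.yaml; then
  echo "deprecated CRD API found: not served on Kubernetes 1.22+"
fi
```

On a live cluster, `kubectl api-versions` on 1.22 would confirm that `apiextensions.k8s.io/v1beta1` is no longer listed.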
I can confirm this issue exists on version 1.22.1.
Rolling back to 1.21.3 resolved it 🥳
After adding this PR - https://github.com/ansible/awx-operator/pull/508 - I was able to deploy the awx-operator and the AWX application to an OpenShift 4.9 cluster (k8s v1.22.0).
$ oc version
Client Version: 4.6.8
Server Version: 4.9.0-0.nightly-2021-08-23-224104
Kubernetes Version: v1.22.0-rc.0+5c2f7cd
All containers are running, and I was able to log in from the UI and run jobs.
To use this fix, you currently need to build the awx-operator image from devel, since a release containing the fix has not been cut yet.
Alternatively, you can download https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml and edit it:
...
containers:
  - name: awx-operator
    image: 'quay.io/ansible/awx-operator:devel'
    env:
      - name: WATCH_NAMESPACE
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: OPERATOR_NAME
        value: awx-operator
      - name: ANSIBLE_GATHERING
        value: explicit
      - name: OPERATOR_VERSION
        value: devel
      - name: ANSIBLE_DEBUG_LOGS
        value: 'false'
...
Note: change the image tag and OPERATOR_VERSION to devel, then apply it.
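That edit can also be scripted. A minimal sketch that rewrites the image tag and OPERATOR_VERSION in a local copy of the manifest (the heredoc below is a stand-in for the downloaded awx-operator.yaml, and the sed expressions assume GNU sed):

```shell
#!/bin/sh
# Sketch: point the operator manifest at the devel image and version.
# The heredoc stands in for the downloaded awx-operator.yaml.
cat > awx-operator.yaml <<'EOF'
image: 'quay.io/ansible/awx-operator:0.13.0'
- name: OPERATOR_VERSION
  value: 0.13.0
EOF

# Rewrite the image tag, then the value on the line after OPERATOR_VERSION.
sed -i \
  -e "s|awx-operator:[^']*|awx-operator:devel|" \
  -e '/name: OPERATOR_VERSION/{n;s/value: .*/value: devel/;}' \
  awx-operator.yaml

grep 'devel' awx-operator.yaml
# Then: kubectl apply -f awx-operator.yaml   (requires a cluster)
```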
Thanks!