Flannel: vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument

Created on 7 Mar 2018  ·  6 comments  ·  Source: coreos/flannel

Docker: 1.12.6
RHEL: 7.3
Linux k8s-master 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Kubernetes 1.9.3
quay.io/calico/node:v2.6.2
quay.io/calico/cni:v1.11.0
quay.io/coreos/flannel:v0.9.1

Azure Cloud with vnet address space: 10.244.0.0/16

```
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

The cluster was initialized with

```bash
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=cri
# applied canal networking, which runs flannel
kubectl scale deployment kube-dns -n kube-system --replicas=2
# this attempted to launch kube-dns on the agent node.
# the kube-dns container on the agent node never runs because it is unable to
# communicate with the master node to determine the DNS configuration.
```
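
To confirm where the scaled kube-dns replica landed and watch it fail, a couple of standard kubectl checks can be used (the pod name is taken from the log below; nothing else is assumed):

```bash
# list kube-dns pods together with the node each one was scheduled on
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# follow the kubedns container log on the stuck replica
kubectl logs -n kube-system kube-dns-6f4fd4bdf-dsjtq -c kubedns -f
```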
```bash
[root@k8s-master v2]# kubectl logs kube-dns-6f4fd4bdf-dsjtq -n kube-system -c kubedns
I0307 21:34:17.804073       1 dns.go:48] version: 1.14.6-3-gc36cb11
I0307 21:34:17.805197       1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0307 21:34:17.805254       1 server.go:112] FLAG: --alsologtostderr="false"
I0307 21:34:17.805264       1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0307 21:34:17.805271       1 server.go:112] FLAG: --config-map=""
I0307 21:34:17.805277       1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0307 21:34:17.805283       1 server.go:112] FLAG: --config-period="10s"
I0307 21:34:17.805290       1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0307 21:34:17.805296       1 server.go:112] FLAG: --dns-port="10053"
I0307 21:34:17.805303       1 server.go:112] FLAG: --domain="cluster.local."
I0307 21:34:17.805311       1 server.go:112] FLAG: --federations=""
I0307 21:34:17.805318       1 server.go:112] FLAG: --healthz-port="8081"
I0307 21:34:17.805324       1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0307 21:34:17.805330       1 server.go:112] FLAG: --kube-master-url=""
I0307 21:34:17.805336       1 server.go:112] FLAG: --kubecfg-file=""
I0307 21:34:17.805342       1 server.go:112] FLAG: --log-backtrace-at=":0"
I0307 21:34:17.805350       1 server.go:112] FLAG: --log-dir=""
I0307 21:34:17.805356       1 server.go:112] FLAG: --log-flush-frequency="5s"
I0307 21:34:17.805362       1 server.go:112] FLAG: --logtostderr="true"
I0307 21:34:17.805368       1 server.go:112] FLAG: --nameservers=""
I0307 21:34:17.805374       1 server.go:112] FLAG: --stderrthreshold="2"
I0307 21:34:17.805390       1 server.go:112] FLAG: --v="2"
I0307 21:34:17.805396       1 server.go:112] FLAG: --version="false"
I0307 21:34:17.805415       1 server.go:112] FLAG: --vmodule=""
I0307 21:34:17.805466       1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0307 21:34:17.805656       1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0307 21:34:17.805677       1 dns.go:146] Starting endpointsController
I0307 21:34:17.805683       1 dns.go:149] Starting serviceController
I0307 21:34:17.805805       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0307 21:34:17.805826       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0307 21:34:18.306107       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0307 21:34:18.806137       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
[... ~57 more identical "Waiting for services and endpoints to be initialized from apiserver..." lines, logged every 500 ms until 21:34:47, elided ...]
I0307 21:34:47.806071       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0307 21:34:47.806907       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0307 21:34:47.807363       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
```






```bash
[root@k8s-master v2]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_DS2_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=usgovvirginia
                    failure-domain.beta.kubernetes.io/zone=1
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"92:b2:1f:03:ff:99"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=10.244.0.100
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:  Wed, 07 Mar 2018 19:25:44 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 07 Mar 2018 21:31:45 +0000   Wed, 07 Mar 2018 19:25:39 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 07 Mar 2018 21:31:45 +0000   Wed, 07 Mar 2018 19:25:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 07 Mar 2018 21:31:45 +0000   Wed, 07 Mar 2018 19:25:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Wed, 07 Mar 2018 21:31:45 +0000   Wed, 07 Mar 2018 19:26:55 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.244.0.100
  Hostname:    k8s-master
Capacity:
 cpu:     2
 memory:  7125792Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  7023392Ki
 pods:    110
System Info:
 Machine ID:                 aa4f0681ccb6435784669b356fa73d9c
 System UUID:                2E21AA4F-77BB-F640-990D-12267E1262C0
 Boot ID:                    12d0a064-2b59-411d-8d64-d9c2a61472f0
 Kernel Version:             3.10.0-693.21.1.el7.x86_64
 OS Image:                   Red Hat Enterprise Linux Server 7.4 (Maipo)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.9.3
 Kube-Proxy Version:         v1.9.3
PodCIDR:                     10.244.0.0/24
ExternalID:                  /subscriptions/28865b6d-f25c-4bba-a4f1-a16bfa782571/resourceGroups/kubernetes/providers/Microsoft.Compute/virtualMachines/k8s-master
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                  ------------  ----------  ---------------  -------------
  kube-system                canal-jmgzn                           250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                etcd-k8s-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-k8s-master             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-k8s-master    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-6f4fd4bdf-ch98b              260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
  kube-system                kube-proxy-x6j8p                      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-k8s-master             100m (5%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  1060m (53%)   0 (0%)      110Mi (1%)       170Mi (2%)
Events:         <none>
```







I have a very basic configuration: one master node and one agent node. DNS queries are not working on the agent node; kube-dns is running on the master node. Master node IP: 10.244.0.100, agent node IP: 10.244.0.4.

I am trying to figure out why I cannot communicate with 10.96.0.10 (kube-dns), which should be routed to the master node (where kube-dns is running).
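
One way to narrow this down is to test the service VIP and in-pod DNS separately; a rough sketch from the agent node (the busybox image tag is an assumption, any image with nslookup would do):

```bash
# from the agent node: is TCP 53 reachable via the kube-dns service VIP?
nc -z -v -w 3 10.96.0.10 53

# a direct DNS query against the service VIP
nslookup kubernetes.default.svc.cluster.local 10.96.0.10

# run a throwaway pod to test DNS from inside the pod network
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local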

I've been looking at log files and enabling level 10 verbosity for the past several hours.  What does this error message mean?

vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument

I am unable to get pods that require kube-dns to run. They fail with a DNS error while trying to look up kubernetes.default.svc.cluster.local. If I try to scale kube-dns so that it launches on the non-master node, kube-dns fails to start on that node due to a DNS lookup issue.

I am unable to get pod-to-kube-dns or node-to-kube-dns communication working. How can I debug this?
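
For the vxlanRoute error above, comparing flannel's VXLAN state on both nodes with the standard iproute2 tools usually shows which piece is missing or conflicting; a sketch, with no flannel-specific tooling assumed:

```bash
# does the flannel.1 VXLAN device exist, and with which VNI/local address?
ip -d link show flannel.1

# the routes flannel should have installed for the other node's pod subnet
ip route | grep flannel.1

# "invalid argument" from a route add often means the new route overlaps an
# address or route that already exists on another interface; look for overlap
ip route | grep 10.244

# VXLAN forwarding entries (remote VTEP MAC -> remote node IP)
bridge fdb show dev flannel.1
```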

These are RHEL 7.3 nodes with: (output missing)

Master Node: (output missing)

Slave Node: (output missing)

Interestingly enough, I can do things like this: (output missing)

Slave Node iptables: (output missing)



Any way to use tcpdump to figure out this issue?
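
One tcpdump approach, sketched here, is to watch the encapsulated VXLAN traffic on the underlay and the decapsulated traffic on flannel.1 side by side (flannel's VXLAN backend defaults to UDP port 8472 when Port=0, as in the config above):

```bash
# on the receiving node: is encapsulated VXLAN traffic arriving at all?
tcpdump -ni eth0 udp port 8472

# and is anything coming out of the tunnel after decapsulation?
tcpdump -ni flannel.1

# watch just DNS traffic headed for the kube-dns pod
tcpdump -ni flannel.1 port 53
```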

What does this error mean? vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument

Master is running at 10.244.0.100 and agent node is running at 10.244.0.4.

--master (output missing)

--agent (output missing)



```bash
I0307 20:20:33.291138       1 round_trippers.go:426]     Content-Type: application/json
I0307 20:20:33.291143       1 round_trippers.go:426]     Date: Wed, 07 Mar 2018 20:20:33 GMT
I0307 20:20:34.283991       1 kube.go:137] Node controller sync successful
I0307 20:20:34.284022       1 main.go:234] Created subnet manager: Kubernetes Subnet Manager - k8s-master
I0307 20:20:34.284031       1 main.go:237] Installing signal handlers
I0307 20:20:34.284104       1 main.go:352] Found network config - Backend type: vxlan
I0307 20:20:34.284160       1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0307 20:20:34.284266       1 device.go:68] VXLAN device already exists
I0307 20:20:34.284443       1 device.go:76] Returning existing device
I0307 20:20:34.284912       1 main.go:299] Wrote subnet file to /run/flannel/subnet.env
I0307 20:20:34.284921       1 main.go:303] Running backend.
I0307 20:20:34.284931       1 main.go:321] Waiting for all goroutines to exit
I0307 20:20:34.284967       1 vxlan_network.go:56] watching for new subnet leases
I0307 20:20:34.285029       1 vxlan_network.go:138] adding subnet: 10.244.3.0/24 PublicIP: 10.244.0.4 VtepMAC: 8a:50:80:d4:48:ec
I0307 20:20:34.285042       1 device.go:179] calling AddARP: 10.244.3.0, 8a:50:80:d4:48:ec
I0307 20:20:34.285122       1 device.go:156] calling AddFDB: 10.244.0.4, 8a:50:80:d4:48:ec
I0307 20:25:33.284125       1 reflector.go:276] github.com/coreos/flannel/subnet/kube/kube.go:284: forcing resync
I0307 20:28:34.291301       1 reflector.go:405] github.com/coreos/flannel/subnet/kube/kube.go:284: Watch close - *v1.Node total 94 items received
```

Most helpful comment

@slecrenski Can you please share what you did to resolve this? I am running into the same issue.

All 6 comments

I do not see the flannel.1 link on the agent node. Is that the problem? Why is this link not being created? Is it due to vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument? Also, if so, why can I still communicate with kubernetes.default via 10.96.0.1 on 443?

If so, what is causing it?

Any help is appreciated.

So, if anyone is interested or has this problem: I was able to get past it by deleting the k8s-master node with kubectl and recreating it. That allocated a different node CIDR subnet, 10.244.1.0/24 instead of 10.244.0.0/24, which appears to have been the conflict.
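
For anyone wanting to reproduce that workaround, roughly the following sequence; it assumes the kubelet on the node re-registers itself after the node object is deleted, and that the controller-manager then hands out the next free podCIDR:

```bash
# remove the node object (its podCIDR allocation goes away with it)
kubectl delete node k8s-master

# on k8s-master itself: restart the kubelet so it re-registers the node
systemctl restart kubelet

# confirm the node came back with a different, non-conflicting podCIDR
kubectl get node k8s-master -o jsonpath='{.spec.podCIDR}'
```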

All of the links were created, but I still have an issue with the service interface.

```bash
[root@k8s-agent2 ~]# nslookup kubernetes.default.svc.cluster.local 10.244.1.2
Server:     10.244.1.2
Address:    10.244.1.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

[root@k8s-agent2 ~]# nslookup kubernetes.default.svc.cluster.local 10.96.0.10
;; connection timed out; trying next origin
;; connection timed out; no servers could be reached

[root@k8s-master ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   2d

[root@k8s-master ~]# kubectl get ep -n kube-system
NAME                      ENDPOINTS                     AGE
kube-controller-manager   <none>                        2d
kube-dns                  10.244.1.2:53,10.244.1.2:53   2d
kube-scheduler            <none>

[root@k8s-agent2 ~]# iptables-save | grep kube-dns
-A KUBE-SEP-BWHGELGX6BITPZVO -s 10.244.1.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-BWHGELGX6BITPZVO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.1.2:53
-A KUBE-SEP-Z6M7ZHWCTBNMPLD7 -s 10.244.1.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-Z6M7ZHWCTBNMPLD7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.1.2:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-BWHGELGX6BITPZVO
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-Z6M7ZHWCTBNMPLD7

[root@k8s-agent2 ~]# ip route
default via 10.244.0.1 dev eth0 proto static metric 100 
10.244.0.0/16 dev eth0 proto kernel scope link src 10.244.0.6 metric 100 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.2 dev cali4b8ffe82a2b scope link 
10.244.2.4 dev cali10256f09271 scope link 
10.244.2.5 dev cali3ac1a873578 scope link 
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink 
168.63.129.16 via 10.244.0.1 dev eth0 proto dhcp metric 100 
169.254.169.254 via 10.244.0.1 dev eth0 proto dhcp metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
```

Does anyone see anything wrong with my rules here?
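
One way to answer that is to watch the packet counters on the nat chains while generating a lookup; a sketch using the chain names from the iptables-save output above, run on the agent node:

```bash
# packet/byte counters for the kube-dns service and endpoint chains
iptables -t nat -nvL KUBE-SERVICES | grep 10.96.0.10
iptables -t nat -nvL KUBE-SVC-TCOU7JCQXEZGVUNU
iptables -t nat -nvL KUBE-SEP-Z6M7ZHWCTBNMPLD7

# in another shell, generate traffic, then re-check whether counters moved
nslookup kubernetes.default.svc.cluster.local 10.96.0.10
```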

I figured it out.

@slecrenski Can you please share what you did to resolve this? I am running into the same issue.

@slecrenski Yes, I am hitting the same problem. Could you please share how you resolved it?

In my case, there was a tunl interface on the node holding the IP address listed in the error log. So I manually removed the IP address from the tunl interface and restarted the flannel pod.
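
A rough sketch of that fix; the tunl0 interface name and the exact address are assumptions here, so substitute whatever your own error log shows:

```bash
# find the interface holding the address from the error message
ip addr show tunl0

# drop the conflicting address (substitute the address from your error log)
ip addr del 10.244.0.0/32 dev tunl0

# find this node's canal/flannel pod and delete it so it restarts
kubectl -n kube-system get pods -o wide | grep canal
kubectl -n kube-system delete pod canal-jmgzn   # pod name from the listing above
```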
