Kubernetes: network plugin is not ready: cni config uninitialized

Created on 12 Jul 2017  ·  75 Comments  ·  Source: kubernetes/kubernetes

Hello, I want to do a fresh install of kubernetes via kubeadm, but when I start the install I'm stuck on

[apiclient] Created API client, waiting for the control plane to become ready

When I do a journalctl -xe I see :

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

And I don't know why I get this error. I also tried disabling firewalld, but it had no effect.

Environment:

  • Kubernetes version (use kubectl version): v1.7.0
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-514.26.2.el7.x86_64
  • Install tools: Kubeadm
  • Others:
    docker version : Docker version 17.06.0-ce, build 02c1d87
    My RPM version :

kubeadm-1.7.0, kubectl-1.7.0, kubelet-1.7.0, kubernetes-cni-0.5.1

Thanks for your help

area/kubeadm kind/bug sig/network

Most helpful comment

flanneld needs a fix for k8s 1.12.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
it's a known issue: https://github.com/coreos/flannel/issues/1044

All 75 comments

@PLoic There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

_Note: method (1) will trigger a notification to the team. You can find the team list here and label list here_

/area [kubeadm]

@PLoic you get this error because no CNI network has been defined in /etc/cni/net.d and you're apparently using the CNI network plugin. Something has to write a config file to that directory to tell the CNI driver how to configure networking. I'm not sure what/how kubeadm does that though, so I'll leave that to @jbeda or other kubeadm folks.
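To make "a config file to that directory" concrete, here is a minimal, hypothetical bridge-plugin config (the file name, network name, and subnet are illustrative, and a scratch directory stands in for /etc/cni/net.d; a real CNI provider such as flannel or Weave writes its own):

```shell
# Write a minimal example CNI config and sanity-check that it is valid JSON.
# On a real node this file would live in /etc/cni/net.d.
CNI_DIR="${CNI_DIR:-/tmp/cni-demo}"   # scratch stand-in for /etc/cni/net.d
mkdir -p "$CNI_DIR"
cat > "$CNI_DIR/10-mynet.conf" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
EOF
# kubelet stops logging "cni config uninitialized" once a valid file like this exists
python3 -m json.tool "$CNI_DIR/10-mynet.conf" > /dev/null && echo "valid JSON"
```

Once any valid config is present in the directory, kubelet's CNI driver picks it up on its next sync loop.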

xref: #43567

@dcbw Hi, my environment is the same as @PLoic's, but I get this same error.

It seems to work after removing the $KUBELET_NETWORK_ARGS line in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

Removing $KUBELET_NETWORK_ARGS did not work for me.

@PLoic it did not work for me either.

@PLoic at step 3 which pod network did you install? There are various choices, and troubleshooting after that depends on the specific case.

@PLoic also, kubelet logs would be great

try to apply this plugin: kubectl apply --filename https://git.io/weave-kube-1.6
it works for me.

@PLoic @dcbw I installed the flannel plugin on k8s 1.7 and still get this same error. Can you provide a solution?

Jul 14 17:57:20 node2 kubelet: W0714 17:57:20.540849 17504 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 14 17:57:20 node2 kubelet: E0714 17:57:20.541001 17504 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 14 17:57:23 node2 kubelet: I0714 17:57:23.032330 17504 kubelet.go:1820] skipping pod synchronization - [Failed to start ContainerManager systemd version does not support ability to start a slice as transient unit]

Sorry for the delay. I was using Weave; I will try to update Kubernetes to 1.7.1 along with the new version of Weave.

I updated all my components and it seems to work! :)

Is it ok to close this issue @PLoic ?

@cmluciano Yes I think it's ok to close this issue

Removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf works for me.
Thanks @PLoic

Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker "bridge" networking.

This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.
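For reference, the line under discussion looked roughly like this in kubeadm-1.7-era drop-ins (paths vary by version; the scratch path below is purely illustrative, the real file is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

```shell
# Reproduce the KUBELET_NETWORK_ARGS drop-in line as commonly shipped.
# Writing to a scratch copy here rather than the live systemd drop-in.
KUBELET_DROPIN="${KUBELET_DROPIN:-/tmp/10-kubeadm.conf}"
cat > "$KUBELET_DROPIN" <<'EOF'
[Service]
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
EOF
# After editing the real file: systemctl daemon-reload && systemctl restart kubelet
grep -- '--network-plugin=cni' "$KUBELET_DROPIN"
```

Keeping this line and installing a pod network is the intended setup; removing it is what switches kubelet to the runtime's own networking, as described above.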

I am seeing the exact same error with kubeadm, where it is stuck forever at:

[apiclient] Created API client, waiting for the control plane to become ready

In the "journalctl -r -u kubelet" output I see these lines over and over:

Aug 31 16:34:41 k8smaster1 kubelet[8876]: E0831 16:34:41.499982 8876 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 31 16:34:41 k8smaster1 kubelet[8876]: W0831 16:34:41.499746 8876 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d

Version details are:
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl version: Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

OS details are:
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
Kernel: Linux 3.10.0-514.el7.x86_64
Architecture: x86-64
Any help is very much appreciated!

@ashish-billore what CNI provider did you install?

i'm getting Unable to update cni config: No networks found in /etc/cni/net.d with a recent github tip of the master branch - "v1.9.0-alpha.0.690+9aef242a4c1e42-dirty"

on ubuntu 17.04

if i remove this line from 10-kubelet.conf:
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"

the kubelet starts, then i install weave-net as the pod-network plugin, but the kube-system pods never start (they remain Pending).

kube-system   etcd-luboitvbox                      0/1       Pending   0          31m
kube-system   kube-apiserver-luboitvbox            0/1       Pending   0          31m
kube-system   kube-controller-manager-luboitvbox   0/1       Pending   0          31m
kube-system   kube-dns-1848271846-7mw9x            0/3       Pending   0          32m
kube-system   kube-proxy-k89jp                     0/1       Pending   0          32m
kube-system   kube-scheduler-luboitvbox            0/1       Pending   0          31m
kube-system   weave-net-v8888                      0/2       Pending   0          30m

same happens with flannel.

Hello,

For information, I had this issue: Kubernetes introduced RBAC in v1.6; we need to create the corresponding ServiceAccount, RBAC rules, and flannel DaemonSet so that kubelet can communicate with the API server correctly.

You have to run:

$ kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml

I hope it helps.

Hi guys, why was this issue closed? Doesn't look like there was a solution?

I see this when trying to install a k8's cluster with weave CNI plugin.

@vinayvenkat the issue was closed because the OP updated and the problem went away.

Since Kubernetes, and particularly networking, is very complex and diverse, you should not assume that an issue which seems similar is actually the same. Open a new issue and give full details of your specific situation there.

If your issue is with Weave Net you may get a more focused answer at https://github.com/weaveworks/issues/new , or in the Weave community Slack.

I also encountered the same problem, but it's not a fatal error when installing k8s.
The problem may be that your kubelet uses the systemd cgroup driver while Docker uses a different one. Adjust kubelet and Docker to match, then run kubeadm again; it may run well.
Hope this helps.

OS (e.g. from /etc/os-release): CentOS 7

If you see the /etc/cni/net.d directory on the node empty despite the fact that the Pod Network Provider pod is running on it, try to setenforce 0 and delete the Pod Network Provider pod. k8s will restart it and, hopefully, now it will be able to copy its config.

Don't forget to restart the kubelet:
after removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
systemctl enable kubelet && systemctl start kubelet
then re-join the node.
This way works fine for me~

Hi,

If you comment out $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and restart the service/server, or if you kubeadm reset and join again (or kubeadm init to recreate the cluster and join the nodes again),

the pods will be in the Running state, but if you describe the kube-dns pod you will see:

Warning Unhealthy 1m (x4 over 2m) kubelet, master Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused

the complete output as below.

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned kube-dns-6f4fd4bdf-qxmzn to master
Normal SuccessfulMountVolume 10m kubelet, master MountVolume.SetUp succeeded for volume "kube-dns-token-47fpd"
Normal SuccessfulMountVolume 10m kubelet, master MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal Pulling 10m kubelet, master pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
Normal Pulled 10m kubelet, master Successfully pulled image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
Normal Created 10m kubelet, master Created container
Normal Started 10m kubelet, master Started container
Normal Pulling 10m kubelet, master pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
Normal Pulled 10m kubelet, master Successfully pulled image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
Normal Created 10m kubelet, master Created container
Normal Started 10m kubelet, master Started container
Normal Pulling 10m kubelet, master pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
Normal Pulled 10m kubelet, master Successfully pulled image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
Normal Created 10m kubelet, master Created container
Normal Started 10m kubelet, master Started container
Normal SuccessfulMountVolume 2m kubelet, master MountVolume.SetUp succeeded for volume "kube-dns-token-47fpd"
Normal SuccessfulMountVolume 2m kubelet, master MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal SandboxChanged 2m kubelet, master Pod sandbox changed, it will be killed and re-created.
Normal Pulled 2m kubelet, master Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7" already present on machine
Normal Created 2m kubelet, master Created container
Normal Started 2m kubelet, master Started container
Normal Created 2m kubelet, master Created container
Normal Pulled 2m kubelet, master Container image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7" already present on machine
Normal Started 2m kubelet, master Started container
Normal Pulled 2m kubelet, master Container image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7" already present on machine
Normal Created 2m kubelet, master Created container
Normal Started 2m kubelet, master Started container
Warning Unhealthy 1m (x4 over 2m) kubelet, master Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused

docker@master:~$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 1 14m
kube-apiserver-master 1/1 Running 1 14m
kube-controller-manager-master 1/1 Running 1 14m
kube-dns-6f4fd4bdf-qxmzn 3/3 Running 3 15m
kube-proxy-d54fk 1/1 Running 1 15m
kube-scheduler-master 1/1 Running 1 14m

Nobody mentioned SELinux yet. I got this error when running kubeadm join on a CentOS 7 machine with SELinux in Enforcing mode. Setting setenforce 0 and rerunning kubeadm fixed my problem.

Thanks setenforce 0 worked for me.
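Note that setenforce 0 only lasts until reboot; to make permissive mode survive a restart, /etc/selinux/config has to be edited as well. A sketch of that edit, run here against a scratch copy of the file:

```shell
# setenforce 0 changes the runtime mode only. Persist permissive mode by
# rewriting SELINUX= in the config file (scratch copy stands in for
# /etc/selinux/config on a real node).
SELINUX_CFG="${SELINUX_CFG:-/tmp/selinux-config}"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$SELINUX_CFG"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$SELINUX_CFG"
grep '^SELINUX=' "$SELINUX_CFG"   # -> SELINUX=permissive
```

On the live system you would run both: setenforce 0 for now, and this edit (against the real path) for the next boot.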

1. Taking $KUBELET_NETWORK_ARGS out did not solve the problem.
2. setenforce 0
3. systemctl stop firewalld
4. Made the Docker cgroup driver (systemd) consistent with Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
But, you know, the problem still exists.
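Point 4 above (making the two cgroup drivers agree) is usually done by pointing Docker at systemd cgroups and giving kubelet the matching flag. A sketch against scratch paths (the real files are /etc/docker/daemon.json and the kubelet systemd drop-in):

```shell
# Align Docker's cgroup driver with kubelet's. The daemon.json content is the
# commonly documented form; written to a scratch path for illustration.
DOCKER_CFG="${DOCKER_CFG:-/tmp/daemon.json}"
cat > "$DOCKER_CFG" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$DOCKER_CFG" > /dev/null && echo "daemon.json OK"
# kubelet side, in its systemd drop-in:
#   Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# then: systemctl daemon-reload && systemctl restart docker kubelet
```

If the two drivers disagree, kubelet refuses to start pods, which can surface as the "skipping pod synchronization" and slice-related errors seen earlier in this thread.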


May 20 20:10:45 k8s kubelet: I0520 20:10:45.244383 17638 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
May 20 20:10:45 k8s kubelet: E0520 20:10:45.920981 17638 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.18.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s.master.com&limit=500&resourceVersion=0: dial tcp 192.168.18.90:6443: getsockopt: connection refused
May 20 20:10:45 k8s kubelet: E0520 20:10:45.924021 17638 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.18.90:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.18.90:6443: getsockopt: connection refused
May 20 20:10:45 k8s kubelet: E0520 20:10:45.935594 17638 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.18.90:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s.master.com&limit=500&resourceVersion=0: dial tcp 192.168.18.90:6443: getsockopt: connection refused

May 23 10:19:45 arch kubelet[13585]: E0523 10:19:45.909458 13585 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 23 10:19:46 arch kubelet[13585]: E0523 10:19:46.002646 13585 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 8; ignoring extra CPUs

sudo kubectl get node

NAME STATUS ROLES AGE VERSION
127.0.0.1 NotReady 23h v1.8.13

Just ran into this, and it seems to be due to the file actually being empty, here's the output from the install-cni container:

$ k logs canal-25rct install-cni -n kube-system
ls: /calico-secrets: No such file or directory
Wrote Calico CNI binaries to /host/opt/cni/bin
CNI plugin version: v3.1.2
/host/secondary-bin-dir is non-writeable, skipping
CNI config: {
Created CNI config 10-calico.conflist
Done configuring CNI.  Sleep=true

And in /etc/cni/net.d/10-calico.conflist:

$ cat /etc/cni/net.d/10-calico.conflist 
{

When I try to shell into the container (maybe it should be an initContainer?), I get the following:

$ k exec -it canal-25rct -c install-cni -n kube-system -- /bin/bash
Error: Malformed environment entry: "  "name": "k8s-pod-network",
": Success
command terminated with exit code 45

It's weird, because the version of the script hasn't changed, and the only thing I've changed recently is switching to rkt for running containers. Also, this is on Container Linux (CoreOS) if that helps at all.

Hello K8s folks!

I have had the same problem many times. For example, something went wrong during my K8s initialization and I had to use kubeadm reset and initialize K8s again. After running the initialization command I got this error in the kubelet log:

Jun 01 10:13:40 vncub0626 kubelet[18861]: I0601 10:13:40.665823   18861 kubelet.go:2102] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 01 10:13:40 vncub0626 kubelet[18861]: E0601 10:13:40.665874   18861 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

... I was going mad from this error message; nothing helped. So I told myself: the first initialization ran but the re-initialization didn't, so it wasn't caused by the KUBELET_NETWORK_ARGS line in the kubelet configuration, and I don't agree with commenting it out. So I read the kubelet log again and again... and finally I noticed the next error message in the log:

Jun 01 10:13:29 vncub0626 kubelet[18861]: E0601 10:13:29.376339   18861 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.96.22.11:6443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

This error was caused by a stale ~/.kube/config file left in the home directory from the previous initialization. After removing it I ran the initialization again... and voilà... it finished successfully. :]

... hope it helps someone else, because this error is a nightmare and it's almost impossible to determine its cause.

@waldauf you are correct!!! awesome!!!

Running the following command works well:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

kubeadm version v1.10.3

Removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf works for me.

Repeating for visibility:

Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker "bridge" networking.

This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.

@waldauf thx, it works

I ran into this when my flannel plugin had not been installed correctly.

I was following this guide today (https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/) and I missed the cgroupfs configuration in "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf". Once I fixed it, everything worked like a charm.

https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-395965665

@ChinaSilence Can you explain why we have to use flannel? Can't we do it without flannel?

flannel 0.10.0 and kubernetes 1.12.0 somehow cannot work together. There is something wrong with kubernetes 1.12.0, so I downgraded kubernetes to 1.11.3 and everything is working fine.

Hope kubernetes fixes that issue soon.

@bilalx20 I can confirm that: flannel is broken for me too in 1.12.
What you can do is try Weave or Calico; they work.

I have this issue on my Ubuntu 16.04 with k8s 1.12.

After downgrading to 1.11.0, everything is up and running.

I have this issue on CentOS 7.5 with k8s 1.12

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine

flanneld needs a fix for k8s 1.12.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
it's a known issue: https://github.com/coreos/flannel/issues/1044

I have experienced the problem @ReSearchITEng describes with 1.12.1. His/her solution worked for me.
EDIT: scratch that, one of the nodes still shows the same issue after kubeadm join
EDIT2: unrelated issue, turns out nvidia-container-runtime was missing on this GPU node. Diagnosed using journalctl -xeu kubelet on the bad node

TL;DR: the solution works

Confirming that the solution from @ReSearchITEng works; I now have my master node in the running state and flannel is up.

Just in case somebody is googling this. I had the same problem, in my case the cloud-init process set the NM_CONTROLLED option in /etc/sysconfig/network-scripts/ifcfg-{interface-name} to no.
This option has to be set to yes for NetworkManager to create the needed resolv.conf file for the SDN pods.

I had failed to actually apply the weave manifests, hence, no network plugin was initialized.
Note to self: Read the whole Weave installation manual before looking for answers elsewhere :+1:

flanneld needs a fix for k8s 1.12.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
it's a known issue: coreos/flannel#1044

Additional:
I think this problem is caused by kubeadm initializing CoreDNS before flannel is installed, so it throws "network plugin is not ready: cni config uninitialized".
Solution:

  1. Install flannel by kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
  2. Reset the CoreDNS pod:
    kubectl delete pod coredns-xx-xx -n kube-system
  3. Then run kubectl get pods to see if it works.

if you see the error "cni0 already has an IP address different from 10.244.1.1/24",
follow this:

ifconfig  cni0 down
brctl delbr cni0
ip link delete flannel.1

if you see this error "Back-off restarting failed container", and you can get the log by

root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".

Then look at the file "/etc/resolv.conf" on the failed node; if the nameserver is localhost there will be a loop. Change it to:

#nameserver 127.0.1.1
nameserver 8.8.8.8
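A quick way to check for that loopback condition before CoreDNS crash-loops, demonstrated here on a scratch copy of resolv.conf (on a node you would check the real /etc/resolv.conf; 8.8.8.8 is just the example upstream used above):

```shell
# Detect a loopback nameserver (the cause of CoreDNS's "Forwarding loop
# detected" fatal) and rewrite it to a real upstream. Scratch file stands in
# for /etc/resolv.conf.
RESOLV="${RESOLV:-/tmp/resolv.conf}"
printf 'nameserver 127.0.1.1\n' > "$RESOLV"
if grep -Eq '^nameserver 127\.' "$RESOLV"; then
  echo "loopback nameserver found: this will loop CoreDNS"
  sed -i 's/^nameserver 127\..*/nameserver 8.8.8.8/' "$RESOLV"
fi
grep '^nameserver' "$RESOLV"
```

After fixing the file on a real node, delete the failing CoreDNS pod so it restarts with the corrected configuration.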

You have --network-plugin=cni in your kubelet start conf. On my system:
1. vim /etc/systemd/system/kubelet.service
2. delete --network-plugin=cni
3. restart kubelet (systemctl daemon-reload; systemctl restart kubelet)

Please adapt these 3 steps; your installation may differ from mine.

@mdzddl you deleted --network-plugin=cni because kubelet complains about CNI? Not so clever. Deleting the default network plugin is not recommended at all.

flanneld needs a fix for k8s 1.12.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
it's a known issue: coreos/flannel#1044

Works for me!

@mdzddl you deleted --network-plugin=cni because kubelet complains about CNI? Not so clever. Deleting the default network plugin is not recommended at all.

Then what is the solution?

flanneld needs a fix for k8s 1.12.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
it's a known issue: coreos/flannel#1044

kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

It throws the errors above.

flannel has not updated their manifest to comply with the latest changes in k8s 1.16.
try a different CNI plugin, like Calico or WeaveNet.

...or patch the flannel manifests to use apps/v1 instead of extensions/v1beta1
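A sketch of that patch, applied to a tiny stand-in fragment rather than the real manifest (note that apps/v1 DaemonSets also require a spec.selector, so a pure apiVersion swap may not be enough on its own):

```shell
# Rewrite the deprecated DaemonSet apiVersion in a downloaded kube-flannel.yml.
# A two-line stand-in fragment is used here instead of the real manifest.
MANIFEST="${MANIFEST:-/tmp/kube-flannel.yml}"
cat > "$MANIFEST" <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
EOF
sed -i 's|extensions/v1beta1|apps/v1|' "$MANIFEST"
cat "$MANIFEST"
# then: kubectl apply -f "$MANIFEST"
```

In practice you would curl the manifest first, run the sed over the downloaded copy, add spec.selector if it is missing, and apply it.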

I have this issue on Ubuntu 16.04 with k8s 1.16 (I run ubuntu on vagrant)

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine

flannel has not updated their manifest to comply with the latest changes in k8s 1.16.
try a different CNI plugin, like Calico or WeaveNet.
...
...or patch the flannel manifests to use apps/v1 instead of extensions/v1beta1

That was fixed a while ago, but the links in the Kubernetes documentation still point to an older version which doesn't work (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ has https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml). Using "master" instead works and also fixes another issue (missing version in the CNI config).

This is what I saw: this error comes when you don't have flannel running yet but you start kubelet with just the manifests of the apiserver, scheduler and controller-manager, WHILE YOU HAVE THIS LINE in 10-kubeadm.conf: Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --node-ip=192.168.8.11"

Comment that line out and start kubelet.
Then the core kube-system pods come up.
Then install kube-proxy.
Then install flannel.
Then uncomment the above line and restart kubelet.
Then install core-dns/kube-dns.

Hi,
I am trying to install version 1.16.0 using the kube-router plugin and I am getting the same error:

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Hi,
I am trying to install version 1.16.0 using the kube-router plugin and I am getting the same error:

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

If you can provide more information about the running environment, it will be helpful: for example, the operating system and what you did before the error occurred.

This is a fresh install of version 1.16.0 on Amazon.
I am using this AMI - k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
uname -a
Linux ip-172-28-125-218 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux

If I install 1.15.0 there are no problems at all.

This is what I see in the syslog of the master nodes.

Mar 12 05:26:22 ip-172-28-125-218 kubelet[3656]: E0312 05:26:22.761009 3656 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:26:25 ip-172-28-125-218 docker[3570]: I0312 05:26:25.713681 3619 dns.go:47] DNSView unchanged: 5
Mar 12 05:26:25 ip-172-28-125-218 kubelet[3656]: W0312 05:26:25.883857 3656 cni.go:202] Error validating CNI config &{kubernetes false [0xc0009d8260] [123 34 99 110 105 86 101 114 115 105 111 110 34 58 34 34 44 34 110 97 109 101 34 58 34 107 117 98 101 114 110 101 116 101 115 34 44 34 112 108 117 103 105 110 115 34 58 91 123 34 98 114 105 100 103 101 34 58 34 107 117 98 101 45 98 114 105 100 103 101 34 44 34 105 112 97 109 34 58 123 34 115 117 98 110 101 116 34 58 34 49 48 48 46 57 54 46 48 46 48 47 50 52 34 44 34 116 121 112 101 34 58 34 104 111 115 116 45 108 111 99 97 108 34 125 44 34 105 115 68 101 102 97 117 108 116 71 97 116 101 119 97 121 34 58 116 114 117 101 44 34 110 97 109 101 34 58 34 107 117 98 101 114 110 101 116 101 115 34 44 34 116 121 112 101 34 58 34 98 114 105 100 103 101 34 125 93 125]}: [plugin bridge does not support config version ""]
Mar 12 05:26:25 ip-172-28-125-218 kubelet[3656]: W0312 05:26:25.883925 3656 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d/
Mar 12 05:26:27 ip-172-28-125-218 kubelet[3656]: E0312 05:26:27.762309 3656 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:26:30 ip-172-28-125-218 docker[3570]: I0312 05:26:30.713906 3619 dns.go:47] DNSView unchanged: 5
Mar 12 05:26:30 ip-172-28-125-218 kubelet[3656]: W0312 05:26:30.886362 3656 cni.go:202] Error validating CNI config &{kubernetes false [0xc0008fc000] [123 34 99 110 105 86 101 114 115 105 111 110 34 58 34 34 44 34 110 97 109 101 34 58 34 107 117 98 101 114 110 101 116 101 115 34 44 34 112 108 117 103 105 110 115 34 58 91 123 34 98 114 105 100 103 101 34 58 34 107 117 98 101 45 98 114 105 100 103 101 34 44 34 105 112 97 109 34 58 123 34 115 117 98 110 101 116 34 58 34 49 48 48 46 57 54 46 48 46 48 47 50 52 34 44 34 116 121 112 101 34 58 34 104 111 115 116 45 108 111 99 97 108 34 125 44 34 105 115 68 101 102 97 117 108 116 71 97 116 101 119 97 121 34 58 116 114 117 101 44 34 110 97 109 101 34 58 34 107 117 98 101 114 110 101 116 101 115 34 44 34 116 121 112 101 34 58 34 98 114 105 100 103 101 34 125 93 125]}: [plugin bridge does not support config version ""]
Mar 12 05:26:30 ip-172-28-125-218 kubelet[3656]: W0312 05:26:30.886428 3656 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d/
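The complaint in those logs is that the generated config carries cniVersion set to the empty string, which the bridge plugin rejects. A hedged sketch of patching such a config to carry an explicit version (scratch path and file name stand in for the real file under /etc/cni/net.d; "0.2.0" is an example version the bridge plugin accepts):

```shell
# Add an explicit cniVersion to a CNI config whose version field is empty.
# A scratch file with the same shape as the logged config stands in for the
# real /etc/cni/net.d/<name>.conf.
CNI_CONF="${CNI_CONF:-/tmp/10-kubenet.conf}"
printf '{"cniVersion":"","name":"kubernetes","type":"bridge"}\n' > "$CNI_CONF"
python3 - "$CNI_CONF" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
if not cfg.get("cniVersion"):          # "" or missing
    cfg["cniVersion"] = "0.2.0"        # example version; pick one your plugins support
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
grep cniVersion "$CNI_CONF"
```

After patching the real file, restart kubelet so it re-validates the config directory.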

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env also works on CentOS 7.6 with k8s 1.16.8

please don't change anything. Just run this command. The error will be gone.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Thanks @ikramuallah, that worked for 1.18. The thing is, it didn't work directly, because one of the flannel pods could not be pulled as one of the quay sites was throwing 500s. So my suggestion is that after applying the YAML, check whether all the flannel pods have come up and debug from there. Linking the issue I raised in flannel for reference: coreos/flannel#1294

please don't change anything. Just run this command. The error will be gone.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

thanks!

please don't change anything. Just run this command. The error will be gone.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

thanks!

Sorry, I can't reach the link. Is something wrong?

@juxuny the link is ok; maybe some temporary network connectivity issues?

Please, someone help me.
The kube-proxy-windows-xhxzw pod status is ContainerCreating. When I describe the pod it gives this warning:

Warning NetworkNotReady 3m27s (x4964 over 168m) kubelet, casts1 network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

All other pods are in the Running state, except kube-proxy-windows.

I have created the Kubernetes environment as below:
Master Node: Ready, v1.19.0, RHEL 7
Worker Node: NotReady, v1.19.0, Windows Server 2019
I am trying to join the Windows worker node to the master node.

I am using the flannel network. I have tried the solutions below:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  2. Changed 10-flannel.conflist's cniVersion from "0.3.1" to "0.2.0" (10-flannel.conflist was already there).

Please let me know the exact problem.

In my case I had this issue when initializing the Kubernetes master. After deleting all data in etcd, the init process was successful.
On all etcd nodes:
systemctl stop etcd
rm -rf /var/lib/etcd/*
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
