Kubeadm: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Created on 2 Aug 2018  ·  65 Comments  ·  Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

  • I followed this guide.
  • I installed the master node on a 96-CPU ARM64 server.
  • The OS is Ubuntu 18.04 LTS, right after apt-get update/upgrade.
  • Used kubeadm init --pod-network-cidr=10.244.0.0/16 and then ran the suggested commands.
  • Chose the flannel pod network:

    • sysctl net.bridge.bridge-nf-call-iptables=1 .

    • wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml .

    • vim kube-flannel.yml, replace amd64 with arm64 (see the sed sketch after the pod listing below).

    • kubectl apply -f kube-flannel.yml .

    • kubectl get pods --all-namespaces :

NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-ls44z                   1/1       Running   0          20m
kube-system   coredns-78fcdf6894-njnnt                   1/1       Running   0          20m
kube-system   etcd-devstats.team.io                      1/1       Running   0          20m
kube-system   kube-apiserver-devstats.team.io            1/1       Running   0          20m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running   0          20m
kube-system   kube-flannel-ds-v4t8s                      1/1       Running   0          13m
kube-system   kube-proxy-5825g                           1/1       Running   0          20m
kube-system   kube-scheduler-devstats.team.io            1/1       Running   0          20m
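
Side note on the arch swap above: a minimal non-interactive sketch of the same edit, assuming GNU sed is available:

# replace every amd64 arch reference in the flannel manifest with arm64
sed -i 's/amd64/arm64/g' kube-flannel.yml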

Then joined the two AMD64 nodes using the kubeadm init output:
1st node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:49.987467   16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709   16652 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2nd node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:58.913060   38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222   38617 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

But on the master, kubectl get nodes:

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    7m        v1.11.1
devstats.cncf.io   NotReady   <none>    7m        v1.11.1
devstats.team.io   Ready      master    21m       v1.11.1

And then: kubectl describe nodes (the master is devstats.team.io, the nodes are cncftest.io and devstats.cncf.io):

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:26:53 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 8m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:27:00 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 7m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=147.75.97.234
                    kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:12:56 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:21:07 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                coredns-78fcdf6894-ls44z                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                coredns-78fcdf6894-njnnt                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-v4t8s                       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)
  kube-system                kube-proxy-5825g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       850m (0%)   100m (0%)
  memory    190Mi (0%)  390Mi (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 23m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  23m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     23m (x5 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                kube-proxy, devstats.team.io  Starting kube-proxy.
  Normal  NodeReady                13m                kubelet, devstats.team.io     Node devstats.team.io status is now: NodeReady

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
  • Cloud provider or hardware configuration:
  • Master: bare metal server, 96 cores, ARM64, 128G RAM, swap turned off.
  • Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off (x 2).
  • uname -a: Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • lsb_release -a :
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:    18.04
Codename:   bionic
  • Kernel (e.g. uname -a): Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • Other: docker version:
docker version
Client:
 Version:   17.12.1-ce
 API version:   1.35
 Go version:    go1.10.1
 Git commit:    7390fc6
 Built: Wed Apr 18 01:26:37 2018
 OS/Arch:   linux/arm64

Server:
 Engine:
  Version:  17.12.1-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   7390fc6
  Built:    Wed Feb 28 17:46:05 2018
  OS/Arch:  linux/arm64
  Experimental: false

What happened?

The exact error seems to be:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

From this thread (there is no KUBELET_NETWORK_ARGS there).
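
Worth noting that on 1.11 the CNI-related kubelet flags are, as far as I understand, expected to come from the kubeadm-flags.env file referenced in the drop-in rather than from a KUBELET_NETWORK_ARGS variable, so a quick sanity check is:

# should contain --network-plugin=cni (and the --cni-conf-dir/--cni-bin-dir defaults) if kubeadm set the node up for CNI
cat /var/lib/kubelet/kubeadm-flags.env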

  • journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663   38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876   38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The /etc/cni/net.d directory exists, but it is empty.
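
That directory is normally populated by the flannel DaemonSet itself (its install-cni container copies the config from the kube-flannel-cfg ConfigMap onto the host), so if no flannel pod ever runs on a node the directory stays empty. Assuming the manifest's usual app=flannel label, this should show which nodes actually got a flannel pod:

kubectl -n kube-system get pods -l app=flannel -o wide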

What did you expect to happen?

All nodes in the Ready state.

How to reproduce it (as minimally and precisely as possible)?

Just follow the steps from the tutorial. Tried 3 times and it happens every time.

Anything else we need to know?

The master is ARM64, the 2 nodes are AMD64.
The master and one node are in Amsterdam, the 2nd node is in the United States.

I can use kubectl taint nodes --all node-role.kubernetes.io/master- to run pods on the master, but that is not a solution. I want a real multi-node cluster to use.

area/ecosystem priority/awaiting-more-evidence

Most helpful comment

@lukasredynk

yes, so this is the main problem, thanks for the confirmation.
let's focus on flannel here, since the weave problems appear to be a tangential issue.

see this by @luxas for context, if you haven't seen it already:
https://github.com/luxas/kubeadm-workshop

Should the master handle spawning the correct arch deployments on itself and the nodes?

_it should_ but the manifest you downloaded is not the "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

as far as I understand, the arch taints are deployed and you need to patch them with kubectl on each node (?).

it looks like the "fat" manifest is in master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca

related issue / PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

my assumption is that this is bleeding edge and you should use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

so tear down the cluster, give it a try and hopefully it will work.
our CNI docs will need a fix, but that needs to happen when flannel-next is released.
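
For context, the essential difference in the "fat" manifest (going by the linked commit) is that it ships one DaemonSet per architecture, each pinned with a nodeSelector, roughly along these lines:

# illustrative excerpt only - names/fields as in the linked flannel commit, not verified here
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64

so a mixed amd64/arm64 cluster gets the right image on every node without hand-editing the manifest.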

All 65 comments

@lukaszgryglicki
It seems the nodes are not getting flannel because they are of the amd64 architecture

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

and

Name:               devstats.team.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

I'm not a flannel expert, but I think you should check the product documentation on how to make it work in a mixed-platform environment

That is a good point, but what about the error message - it seems completely unrelated.

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

It looks like some CNI config file is missing from /etc/cni/net.d, but why?
I am now trying a different docker, 18.03ce, as suggested on the slack channel (17.03 was actually suggested, but there is no 17.03 for Ubuntu 18.04).

The labels with the arch names indeed don't match. But the next label, beta.kubernetes.io/os=linux, is the same on all three servers.
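
A quick way to compare those labels side by side across the three machines, using kubectl's label columns:

kubectl get nodes -L beta.kubernetes.io/arch -L beta.kubernetes.io/os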

The same thing happens with Docker 18.03ce. I see no difference; this doesn't look like a docker issue. It looks like some CNI configuration issue.

@lukaszgryglicki
Hi,

Master: bare metal server, 96 cores, ARM64, 128G RAM, swap turned off.
Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off (x 2).

these are some _nice_ specs.

the way I test things is as follows - if something doesn't work with weavenet, I try flannel and vice versa.

so please try weave, and if your CNI setup works with it, then this is related to the CNI plugin.

while the kubeadm team supports plugins and add-ons, we usually delegate issues to the respective maintainers because we don't have the bandwidth to handle all of them.
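
For reference, the usual weave net install at the time, if I recall the documented one-liner correctly:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"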

Sure, I already tried weave a few iterations ago. It ended in a container restart loop.
Now I will try docker 17.03 to rule out a docker issue (17.03 is supposed to be very well supported).

So this is not a docker issue. On 17.03 it's the same:

Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786   21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074   21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
now will try weave net as suggested on the issue

I will try weave now and post the results here.

So, I tried weave net and it didn't work:
On the master: kubectl get nodes:

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    5s        v1.11.1
devstats.cncf.io   NotReady   <none>    12s       v1.11.1
devstats.team.io   NotReady   master    7m        v1.11.1
  • kubectl describe nodes (the same cni-related error, but now also on the master node):
Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:56 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-wwjrr    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 1m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:49 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-2fsrf    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 1m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:32:14 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-69qnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-j9f5m                             20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests   Limits
  --------  --------   ------
  cpu       570m (0%)  0 (0%)
  memory    0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 10m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  10m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 8m                 kube-proxy, devstats.team.io  Starting kube-proxy.
  • journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  • kubectl get po --all-namespaces :
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   coredns-78fcdf6894-g8wzs                   0/1       Pending            0          12m
kube-system   coredns-78fcdf6894-tzs8n                   0/1       Pending            0          12m
kube-system   etcd-devstats.team.io                      1/1       Running            0          12m
kube-system   kube-apiserver-devstats.team.io            1/1       Running            0          12m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running            0          12m
kube-system   kube-proxy-69qnb                           1/1       Running            0          12m
kube-system   kube-scheduler-devstats.team.io            1/1       Running            0          12m
kube-system   weave-net-2fsrf                            1/2       CrashLoopBackOff   5          5m
kube-system   weave-net-j9f5m                            1/2       CrashLoopBackOff   6          8m
kube-system   weave-net-wwjrr                            1/2       CrashLoopBackOff   5          4m
  • kubectl describe po --all-namespaces :
Name:               coredns-78fcdf6894-g8wzs
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x48 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               coredns-78fcdf6894-tzs8n
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x47 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               etcd-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  etcd:
    Container ID:  docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
    Image:         k8s.gcr.io/etcd-arm64:3.2.18
    Image ID:      docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=devstats.team.io=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=devstats.team.io
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:    <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-apiserver-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-apiserver:
    Container ID:  docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
    Image:         k8s.gcr.io/kube-apiserver-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=147.75.97.234
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --disable-admission-plugins=PersistentVolumeLabel
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
    Liveness:     http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-controller-manager-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-controller-manager
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-controller-manager:
    Container ID:  docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
    Image:         k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --address=127.0.0.1
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-proxy-69qnb
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:32:25 +0000
Labels:             controller-revision-hash=2718475167
                    k8s-app=kube-proxy
                    pod-template-generation=1
Annotations:        scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
    Image:         k8s.gcr.io/kube-proxy-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
    State:          Running
      Started:      Thu, 02 Aug 2018 14:32:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-4q6rl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-4q6rl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=arm64
Tolerations:     
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason   Age   From                       Message
  ----    ------   ----  ----                       -------
  Normal  Pulled   13m   kubelet, devstats.team.io  Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
  Normal  Created  13m   kubelet, devstats.team.io  Created container
  Normal  Started  13m   kubelet, devstats.team.io  Started container


Name:               kube-scheduler-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-scheduler
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-scheduler:
    Container ID:  docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
    Image:         k8s.gcr.io/kube-scheduler-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:14 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               weave-net-2fsrf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.cncf.io/147.75.78.47
Start Time:         Thu, 02 Aug 2018 14:39:49 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.78.47
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:04 +0000
      Finished:     Thu, 02 Aug 2018 14:43:05 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:39:58 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                       Message
  ----     ------   ----               ----                       -------
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, devstats.cncf.io  Created container
  Normal   Started  5m                 kubelet, devstats.cncf.io  Started container
  Normal   Created  5m (x4 over 5m)    kubelet, devstats.cncf.io  Created container
  Normal   Started  5m (x4 over 5m)    kubelet, devstats.cncf.io  Started container
  Normal   Pulled   5m (x3 over 5m)    kubelet, devstats.cncf.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  56s (x27 over 5m)  kubelet, devstats.cncf.io  Back-off restarting failed container


Name:               weave-net-j9f5m
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:36:11 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:42:18 +0000
      Finished:     Thu, 02 Aug 2018 14:42:18 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:36:31 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age               From                       Message
  ----     ------   ----              ----                       -------
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  9m                kubelet, devstats.team.io  Created container
  Normal   Started  9m                kubelet, devstats.team.io  Started container
  Normal   Created  8m (x4 over 9m)   kubelet, devstats.team.io  Created container
  Normal   Started  8m (x4 over 9m)   kubelet, devstats.team.io  Started container
  Normal   Pulled   8m (x3 over 9m)   kubelet, devstats.team.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  4m (x26 over 9m)  kubelet, devstats.team.io  Back-off restarting failed container


Name:               weave-net-wwjrr
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               cncftest.io/147.75.205.79
Start Time:         Thu, 02 Aug 2018 14:39:57 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.205.79
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:16 +0000
      Finished:     Thu, 02 Aug 2018 14:43:16 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:40:09 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                  Message
  ----     ------   ----               ----                  -------
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, cncftest.io  Created container
  Normal   Started  5m                 kubelet, cncftest.io  Started container
  Normal   Created  4m (x4 over 5m)    kubelet, cncftest.io  Created container
  Normal   Pulled   4m (x3 over 5m)    kubelet, cncftest.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Normal   Started  4m (x4 over 5m)    kubelet, cncftest.io  Started container
  Warning  BackOff  44s (x27 over 5m)  kubelet, cncftest.io  Back-off restarting failed container

  • kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true :
I0802 14:49:02.034473   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.063828   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764   64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907   64396 round_trippers.go:414]     Content-Type: application/json
I0802 14:49:02.236944   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363   64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.262919   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703   64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.275815   64396 round_trippers.go:414]     Content-Length: 69
I0802 14:49:02.275850   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.279765   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271   64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.293394   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug  2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

So, to conclude: it is not possible to install a Kubernetes cluster with just a single master and a single worker node on Ubuntu 18.04.
I think there should be step-by-step installation instructions for setting up k8s with kubeadm on the latest Ubuntu LTS.

i think 18.04 is broken both in terms of the bundled Docker and with respect to systemd-resolved.
so yes, it's very hard to write a guide for every distro flavor out there and we can't really maintain them efficiently.

also, while kubeadm is the frontend here, the problem is probably not related to kubeadm itself.

a few questions:

  • have you had a working amd64 + arm64 cluster with a recent kubernetes version before?
  • i wonder whether this is a proxy problem. are the nodes behind a proxy?
  • what are the contents of /var/lib/kubelet/kubeadm-flags.env when you run kubeadm join/init on the 3 nodes?
  • is that the only interesting content from journalctl -xeu kubelet? is it only on the master node - what about the others? you can dump these in a GitHub Gist or http://Pastebin.com for me to look at as well.
  • have you had a working amd64 + arm64 cluster with a recent kubernetes version? No, this is my first attempt, but I will also try installing the master on an amd64 host and a node on another amd64 host, to rule out arm64-related issues
  • i wonder whether this is a proxy problem. are the nodes behind a proxy? There is no proxy at all; all three servers have static IPs
  • what are the contents of /var/lib/kubelet/kubeadm-flags.env when kubeadm runs join/init on the 3 nodes?
    master (devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

node (cncftest.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

node (devstats.cncf.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
  • is that the only interesting content from journalctl -xeu kubelet? is it only on the master node - what about the others? you can dump these in a GitHub Gist or http://Pastebin.com for me to look at as well.

Pastebins: master, node.

So, I have installed the master with kubeadm init on an amd64 host and tried weave net, and the result is exactly the same as when trying this on the arm64 host:

  • Back-off restarting failed container
  • runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Some small progress.
I have installed the master on amd64 and then a node on amd64 as well. Everything works fine.
I have added the arm64 node and now I have:
amd64 master: Ready
amd64 node: Ready
arm64 node: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

  • So it looks like the flannel net plugin cannot talk across different architectures, and arm64 cannot be used as a master at all.
  • The Weave net plugin does not work at all (even without adding any nodes). The master stays in NotReady state no matter whether its arch is amd64 or arm64.
  • In all those cases the 'NotReady' reason is always the same: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Any suggestions on what I should do? Where should I report this? I already have a working 2-node cluster (amd64 master and node), but I would like to help resolve this so that people can use a master of any arch with nodes of any arch, just OOTB.

@lukaszgryglicki
kube-flannel.yml deploys the flannel containers for a single architecture only. This is why the cni plugin does not start on nodes with a different architecture and those nodes never become ready.

I never tried it myself, but I guess you could use two tweaked flannel manifests with different taints/node selectors (and names) to avoid mixing things up; but again, my advice is to ask the flannel folks whether there are already instructions on how to do this (a rough sketch of the per-arch tweak follows below).
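A rough, untested sketch of that per-arch tweak, assuming the stock v0.10.0 kube-flannel.yml as the starting point: keep one copy of the DaemonSet per architecture, give each its own name, pin it to one arch with a nodeSelector, and point it at the matching image tag. The excerpt below is abridged; every field not shown stays as in the original manifest:

# hypothetical second copy of the DaemonSet, for arm64 nodes only
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64                        # the amd64 copy keeps a different name
  namespace: kube-system
  labels:
    app: flannel
spec:
  template:
    metadata:
      labels:
        app: flannel
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm64               # pin this copy to arm64 nodes
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64  # arch-specific image tag
        # ...args, securityContext and volumeMounts unchanged from the stock manifest...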

But I did change the manifest on the arm64 host as the tutorial suggests, replacing amd64 with arm64.
So maybe I will create an issue for flannel and paste a link to this thread there.

And now, why does weave net fail on both arches with the same cni-related bug? Maybe create an issue for weave as well and also link it to this thread?

@lukaszgryglicki
When you tweak kube-flannel.yml for arm, it stops working on the amd machines... This is why I suspect that using 2 properly tweaked manifests, one for arm and one for amd, could solve your problem.

And now that I think of it, you might have to fix the same issue with the kube-proxy daemon set as well, but I can't test this right now, sorry.


For the issue you have with weave I don't have enough info. One problem might be that weave does not work with --pod-network-cidr=10.244.0.0/16, but coming back to the original issue, I don't know whether weave works out of the box on mixed platforms or not.
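For completeness, the weave logs earlier in this thread show Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host, so that crash looks like an address-range clash rather than an arch problem. A hedged workaround, assuming the Weave Net addon URL form documented by Weaveworks, is to give weave an allocation range that does not overlap any host route, e.g.:

# sketch only: 192.168.128.0/18 is just an example range; pick one that does not
# collide with routes already present on the hosts
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.128.0/18"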

So I should deploy two different flannel manifests on the master, right? It doesn't matter whether the master happens to be arm64 or amd64, right? Shouldn't the master handle spawning the correct-arch deployments on itself and on the nodes?
Not sure what you mean here:

And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry

I did not use --pod-network-cidr=10.244.0.0/16 for weave. I just ran plain kubeadm init.
I used --pod-network-cidr=10.244.0.0/16 only for the flannel attempt, as the docs say.

cc @luxas - I've seen you have created some docs about multi-arch k8s deployments, maybe you can give some feedback?

@lukasredynk

yes, so this is the main problem, thanks for confirming it.
let's focus on flannel here, because the weave problem seems to be a tangential one.

have a look at this by @luxas for context, if you haven't seen it already:
https://github.com/luxas/kubeadm-workshop

Shouldn't the master handle spawning the correct-arch deployments on itself and on the nodes?

_it should_, but the manifest you downloaded is not the "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

as far as I understand, arch taints are deployed and you need to fix them with kubectl on each node (?).

it looks like the "fat" manifest is in the master branch and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff -7891b552b026259e99d479b5e30d31ca

related issue / PRs:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

my assumption is that this is bleeding edge and you should use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

so tear down the cluster, try it, and hopefully it works.
our CNI docs will need a fix, but this needs to happen once flannel-next is released.
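One quick sanity check after applying the fat manifest (assuming it keeps the app=flannel label used by the older manifests) is to confirm that a per-arch DaemonSet pod landed on every node:

# each node should run exactly one flannel pod built for its own architecture
kubectl -n kube-system get daemonset -l app=flannel
kubectl -n kube-system get pods -l app=flannel -o wide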

OK, I will try after the weekend and post my results here. Thanks.

@lukaszgryglicki hi, did you manage to use the new flannel manifest?

Not yet, I will try today.

OK, it finally works:

root@devstats:/root# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
cncftest.io        Ready     <none>    39s       v1.11.1
devstats.cncf.io   Ready     <none>    46s       v1.11.1
devstats.team.io   Ready     master    12m       v1.11.1

The fat manifest from the flannel master branch helped.
Thanks, this can be closed.

Hello guys, I'm in the same situation.
I have the worker node in Ready status, but flannel on arm64 keeps crashing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did it work for you?

any ideas?
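The error above is a timeout reaching the in-cluster API service IP (10.96.0.1:443), so before blaming flannel it may help to check basic reachability from that arm64 node; a minimal sketch, assuming kube-proxy runs in the default iptables mode:

# run on the affected node: any HTTP response (even 401/403) means the service IP is reachable
curl -k --connect-timeout 5 https://10.96.0.1:443/version
# kube-proxy should have programmed a NAT rule for the kubernetes service
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1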

The error looks different, but are you using the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains the manifests for multiple arches.

Yes, that is the manifest I applied (screenshot omitted).

The problem now is that the flannel container does not come up on arm. :(

I applied it on amd64 and arm64 - it worked for me.
Unfortunately I can't help with arm (32-bit); I don't have any arm machines available.

I'm on arm64, but thanks, I will keep investigating...

Ohh, then sorry, I thought you were on arm.
Anyway, I'm also quite new to this, so you will need to wait for somebody else to help.
Please paste the output of kubectl describe pods --all-namespaces and possibly the output of the other commands I've posted in this thread. That can help somebody track down the actual problem.

Thanks @lukaszgryglicki,
this is the output of describe pods: https://Pastebin.com/kBVPYsMd

@lukaszgryglicki
glad it finally worked.
i will document the usage of the fat manifest for flannel in the docs, because I don't know when 0.11.0 will be released.

@Leen15

the relevant bit from the failing pod:

  Warning  FailedCreatePodSandBox  3m (x5327 over 7h)  kubelet, nanopi-neo-plus2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory

did you add the --pod-network-cidr=... that is needed for flannel?

also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923
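For reference, when flannel is healthy on a node it writes /run/flannel/subnet.env, which the CNI plugin reads; the error above means that file never got created. The values below are only illustrative of what it normally looks like:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true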

@neolit123 yes, I found the issue: flannel does not create the virtual network interfaces (cni and flannel0).
I don't know why, and I failed to solve it after several hours.
I gave up and switched to swarm.

ok, understood. in that case I'm closing the issue.
Thanks.

I ran into the same problem too, and I found that the node could not pull the required images because of the GFW in China, so I pulled the images manually and the node recovered fine.
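If images cannot be pulled automatically, one hedged workaround is to pre-pull them on every node so the pods only need what is already in the local Docker cache; the image and tag must match whatever your kube-flannel.yml references:

# example only: check the image referenced by your manifest first
docker pull quay.io/coreos/flannel:v0.11.0-amd64
# if quay.io is unreachable, pull through a reachable mirror and re-tag (the mirror name is a placeholder):
# docker pull <mirror>/coreos/flannel:v0.11.0-amd64
# docker tag  <mirror>/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64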

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file named 10-flannel.conflist in the /etc/cni/net.d directory. I believe that kubernetes needs the networking that this package sets up (a quick way to verify it is sketched below).
My cluster is in the following state:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1
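A quick way to confirm the kubelet can actually see that CNI config (the paths assume the default --cni-conf-dir used throughout this thread):

ls /etc/cni/net.d
# expected: 10-flannel.conflist
cat /etc/cni/net.d/10-flannel.conflist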

Hello all,

I have 1 master and 2 nodes. The 2nd node is in NotReady state.

root@kube1:~# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
dockerlab1   Ready      <none>   3h57m   v1.14.3
kube1        Ready      master   4h12m   v1.14.3
labserver1   NotReady   <none>   22m     v1.14.3


root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-72llr         1/1     Running             0          4h13m
kube-system   coredns-fb8b8dccf-n9v82         1/1     Running             0          4h13m
kube-system   etcd-kube1                      1/1     Running             0          4h12m
kube-system   kube-apiserver-kube1            1/1     Running             0          4h12m
kube-system   kube-controller-manager-kube1   1/1     Running             0          4h13m
kube-system   kube-flannel-ds-amd64-6q6sz     0/1     Init:0/1            0          24m
kube-system   kube-flannel-ds-amd64-rshnj     1/1     Running             0          3h59m
kube-system   kube-flannel-ds-amd64-xsj72     1/1     Running             0          4h1m
kube-system   kube-proxy-7m8jg                1/1     Running             0          3h59m
kube-system   kube-proxy-m7gdc                0/1     ContainerCreating   0          24m
kube-system   kube-proxy-xgq6p                1/1     Running             0          4h13m
kube-system   kube-scheduler-kube1            1/1     Running             0          4h13m

root@kube1:~# kubectl describe node labserver1
Name:               labserver1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=labserver1
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jun 2019 21:03:57 +0800
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  172.31.8.125
  Hostname:    labserver1
Capacity:
 cpu:                1
 ephemeral-storage:  18108284Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             1122528Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  16688594507
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             1020128Ki
 pods:               110
System Info:
 Machine ID:                 292dc4560f9309ccdd72b6935c80e8ec
 System UUID:                DE4707DF-5516-784A-9B41-588FCDE49369
 Boot ID:                    828d124c-b687-43f6-bffa-6a3e1e6e17e6
 Kernel Version:             4.4.0-142-generic
 OS Image:                   Ubuntu 16.04.6 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:
 Kubelet Version:            v1.14.3
 Kube-Proxy Version:         v1.14.3
PodCIDR:                     10.244.3.0/24
Non-terminated Pods:         (2 in total)
  Namespace    Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                         ------------  ----------  ---------------  -------------  ---
  kube-system  kube-flannel-ds-amd64-6q6sz  100m (10%)    100m (10%)  50Mi (5%)        50Mi (5%)      25m
  kube-system  kube-proxy-m7gdc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (10%)  100m (10%)
  memory             50Mi (5%)   50Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 45m                kubelet, labserver1  Starting kubelet.
  Normal  NodeHasSufficientMemory  45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    45m                kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  45m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  Starting                 25m                kubelet, labserver1  Starting kubelet.
  Normal  NodeAllocatableEnforced  25m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
Normal Mulai 13m kubelet, labserver1 Mulai kubelet.
Normal NodeHasSufficientMemory 13m kubelet, labserver1 Status labserver1 Node sekarang: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m kubelet, labserver1 Status labserver1 Node sekarang: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m kubelet, labserver1 Status labserver1 Node sekarang: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet, labserver1 Batas Node yang Diperbarui yang Dapat Dialokasikan di seluruh pod
root@kube1 :~#

Tolong bantu


Hi Athir,

Please check the logs in /var/log/messages on your master node; you can find the actual error there. But here are some general tips.

i. Always concentrate on your master node first.
ii. Install the docker engine on it and pull all the images used for kubernetes. When everything is up and running, then add nodes to the master; that will solve all those problems. I have seen some articles on the internet that pull some of the images only after attaching the slave nodes, and that practice causes problems.
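
As a sketch of tip ii, kubeadm can pre-pull the control-plane images on the master before any workers are joined, for example:

# list and pre-pull the images kubeadm needs for this version
kubeadm config images list
kubeadm config images pull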

Hi saddique164, thanks for your suggestion. Yes, as you said, I deployed a new slave node yesterday and it was able to join the master without any problem.

Sorry, I can't help here; I no longer have any ARM64 nodes. I now have a 4-node bare-metal AMD64 cluster.

The /etc/cni/net.d/10-flannel.conflist file did not have the cniVersion key in its configuration.

Adding "cniVersion": "0.2.0" solved the problem.

The /etc/cni/net.d/10-flannel.conflist file did not have the cniVersion key in its configuration.
Adding "cniVersion": "0.2.0" solved the problem.

I hit this problem when upgrading to v1.16.0 from 1.15.


flannel is not very actively maintained. I recommend calico or weavenet.

the flannel repo needs fixing.
the kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files

Facing the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Worked for me.

docker: network plugin is not ready: cni config uninitialized

Reinstall docker on the NotReady node.
Worked for me.
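
A rough sketch of what that might look like on an Ubuntu node that uses the docker.io package (an assumption; adapt to however Docker was installed):

sudo apt-get remove --purge -y docker.io
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo systemctl restart kubelet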

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe kubernetes needs this network, which is set up by that package.
My cluster is now in the following state:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1

That just did it!

I had a similar case where I set up the network plugin before joining the workers, which left /etc/cni/net.d missing.
I re-ran the configuration after joining the worker nodes using:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed up in the Ready state.

Hope that helps anyone with the same problem.

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe kubernetes needs this network, which is set up by that package.
My cluster is now in the following state:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1

Ran that command on the master machine and everything is in the Ready state now. Thanks @saddique164.

The quickest way is to add Flannel to Kubernetes on the AMD64 architecture.

1. Download kube-flannel.yaml

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml

2. Finish the Flannel network setup

$ kubectl apply -f kube-flannel.yaml

I am using kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file was created under /etc/cni/net.d
The master node is NotReady while the slaves are in the Ready state

I am using kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file was created under /etc/cni/net.d
The master node is NotReady while the slaves are in the Ready state

  1. Can you run kubectl commands on the master?
  2. Can you check whether your kubelet is running on the master? Or run this: systemctl restart kubelet.
  3. If the kubelet keeps restarting or auto-restarting, run journalctl -u kubelet and check the logs. You will find the error there.

NOTE: this looks like a kubelet problem.
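
Put together, the checks from the list above might look like this on the master (a minimal sketch):

systemctl status kubelet      # is the kubelet active?
systemctl restart kubelet     # try restarting it
journalctl -u kubelet -f      # follow the kubelet logs and look for the error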

  1. Yes, I can run kubectl commands.
  2. The kubelet starts and then fails.
  3. Here is the error I see from journalctl -u kubelet:
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864   17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.

try this on the master:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
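
If that drop-in file exists on your distribution, the change only takes effect after reloading systemd and restarting the kubelet, for example:

systemctl daemon-reload
systemctl restart kubelet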

It started and then failed again.

Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341   15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358   15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420   15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427   15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464   15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480   15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313   15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

As you can see, it says no networks were found. Run this command and share the output:

kubectl get pods -n kube-system

NAME                          READY   STATUS        RESTARTS   AGE
coredns-75f8564758-92ws7      1/1     Running       0          25h
coredns-75f8564758-z9xn8      1/1     Running       0          25h
kube-flannel-ds-amd64-2j4mw   1/1     Running       0          25h
kube-flannel-ds-amd64-5tmhp   0/1     Pending       0          25h
kube-flannel-ds-amd64-rqwmz   1/1     Running       0          25h
kube-proxy-6v24w              1/1     Running       0          25h
kube-proxy-jgdw7              0/1     Pending       0          25h
kube-proxy-qppnk              1/1     Running       0          25h

run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system

if nothing comes up, then run this one:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system

Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused
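
That error means the API server cannot reach the kubelet on that node. A quick connectivity check from the master might look like this (a sketch; nc being installed is an assumption):

nc -vz 10.75.214.124 10250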

How many nodes are you running in the cluster? One node is causing this problem. These flannel pods are a DaemonSet; they run on every node, and your control plane is not receiving requests from that one. So I would suggest following these steps:

  1. First drain the worker nodes one by one.
    kubectl drain nodename
  2. Then delete them.
    kubectl delete node nodename
  3. Wait until the master node is Ready. If it does not come up, run this command:
    kubeadm reset
  4. Initialize kubeadm again:
    kubeadm init
  5. Run this command:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  6. Take the join command from the master and run it on the worker nodes to connect them.

This process will work.
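
A consolidated sketch of those steps (nodename is a placeholder; the --pod-network-cidr value assumes flannel's default network):

kubectl drain nodename --ignore-daemonsets    # 1. drain each worker
kubectl delete node nodename                  # 2. remove it from the cluster
sudo kubeadm reset                            # 3. reset if the master never becomes Ready
sudo kubeadm init --pod-network-cidr=10.244.0.0/16    # 4. re-initialize the control plane
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml    # 5. reinstall flannel
# 6. run the kubeadm join command printed by init on each worker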

kubectl get nodes:

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   26h   v1.18.5
slave1   Ready      <none>   26h   v1.18.5
slave2   Ready      <none>   26h   v1.18.5

I tried the steps you mentioned.

This is what I got.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

drain all the nodes except the master and concentrate on that. When it is Ready, then go ahead and add the others.

Draining the nodes and then kubeadm reset and init did not help. The cluster does not get initialized afterwards.

My problem was that I updated the hostname after the cluster had been created. By doing that, it was as if the master did not know it was the master.

I still run:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)

but now I run it before initializing the cluster.
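
In other words, the order that worked was roughly (a sketch; the metadata URL is AWS-specific, as in the command above):

sudo hostname "$(curl -s 169.254.169.254/latest/meta-data/hostname)"
sudo kubeadm init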
