Kubeadm: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Created on 2 Aug 2018  ·  65 comments  ·  Source: kubernetes/kubeadm

Is this a BUG REPORT or a FEATURE REQUEST?

BUG REPORT

  • I followed this guide.
  • I installed the master node on a 96-CPU ARM64 server.
  • The OS is Ubuntu 18.04 LTS, right after apt-get update/upgrade.
  • Used kubeadm init --pod-network-cidr=10.244.0.0/16, then ran the suggested commands.
  • Selected the flannel pod network:

    • sysctl net.bridge.bridge-nf-call-iptables=1 .

    • wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml .

    • vim kube-flannel.yml , replaced amd64 with arm64

    • kubectl apply -f kube-flannel.yml .

    • kubectl get pods --all-namespaces :

NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-ls44z                   1/1       Running   0          20m
kube-system   coredns-78fcdf6894-njnnt                   1/1       Running   0          20m
kube-system   etcd-devstats.team.io                      1/1       Running   0          20m
kube-system   kube-apiserver-devstats.team.io            1/1       Running   0          20m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running   0          20m
kube-system   kube-flannel-ds-v4t8s                      1/1       Running   0          13m
kube-system   kube-proxy-5825g                           1/1       Running   0          20m
kube-system   kube-scheduler-devstats.team.io            1/1       Running   0          20m
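The manual vim edit above can also be scripted; a minimal sketch, assuming the same v0.10.0 manifest URL and that the arch string only appears in image tags and nodeSelectors:

```shell
# Download flannel's v0.10.0 manifest and switch every amd64 reference
# (image tag, nodeSelector) to arm64, as the vim step above did by hand.
wget -q https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```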

Then two AMD64 nodes were joined using the join command from the kubeadm init output:
1st node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:49.987467   16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709   16652 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
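As an aside, the IPVS preflight warning above can be cleared by loading the listed modules; a sketch, with the module names taken from the warning and the conventional Ubuntu modules-load.d path assumed:

```shell
# Load the kernel modules the preflight warning lists as missing.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  sudo modprobe "$m"
done
# Make the loads persistent across reboots.
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 \
  | sudo tee /etc/modules-load.d/ipvs.conf
```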

2nd node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:58.913060   38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222   38617 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

But on the master, kubectl get nodes :

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    7m        v1.11.1
devstats.cncf.io   NotReady   <none>    7m        v1.11.1
devstats.team.io   Ready      master    21m       v1.11.1

And then kubectl describe nodes (the master is devstats.team.io ; the nodes are cncftest.io and devstats.cncf.io ):

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:26:53 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 8m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:27:00 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 7m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=147.75.97.234
                    kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:12:56 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:21:07 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                coredns-78fcdf6894-ls44z                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                coredns-78fcdf6894-njnnt                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-v4t8s                       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)
  kube-system                kube-proxy-5825g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       850m (0%)   100m (0%)
  memory    190Mi (0%)  390Mi (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 23m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  23m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     23m (x5 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                kube-proxy, devstats.team.io  Starting kube-proxy.
  Normal  NodeReady                13m                kubelet, devstats.team.io     Node devstats.team.io status is now: NodeReady

Versions

kubeadm version (use kubeadm version ):

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}

Environment :

  • Kubernetes version (use kubectl version ):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
  • Cloud provider or hardware configuration :
  • Master: bare metal server, 96 cores, ARM64, 128G RAM, swap disabled.
  • Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off x 2.
  • uname -a : Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • lsb_release -a :
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:    18.04
Codename:   bionic
  • Kernel (e.g. uname -a ): Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • Other: docker version :
docker version
Client:
 Version:   17.12.1-ce
 API version:   1.35
 Go version:    go1.10.1
 Git commit:    7390fc6
 Built: Wed Apr 18 01:26:37 2018
 OS/Arch:   linux/arm64

Server:
 Engine:
  Version:  17.12.1-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   7390fc6
  Built:    Wed Feb 28 17:46:05 2018
  OS/Arch:  linux/arm64
  Experimental: false

What happened?

The exact error seems to be:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

From this thread (there is no KUBELET_NETWORK_ARGS there).

  • journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663   38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876   38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The /etc/cni/net.d directory exists, but it is empty.
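That matches what the kubelet complains about: it looks for *.conf / *.conflist network configs under /etc/cni/net.d. A quick way to reproduce the check (directory path taken from the log above):

```shell
# Count CNI network configs the way the kubelet effectively does;
# a result of 0 corresponds to the "No networks found in /etc/cni/net.d"
# warning in the journalctl output.
ls /etc/cni/net.d/*.conf /etc/cni/net.d/*.conflist 2>/dev/null | wc -l
```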

What did you expect to happen?

All nodes in Ready state.

How to reproduce it (as minimally and precisely as possible)?

Just follow the steps in the tutorial. I tried 3 times and it happens every time.

Anything else we need to know?

The master is ARM64; the 2 nodes are AMD64.
The master and one node are in Amsterdam; the second node is in the US.

I can use kubectl taint nodes --all node-role.kubernetes.io/master- to run pods on the master, but that is not a solution. I want a real multi-node cluster to work with.

area/ecosystem priority/awaiting-more-evidence

Most helpful comment

@lukasredynk

yes, this turned out to be an arch issue after all, thanks for confirming.
let's focus on flannel here, since the weave topic seems tangential.

Have a look at this from @luxas for context, if you haven't already seen it:
https://github.com/luxas/kubeadm-workshop

Should the master handle deploying the correct arch to itself and the nodes?

it _should_, but the manifest you are downloading is not a "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

From what I understand, the arch taints propagate and you have to work around it with kubectl on each node (?).

it looks like a "fat" manifest is on master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca

related issue / pr:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

my guess is that this is bleeding edge and you have to use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

so bring the cluster down, give it a try, and hopefully it works.
our CNI docs will need a bump too, but that should happen when flannel-next is released.
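A quick way to tell whether a downloaded kube-flannel.yml is the multi-arch ("fat") variant discussed above is to count its per-arch nodeSelectors; a sketch, assuming the file name used earlier in the thread:

```shell
# The fat manifest ships one DaemonSet per architecture, each carrying its
# own beta.kubernetes.io/arch nodeSelector; the single-arch v0.10.0
# manifest carries at most one such selector.
grep -c 'beta.kubernetes.io/arch' kube-flannel.yml
```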

All 65 comments

@lukaszgryglicki
It looks like the nodes are not getting flannel because they are on the amd64 architecture:

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

and

Name:               devstats.team.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

I'm not a flannel expert, but I think you should check the product documentation for how to make it work in a mixed-platform environment.

That's a good point, but what about the error message? It doesn't really seem related:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

It looks like some CNI config files are missing in /etc/cni/net.d, but why?
I'm now trying a different docker, 18.03ce, as suggested on the slack channel (17.03 was actually suggested, but there is no 17.03 for Ubuntu 18.04).

The arch labels don't match. But the next label, beta.kubernetes.io/os=linux, is the same on all 3 servers.

Same with Docker 18.03ce. I see no difference; this doesn't look like a Docker problem. It looks like a CNI configuration problem.

@lukaszgryglicki
Hi,

Master: bare metal server, 96 cores, ARM64, 128G RAM, swap disabled.
Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off x 2.

these are some _nice_ specs.

The way I test things is as follows: if something doesn't work with weavenet, I try flannel, and the other way around.

so try weave, and if your CNI config works with it, then this is related to the CNI plugin.

While the kubeadm team supports plugins and addons, we usually delegate issues to their respective maintainers, because we don't have the bandwidth to handle everything.

Sure, I tried weave a few iterations ago. It ended in a container restart loop.
I'll now try docker 17.03 to rule out a docker problem (17.03 is supposed to be well validated).

So this is not a Docker problem. Same with 17.03:

Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786   21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074   21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I'll try weave now and post the results here.

So, I tried weave net and it doesn't work:
On the master: kubectl get nodes :

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    5s        v1.11.1
devstats.cncf.io   NotReady   <none>    12s       v1.11.1
devstats.team.io   NotReady   master    7m        v1.11.1
  • kubectl describe nodes (the same cni-related error, but now on the master node too):
Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:56 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-wwjrr    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 1m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:49 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-2fsrf    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 1m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:32:14 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-69qnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-j9f5m                             20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests   Limits
  --------  --------   ------
  cpu       570m (0%)  0 (0%)
  memory    0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 10m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  10m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 8m                 kube-proxy, devstats.team.io  Starting kube-proxy.
  • journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
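The recurring warning above points at an empty CNI config dir. A quick sanity check on each node (a sketch; `/etc/cni/net.d` is the kubelet's default CNI config directory, inferred from the log line itself — the network add-on is supposed to write its config there once its pod starts successfully):

```shell
# kubelet logs "No networks found in /etc/cni/net.d" until the network
# add-on (weave/flannel) drops its conflist there.
# List whatever is present; print a marker if the directory is absent.
ls -la /etc/cni/net.d 2>/dev/null || echo "no CNI config directory"
```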
  • kubectl get po --all-namespaces:
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   coredns-78fcdf6894-g8wzs                   0/1       Pending            0          12m
kube-system   coredns-78fcdf6894-tzs8n                   0/1       Pending            0          12m
kube-system   etcd-devstats.team.io                      1/1       Running            0          12m
kube-system   kube-apiserver-devstats.team.io            1/1       Running            0          12m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running            0          12m
kube-system   kube-proxy-69qnb                           1/1       Running            0          12m
kube-system   kube-scheduler-devstats.team.io            1/1       Running            0          12m
kube-system   weave-net-2fsrf                            1/2       CrashLoopBackOff   5          5m
kube-system   weave-net-j9f5m                            1/2       CrashLoopBackOff   6          8m
kube-system   weave-net-wwjrr                            1/2       CrashLoopBackOff   5          4m
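With all three weave-net pods in CrashLoopBackOff, the next thing worth pulling is the weave container's own log (pod name taken from the listing above; guarded so the snippet is safe to paste on any machine):

```shell
# Fetch the log of the previous (crashed) run of the weave container
# in the master's pod, weave-net-j9f5m (name from the listing above).
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system logs weave-net-j9f5m -c weave --previous
else
  echo "kubectl not available on this machine"
fi
```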
  • kubectl describe po --all-namespaces:
Name:               coredns-78fcdf6894-g8wzs
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x48 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               coredns-78fcdf6894-tzs8n
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x47 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               etcd-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  etcd:
    Container ID:  docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
    Image:         k8s.gcr.io/etcd-arm64:3.2.18
    Image ID:      docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=devstats.team.io=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=devstats.team.io
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:    <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-apiserver-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-apiserver:
    Container ID:  docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
    Image:         k8s.gcr.io/kube-apiserver-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=147.75.97.234
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --disable-admission-plugins=PersistentVolumeLabel
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
    Liveness:     http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-controller-manager-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-controller-manager
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-controller-manager:
    Container ID:  docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
    Image:         k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --address=127.0.0.1
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-proxy-69qnb
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:32:25 +0000
Labels:             controller-revision-hash=2718475167
                    k8s-app=kube-proxy
                    pod-template-generation=1
Annotations:        scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
    Image:         k8s.gcr.io/kube-proxy-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
    State:          Running
      Started:      Thu, 02 Aug 2018 14:32:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-4q6rl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-4q6rl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=arm64
Tolerations:     
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason   Age   From                       Message
  ----    ------   ----  ----                       -------
  Normal  Pulled   13m   kubelet, devstats.team.io  Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
  Normal  Created  13m   kubelet, devstats.team.io  Created container
  Normal  Started  13m   kubelet, devstats.team.io  Started container


Name:               kube-scheduler-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-scheduler
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-scheduler:
    Container ID:  docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
    Image:         k8s.gcr.io/kube-scheduler-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:14 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               weave-net-2fsrf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.cncf.io/147.75.78.47
Start Time:         Thu, 02 Aug 2018 14:39:49 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.78.47
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:04 +0000
      Finished:     Thu, 02 Aug 2018 14:43:05 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:39:58 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                       Message
  ----     ------   ----               ----                       -------
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, devstats.cncf.io  Created container
  Normal   Started  5m                 kubelet, devstats.cncf.io  Started container
  Normal   Created  5m (x4 over 5m)    kubelet, devstats.cncf.io  Created container
  Normal   Started  5m (x4 over 5m)    kubelet, devstats.cncf.io  Started container
  Normal   Pulled   5m (x3 over 5m)    kubelet, devstats.cncf.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  56s (x27 over 5m)  kubelet, devstats.cncf.io  Back-off restarting failed container


Name:               weave-net-j9f5m
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:36:11 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:42:18 +0000
      Finished:     Thu, 02 Aug 2018 14:42:18 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:36:31 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age               From                       Message
  ----     ------   ----              ----                       -------
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  9m                kubelet, devstats.team.io  Created container
  Normal   Started  9m                kubelet, devstats.team.io  Started container
  Normal   Created  8m (x4 over 9m)   kubelet, devstats.team.io  Created container
  Normal   Started  8m (x4 over 9m)   kubelet, devstats.team.io  Started container
  Normal   Pulled   8m (x3 over 9m)   kubelet, devstats.team.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  4m (x26 over 9m)  kubelet, devstats.team.io  Back-off restarting failed container


Name:               weave-net-wwjrr
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               cncftest.io/147.75.205.79
Start Time:         Thu, 02 Aug 2018 14:39:57 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.205.79
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:16 +0000
      Finished:     Thu, 02 Aug 2018 14:43:16 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:40:09 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                  Message
  ----     ------   ----               ----                  -------
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, cncftest.io  Created container
  Normal   Started  5m                 kubelet, cncftest.io  Started container
  Normal   Created  4m (x4 over 5m)    kubelet, cncftest.io  Created container
  Normal   Pulled   4m (x3 over 5m)    kubelet, cncftest.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Normal   Started  4m (x4 over 5m)    kubelet, cncftest.io  Started container
  Warning  BackOff  44s (x27 over 5m)  kubelet, cncftest.io  Back-off restarting failed container

- `kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true`:
I0802 14:49:02.034473   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.063828   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764   64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907   64396 round_trippers.go:414]     Content-Type: application/json
I0802 14:49:02.236944   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363   64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.262919   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703   64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.275815   64396 round_trippers.go:414]     Content-Length: 69
I0802 14:49:02.275850   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.279765   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271   64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.293394   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug  2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

So, to summarize: it is impossible to install a Kubernetes cluster with a single master node and a single worker node on Ubuntu 18.04.
I think there should be an installation guide describing how to set up k8s step by step with kubeadm on the newest Ubuntu LTS.

I believe 18.04 is broken both in terms of the Docker it ships and because of systemd-resolved.
so yes, it is very hard to write guides for every distribution flavor out there, and we really cannot maintain them efficiently.

Also, although kubeadm is the interface here, the problem may well not be related to kubeadm itself.

some questions:

  • Have you been running amd64 + arm64 clusters successfully with recent versions of Kubernetes?
  • I wonder whether this is a proxy issue. Are the nodes behind a proxy?
  • What are the contents of /var/lib/kubelet/kubeadm-flags.env when you start kubeadm join/init on the 3 nodes?
  • Are those the only interesting contents of journalctl -xeu kubelet? Is that only on the master node? What about the others? You can dump them in a GitHub gist or on http://pastebin.com so that I can take a look too.
  • Have you been running amd64 + arm64 clusters successfully with recent versions of Kubernetes? No, this is my first attempt, but I will also try installing the master on an amd64 host plus a single node on another amd64 host to rule out an arm64-related problem.
  • I wonder whether this is a proxy issue. Are the nodes behind a proxy? No proxy; all 3 servers have static IPs.
  • What are the contents of /var/lib/kubelet/kubeadm-flags.env when you start kubeadm join/init on the 3 nodes?
    master (devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

node (cncftest.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

node (devstats.cncf.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
  • Are those the only interesting contents of journalctl -xeu kubelet? Is that only on the master node? What about the others? You can dump them in a GitHub gist or on http://pastebin.com so that I can take a look too.

Pastebins: master, node.
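The files and logs asked for above can be collected in one go. Below is a small hypothetical helper sketch (not part of kubeadm itself): the paths are the ones discussed in this thread, and each step is skipped when it is unavailable on the host.

```shell
# Bundle kubeadm/kubelet diagnostics into a single file for pasting.
out=kubeadm-diag.txt
: > "$out"
for f in /var/lib/kubelet/kubeadm-flags.env /etc/cni/net.d/*; do
  # Skip paths that do not exist on this machine.
  [ -e "$f" ] && { echo "== $f =="; cat "$f"; echo; } >> "$out"
done
# Append the last 200 kubelet journal lines when journalctl is present.
if command -v journalctl >/dev/null 2>&1; then
  journalctl -xeu kubelet --no-pager 2>/dev/null | tail -n 200 >> "$out"
fi
echo "wrote $out"
```

The resulting kubeadm-diag.txt can then be uploaded to a gist or pastebin as requested.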

So, I installed the master with kubeadm init on the amd64 host and tried weave net, and the result is exactly the same as when I tried this on the arm64 host:

  • Back-off restarting failed container
  • runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

There is some progress.
I installed the master on amd64 and then a node on amd64 as well. Everything worked fine.
I added the arm64 node and now I have:
amd64 master: Ready
amd64 node: Ready
arm64 node: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

  • So it seems the flannel net plugin cannot talk across different architectures, and arm64 cannot be used as a master at all.
  • The weave net plugin does not work at all (even without adding nodes). The master is always in NotReady state no matter whether the arch is amd64 or arm64.
  • In all those cases the reason for NotReady is always the same: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Any suggestions on what I should do? Where should I report this? I already have a 2-node cluster (amd64 master and node), but I want to help resolve this issue so that any master arch can be used with any node arch out of the box.

@lukaszgryglicki
kube-flannel.yml deploys the flannel container for only one architecture. That is why, on nodes with a different architecture, the CNI plugin does not start and the node never becomes Ready.

I never tried it myself, but I guess you can deploy two hacked flannel manifests with different taints (and names) to avoid mixing things up; but again, my suggestion is to ask the flannel folks whether there are already instructions on how to do this.
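The two-manifest idea could be sketched like this. The manifest excerpt below is a made-up minimal stand-in for the real v0.10.0 kube-flannel.yml (only the arch-bearing fields are shown), and the renamed DaemonSet name is an assumption for illustration:

```shell
# Minimal stand-in for the v0.10.0 flannel manifest, for illustration only.
cat > kube-flannel.yml <<'EOF'
kind: DaemonSet
metadata:
  name: kube-flannel-ds
spec:
  template:
    spec:
      containers:
      - image: quay.io/coreos/flannel:v0.10.0-amd64
EOF

# Derive an arm64 variant with a distinct DaemonSet name so that both
# manifests can coexist in kube-system.
sed -e 's/amd64/arm64/g' \
    -e 's/kube-flannel-ds/kube-flannel-ds-arm64/g' \
    kube-flannel.yml > kube-flannel-arm64.yml
grep -E 'name|image' kube-flannel-arm64.yml
```

On the real files one would then apply both from the master, e.g. `kubectl apply -f kube-flannel.yml -f kube-flannel-arm64.yml` (hypothetical usage; whether flannel supports this side-by-side setup is exactly the question for the flannel folks).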

But I modified the manifest on arm64 as the tutorial suggests: replaced amd64 with arm64.
So maybe I should create an issue for flannel and paste a link to this thread.

And now, why does weave net fail on both arches with the same CNI-related error? Maybe create an issue for weave too, also linking to this thread?

@lukaszgryglicki
When you patched kube-flannel.yml for arm, it stops working on amd machines... That is why I guess deploying two well-tuned manifests, one for arm and one for amd, may solve your problem.

And now that I think of it, you might have to fix the same issue with the kube-proxy daemon set as well, but I can't test this now, sorry.

For the problem you have with weave I don't have enough information. One issue could be that weave doesn't work with --pod-network-cidr=10.244.0.0/16, but going back to the initial problem, I don't know offhand whether weave works out of the box on mixed platforms or not.

So I should deploy two different flannel manifests on one master, right? It doesn't matter whether the master is arm64 or amd64, right? Should the master handle deploying the correct arch on itself and on the nodes?
I'm not sure what you mean here:

And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry

I didn't use --pod-network-cidr=10.244.0.0/16 for weave. I used just kubeadm init.
I used --pod-network-cidr=10.244.0.0/16 only for the flannel attempts, as the docs say.

cc @luxas: I've seen you created some docs about multi-arch k8s deployments; maybe you have some feedback?

@lukasredynk

yes, after all this is an arch issue, thanks for confirming.
let's focus on flannel here, since the weave topic seems tangential.

Take a look at this from @luxas for context, if you haven't already seen it:
https://github.com/luxas/kubeadm-workshop

Should the master handle deploying the correct arch on itself and on the nodes?

it _should_, but the manifest you are downloading is not "fat":
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

From what I understand, the arch selectors propagate and you have to work around that with kubectl on each node (?).

it looks like a "fat" manifest is on master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca

related issue / PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

my guess is that this is cutting edge and you have to use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

so bring the cluster down, try it, and hopefully it works.
our CNI docs would need an update; however, this should happen once flannel-next is released.

OK, I'll try it after the weekend and post my results here. Thanks.

@lukaszgryglicki hi, did you get this working using the new flannel manifest?

Not yet, I'll try it today.

OK, it finally worked:

root@devstats:/root# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
cncftest.io        Ready     <none>    39s       v1.11.1
devstats.cncf.io   Ready     <none>    46s       v1.11.1
devstats.team.io   Ready     master    12m       v1.11.1

The fat manifest from flannel's master branch helped.
Thanks, this can be closed.

Hi guys, I'm in the same situation.
I have worker nodes in Ready state, but flannel on arm64 keeps failing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk, did it work for you?

any ideas?

The error looks different, but did you use the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains manifests for multiple arches.

Yes, I am (see screenshot in the original issue).

The problem now is that the flannel container keeps crashing on arm. :(

It runs on amd64 and arm64; it works for me.
Unfortunately I can't help with arm (32-bit); I don't have an arm machine available.

I'm on arm64, but thanks, I'll keep investigating...

Ohh, then I'm sorry, I thought you were on arm.
Anyway, I'm also fairly new to this, so you should wait for other folks to help you.
Paste the output of kubectl describe pods --all-namespaces and possibly the output of the other commands I posted in this thread. That may help someone track down the actual problem.

Thanks @lukaszgryglicki,
this is the output of describe pods: https://pastebin.com/kBVPYsMd

@lukaszgryglicki
Glad it worked in the end.
I'll document the use of the fat manifest for flannel in the docs, since I have no idea when 0.11.0 will be released.

@Leen15

relevant part from the failing pod:

  Warning  FailedCreatePodSandBox  3m (x5327 over 7h)  kubelet, nanopi-neo-plus2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory

Are you adding --pod-network-cidr=... which is needed for flannel?

also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923

@neolit123 yes, I found the issue: flannel did not create the virtual network interfaces (cni and flannel0).
I don't know the reason and couldn't solve it after several hours.
I gave up and switched to swarm.

understood. in that case I'm closing the issue.
Thanks.

I also ran into the same problem, and found that the node couldn't pull the required images because of the GFW in China, so I pulled the images manually and it recovered fine.

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file named 10-flannel.conflist in the /etc/cni/net.d directory. I believe kubernetes requires a network, which is configured by this package.
My cluster is in the following state:

NAME         STATUS    ROLES     AGE     VERSION
k8s-master   Ready     master    3h37m   v1.14.1
node001      Ready     <none>    3h6m    v1.14.1
node02       Ready     <none>    167m    v1.14.1
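To verify on a node that flannel actually produced what kubelet is waiting for, one can check the two files this thread keeps circling around. A sketch (it prints MISSING rather than failing on machines without flannel):

```shell
# Check for the CNI config written by applying the flannel manifest and the
# subnet file written by the flanneld pod; both paths appear in the errors
# quoted in this thread.
for f in /etc/cni/net.d/10-flannel.conflist /run/flannel/subnet.env; do
  if [ -e "$f" ]; then
    echo "OK:      $f"
  else
    echo "MISSING: $f"
  fi
done
```

If either file is MISSING on a node, that node will stay NotReady with the "cni config uninitialized" condition.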

Hi everyone,

I have 1 master and 2 nodes. The second node is in NotReady state.

root@kube1:~# kubectl get nodes
NAME         STATUS     ROLES     AGE     VERSION
dockerlab1   Ready      <none>    3h57m   v1.14.3
kube1        Ready      master    4h12m   v1.14.3
labserver1   NotReady   <none>    22m     v1.14.3

root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-72llr         1/1     Running             0          4h13m
kube-system   coredns-fb8b8dccf-n9v82         1/1     Running             0          4h13m
kube-system   etcd-kube1                      1/1     Running             0          4h12m
kube-system   kube-apiserver-kube1            1/1     Running             0          4h12m
kube-system   kube-controller-manager-kube1   1/1     Running             0          4h13m
kube-system   kube-flannel-ds-amd64-6q6sz     0/1     Init:0/1            0          24m
kube-system   kube-flannel-ds-amd64-rshnj     1/1     Running             0          3h59m
kube-system   kube-flannel-ds-amd64-xsj72     1/1     Running             0          4h1m
kube-system   kube-proxy-7m8jg                1/1     Running             0          3h59m
kube-system   kube-proxy-m7gdc                0/1     ContainerCreating   0          24m
kube-system   kube-proxy-xgq6p                1/1     Running             0          4h13m
kube-system   kube-scheduler-kube1            1/1     Running             0          4h13m

root@kube1:~# kubectl describe node labserver1
Name:               labserver1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=labserver1
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jun 2019 21:03:57 +0800
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  172.31.8.125
  Hostname:    labserver1
Capacity:
  cpu:                1
  ephemeral-storage:  18108284Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1122528Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  16688594507
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1020128Ki
  pods:               110
System Info:
  Machine ID:                 292dc4560f9309ccdd72b6935c80e8ec
  System UUID:                DE4707DF-5516-784A-9B41-588FCDE49369
  Boot ID:                    828d124c-b687-43f6-bffa-6a3e1e6e17e6
  Kernel Version:             4.4.0-142-generic
  OS Image:                   Ubuntu 16.04.6 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.9.6
  Kubelet Version:            v1.14.3
  Kube-Proxy Version:         v1.14.3
PodCIDR:                      10.244.3.0/24
Non-terminated Pods:          (2 in total)
  Namespace    Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                         ------------  ----------  ---------------  -------------  ---
  kube-system  kube-flannel-ds-amd64-6q6sz  100m (10%)    100m (10%)  50Mi (5%)        50Mi (5%)      25m
  kube-system  kube-proxy-m7gdc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (10%)  100m (10%)
  memory             50Mi (5%)   50Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 45m                kubelet, labserver1  Starting kubelet.
  Normal  NodeHasSufficientMemory  45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    45m                kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  45m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  Starting                 25m                kubelet, labserver1  Starting kubelet.
  Normal  NodeAllocatableEnforced  25m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 13m                kubelet, labserver1  Starting kubelet.
  Normal  NodeHasSufficientMemory  13m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    13m                kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     13m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  13m                kubelet, labserver1  Updated Node Allocatable limit across pods
root@kube1:~#

Please help


Hi Athir,

Please check the logs under /var/log/messages on your master node. You can find the actual error in those logs. But here are some general tips:

i. Always focus on your master node first.
ii. Install the Docker engine and pull all the images used by Kubernetes. When everything is running, add nodes to the master. That will solve the whole problem. I saw some articles on the internet that pull some images only after attaching the worker nodes; that practice causes problems.

Hi saddique164, thanks for your suggestions. Yes, as you said, yesterday I deployed another new worker node and was able to join it to the master without any problem.

Sorry, I can't help; I no longer have ARM64 nodes, I now have a pure 4-node AMD64 cluster.

The /etc/cni/net.d/10-flannel.conflist file was missing the cniVersion key in its config.

Adding "cniVersion": "0.2.0" solved the problem.

I faced this problem when I upgraded to v1.16.0 from 1.15; adding "cniVersion": "0.2.0" to /etc/cni/net.d/10-flannel.conflist solved it for me as well.
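For reference, a patched 10-flannel.conflist might look like the fragment below. Only the cniVersion line is the fix described above; the surrounding plugin entries are the usual flannel defaults, shown here as an illustrative assumption rather than taken from this thread:

```json
{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

After editing the file, restarting kubelet on that node makes it re-read the CNI config.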


flannel is not maintained very actively. I recommend calico or weavenet.

the flannel repository needed a fix.
the kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files

I faced the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Worked for me.

docker: network plugin is not ready: cni config uninitialized

Reinstall docker on the node that is NotReady.
Worked for me.

Running kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml as suggested above just did it!

I had a similar case where I was creating the network plugin before joining the workers, which caused /etc/cni/net.d to be missing.
I re-ran the setup after joining the worker nodes using:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed up in Ready state.

Hope it helps anyone with the same problem.

I ran that command (kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml) on the master machine and everything is in Ready state now. Thanks @saddique164.

The quickest way to add Flannel to Kubernetes on any AMD64 architecture:

1. Fetch kube-flannel.yaml

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml

2. Apply the Flannel network configuration

$ kubectl apply -f kube-flannel.yaml

I'm using kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file was created in /etc/cni/net.d
The master node is NotReady while the workers are in Ready state.

I'm using Kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file was created in /etc/cni/net.d.
The master node is NotReady while the slaves are in the Ready state.

  1. Can you run kubectl commands on the master?
  2. Can you check whether your kubelet is running on the master? Or run this: systemctl restart kubelet.
  3. If the kubelet keeps restarting or is auto-restarting, run journalctl -u kubelet and check the logs. You will find the error there.

NOTE: This looks like a kubelet issue.

  1. Yes, I can run kubectl commands.
  2. The kubelet starts and then fails.
  3. This is what I see in the journalctl -u kubelet errors:
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864   17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.

Try this on the master:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
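As a sanity check, the substitution can be dry-run on a scratch copy before touching the real drop-in. This is only a sketch: the sample Environment line below is a typical kubeadm drop-in fragment, not your exact file.

```shell
# Demonstrate the cgroup-driver swap on a throwaway copy first.
scratch="$(mktemp)"
cat > "$scratch" <<'EOF'
Environment="KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd --kubeconfig=/etc/kubernetes/kubelet.conf"
EOF
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' "$scratch"
grep cgroup-driver "$scratch"   # should now show cgroup-driver=cgroupfs
```

After editing the real file, reload and restart the service: systemctl daemon-reload && systemctl restart kubelet.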

It starts and then fails again.

Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341   15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358   15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420   15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427   15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464   15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480   15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313   15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

As you can see, it says no networks were found. Run this command and share the output:

kubectl get pods -n kube-system

NAME                          READY   STATUS        RESTARTS   AGE
coredns-75f8564758-92ws7      1/1     Running       0          25h
coredns-75f8564758-z9xn8      1/1     Running       0          25h
kube-flannel-ds-amd64-2j4mw   1/1     Running       0          25h
kube-flannel-ds-amd64-5tmhp   0/1     Pending       0          25h
kube-flannel-ds-amd64-rqwmz   1/1     Running       0          25h
kube-proxy-6v24w              1/1     Running       0          25h
kube-proxy-jgdw7              0/1     Pending       0          25h
kube-proxy-qppnk              1/1     Running       0          25h

Run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system

If nothing comes out, run this one:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system

Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused

How many nodes are running in your cluster? One node is causing this problem. This is a DaemonSet; it runs on every node, and your control plane is not accepting its request. So I suggest you follow these steps:

  1. First drain the worker nodes one by one:
    kubectl drain <node-name>
  2. Then delete them:
    kubectl delete node <node-name>
  3. Wait for the master node to become Ready. If it doesn't, run this command:
    kubeadm reset
  4. Initialize kubeadm again:
    kubeadm init
  5. Run this command:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  6. Get the join command from the master and run it on the worker nodes to connect them.

This process will work.
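The six steps above can be sketched as a small script. Hedged sketch: the node names are placeholders, and RUN=echo keeps it a dry run by default, so nothing is executed until you clear RUN on a real master.

```shell
#!/bin/sh
# Dry run by default: every command is echoed, not executed. Set RUN= to run for real.
RUN="${RUN:-echo}"
for node in worker1 worker2; do            # placeholder node names
  $RUN kubectl drain "$node" --ignore-daemonsets
  $RUN kubectl delete node "$node"
done
$RUN kubeadm reset -f                      # only if the master never becomes Ready
$RUN kubeadm init
$RUN kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Finally, run the printed 'kubeadm join ...' command on each worker node.
```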

kubectl get nodes:

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   26h   v1.18.5
slave1   Ready      <none>   26h   v1.18.5
slave2   Ready      <none>   26h   v1.18.5

I tried the steps you mentioned:

This is what I get.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Drain all the nodes except the master and focus on that. Once it is Ready, move on to adding the others.

Draining the nodes and then running kubeadm reset and init does not help. The cluster fails to initialize afterwards.

My problem was that I was updating the hostname after the cluster was created. By doing that, it was as if the master did not know it was the master.

I am still running:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)

but now I run it before the cluster is initialized.
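The ordering can be made explicit with a small wrapper. This is a sketch: metadata_hostname is a hypothetical helper, 169.254.169.254 is the EC2 metadata endpoint already used above, and METADATA_URL is my addition so the helper can be exercised without real instance metadata.

```shell
# Resolve the node's final hostname BEFORE kubeadm init, so the node
# registers under the name it will keep.
metadata_hostname() {
  curl -s "${METADATA_URL:-http://169.254.169.254/latest/meta-data/hostname}"
}
# sudo hostname "$(metadata_hostname)"   # 1) fix the hostname first
# kubeadm init ...                       # 2) only then initialize the cluster
```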
