Kubeadm: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Created on 2018-08-02  ·  65 comments  ·  Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

  • I followed this guide.
  • I installed the master on a 96-CPU ARM64 server.
  • The OS is Ubuntu 18.04 LTS, right after apt-get update/upgrade.
  • Used kubeadm init --pod-network-cidr=10.244.0.0/16, then ran the suggested commands.
  • Chose flannel as the pod network (a condensed sketch of these steps follows the pod listing below):

    • sysctl net.bridge.bridge-nf-call-iptables=1

    • wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

    • vim kube-flannel.yml, replacing amd64 with arm64

    • kubectl apply -f kube-flannel.yml

    • kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-ls44z                   1/1       Running   0          20m
kube-system   coredns-78fcdf6894-njnnt                   1/1       Running   0          20m
kube-system   etcd-devstats.team.io                      1/1       Running   0          20m
kube-system   kube-apiserver-devstats.team.io            1/1       Running   0          20m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running   0          20m
kube-system   kube-flannel-ds-v4t8s                      1/1       Running   0          13m
kube-system   kube-proxy-5825g                           1/1       Running   0          20m
kube-system   kube-scheduler-devstats.team.io            1/1       Running   0          20m
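
For reference, a condensed sketch of the master setup above (same guide, flannel v0.10.0 manifest; the sed call stands in for the manual vim edit):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo sysctl net.bridge.bridge-nf-call-iptables=1
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml   # equivalent to editing the manifest in vim
kubectl apply -f kube-flannel.yml
kubectl get pods --all-namespaces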

Then joined two AMD64 nodes using the command from the kubeadm init output:
First node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:49.987467   16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709   16652 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
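
The IPVS preflight warning above is only a warning (kube-proxy falls back to iptables mode), but the modules it lists can be loaded if IPVS is wanted, e.g.:

sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
# make it persistent across reboots
printf 'ip_vs\nip_vs_rr\nip_vs_wrr\nip_vs_sh\nnf_conntrack_ipv4\n' | sudo tee /etc/modules-load.d/ipvs.conf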

Second node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:58.913060   38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222   38617 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

But on the master, kubectl get nodes shows:

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    7m        v1.11.1
devstats.cncf.io   NotReady   <none>    7m        v1.11.1
devstats.team.io   Ready      master    21m       v1.11.1

Then: kubectl describe nodes (the master is devstats.team.io, the nodes are cncftest.io and devstats.cncf.io):

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:26:53 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 8m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:27:00 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 7m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=147.75.97.234
                    kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:12:56 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:21:07 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                coredns-78fcdf6894-ls44z                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                coredns-78fcdf6894-njnnt                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-v4t8s                       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)
  kube-system                kube-proxy-5825g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       850m (0%)   100m (0%)
  memory    190Mi (0%)  390Mi (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 23m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  23m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     23m (x5 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                kube-proxy, devstats.team.io  Starting kube-proxy.
  Normal  NodeReady                13m                kubelet, devstats.team.io     Node devstats.team.io status is now: NodeReady

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
  • Cloud provider or hardware configuration:
  • Master: bare-metal server, 96 cores, ARM64, 128 GB RAM, swap off.
  • Nodes (2): bare-metal servers, 48 cores, AMD64, 256 GB RAM, swap off, x 2.
  • uname -a: Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:    18.04
Codename:   bionic
  • Kernel (e.g. uname -a): Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • Others: docker version:
docker version
Client:
 Version:   17.12.1-ce
 API version:   1.35
 Go version:    go1.10.1
 Git commit:    7390fc6
 Built: Wed Apr 18 01:26:37 2018
 OS/Arch:   linux/arm64

Server:
 Engine:
  Version:  17.12.1-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   7390fc6
  Built:    Wed Feb 28 17:46:05 2018
  OS/Arch:  linux/arm64
  Experimental: false

What happened?

The exact error seems to be:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

I also found this thread (there is no KUBELET_NETWORK_ARGS there).
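
On a kubeadm v1.11 node the CNI-related kubelet flags normally live in /var/lib/kubelet/kubeadm-flags.env rather than in a KUBELET_NETWORK_ARGS variable, roughly like this (exact flags vary per setup):

$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni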

  • journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663   38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876   38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The directory /etc/cni/net.d exists but is empty.
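
For comparison, once a flannel pod is actually running on a node, its install-cni container normally writes a config there, roughly like this (a sketch; file name and contents depend on the flannel version):

$ cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}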

What did you expect to happen?

All nodes in the Ready state.

How to reproduce it (as minimally and precisely as possible)?

Just follow the steps from the tutorial. I tried 3 times and it happens every time.

Anything else we need to know?

The master is ARM64, the 2 nodes are AMD64.
The master and one node are in Amsterdam, the second node is in the USA.

I can run pods on the master by using kubectl taint nodes --all node-role.kubernetes.io/master-, but that is not a solution. I want a real multi-node cluster to work with.

area/ecosystem priority/awaiting-more-evidence

Most helpful comment

@lukasredynk

Yes, so this is the main issue after all, thanks for confirming.
Let's focus on flannel here, since the weave problem seems tangential.

Have a look at this from @luxas for context, if you haven't seen it already:
https://github.com/luxas/kubeadm-workshop

Shouldn't the master handle generating the correct arch deployments on itself and on the nodes?

_It should_, but the manifest you downloaded is not a "fat" manifest:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

As far as I understand, the arch taint is propagated and you need to fix it with kubectl on each node (?).

It looks like a "fat" manifest is in master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff -7891b552b026259e99d479b5e30d31ca

Related issue / PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

My assumption is that this is bleeding edge and you have to use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

So please tear down the cluster and give it a try; hopefully it works.
Our CNI docs need to be amended, but that needs to happen once the next flannel release is out.
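
If a full teardown is not desired, a rough sketch of just swapping the manifests in place (untested here; the recommendation above is to tear the cluster down and start over):

kubectl delete -f kube-flannel.yml    # the arch-edited v0.10.0 manifest applied earlier
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml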

All 65 comments

@lukaszgryglicki
It seems the nodes are not getting flannel because they are on the amd64 architecture:

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

Name:               devstats.team.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

I'm not a flannel expert, but I think you should check the product documentation on how to make it work in an environment with mixed platforms.
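
One quick way to confirm this is to compare the node arch labels with the nodeSelector baked into the flannel DaemonSet (names taken from the manifest and output above):

kubectl get nodes -L beta.kubernetes.io/arch
kubectl -n kube-system get ds kube-flannel-ds -o jsonpath='{.spec.template.spec.nodeSelector}'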

That's a good point, but what about the error message - it seems really unrelated.

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

It looks like some CNI config file is missing from /etc/cni/net.d, but why?
I'm now trying a different docker, 18.03ce, as suggested on the slack channel (17.03 was actually suggested, but Ubuntu 18.04 has no 17.03).

The labels with the arch names indeed don't match. But the next label, beta.kubernetes.io/os=linux, is the same on all 3 servers.

Same with Docker 18.03ce. I see no difference; this doesn't look like a docker problem. It looks like some CNI configuration problem.
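
A few quick checks on a NotReady node help separate a docker problem from a CNI one (a sketch):

ls /opt/cni/bin       # CNI plugin binaries installed by the kubernetes-cni package
ls /etc/cni/net.d     # stays empty until a network-plugin pod writes its config
kubectl -n kube-system get pods -o wide | grep flannel   # is any flannel pod scheduled on this node at all?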

@lukaszgryglicki
Hi,

Master: bare-metal server, 96 cores, ARM64, 128 GB RAM, swap off.
Nodes (2): bare-metal servers, 48 cores, AMD64, 256 GB RAM, swap off, x 2.

Those are some _nice_ specs.

The way I test things is the following - if something doesn't work with weavenet, I try flannel, and vice versa.

So please try weave, and if your CNI setup works with it, then this is related to the CNI plugin.

While the kubeadm team supports plugins and addons, we usually delegate issues to their respective maintainers, because we don't have the bandwidth to handle everything.
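
For reference, weave net was typically applied with the one-liner from its docs at the time:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"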

Sure, I tried weave a few iterations ago. It ended up in a container restart loop.
Now I'll try docker 17.03 to rule out a docker issue (17.03 should be well supported).

So this is not a docker problem. Same on 17.03:

Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786   21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074   21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
now will try weave net as suggested on the issue

I'll try weave net now and post the results here.

So, I tried weave net and it doesn't work:
On the master: kubectl get nodes

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    5s        v1.11.1
devstats.cncf.io   NotReady   <none>    12s       v1.11.1
devstats.team.io   NotReady   master    7m        v1.11.1
  • kubectl describe nodes (the same cni-related error, but now also on the master node):
Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:56 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-wwjrr    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 1m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:49 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-2fsrf    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 1m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:32:14 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-69qnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-j9f5m                             20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests   Limits
  --------  --------   ------
  cpu       570m (0%)  0 (0%)
  memory    0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 10m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  10m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 8m                 kube-proxy, devstats.team.io  Starting kube-proxy.
  • journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  • kubectl get po --all-namespaces
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   coredns-78fcdf6894-g8wzs                   0/1       Pending            0          12m
kube-system   coredns-78fcdf6894-tzs8n                   0/1       Pending            0          12m
kube-system   etcd-devstats.team.io                      1/1       Running            0          12m
kube-system   kube-apiserver-devstats.team.io            1/1       Running            0          12m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running            0          12m
kube-system   kube-proxy-69qnb                           1/1       Running            0          12m
kube-system   kube-scheduler-devstats.team.io            1/1       Running            0          12m
kube-system   weave-net-2fsrf                            1/2       CrashLoopBackOff   5          5m
kube-system   weave-net-j9f5m                            1/2       CrashLoopBackOff   6          8m
kube-system   weave-net-wwjrr                            1/2       CrashLoopBackOff   5          4m
  • kubectl describe po --all-namespaces
Name:               coredns-78fcdf6894-g8wzs
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x48 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               coredns-78fcdf6894-tzs8n
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x47 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               etcd-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  etcd:
    Container ID:  docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
    Image:         k8s.gcr.io/etcd-arm64:3.2.18
    Image ID:      docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=devstats.team.io=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=devstats.team.io
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:    <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-apiserver-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-apiserver:
    Container ID:  docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
    Image:         k8s.gcr.io/kube-apiserver-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=147.75.97.234
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --disable-admission-plugins=PersistentVolumeLabel
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
    Liveness:     http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-controller-manager-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-controller-manager
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-controller-manager:
    Container ID:  docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
    Image:         k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --address=127.0.0.1
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-proxy-69qnb
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:32:25 +0000
Labels:             controller-revision-hash=2718475167
                    k8s-app=kube-proxy
                    pod-template-generation=1
Annotations:        scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
    Image:         k8s.gcr.io/kube-proxy-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
    State:          Running
      Started:      Thu, 02 Aug 2018 14:32:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-4q6rl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-4q6rl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=arm64
Tolerations:     
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason   Age   From                       Message
  ----    ------   ----  ----                       -------
  Normal  Pulled   13m   kubelet, devstats.team.io  Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
  Normal  Created  13m   kubelet, devstats.team.io  Created container
  Normal  Started  13m   kubelet, devstats.team.io  Started container


Name:               kube-scheduler-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-scheduler
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-scheduler:
    Container ID:  docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
    Image:         k8s.gcr.io/kube-scheduler-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:14 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               weave-net-2fsrf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.cncf.io/147.75.78.47
Start Time:         Thu, 02 Aug 2018 14:39:49 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.78.47
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:04 +0000
      Finished:     Thu, 02 Aug 2018 14:43:05 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:39:58 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                       Message
  ----     ------   ----               ----                       -------
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, devstats.cncf.io  Created container
  Normal   Started  5m                 kubelet, devstats.cncf.io  Started container
  Normal   Created  5m (x4 over 5m)    kubelet, devstats.cncf.io  Created container
  Normal   Started  5m (x4 over 5m)    kubelet, devstats.cncf.io  Started container
  Normal   Pulled   5m (x3 over 5m)    kubelet, devstats.cncf.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  56s (x27 over 5m)  kubelet, devstats.cncf.io  Back-off restarting failed container


Name:               weave-net-j9f5m
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:36:11 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:42:18 +0000
      Finished:     Thu, 02 Aug 2018 14:42:18 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:36:31 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age               From                       Message
  ----     ------   ----              ----                       -------
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  9m                kubelet, devstats.team.io  Created container
  Normal   Started  9m                kubelet, devstats.team.io  Started container
  Normal   Created  8m (x4 over 9m)   kubelet, devstats.team.io  Created container
  Normal   Started  8m (x4 over 9m)   kubelet, devstats.team.io  Started container
  Normal   Pulled   8m (x3 over 9m)   kubelet, devstats.team.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  4m (x26 over 9m)  kubelet, devstats.team.io  Back-off restarting failed container


Name:               weave-net-wwjrr
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               cncftest.io/147.75.205.79
Start Time:         Thu, 02 Aug 2018 14:39:57 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.205.79
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:16 +0000
      Finished:     Thu, 02 Aug 2018 14:43:16 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:40:09 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                  Message
  ----     ------   ----               ----                  -------
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, cncftest.io  Created container
  Normal   Started  5m                 kubelet, cncftest.io  Started container
  Normal   Created  4m (x4 over 5m)    kubelet, cncftest.io  Created container
  Normal   Pulled   4m (x3 over 5m)    kubelet, cncftest.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Normal   Started  4m (x4 over 5m)    kubelet, cncftest.io  Started container
  Warning  BackOff  44s (x27 over 5m)  kubelet, cncftest.io  Back-off restarting failed container
  • kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true
I0802 14:49:02.034473   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.063828   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764   64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907   64396 round_trippers.go:414]     Content-Type: application/json
I0802 14:49:02.236944   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363   64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.262919   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703   64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.275815   64396 round_trippers.go:414]     Content-Length: 69
I0802 14:49:02.275850   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.279765   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271   64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.293394   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug  2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

So, to sum up: right now it seems impossible to install a Kubernetes cluster with just one master and one worker node on Ubuntu 18.04.
I think there should be step-by-step installation instructions for setting up k8s with kubeadm on the latest LTS Ubuntu.

I think 18.04 broke things both with the Docker version it bundles and with systemd-resolved.
So yes, it's really hard to write a guide for every distro flavor, and we cannot really maintain such guides effectively.

Also, while kubeadm is the front end here, the problem may well not be related to kubeadm itself.

A few questions:

  • Have you successfully run an amd64 + arm64 cluster with the latest kubernetes version before?
  • I wonder if this is a proxy problem. Are the nodes behind a proxy?
  • What are the contents of /var/lib/kubelet/kubeadm-flags.env on the 3 nodes when you start kubeadm join/init?
  • Is that the only interesting content in journalctl -xeu kubelet? And that's only from the master node - what about the other nodes? You could dump these into a github gist or http://pastebin.com and I can take a look too.
  • Have you successfully run an amd64 + arm64 cluster with the latest kubernetes version before? No, this is my first attempt, but I will also try installing the master on an amd64 host plus a single node on another amd64 host to rule out arm64-related problems.
  • I wonder if this is a proxy problem. Are the nodes behind a proxy? There is no proxy at all, all 3 servers have static IPs.
  • What are the contents of /var/lib/kubelet/kubeadm-flags.env on the 3 nodes when you start kubeadm join/init?
    Master (devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

Node (cncftest.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

Node (devstats.cncf.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
  • Are those the only interesting entries in journalctl -xeu kubelet? And that's only from the master node - what about the other nodes? You could dump these into a github gist or http://pastebin.com and I can take a look too.

Pastebins: master, node

So, I installed the master with kubeadm init on an amd64 host and tried weave net, and the result is exactly the same as when trying on the arm64 host:

  • Back-off restarting failed container
  • runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

There is some small progress.
I installed the master on amd64 and then a node, also on amd64. Everything works.
I then added the arm64 node and now I have:
Master amd64: Ready
Node amd64: Ready
Node arm64: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

  • So it looks like the flannel net plugin cannot work across different architectures, and arm64 cannot be used as a master at all.
  • The weave net plugin does not work at all (even without adding any node). The master is always NotReady, no matter whether its arch is amd64 or arm64.
  • In all those cases the reason for 'NotReady' is always the same: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Any suggestions what I should do? Where should I report this? I already have a 2-node cluster (master and node, both amd64), but I want to help solve this so that one can use a master of any architecture with nodes of any architecture, just OOTB.

@lukaszgryglicki
kube-flannel.yml deploys the flannel container for only one architecture. That's why the cni plugin does not start on nodes with a different architecture and those nodes never become ready.

I never tried it myself, but I guess you could deploy two hacked flannel manifests with different taints (and names) to avoid clashes; my suggestion is to ask the flannel folks whether there are already instructions on how to do this.
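For illustration only, a minimal sketch of what such an architecture-pinned flannel DaemonSet boils down to (the names, image tag and the omitted flannel args/volumes below are assumptions based on the stock kube-flannel.yml, not something verified in this thread):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64        # a second copy named kube-flannel-ds-amd64 would target amd64
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm64              # pin this DaemonSet to arm64 nodes only
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64 # arch-specific image tag
        # ...remaining args, env and volume mounts as in the stock manifest...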

But I already adjusted the manifest on arm64 as suggested in the tutorial: replaced amd64 with arm64.
So maybe I'll create an issue for flannel and paste a link to this thread.

Now, why does weave net fail on both arches with the same cni-related error? Maybe create an issue for weave too and link to this thread?
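(Side note: the weave log pasted above ends with "Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host", so one thing that may be worth trying - an untested suggestion, not something confirmed in this thread - is to give weave a non-overlapping allocation range via its IPALLOC_RANGE option, e.g.:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.0.0/16"

The 192.168.0.0/16 value is only an example; any range that does not collide with the host's 10.0.0.0/8 route should do.)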

@lukaszgryglicki
When you adapt kube-flannel.yml for arm, it stops working on the amd machines... that's why I guess that deploying 2 carefully crafted manifests, one for arm and one for amd, could solve your problem.

And now that I think of it, you might have to fix the same issue with the kube-proxy daemon set as well, but I can't test this now, sorry.


For the problem you're having with weave I don't have enough information. One issue could be that weave doesn't work with --pod-network-cidr=10.244.0.0/16, but going back to the original question, I don't know whether weave works out of the box on mixed platforms.

So I should deploy two different manifests for flannel on the master, right? No matter whether the master is arm64 or amd64, right? Shouldn't the master handle generating the correct arch deployments for itself and the nodes?
Not sure what you mean here:

And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry

I did not use --pod-network-cidr=10.244.0.0/16 for weave. I only used kubeadm init.
I used --pod-network-cidr=10.244.0.0/16 only for the flannel attempts, just as the docs say.

cc @luxas - I see you have created some docs on multi-arch k8s deployments, maybe you have some feedback?

@lukasredynk

Yes, so this is the main problem after all, thanks for confirming.
Let's focus on flannel here, since the weave problem seems tangential.

Have a look at @luxas' work for context, if you haven't seen it already:
https://github.com/luxas/kubeadm-workshop

Shouldn't the master handle generating the correct arch deployments for itself and the nodes?

_It should_, but the manifest you downloaded is not a "fat" manifest:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

As far as I understand, the arch taint is propagated and you'd need to fix it with kubectl on each node (?).

It looks like a "fat" manifest is in master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca

Related issue / PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

My assumption is that this is bleeding edge and you have to use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

So please bring the cluster down and give it a try; hopefully it works.
Our CNI docs need amending, but that needs to happen once flannel-next is released.
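A quick way to check whether the multi-arch manifest was actually picked up is to look for the per-architecture DaemonSets it creates (names as they appear later in this thread, e.g. kube-flannel-ds-amd64 / kube-flannel-ds-arm64):

kubectl get daemonset -n kube-system | grep flannel
kubectl get nodes        # all nodes should eventually report Ready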

OK, I will try it after the weekend and post my results here. Thanks.

@lukaszgryglicki hi, did you try the new flannel manifest?

Not yet, I'll try it today.

OK, it finally worked:

root@devstats:/root# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
cncftest.io        Ready     <none>    39s       v1.11.1
devstats.cncf.io   Ready     <none>    46s       v1.11.1
devstats.team.io   Ready     master    12m       v1.11.1

The fat manifest from the flannel master branch helped.
Thanks, this can be closed.

Hi everyone, I'm in the same situation.
I have worker nodes in Ready state, but flannel on arm64 keeps crashing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did it work for you?

Any ideas?

The error seems different, but did you use the fat manifest: https :
It contains manifests for multiple arches.

I am:
image

The problem now is that the flannel container can't be pinned on arm. :(

It works on amd64 and arm64 - it worked for me.
Unfortunately I can't help with arm (32-bit), I don't have any arm machine available.

I'm on arm64, but thanks, I'll keep investigating...

Oh, sorry then, I thought you were on arm.
Anyway, I'm also quite new to this, so you'll need to wait for help from somebody else.
Please paste the kubectl describe pods --all-namespaces output, and possibly the output of the other commands I posted in this thread. That may help somebody track down the real problem.

Thanks @lukaszgryglicki
Here is the describe pods output: https :

@lukaszgryglicki
Glad it finally worked.
I'll document the fat-manifest usage for flannel in the docs, since I don't know when 0.11.0 will be released.

@Leen15

Related to the failing pod:

  Warning  FailedCreatePodSandBox  3m (x5327 over 7h)  kubelet, nanopi-neo-plus2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory

Are you adding the --pod-network-cidr=... required by flannel?

Also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923
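For reference, the usual flannel flow is: initialize the control plane with the pod CIDR that matches the manifest, and once the flannel pod is running on a node it writes /run/flannel/subnet.env, which the CNI plugin then reads. A rough sketch of what to expect (the subnet values below are illustrative, not taken from this cluster):

kubeadm init --pod-network-cidr=10.244.0.0/16
# after the flannel pod starts on a node:
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.1.1/24
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=true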

@neolit123 yes, I found the problem: flannel was not creating the virtual network interfaces (cni and flannel0).
I don't know why, and after several hours I wasn't able to solve it.
I gave up and switched to swarm.

OK, understood. In that case I'm closing this issue.
Thanks.

I ran into the same problem and found that, because of the GFW in China, the node could not pull the required images, so I pulled the images manually and everything went back to normal.
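One way to work around blocked image pulls is to pre-pull the control-plane images on each machine before init/join (kubeadm has supported these subcommands since v1.11):

kubeadm config images list   # show the images the current kubeadm version needs
kubeadm config images pull   # pull them via the configured container runtime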

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file named 10-flannel.conflist in the /etc/cni/net.d directory. I believe kubernetes needs a network, and that's what this package sets up.
My cluster is now in the following state:

NAME         STATUS    ROLES     AGE     VERSION
k8s-master   Ready     master    3h37m   v1.14.1
node001      Ready     <none>    3h6m    v1.14.1
node02       Ready     <none>    167m    v1.14.1
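A quick way to verify on a node that the manifest actually dropped the CNI config (paths as reported in this thread):

ls /etc/cni/net.d
# 10-flannel.conflist
kubectl get nodes   # the node should flip to Ready shortly afterwards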

Hi everyone,

I have 1 master and 2 nodes. The second node is in NotReady state.

root@kube1:~# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
dockerlab1   Ready      <none>   3h57m   v1.14.3
kube1        Ready      master   4h12m   v1.14.3
labserver1   NotReady   <none>   22m     v1.14.3


root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-72llr         1/1     Running             0          4h13m
kube-system   coredns-fb8b8dccf-n9v82         1/1     Running             0          4h13m
kube-system   etcd-kube1                      1/1     Running             0          4h12m
kube-system   kube-apiserver-kube1            1/1     Running             0          4h12m
kube-system   kube-controller-manager-kube1   1/1     Running             0          4h13m
kube-system   kube-flannel-ds-amd64-6q6sz     0/1     Init:0/1            0          24m
kube-system   kube-flannel-ds-amd64-rshnj     1/1     Running             0          3h59m
kube-system   kube-flannel-ds-amd64-xsj72     1/1     Running             0          4h1m
kube-system   kube-proxy-7m8jg                1/1     Running             0          3h59m
kube-system   kube-proxy-m7gdc                0/1     ContainerCreating   0          24m
kube-system   kube-proxy-xgq6p                1/1     Running             0          4h13m

kube-system   kube-scheduler-kube1            1/1     Running             0          4h13m

root@kube1:~# kubectl describe node labserver1
Name:               labserver1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=labserver1
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jun 2019 21:03:57 +0800
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sun, 09 Jun 2019 21:28:31 +0800   Sun, 09 Jun 2019 21:03:57 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  172.31.8.125
  Hostname:    labserver1
Capacity:
  cpu:                1
  ephemeral-storage:  18108284Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1122528Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  16688594507
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1020128Ki
  pods:               110
System Info:
  Machine ID:                 292dc4560f9309ccdd72b6935c80e8ec
  System UUID:                DE4707DF-5516-784A-9B41-588FCDE49369
  Boot ID:                    828d124c-b687-43f6-bffa-6a3e1e6e17e6
  Kernel Version:             4.4.0-142-generic
  OS Image:                   Ubuntu 16.04.6 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.9.6
  Kubelet Version:            v1.14.3
  Kube-Proxy Version:         v1.14.3
PodCIDR:                      10.244.3.0/24
Non-terminated Pods:          (2 in total)
  Namespace     Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------     ----                          ------------  ----------  ---------------  -------------  ---
  kube-system   kube-flannel-ds-amd64-6q6sz   100m (10%)    100m (10%)  50Mi (5%)        50Mi (5%)      25m
  kube-system   kube-proxy-m7gdc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (10%)  100m (10%)
  memory             50Mi (5%)   50Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 45m                kubelet, labserver1  Starting kubelet.
  Normal  NodeHasSufficientMemory  45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    45m                kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     45m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  45m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  Starting                 25m                kubelet, labserver1  Starting kubelet.
  Normal  NodeAllocatableEnforced  25m                kubelet, labserver1  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 13m                kubelet, labserver1  Starting kubelet.
  Normal  NodeHasSufficientMemory  13m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    13m                kubelet, labserver1  Node labserver1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     13m                kubelet, labserver1  Node labserver1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  13m                kubelet, labserver1  Updated Node Allocatable limit across pods
root@kube1:~#

Please help


Hi Arthur,

         Please check out logs in /var/logs/messages section of your Master node. You can find an actual error in those logs. But here are some general tips.

i. Always focus on your master node first.
ii. Install the docker engine on it and pull all the images used for kubernetes. When everything is in Running state, then add the nodes to the master. That will resolve most issues. I have seen some articles on the web that try to pull some images after attaching the worker nodes; that approach causes trouble.

Hi saddique164, thanks for the suggestion. Yes, as you said, I deployed another new worker node yesterday and it was able to join the master without any problem.

Sorry, I can't help here - I don't have ARM64 nodes anymore; I now have a 4-node AMD64 bare-metal cluster.

The file /etc/cni/net.d/10-flannel.conflist was missing the cniVersion key in its config.

Adding "cniVersion": "0.2.0" solved the issue.

The file /etc/cni/net.d/10-flannel.conflist was missing the cniVersion key in its config.

Adding "cniVersion": "0.2.0" solved the issue.

I hit this when upgrading from 1.15 to v1.16.0.
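For reference, a 10-flannel.conflist with the cniVersion key added at the top level looks roughly like this (a sketch based on the stock flannel config; the plugin list on your nodes may differ):

{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}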


Flannel is not very actively maintained. I recommend calico or weavenet.
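
If you do switch plugins, the flow is the same: apply the plugin's manifest and it writes the missing /etc/cni/net.d config. A hedged sketch for Calico (the manifest URL is the one the Calico docs published around that time and may have moved since):

```
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl -n kube-system get pods -w    # wait for the calico pods to go Running
```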

The flannel repository needs the fix.
The kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files

Ran into the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

worked for me.

docker: network plugin is not ready: cni config uninitialized

Reinstalled docker on the NotReady node.
Worked for me.
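
Roughly what that looks like on an Ubuntu node with docker-ce (the package name is an assumption; use docker.io if that is what you installed), followed by restarting the kubelet so it re-detects the runtime:

```
sudo apt-get install --reinstall docker-ce
sudo systemctl restart docker
sudo systemctl restart kubelet
kubectl get nodes    # the node should go Ready once the CNI config is in place
```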

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file called 10-flannel.conflist in the /etc/cni/net.d directory. I believe kubernetes needs a pod network, and that is what this manifest sets up.
My cluster is in the following state:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1
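
A quick way to verify the same thing on your own cluster, run on the master after applying the flannel manifest (a sketch):

```
# the flannel DaemonSet writes this file on each node once its pod starts
ls /etc/cni/net.d/
# expected: 10-flannel.conflist

# the nodes should flip from NotReady to Ready shortly afterwards
kubectl get nodes
```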

Just did that, it worked!

I had a similar case where I created the network plugin before joining the workers, which left /etc/cni/net.d missing.
After joining the worker nodes I re-applied the configuration with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result the config in /etc/cni/net.d was created successfully and the nodes showed up in the Ready state.

Hope this helps anyone with the same problem.

I ran this command and it solved my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file called 10-flannel.conflist in the /etc/cni/net.d directory. I believe kubernetes needs a pod network, and that is what this manifest sets up.
My cluster is in the following state:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1

Ran that command on the master and now everything is in the Ready state. Thanks @saddique164

The quickest way to add Flannel to Kubernetes on an AMD64 architecture:

1. Update kube-flannel.yaml

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml

2. Complete the Flannel network setup

$ kubectl apply -f kube-flannel.yaml
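
After the apply, one way to watch the plugin come up; a sketch assuming the amd64 DaemonSet name and the app=flannel label used elsewhere in this thread, which may differ by flannel version:

```
# the flannel pods are created by a DaemonSet, one per node
kubectl -n kube-system get daemonset kube-flannel-ds-amd64
kubectl -n kube-system get pods -l app=flannel -o wide

# nodes should report Ready once each flannel pod has written the CNI config
kubectl get nodes -o wide
```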

I am using kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file gets created under /etc/cni/net.d
The master node is NotReady while the slave nodes are in the Ready state

I am using kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No file gets created under /etc/cni/net.d
The master node is NotReady while the slave nodes are in the Ready state

  1. Can you run kubectl commands on the master?
  2. Can you check whether your kubelet is running on the master? Or run this: systemctl restart kubelet.
  3. If the kubelet is restarting or keeps restarting on its own, run journalctl -u kubelet and check the logs. You will find the error there.

Note: this looks like a kubelet problem.
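
Those checks as one copy-pasteable block (a sketch for a systemd-based master):

```
kubectl get nodes                               # 1. does the API server answer from the master?
systemctl status kubelet --no-pager             # 2. is the kubelet active or crash-looping?
sudo systemctl restart kubelet
journalctl -u kubelet --no-pager | tail -n 50   # 3. the actual error is usually near the end
```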

  1. Yes, I can run kubectl commands.
  2. The kubelet starts and then fails.
  3. This is the error I see in journalctl -u kubelet:
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864   17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.
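
For what it's worth, that "failed to find subsystem mount for required subsystem: pids" line points at the host's pids cgroup controller rather than at the CNI config; a quick check (a sketch):

```
# column 4 ("enabled") should be 1 for the pids line
grep pids /proc/cgroups

# the controller should also show up among the mounted cgroup filesystems
mount | grep cgroup | grep pids
```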

Try this on the master:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
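
If you try that, the kubelet flag should end up matching whatever cgroup driver docker actually reports, and systemd needs a reload before the change takes effect; a sketch:

```
# see which cgroup driver docker is using (cgroupfs vs systemd)
docker info 2>/dev/null | grep -i "cgroup driver"

# reload the edited drop-in and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```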

It starts and then fails again.

Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341   15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358   15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420   15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427   15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464   15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480   15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313   15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

If you look at it, it says no networks found. Run this command and share the result:

kubectl get pods -n kube-system

NAME                          READY   STATUS        RESTARTS   AGE
coredns-75f8564758-92ws7      1/1     Running       0          25h
coredns-75f8564758-z9xn8      1/1     Running       0          25h
kube-flannel-ds-amd64-2j4mw   1/1     Running       0          25h
kube-flannel-ds-amd64-5tmhp   0/1     Pending       0          25h
kube-flannel-ds-amd64-rqwmz   1/1     Running       0          25h
kube-proxy-6v24w              1/1     Running       0          25h
kube-proxy-jgdw7              0/1     Pending       0          25h
kube-proxy-qppnk              1/1     Running       0          25h

Run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system

If there is nothing, then run this:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system

Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused

How many nodes are running in your cluster? One node is causing this problem. That pod belongs to a DaemonSet; DaemonSet pods run on every node, and your control plane is not accepting requests from that one. So I suggest you follow these steps (see the sketch after this list):

  1. First drain the worker nodes one by one:
     kubectl drain <node-name>
  2. Then delete them:
     kubectl delete node <node-name>
  3. Wait for the master node to become Ready. If it does not, run this command:
     kubeadm reset
  4. Initialize kubeadm again:
     kubeadm init
  5. Run this command:
     kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  6. Take the join command printed by the master and run it on the worker nodes to connect them.

This process will work.
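
A condensed sketch of that sequence; <node-name> is a placeholder, the CIDR shown is flannel's default, and --ignore-daemonsets is usually required because kubectl drain refuses to evict DaemonSet pods otherwise:

```
# on the master, remove the workers one by one
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <node-name>

# if the master still does not become Ready, rebuild the control plane
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# then run the 'kubeadm join ...' command printed by kubeadm init on each worker
```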

kubectl get nodes:

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   26h   v1.18.5
slave1   Ready      <none>   26h   v1.18.5
slave2   Ready      <none>   26h   v1.18.5

I tried the steps you mentioned.

This is what I get:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Drain all the nodes except the master and focus on it. When it is Ready, then go add the others.

Draining the nodes and then kubeadm reset and init did not help. The cluster does not initialize after that.

My problem was that I updated the hostname after creating the cluster. Doing that, it was as if the master did not know it was the master.

I was also running:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)

but now I run it before the cluster is initialized.
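
A sketch of that ordering, assuming an EC2-style metadata endpoint and that the hostname has to be final before kubeadm init registers the node:

```
# set the hostname first (persistently), so the node registers under its final name
sudo hostnamectl set-hostname "$(curl -s 169.254.169.254/latest/meta-data/hostname)"

# only then initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```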
