BUG REPORT
1. apt-get update/upgrade
2. kubeadm init --pod-network-cidr=10.244.0.0/16
3. Then ran the commands suggested by kubeadm init, plus sysctl net.bridge.bridge-nf-call-iptables=1
4. wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
5. vim kube-flannel.yml, replacing amd64 with arm64 (a non-interactive sketch of this edit follows the pod listing below)
6. kubectl apply -f kube-flannel.yml
7. kubectl get pods --all-namespaces:
:NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-ls44z 1/1 Running 0 20m
kube-system coredns-78fcdf6894-njnnt 1/1 Running 0 20m
kube-system etcd-devstats.team.io 1/1 Running 0 20m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 20m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 20m
kube-system kube-flannel-ds-v4t8s 1/1 Running 0 13m
kube-system kube-proxy-5825g 1/1 Running 0 20m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 20m
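For reference, the amd64-to-arm64 edit in step 5 can also be scripted; a minimal sketch, assuming the v0.10.0 manifest layout where both the flannel image tag and the beta.kubernetes.io/arch nodeSelector mention amd64:
# fetch the flannel manifest and switch all architecture references to arm64
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml
kubectl apply -f kube-flannel.yml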
Then the two AMD64 nodes were joined using the kubeadm join command from the kubeadm init output:
Node 1:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:49.987467 16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709 16652 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
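As an aside, the RequiredIPVSKernelModulesAvailable warning in the join output above is only informational (kube-proxy falls back to iptables mode); if IPVS mode were wanted, the listed modules could be preloaded first, e.g.:
# load the kernel modules named in the warning (module list taken from the warning itself)
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4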
Node 2:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:58.913060 38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222 38617 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But on the master, kubectl get nodes shows:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 7m v1.11.1
devstats.cncf.io NotReady <none> 7m v1.11.1
devstats.team.io Ready master 21m v1.11.1
And then kubectl describe nodes (the master is devstats.team.io, the nodes are cncftest.io and devstats.cncf.io):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:26:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:27:00 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.2.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=147.75.97.234
kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:12:56 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:21:07 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system coredns-78fcdf6894-ls44z 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system coredns-78fcdf6894-njnnt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-flannel-ds-v4t8s 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%)
kube-system kube-proxy-5825g 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (0%) 100m (0%)
memory 190Mi (0%) 390Mi (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 23m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 23m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 23m (x5 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 21m kube-proxy, devstats.team.io Starting kube-proxy.
Normal NodeReady 13m kubelet, devstats.team.io Node devstats.team.io status is now: NodeReady
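As a side note, the failing Ready condition buried in the describe output above can also be pulled out directly; a sketch using kubectl's jsonpath support:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'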
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
uname -a:
Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
/etc/os-release:
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
uname -a:
Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
docker version:
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Apr 18 01:26:37 2018
OS/Arch: linux/arm64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Feb 28 17:46:05 2018
OS/Arch: linux/arm64
Experimental: false
The exact error seems to be:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
From this thread (no KUBELET_NETWORK_ARGS there).
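For completeness, the CNI-related kubelet flags come from the dynamically generated kubeadm-flags.env referenced in the drop-in above; checking it confirms the kubelet is started with the CNI network plugin (the contents shown below are what kubeadm v1.11 typically writes and may differ slightly on a given node):
cat /var/lib/kubelet/kubeadm-flags.env
# typically something like:
# KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni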
journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663 38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876 38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The directory /etc/cni/net.d exists, but it is empty.
Expected: all nodes in the Ready state.
How to reproduce: just follow the steps from the tutorial. Tried it 3 times and it happens every time.
The master is ARM64, the 2 nodes are AMD64.
The master and one node are located in Amsterdam; the second node is in the US.
I can run kubectl taint nodes --all node-role.kubernetes.io/master- to allow pods on the master, but that is not a solution. I want a real multi-node cluster to work with.
@lukaszgryglicki
It seems the nodes don't get flannel because they are on the amd64 architecture:
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
and
Name: devstats.team.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
I'm not a flannel expert, but I think you should check its documentation for how it works in a mixed-platform environment.
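For what it's worth, a common pattern for mixed-architecture clusters at the time was to run one flannel DaemonSet per architecture instead of editing a single manifest; a minimal sketch, assuming the coreos/flannel v0.10.0 manifest (the kube-flannel-ds name and the arch nodeSelector come from that manifest, the renamed arm64 copy is an assumption):
# keep the stock amd64 DaemonSet for the worker nodes
wget -O kube-flannel-amd64.yml https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
# derive an arm64 copy for the master, renaming the DaemonSet so both can coexist
sed -e 's/amd64/arm64/g' -e 's/name: kube-flannel-ds$/name: kube-flannel-ds-arm64/' kube-flannel-amd64.yml > kube-flannel-arm64.yml
kubectl apply -f kube-flannel-amd64.yml -f kube-flannel-arm64.yml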
That's a good point, but what about the error message? It really seems to have nothing to do with that:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It looks like some CNI configuration files are missing in /etc/cni/net.d, but why?
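For context: on a node where the flannel DaemonSet actually runs, its install-cni container writes a config file into /etc/cni/net.d, so an empty directory usually just means no CNI pod ever ran there. What such a file typically looks like (illustrative of the flannel v0.10.0 defaults, not captured from this cluster):
cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}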
I'm now trying a different Docker, 18.03ce, as suggested on the Slack channel (17.03 was actually suggested, but there is no 17.03 for Ubuntu 18.04).
The arch labels indeed don't match. But the other label, beta.kubernetes.io/os=linux, is the same on all 3 servers.
The same thing happens with Docker 18.03ce. I see no difference, so this doesn't look like a Docker problem. It looks like a CNI configuration problem.
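A quick check that points at the arch mismatch is to see where the flannel DaemonSet pods actually got scheduled; a sketch (the kube-flannel-ds name and the app=flannel label are the ones used by the coreos manifest, assumed here):
kubectl -n kube-system get daemonset kube-flannel-ds -o wide
kubectl -n kube-system get pods -o wide -l app=flannel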
@lukaszgryglicki
Hi,
Master: bare metal server, 96 cores, ARM64, 128G RAM, swap turned off.
Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off, x 2.
those are some _nice_ specs.
The way I test things is: if something doesn't work with Weave Net, I try flannel and vice versa.
So try Weave, and if your CNI setup works with it, then this is related to the CNI plugin.
While the kubeadm team supports plugins and add-ons, we usually delegate issues to their respective maintainers, since we don't have the bandwidth to handle everything.
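For reference, the Weave Net deployment command published in the Weave docs at the time was a one-liner of this form (not specific to this cluster):
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"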
Sure, I tried Weave a few iterations ago. It ended up in a container restart loop.
Will now try Docker 17.03 to rule out Docker issues (17.03 is supposed to be very well supported).
So this is not a Docker problem. With 17.03 it's the same:
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786 21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074 21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Now I will try Weave Net, as suggested in the issue, and post the results here.
So, I tried weave net and it does not work:
On the master, kubectl get nodes:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 5s v1.11.1
devstats.cncf.io NotReady <none> 12s v1.11.1
devstats.team.io NotReady master 7m v1.11.1
kubectl describe nodes (the same CNI-related error, but now also on the master node):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:56 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-wwjrr 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:49 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-2fsrf 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:32:14 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.9.0
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-69qnb 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-j9f5m 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 570m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 10m (x5 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 8m kube-proxy, devstats.team.io Starting kube-proxy.
journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
kubectl get po --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-g8wzs 0/1 Pending 0 12m
kube-system coredns-78fcdf6894-tzs8n 0/1 Pending 0 12m
kube-system etcd-devstats.team.io 1/1 Running 0 12m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 12m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 12m
kube-system kube-proxy-69qnb 1/1 Running 0 12m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 12m
kube-system weave-net-2fsrf 1/2 CrashLoopBackOff 5 5m
kube-system weave-net-j9f5m 1/2 CrashLoopBackOff 6 8m
kube-system weave-net-wwjrr 1/2 CrashLoopBackOff 5 4m
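The weave containers crash looping above would normally be diagnosed by pulling their logs, e.g. (pod name taken from the listing above):
kubectl -n kube-system logs weave-net-j9f5m -c weave --previous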
kubectl describe po --all-namespaces:
Name: coredns-78fcdf6894-g8wzs
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x48 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: coredns-78fcdf6894-tzs8n
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x47 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: etcd-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
etcd:
Container ID: docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
Image: k8s.gcr.io/etcd-arm64:3.2.18
Image ID: docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://127.0.0.1:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://127.0.0.1:2380
--initial-cluster=devstats.team.io=https://127.0.0.1:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379
--listen-peer-urls=https://127.0.0.1:2380
--name=devstats.team.io
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Liveness: exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-apiserver-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-apiserver:
Container ID: docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
Image: k8s.gcr.io/kube-apiserver-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--authorization-mode=Node,RBAC
--advertise-address=147.75.97.234
--allow-privileged=true
--client-ca-file=/etc/kubernetes/pki/ca.crt
--disable-admission-plugins=PersistentVolumeLabel
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 250m
Liveness: http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-controller-manager-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-controller-manager:
Container ID: docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
Image: k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
Port: <none>
Host Port: <none>
Command:
kube-controller-manager
--address=127.0.0.1
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--use-service-account-credentials=true
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
flexvolume-dir:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-proxy-69qnb
Namespace: kube-system
Priority: 2000001000
PriorityClassName: system-node-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:32:25 +0000
Labels: controller-revision-hash=2718475167
k8s-app=kube-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID: docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
Image: k8s.gcr.io/kube-proxy-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
State: Running
Started: Thu, 02 Aug 2018 14:32:26 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
kube-proxy-token-4q6rl:
Type: Secret (a volume populated by a Secret)
SecretName: kube-proxy-token-4q6rl
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/arch=arm64
Tolerations:
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 13m kubelet, devstats.team.io Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
Normal Created 13m kubelet, devstats.team.io Created container
Normal Started 13m kubelet, devstats.team.io Started container
Name: kube-scheduler-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-scheduler:
Container ID: docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
Image: k8s.gcr.io/kube-scheduler-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Running
Started: Thu, 02 Aug 2018 14:31:14 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: weave-net-2fsrf
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.cncf.io/147.75.78.47
Start Time: Thu, 02 Aug 2018 14:39:49 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.78.47
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:04 +0000
Finished: Thu, 02 Aug 2018 14:43:05 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:39:58 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, devstats.cncf.io Created container
Normal Started 5m kubelet, devstats.cncf.io Started container
Normal Created 5m (x4 over 5m) kubelet, devstats.cncf.io Created container
Normal Started 5m (x4 over 5m) kubelet, devstats.cncf.io Started container
Normal Pulled 5m (x3 over 5m) kubelet, devstats.cncf.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 56s (x27 over 5m) kubelet, devstats.cncf.io Back-off restarting failed container
Name: weave-net-j9f5m
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:36:11 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:42:18 +0000
Finished: Thu, 02 Aug 2018 14:42:18 +0000
Ready: False
Restart Count: 6
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:36:31 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 9m kubelet, devstats.team.io Created container
Normal Started 9m kubelet, devstats.team.io Started container
Normal Created 8m (x4 over 9m) kubelet, devstats.team.io Created container
Normal Started 8m (x4 over 9m) kubelet, devstats.team.io Started container
Normal Pulled 8m (x3 over 9m) kubelet, devstats.team.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 4m (x26 over 9m) kubelet, devstats.team.io Back-off restarting failed container
Name: weave-net-wwjrr
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: cncftest.io/147.75.205.79
Start Time: Thu, 02 Aug 2018 14:39:57 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.205.79
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:16 +0000
Finished: Thu, 02 Aug 2018 14:43:16 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:40:09 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, cncftest.io Created container
Normal Started 5m kubelet, cncftest.io Started container
Normal Created 4m (x4 over 5m) kubelet, cncftest.io Created container
Normal Pulled 4m (x3 over 5m) kubelet, cncftest.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Normal Started 4m (x4 over 5m) kubelet, cncftest.io Started container
Warning BackOff 44s (x27 over 5m) kubelet, cncftest.io Back-off restarting failed container
kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true
I0802 14:49:02.034473 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.063828 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764 64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907 64396 round_trippers.go:414] Content-Type: application/json
I0802 14:49:02.236944 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363 64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.262919 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703 64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.275815 64396 round_trippers.go:414] Content-Length: 69
I0802 14:49:02.275850 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.279765 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271 64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.293394 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug 2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
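Side note on that weave log: the telling line is "Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host" - weave's default pod allocation range collides with an existing host route, so the weave container exits right after starting. A rough sketch of a workaround (IPALLOC_RANGE is weave's documented setting for this; the concrete range below is only an example and has to be one that is actually free on these hosts):
# assumes the stock weave-net DaemonSet in kube-system, as shown above
kubectl -n kube-system set env daemonset/weave-net -c weave IPALLOC_RANGE=192.168.128.0/18
# recreate the weave pods so they pick up the new range
kubectl -n kube-system delete pod -l name=weave-net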
So, to summarize: it is impossible to install a Kubernetes cluster with just one master and a single worker node on Ubuntu 18.04.
I think there should be an installation guide describing step by step how to set up k8s with kubeadm on the latest LTS Ubuntu.
I think 18.04 broke things both with respect to the bundled Docker and to systemd-resolved.
So yes, it is really hard to write guides for every single distro variant, and we cannot really maintain those efficiently.
Even though kubeadm is the frontend here, the problem may well have nothing to do with kubeadm itself.
some questions:
- what is in /var/lib/kubelet/kubeadm-flags.env when you run kubeadm join/init on the 3 nodes?
- what does journalctl -xeu kubelet show? is that only on the master node - what about the others? you can also put these in a github gist or on http://pastebin.com for me to look at.
Master (devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
Node (cncftest.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
Node (devstats.cncf.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
So, I did the master kubeadm init on the amd64 host and tried weave net, and the result is exactly the same as when trying it on the arm64 host:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
There is some small progress.
I installed the master on AMD64, then also a node on AMD64. Everything worked fine.
Then I added the arm64 node and now I have:
master amd64: Ready
node amd64: Ready
node arm64: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So it looks like the flannel net plugin cannot communicate across different architectures, and arm64 cannot be used as the master at all:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Any suggestions on what I should do? Where should I report this? I already have a 2-node cluster (master and node, amd64), but I want to help solve this issue so that a master of any arch can be used with nodes of any arch, just OOTB.
@lukaszgryglicki the kube-flannel.yml manifest only provides the flannel container for a single architecture. That is why on nodes with a different architecture the cni plugin never starts and the node never becomes Ready.
I never tried it myself, but I guess you can deploy two hacked flannel manifests with different taints (and names, to avoid them clashing); see the sketch below.
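A rough sketch of what one of those per-arch copies could look like (illustrative only - the real manifest carries much more than this, e.g. the ServiceAccount, RBAC objects, ConfigMap and volume mounts; only the name, the nodeSelector and the image tag would differ between the two copies):
# illustrative per-arch flannel DaemonSet; a second copy named
# kube-flannel-ds-amd64 would use the amd64 label and image instead
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        args:
        - --ip-masq
        - --kube-subnet-mgr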
But I did tweak the manifest for arm64, as suggested in the tutorial: I replaced amd64 with arm64.
So maybe I'll create an issue for flannel and include a link to this thread.
And now, why does weave net fail on both arches with the same cni-related error? Maybe also create an issue for weave and link it to this thread as well?
@lukaszgryglicki
If you adapted kube-flannel.yml for arm, it no longer works on the AMD machines... That is why I suspect that deploying 2 carefully prepared manifests, one for arm and one for AMD, can solve your problem.
And now that I think of it, you might have to fix the same issue with the kube-proxy daemon set as well, but I can't test this now, sorry.
For the problem you are having with weave I don't have enough information. One issue could be that weave doesn't work with --pod-network-cidr=10.244.0.0/16, but back to the original problem, I don't know for sure whether weave works out of the box on mixed platforms or not.
So I should deploy two different flannel manifests on one master, right? No matter whether that master happens to be arm64 or amd64, right? Should the master handle spawning the correct-arch deployment on itself and on the nodes?
Not sure what you mean here:
And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry
I did not use --pod-network-cidr=10.244.0.0/16 for weave. I just ran kubeadm init.
I only used --pod-network-cidr=10.244.0.0/16 for the flannel attempts, exactly as the docs say.
cc @luxas - I have seen that you created some docs about multi-arch k8s deployments. Maybe you can give some feedback?
@lukasredynk
Yes, so this is an arch issue after all, thanks for confirming.
Let's focus on flannel here, since the weave problem seems tangential.
have a look at this from @luxas for context, in case you haven't already:
https://github.com/luxas/kubeadm-workshop
Should the master handle spawning the correct-arch deployment on itself and on the nodes?
it _should_, but the manifest you are downloading is not a "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
As far as I understand it, the arch taints are propagated and you would have to fix that with kubectl on each node (?).
there appears to be a "fat" manifest in master, which was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca
related issue/PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989
My assumption is that this is the latest state and that you need to use:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
So bring the cluster down, try that, and hopefully it works.
our CNI docs could use an improvement, but that has to happen once flannel-next is released.
OK, I will try it after the weekend and post my results here. Thanks.
@lukaszgryglicki Hi, did you manage to get it working with the new flannel manifest?
Not yet, I will try it today.
OK, it finally worked:
root@devstats:/root# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cncftest.io Ready <none> 39s v1.11.1
devstats.cncf.io Ready <none> 46s v1.11.1
devstats.team.io Ready master 12m v1.11.1
The fat manifest from the flannel master branch helped.
Thanks, this can be closed.
Hi guys, I'm in the same situation.
I have worker nodes in the Ready state, but flannel on arm64 keeps crashing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did it work for you?
any idea?
The error seems different, but did you use the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains manifests for multiple arches.
Yes, I did:
The problem now is the flannel container, which doesn't stay up on arm. :(
it works on amd64 and arm64 - works for me.
Unfortunately I cannot help with arm (32-bit), I don't have any arm machine available.
I'm on arm64, but thanks, I'll keep digging...
Ohh, sorry then, I thought you were on arm.
Anyway, I'm also quite new to this, so you'll have to wait for other people to help.
Please paste the output of kubectl describe pods --all-namespaces and possibly the output of the other commands I posted in this thread. That may help someone track down the actual problem.
Thanks @lukaszgryglicki,
this is the output of describe pods: https://pastebin.com/kBVPYsMd
@lukaszgryglicki
nice that it worked out in the end.
I will document the use of the fat manifest for flannel in the docs, since I have no idea when 0.11.0 will be released.
@Leen15
relevant bits from the failing pod:
Warning FailedCreatePodSandBox 3m (x5327 over 7h) kubelet, nanopi-neo-plus2 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory
did you add --pod-network-cidr=... which is needed for flannel?
also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923
@neolit123 yes, I found the issue: flannel did not create the virtual network interfaces (cni and flannel0).
I don't know the reason and couldn't fix it after several hours.
I gave up and switched to swarm.
OK, understood. in that case I am closing the issue.
Thanks.
I also ran into the same problem and found that the node could not pull the required images because of the GFW in China.
I ran this command and it solved my problem:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe Kubernetes needs a network which is set up by this package.
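A quick way to verify that on a node (just a sketch):
# the flannel DaemonSet writes its CNI config to this directory on every node it runs on
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-flannel.conflist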
My cluster has the following status:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready <none> 3h6m v1.14.1
node02 Ready <none> 167m v1.14.1
Hi all,
I have 1 master and 2 nodes. The 2nd node is NotReady.
root@kube1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
dockerlab1 Ready <none> 3h57m v1.14.3
kube1 Ready master 4h12m v1.14.3
labserver1 NotReady <none> 22m v1.14.3
root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-72llr 1/1 Running 0 4h13m
kube-system coredns-fb8b8dccf-n9v82 1/1 Running 0 4h13m
kube-system etcd-kube1 1/1 Running 0 4h12m
kube-system kube-apiserver-kube1 1/1 Running 0 4h12m
kube-system kube-controller-manager-kube1 1/1 Running 0 4h13m
kube-system kube-flannel-ds-amd64-6q6sz 0/1 Init:0/1 0 24m
kube-system kube-flannel-ds-amd64-rshnj 1/1 Running 0 3h59m
kube-system kube-flannel-ds-amd64-xsj72 1/1 Running 0 4h1m
kube-system kube-proxy-7m8jg 1/1 Running 0 3h59m
kube-system kube-proxy-m7gdc 0/1 ContainerCreating 0 24m
kube-system kube-proxy-xgq6p 1/1 Running 0 4h13m
kube-system kube-scheduler-kube1 1/1 Running 0 4h13m
root@kube1:~# kubectl describe node labserver1
Name: labserver1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=labserver1
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volume.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 09 Jun 2019 21:03:57 +0800
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 172.31.8.125
Hostname: labserver1
Capacity:
cpu: 1
ephemeral-storage: 18108284Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1122528Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 16688594507
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1020128Ki
pods: 110
System Info:
Machine ID: 292dc4560f9309ccdd72b6935c80e8ec
System UUID: DE4707DF-5516-784A-9B41-588FCDE49369
Boot ID: 828d124c-b687-43f6-bffa-6a3e1e6e17e6
Kernel Version: 4.4.0-142-generic
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.6
Kubelet Version: v1.14.3
Kube-Proxy Version: v1.14.3
PodCIDR: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-amd64-6q6sz 100m (10%) 100m (10%) 50Mi (5%) 50Mi (5%) 25m
kube-system kube-proxy-m7gdc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (10%) 100m (10%)
memory 50Mi (5%) 50Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 45m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 45m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal Starting 25m kubelet, labserver1 Starting kubelet.
Normal NodeAllocatableEnforced 25m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal Starting 13m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet, labserver1 Updated Node Allocatable limit across pods
root@kube1:~#
Please help.
Hi Athir,
Please check out logs in /var/logs/messages section of your Master node. You can find an actual error in those logs. But here are some general tips.
i. Always focus on your master node first.
ii. Install the Docker engine on it and get all of the images that Kubernetes uses. Once everything is up and running, join the nodes to the master; that will solve the whole problem. I have seen some articles on the internet that try to pull some images after the slave nodes have been attached, and that practice causes difficulties.
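For point ii., kubeadm itself can pre-pull the control-plane images; a minimal sketch, to be run on the master before joining any nodes:
# list the images the installed kubeadm version expects
kubeadm config images list
# pull them all up front
kubeadm config images pull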
Hi saddique164, thanks for your suggestions. Yes, as you said, I provisioned another new slave node yesterday and it was able to join the master without any problems.
Sorry, I can't help, I don't have any ARM64 nodes any more; I now have a 4-node AMD64 bare-metal cluster.
The file /etc/cni/net.d/10-flannel.conflist was missing the cniVersion key in its config.
Adding "cniVersion": "0.2.0" fixed the issue.
The file /etc/cni/net.d/10-flannel.conflist was missing the cniVersion key in its config.
Adding "cniVersion": "0.2.0" fixed the issue.
I hit this problem when I upgraded from 1.15 to v1.16.0.
Flannel is not maintained very actively. I recommend Calico or Weave Net.
the flannel repository needed a fix.
The kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files
Facing the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml.
Worked for me.
docker: network plugin is not ready: cni config uninitialized
Reinstall Docker on the NotReady node.
Worked for me.
I ran this command and it solved my problem:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe Kubernetes needs a network which is set up by this package.
My cluster has the following status:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready <none> 3h6m v1.14.1
node02 Ready <none> 167m v1.14.1
That just did it!
I had a similar case where I applied the network plugin before joining the workers, which left /etc/cni/net.d missing.
I re-ran the configuration after joining the worker nodes with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed the Ready state.
Hope this helps anyone with the same problem.
I ran this command and it solved my problem:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe Kubernetes needs a network which is set up by this package.
My cluster has the following status:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready <none> 3h6m v1.14.1
node02 Ready <none> 167m v1.14.1
Ran this command on the master machine and everything is now in the Ready state. Thanks @saddique164.
The quickest way is to add flannel to Kubernetes on the amd64 architecture:
$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml
$ kubectl apply -f kube-flannel.yaml
I am using the Kubernetes 1.18 release.
I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
No file was created under /etc/cni/net.d.
The master node is NotReady while the slaves are in the Ready state.
I am using the Kubernetes 1.18 release.
I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
No file was created under /etc/cni/net.d.
The master node is NotReady while the slaves are in the Ready state.
NOTE: this looks like a kubelet issue.
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864 17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.
try this on the master:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
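After changing that drop-in you also have to reload systemd and restart the kubelet, roughly:
systemctl daemon-reload
systemctl restart kubelet
# then check whether it stays up
systemctl status kubelet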
It starts and then fails again.
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341 15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358 15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420 15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427 15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464 15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480 15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313 15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
As you can see, no networks were found in /etc/cni/net.d. Run this command and share the result:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-75f8564758-92ws7 1/1 Running 0 25h
coredns-75f8564758-z9xn8 1/1 Running 0 25h
kube-flannel-ds-amd64-2j4mw 1/1 Running 0 25h
kube-flannel-ds-amd64-5tmhp 0/1 Pending 0 25h
kube-flannel-ds-amd64-rqwmz 1/1 Running 0 25h
kube-proxy-6v24w 1/1 Running 0 25h
kube-proxy-jgdw7 0/1 Pending 0 25h
kube-proxy-qppnk 1/1 Running 0 25h
run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system
If nothing comes up, then run this:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system
Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection failed
How many nodes are you running in the cluster? One node is causing this problem. These pods are a daemonset; they run on every node. Your control plane is not accepting the request from that node. So I recommend you follow the steps below.
This process will work.
kubectl get nodes:
NAME STATUS ROLES AGE VERSION
master NotReady master 26h v1.18.5
slave1 Ready <none> 26h v1.18.5
slave2 Ready <none> 26h v1.18.5
I tried the steps you mentioned.
This is what I get:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Drain all nodes except the master and focus on it. Once it is ready, add the others back.
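A sketch of that, using the node names from the kubectl get nodes output above:
# cordon and evict everything except DaemonSet pods from the two workers
kubectl drain slave1 --ignore-daemonsets --delete-local-data
kubectl drain slave2 --ignore-daemonsets --delete-local-data
# once the master is healthy again, let the workers back in
kubectl uncordon slave1
kubectl uncordon slave2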
Draining the nodes and then kubeadm reset and init does not help. The cluster does not initialize after that.
My problem was that I updated the hostname after the cluster was created. Because of that, it's as if the master doesn't know it is the master.
I still run:
sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)
but now I run it before cluster initialization.