BUG REPORT
apt-get update/upgrade
kubeadm init --pod-network-cidr=10.244.0.0/16
Then ran the suggested commands: sysctl net.bridge.bridge-nf-call-iptables=1
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
vim kube-flannel.yml, replaced amd64 with arm64 (a non-interactive form of this substitution is sketched after the pod listing below)
kubectl apply -f kube-flannel.yml
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-ls44z 1/1 Running 0 20m
kube-system coredns-78fcdf6894-njnnt 1/1 Running 0 20m
kube-system etcd-devstats.team.io 1/1 Running 0 20m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 20m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 20m
kube-system kube-flannel-ds-v4t8s 1/1 Running 0 13m
kube-system kube-proxy-5825g 1/1 Running 0 20m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 20m
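The amd64 -> arm64 substitution from the steps above was done by hand in vim; a minimal non-interactive sketch of the same edit, assuming GNU sed, would be:
# replace every occurrence of amd64 (image tags and nodeSelector) with arm64 in the flannel manifest
sed -i 's/amd64/arm64/g' kube-flannel.yml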
Then joined the two AMD64 nodes using the kubeadm init output:
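The join command from the kubeadm init output is not reproduced here; with kubeadm v1.11 it generally has this form (token and CA cert hash are placeholders, not the real values):
kubeadm join 147.75.97.234:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>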
1st node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:49.987467 16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709 16652 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
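The IPVS warning above is unrelated to the network problem, but if one wanted to silence it, the modules it lists could presumably be loaded like this (module names taken directly from the warning; just a sketch):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done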
2nd node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:58.913060 38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222 38617 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But on the master, kubectl get nodes:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 7m v1.11.1
devstats.cncf.io NotReady <none> 7m v1.11.1
devstats.team.io Ready master 21m v1.11.1
And then kubectl describe nodes (the master is devstats.team.io, the nodes are cncftest.io and devstats.cncf.io):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:26:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:27:00 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.2.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=147.75.97.234
kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:12:56 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:21:07 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system coredns-78fcdf6894-ls44z 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system coredns-78fcdf6894-njnnt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-flannel-ds-v4t8s 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%)
kube-system kube-proxy-5825g 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (0%) 100m (0%)
memory 190Mi (0%) 390Mi (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 23m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 23m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 23m (x5 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 21m kube-proxy, devstats.team.io Starting kube-proxy.
Normal NodeReady 13m kubelet, devstats.team.io Node devstats.team.io status is now: NodeReady
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
uname -a:
Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
/etc/os-release:
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
uname -a:
Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
docker version:
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Apr 18 01:26:37 2018
OS/Arch: linux/arm64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Feb 28 17:46:05 2018
OS/Arch: linux/arm64
Experimental: false
The exact error seems to be:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
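Two quick checks that are commonly suggested for this particular error (not part of the original report, listed only as context for the next steps):
ls -la /etc/cni/net.d/   # CNI network configs the kubelet looks for
ls /opt/cni/bin/         # CNI plugin binaries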
On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
From this thread (no KUBELET_NETWORK_ARGS there).
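For comparison, in pre-1.11 kubeadm drop-ins that variable typically looked roughly like this (shown only as a reference point; in v1.11 the CNI flags are expected to come from /var/lib/kubelet/kubeadm-flags.env instead):
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"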
journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663 38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876 38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The /etc/cni/net.d directory exists, but is empty.
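For reference, on a node where flannel is working, cat /etc/cni/net.d/10-flannel.conflist would typically show something roughly like the following (an illustration only; the exact file name and contents vary by flannel version):
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}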
What I expected: all nodes in the Ready state.
How to reproduce: just follow the steps from the tutorial. I have tried 3 times and it happens every time.
The master is ARM64, the 2 nodes are AMD64.
The master and one node are located in Amsterdam, the second node is in the US.
I can use kubectl taint nodes --all node-role.kubernetes.io/master-
to run pods on the master, but that is not a solution. I want a real multi-node cluster to work with.
@lukaszgryglicki
It looks like the nodes are not getting flannel because they are on the amd64 architecture:
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
and
Name: devstats.team.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
I'm not a flannel expert, but I think you should check the product documentation on how to make it work in a mixed-platform environment.
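One hypothetical way to handle the mixed-arch setup (not taken from this thread, just an illustration of the idea): keep the stock amd64 DaemonSet and add a renamed copy patched for arm64, so each architecture gets a matching flannel image via the beta.kubernetes.io/arch nodeSelector:
wget -O kube-flannel-amd64.yml https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -e 's/amd64/arm64/g' -e 's/kube-flannel-ds/kube-flannel-ds-arm64/g' kube-flannel-amd64.yml > kube-flannel-arm64.yml
kubectl apply -f kube-flannel-amd64.yml -f kube-flannel-arm64.yml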
That's a good point, but what about the error message - it seems really unrelated:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It looks like some CNI config files are missing in /etc/cni/net.d, but why?
I'm now trying a different docker, 18.03ce, as suggested on the slack channel (17.03 was actually suggested, but there is no 17.03 for Ubuntu 18.04).
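In case it helps reproduce this, the docker-ce builds available for a given Ubuntu release can be listed and pinned with the standard Docker packaging commands (the exact version string is whatever apt reports):
apt-cache madison docker-ce            # list available versions
apt-get install docker-ce=<version>    # install a specific one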
The labels with the arch name indeed don't match. But the next label, beta.kubernetes.io/os=linux, is the same on all 3 servers.
The same thing happens with Docker 18.03ce. I see no difference, it doesn't look like a docker problem. It looks like a CNI configuration problem.
@lukaszgryglicki
hi,
Master: bare metal server, 96 cores, ARM64, 128 GB RAM, swap disabled.
Nodes (2): bare metal server, 48 cores, AMD64, 256 GB RAM, swap disabled x 2.
those are some _nice_ specs.
the way I test things is as follows - if something doesn't work with weavenet, I try flannel, and the other way around.
so please try weave and if your CNI config works with it, then this is related to the CNI plugin.
while the kubeadm team supports the plugins and addons, we usually delegate issues to their respective maintainers because we don't have the bandwidth to handle everything.
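For reference, the Weave Net documentation at the time gave this one-liner for installing it (quoted only as the reference command, not asserting this exact invocation was used here):
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"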
Sure, I tried weave a few iterations ago. It ended up in a container restart loop.
I will now try docker 17.03 to rule out a docker issue (17.03 is supposed to be very well supported).
So it's not a docker issue. 17.03 gives the same:
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786 21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074 21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Now will try weave net as suggested on the issue, and will post the results here.
So I tried weave net and it doesn't work:
On the master: kubectl get nodes:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 5s v1.11.1
devstats.cncf.io NotReady <none> 12s v1.11.1
devstats.team.io NotReady master 7m v1.11.1
kubectl describe nodes (the same cni-related error, but now also on the master node):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:56 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-wwjrr 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:49 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-2fsrf 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:32:14 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.9.0
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-69qnb 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-j9f5m 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 570m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 10m (x5 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 8m kube-proxy, devstats.team.io Starting kube-proxy.
journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
kubectl get po --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-g8wzs 0/1 Pending 0 12m
kube-system coredns-78fcdf6894-tzs8n 0/1 Pending 0 12m
kube-system etcd-devstats.team.io 1/1 Running 0 12m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 12m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 12m
kube-system kube-proxy-69qnb 1/1 Running 0 12m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 12m
kube-system weave-net-2fsrf 1/2 CrashLoopBackOff 5 5m
kube-system weave-net-j9f5m 1/2 CrashLoopBackOff 6 8m
kube-system weave-net-wwjrr 1/2 CrashLoopBackOff 5 4m
kubectl describe po --all-namespaces:
Name: coredns-78fcdf6894-g8wzs
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x48 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: coredns-78fcdf6894-tzs8n
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x47 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: etcd-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
etcd:
Container ID: docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
Image: k8s.gcr.io/etcd-arm64:3.2.18
Image ID: docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://127.0.0.1:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://127.0.0.1:2380
--initial-cluster=devstats.team.io=https://127.0.0.1:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379
--listen-peer-urls=https://127.0.0.1:2380
--name=devstats.team.io
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Liveness: exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-apiserver-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-apiserver:
Container ID: docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
Image: k8s.gcr.io/kube-apiserver-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--authorization-mode=Node,RBAC
--advertise-address=147.75.97.234
--allow-privileged=true
--client-ca-file=/etc/kubernetes/pki/ca.crt
--disable-admission-plugins=PersistentVolumeLabel
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 250m
Liveness: http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-controller-manager-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-controller-manager:
Container ID: docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
Image: k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
Port: <none>
Host Port: <none>
Command:
kube-controller-manager
--address=127.0.0.1
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--use-service-account-credentials=true
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
flexvolume-dir:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-proxy-69qnb
Namespace: kube-system
Priority: 2000001000
PriorityClassName: system-node-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:32:25 +0000
Labels: controller-revision-hash=2718475167
k8s-app=kube-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID: docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
Image: k8s.gcr.io/kube-proxy-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
State: Running
Started: Thu, 02 Aug 2018 14:32:26 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
kube-proxy-token-4q6rl:
Type: Secret (a volume populated by a Secret)
SecretName: kube-proxy-token-4q6rl
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/arch=arm64
Tolerations:
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 13m kubelet, devstats.team.io Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
Normal Created 13m kubelet, devstats.team.io Created container
Normal Started 13m kubelet, devstats.team.io Started container
Name: kube-scheduler-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-scheduler:
Container ID: docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
Image: k8s.gcr.io/kube-scheduler-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Running
Started: Thu, 02 Aug 2018 14:31:14 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: weave-net-2fsrf
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.cncf.io/147.75.78.47
Start Time: Thu, 02 Aug 2018 14:39:49 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.78.47
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:04 +0000
Finished: Thu, 02 Aug 2018 14:43:05 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:39:58 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, devstats.cncf.io Created container
Normal Started 5m kubelet, devstats.cncf.io Started container
Normal Created 5m (x4 over 5m) kubelet, devstats.cncf.io Created container
Normal Started 5m (x4 over 5m) kubelet, devstats.cncf.io Started container
Normal Pulled 5m (x3 over 5m) kubelet, devstats.cncf.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 56s (x27 over 5m) kubelet, devstats.cncf.io Back-off restarting failed container
Name: weave-net-j9f5m
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:36:11 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:42:18 +0000
Finished: Thu, 02 Aug 2018 14:42:18 +0000
Ready: False
Restart Count: 6
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:36:31 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 9m kubelet, devstats.team.io Created container
Normal Started 9m kubelet, devstats.team.io Started container
Normal Created 8m (x4 over 9m) kubelet, devstats.team.io Created container
Normal Started 8m (x4 over 9m) kubelet, devstats.team.io Started container
Normal Pulled 8m (x3 over 9m) kubelet, devstats.team.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 4m (x26 over 9m) kubelet, devstats.team.io Back-off restarting failed container
Name: weave-net-wwjrr
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: cncftest.io/147.75.205.79
Start Time: Thu, 02 Aug 2018 14:39:57 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.205.79
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:16 +0000
Finished: Thu, 02 Aug 2018 14:43:16 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:40:09 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, cncftest.io Created container
Normal Started 5m kubelet, cncftest.io Started container
Normal Created 4m (x4 over 5m) kubelet, cncftest.io Created container
Normal Pulled 4m (x3 over 5m) kubelet, cncftest.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Normal Started 4m (x4 over 5m) kubelet, cncftest.io Started container
Warning BackOff 44s (x27 over 5m) kubelet, cncftest.io Back-off restarting failed container
kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true
I0802 14:49:02.034473 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.063828 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764 64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907 64396 round_trippers.go:414] Content-Type: application/json
I0802 14:49:02.236944 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363 64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.262919 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703 64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.275815 64396 round_trippers.go:414] Content-Length: 69
I0802 14:49:02.275850 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.279765 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271 64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.293394 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug 2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
So, to sum up: it is impossible to install a Kubernetes cluster with a single master and a single worker node on Ubuntu 18.04.
I think there should be an installation instruction on how to set up k8s step by step using kubeadm on the latest Ubuntu LTS.
I think 18.04 broke things both in terms of Docker and of systemd-resolved
.
so yes, it is really hard to write guides for every distro release and we can't really maintain them effectively.
Also, even though kubeadm is the interface here, the problem might really not be related to kubeadm itself.
a few questions:
what is the content of /var/lib/kubelet/kubeadm-flags.env
when you run kubeadm join/init
on the 3 nodes? what does journalctl -xeu kubelet
show? is that only on the master node - what about the others? you can throw them in a github gist or on http://pastebin.com for me too.
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
node (cncftest.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
node (devstats.cncf.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
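For reference, a minimal way to collect the same information on each node (not part of the original exchange; assumes the standard kubeadm/systemd paths):
cat /var/lib/kubelet/kubeadm-flags.env
sudo journalctl -xeu kubelet --no-pager | tail -n 200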
So I installed the master with kubeadm init
on the amd64 host and tried weave net
, and the result is exactly the same as when I tried this on the arm64 host:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
There is some progress.
I installed the master on amd64 and then a node on amd64 too. Everything worked fine.
I added the arm64 node and now I have:
amd64 master: Ready
amd64 node: Ready
arm64 node: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So it seems that flannel
cannot talk across different architectures and that arm64 cannot be used as a master at all: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Any suggestions on what I should do? Where should I report this? I already have a 2-node cluster (amd64 master and node), but I want to help fix this issue so that any master arch can be used with any node arch OOTB.
@lukaszgryglicki
kube-flannel.yml
deploys the flannel container for a single architecture only. That is why the cni plugin does not start on nodes with a different architecture and the node never becomes ready.
I have never tried it myself, but I suppose you could deploy two hacked flannel manifests with different taints (and names) to avoid mixing things up; then again, my suggestion is to ask the flannel folks whether there are already instructions on how to do this.
But I did modify the manifest on arm64 as suggested in the tutorial, replacing amd64
with arm64
.
So maybe I will open an issue for flannel
and paste a link to this thread.
And now, why does weave net
fail on both arches with the same cni-related bug? Maybe open an issue for weave
and also link to this thread?
@lukaszgryglicki
When you modified kube-flannel.yml
for arm, it stopped working on the amd machines... That is why I suppose deploying 2 properly modified manifests, one for arm and one for amd, may solve your problem.
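A rough, untested sketch of that two-manifest idea, assuming the single-arch v0.10.0 manifest and that its DaemonSet is named kube-flannel-ds (both assumptions, not confirmed in this thread):
sed -e 's/amd64/arm64/g' -e 's/name: kube-flannel-ds/name: kube-flannel-ds-arm64/' kube-flannel.yml > kube-flannel-arm64.yml
kubectl apply -f kube-flannel.yml          # amd64 DaemonSet
kubectl apply -f kube-flannel-arm64.yml    # arm64 DaemonSet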
And now that I think of it, you might need to fix the same issue with the kube-proxy daemon set as well, but I can't test this right now, sorry.
For the problem you have with weave, I don't have enough information. One issue could be that weave does not work with --pod-network-cidr=10.244.0.0/16
, but coming back to the original problem, I don't know off the top of my head whether weave works on mixed platforms out of the box or not.
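On the weave side, the "Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host" line in the logs above suggests weave's default allocation range collides with an existing host route; the Weave Net docs at the time exposed an IPALLOC_RANGE option through the install URL. A hedged, untested sketch (the range below is only an example; pick one that does not overlap your host routes):
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.30.0.0/16"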
So I should deploy two different flannel manifests on one master, right? No matter whether the master is arm64 or amd64, right? Should the master handle deploying the correct arch onto itself and onto the nodes?
I'm not sure what you mean here:
And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry
I did not use --pod-network-cidr=10.244.0.0/16
for weave
. I only used kubeadm init
.
I used --pod-network-cidr=10.244.0.0/16
only for the flannel attempts, as the docs say.
cc @luxas - I saw that you had created docs about multi-arch k8s deployments, maybe you have some comments?
@lukasredynk
yes, so it's an arch issue after all, thanks for the confirmation.
let's focus on flannel here, as the weave problem seems to be a tangential one.
have a look at this from @luxas for context, if you haven't already:
https://github.com/luxas/kubeadm-workshop
Should the master handle deploying the correct arch onto itself and onto the nodes?
_it should_, but the manifest you are downloading is not a "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
As far as I know, the arch taints propagate and you have to fix them with kubectl
on each node (?).
looks like a "fat" manifest is in master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca
related issue/pr:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989
my assumption is that this is bleeding edge and you need to use:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
then scale down the cluster and give it a try, hoping it works.
our CNI docs need an improvement, but that has to happen when flannel-next
is released.
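A rough sketch of that "scale down and retry" step, assuming a throwaway cluster (the join command comes from your own kubeadm init output; this is not from the original thread):
sudo kubeadm reset        # on each worker, then on the master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # on the master
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# then re-join the workers with the 'kubeadm join ...' command printed by init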
OK, I'll try it after the weekend and post my results here. Thanks.
@lukaszgryglicki hi, did you manage to get this working using the new flannel manifest?
Not yet, I'll try today.
OK, it finally worked:
root@devstats:/root# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cncftest.io Ready <none> 39s v1.11.1
devstats.cncf.io Ready <none> 46s v1.11.1
devstats.team.io Ready master 12m v1.11.1
The fat manifest from master
helped.
Thanks, this can be closed.
Hi guys, I'm in the same situation.
I have worker nodes in the Ready state, but flannel on arm64 keeps crashing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did this work for you?
any idea?
The error looks different, but did you use the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains manifests for multiple arches.
Yes:
The problem now is that the flannel container won't stay up on arm. :(
it does on amd64
and arm64
- works for me.
Unfortunately I can't help with arm
(32-bit), I don't have an arm
machine.
I'm on arm64, but thanks, I'll keep investigating...
Ohh, then sorry, I thought you were on arm.
Anyway, I'm also fairly new to this, so you'll have to wait for other guys to help you.
Please paste the output of kubectl describe pods --all-namespaces
and possibly the output of the other commands I posted in this thread. It may help someone spot the real problem.
Thanks @lukaszgryglicki,
this is the describe pods output: https://pastebin.com/kBVPYsMd
@lukaszgryglicki
glad it worked in the end.
I'm going to document the use of the fat manifest for flannel in the docs, as I have no idea when 0.11.0 will be released.
@Leen15
relevant bits from the failing pod:
Warning FailedCreatePodSandBox 3m (x5327 over 7h) kubelet, nanopi-neo-plus2 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory
are you adding --pod-network-cidr=...
which is needed for flannel?
also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923
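A quick check (not from this thread) on the affected node: flannel writes /run/flannel/subnet.env once it starts cleanly, so its absence usually means the flannel pod itself never came up:
cat /run/flannel/subnet.env   # expected to contain FLANNEL_NETWORK=..., FLANNEL_SUBNET=..., FLANNEL_MTU=..., FLANNEL_IPMASQ=...
ls -l /etc/cni/net.d/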
@neolit123 yes, I found the problem: flannel did not create the virtual network interfaces (cni and flannel0).
I don't know the reason and I didn't manage to solve it after several hours.
I gave up and switched to swarm.
OK, understood. In that case I'm closing the issue.
Thanks.
I also ran into the same problem and found that the node could not pull the required images because of the GFW in China.
I ran this command and it solved my problem:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I think kubernetes requires a network, which is defined by this package.
My cluster is in the following state:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready 3h6m v1.14.1
node02 Ready 167m v1.14.1
Hi everyone,
I have 1 master and 2 nodes. The 2nd node is not ready.
root@kube1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
dockerlab1 Ready 3h57m v1.14.3
kube1 Ready master 4h12m v1.14.3
labserver1 NotReady 22m v1.14.3
root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-72llr 1/1 Running 0 4h13m
kube-system coredns-fb8b8dccf-n9v82 1/1 Running 0 4h13m
kube-system etcd-kube1 1/1 Running 0 4h12m
kube-system kube-apiserver-kube1 1/1 Running 0 4h12m
kube-system kube-controller-manager-kube1 1/1 Running 0 4h13m
kube-system kube-flannel-ds-amd64-6q6sz 0/1 Init:0/1 0 24m
kube-system kube-flannel-ds-amd64-rshnj 1/1 Running 0 3h59m
kube-system kube-flannel-ds-amd64-xsj72 1/1 Running 0 4h1m
kube-system kube-proxy-7m8jg 1/1 Running 0 3h59m
kube-system kube-proxy-m7gdc 0/1 ContainerCreating 0 24m
kube-system kube-proxy-xgq6p 1/1 Running 0 4h13m
kube-system kube-scheduler-kube1 1/1 Running 0 4h13m
root@kube1:~# kubectl describe node labserver1
Name: labserver1
Roles:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=labserver1
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 09 Jun 2019 21:03:57 +0800
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 172.31.8.125
Hostname: labserver1
Capacity:
cpu: 1
ephemeral-storage: 18108284Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1122528Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 16688594507
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1020128Ki
pods: 110
System Info:
Machine ID: 292dc4560f9309ccdd72b6935c80e8ec
System UUID: DE4707DF-5516-784A-9B41-588FCDE49369
Boot ID: 828d124c-b687-43f6-bffa-6a3e1e6e17e6
Kernel Version: 4.4.0-142-generic
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.6
Kubelet Version: v1.14.3
Kube-Proxy Version: v1.14.3
PodCIDR: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-amd64-6q6sz 100m (10%) 100m (10%) 50Mi (5%) 50Mi (5%) 25m
kube-system kube-proxy-m7gdc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (10%) 100m (10%)
memory 50Mi (5%) 50Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 45m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 45m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal Starting 25m kubelet, labserver1 Starting kubelet.
Normal NodeAllocatableEnforced 25m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal Starting 13m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet, labserver1 Updated Node Allocatable limit across pods
root@kube1:~#
Please help
Hi Athir,
Please check the logs under /var/log/messages on your master node. You can find the actual error in those logs. But here are some general tips.
i. Always focus on your master node first.
ii. Install the Docker engine on it and pull all the images used for kubernetes. When everything is running, add nodes to the master. That will solve the whole problem. I have seen articles on the Internet that try to pull images after attaching the slave nodes. That practice causes problems.
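As a side note (not part of the original reply): on kubeadm v1.11 and later, the control-plane images can be pre-pulled on the master before joining any workers with:
kubeadm config images pull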
Hi saddique164, thanks for your suggestions. Yes, as you said, I deployed another new slave node yesterday and was able to join the master without any problem.
Sorry, I can't help; I no longer have any ARM64 nodes, I now have a 4-node bare-metal AMD64 cluster.
The /etc/cni/net.d/10-flannel.conflist file was missing the cniVersion key in its configuration.
Adding "cniVersion": "0.2.0" solved the problem.
The /etc/cni/net.d/10-flannel.conflist file was missing the cniVersion key in its configuration.
Adding "cniVersion": "0.2.0" solved the problem.
I ran into the issue while upgrading to v1.16.0 from 1.15.
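For reference, a sketch of what such a file typically looks like after the fix (the values below are illustrative and may differ per cluster):
cat /etc/cni/net.d/10-flannel.conflist
# {
#   "name": "cbr0",
#   "cniVersion": "0.2.0",
#   "plugins": [
#     { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
#     { "type": "portmap", "capabilities": { "portMappings": true } }
#   ]
# }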
flannel is not very actively maintained. I recommend calico or weave.
the flannel repository needed a fix.
the kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files
Facing the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Worked for me.
docker: network plugin is not ready: cni config uninitialized
Reinstall docker on the NotReady node.
Worked for me.
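A minimal sketch of that reinstall on Ubuntu, assuming Docker came from the distro's docker.io package (adjust if docker-ce from Docker's own repo is in use; not from the original comment):
sudo apt-get install --reinstall docker.io
sudo systemctl restart docker kubelet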
I ran this command and it solved my problem:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I think kubernetes requires a network, which is defined by this package.
My cluster is in the following state:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready 3h6m v1.14.1
node02 Ready 167m v1.14.1
That just did it!
I had a similar case where I was creating the network plugin before joining the workers, which left /etc/cni/net.d missing.
I re-ran the setup after joining the worker nodes using:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed up in a Ready state.
I hope this helps someone with the same problem.
I ran this command and it solved my problem:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I think kubernetes requires a network, which is defined by this package.
My cluster is in the following state:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready 3h6m v1.14.1
node02 Ready 167m v1.14.1
Ran this command on the master machine and everything is now in the Ready state. Thanks @saddique164.
The quickest way is to add Flannel in Kubernetes on one of the AMD64 architectures.
$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml
$ kubectl apply -f kube-flannel.yaml
I'm using kubernetes version 1.18.
I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
No file was created under /etc/cni/net.d
The master node is NotReady while the slaves are in the Ready state
I'm using kubernetes version 1.18.
I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
No file was created under /etc/cni/net.d
The master node is NotReady while the slaves are in the Ready state
NOTE: This looks like a kubelet problem.
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864 17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.
try this on the master:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
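A hedged follow-up check: the kubelet's --cgroup-driver has to match Docker's, and the unit file change above only takes effect after a daemon reload, e.g.:
docker info 2>/dev/null | grep -i 'cgroup driver'
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo journalctl -xeu kubelet --no-pager | tail -n 50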
It starts and then fails again.
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341 15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358 15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420 15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427 15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464 15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480 15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313 15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
If you look at it, it says no networks were found. Run this command and share the result.
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-75f8564758-92ws7 1/1 Running 0 25h
coredns-75f8564758-z9xn8 1/1 Running 0 25h
kube-flannel-ds-amd64-2j4mw 1/1 Running 0 25h
kube-flannel-ds-amd64-5tmhp 0/1 Pending 0 25h
kube-flannel-ds-amd64-rqwmz 1/1 Running 0 25h
kube-proxy-6v24w 1/1 Running 0 25h
kube-proxy-jgdw7 0/1 Pending 0 25h
kube-proxy-qppnk 1/1 Running 0 25h
run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system
if nothing comes out, run this one:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system
Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused
How many nodes are you running in the cluster? One node is causing this problem. This is what's called a daemon set; they run on every node. Your control plane is not accepting the request from it. So I will suggest you follow the next steps.
This process will work.
kubectl get nodes:
NAME STATUS ROLES AGE VERSION
master NotReady master 26h v1.18.5
slave1 Ready <none> 26h v1.18.5
slave2 Ready <none> 26h v1.18.5
I tried the steps you mentioned:
This is what I get:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
drain all the nodes except the master and focus on that. When it is ready, go add the others.
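A sketch of that drain/uncordon cycle (the node name is a placeholder; --delete-local-data was the flag name in v1.18):
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
# ... once the control plane is healthy again:
kubectl uncordon <node-name>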
Draining the nodes and then running kubeadm reset and init doesn't help. The cluster doesn't get initialized afterwards.
My problem was that I was updating the hostname after the cluster was created. By doing that, it's as if the master didn't know it was the master.
I still run:
sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)
but now I run it before the cluster is initialized.