BUG REPORT
apt-get update/upgrade
kubeadm init --pod-network-cidr=10.244.0.0/16
Then ran the suggested commands: sysctl net.bridge.bridge-nf-call-iptables=1
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
vim kube-flannel.yml, replace amd64 with arm64
kubectl apply -f kube-flannel.yml
kubectl get pods --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-ls44z 1/1 Running 0 20m
kube-system coredns-78fcdf6894-njnnt 1/1 Running 0 20m
kube-system etcd-devstats.team.io 1/1 Running 0 20m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 20m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 20m
kube-system kube-flannel-ds-v4t8s 1/1 Running 0 13m
kube-system kube-proxy-5825g 1/1 Running 0 20m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 20m
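For reference, the whole master bootstrap above collapses into one short shell sketch (same commands and flannel version as reported; the sed line is just the scripted form of the vim edit):

apt-get update && apt-get upgrade -y
kubeadm init --pod-network-cidr=10.244.0.0/16
sysctl net.bridge.bridge-nf-call-iptables=1
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml # the master is ARM64
kubectl apply -f kube-flannel.yml
kubectl get pods --all-namespaces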
Then joined the two AMD64 nodes using kubeadm join.
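The join command itself is not quoted in the report; with kubeadm 1.11 it takes this general form (token and hash are placeholders, not the real values):

kubeadm join 147.75.97.234:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>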
Output:
1st node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:49.987467 16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709 16652 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
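(Aside: the IPVS preflight warning above is only a warning - kube-proxy falls back to iptables mode - but it can be cleared by loading the modules it lists, e.g.:

modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4

This is the generic fix the message suggests, not a step taken in this report.)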
2nd node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:58.913060 38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222 38617 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But on the master, kubectl get nodes:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 7m v1.11.1
devstats.cncf.io NotReady <none> 7m v1.11.1
devstats.team.io Ready master 21m v1.11.1
And then kubectl describe nodes (the master is devstats.team.io, the nodes are cncftest.io and devstats.cncf.io):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:26:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:27:00 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.2.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=147.75.97.234
kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:12:56 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:21:07 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system coredns-78fcdf6894-ls44z 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system coredns-78fcdf6894-njnnt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-flannel-ds-v4t8s 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%)
kube-system kube-proxy-5825g 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (0%) 100m (0%)
memory 190Mi (0%) 390Mi (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 23m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 23m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 23m (x5 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 21m kube-proxy, devstats.team.io Starting kube-proxy.
Normal NodeReady 13m kubelet, devstats.team.io Node devstats.team.io status is now: NodeReady
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
Kernel (e.g. uname -a):
Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
docker version:
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Apr 18 01:26:37 2018
OS/Arch: linux/arm64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Feb 28 17:46:05 2018
OS/Arch: linux/arm64
Experimental: false
The exact error seems to be:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
On the node: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
From this thread (no KUBELET_NETWORK_ARGS there).
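For context: with kubeadm 1.11 the CNI flags are no longer in that dropin; they are injected via /var/lib/kubelet/kubeadm-flags.env. On a typical install that file looks roughly like this (an assumption for illustration, not captured from these machines):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni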
journalctl -xe on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663 38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876 38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The /etc/cni/net.d directory exists, but it is empty.
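The empty directory is the symptom rather than the cause: the flannel pod is what writes that config, so a quick check is whether the flannel DaemonSet ever scheduled pods on the workers, e.g.:

kubectl -n kube-system get daemonset kube-flannel-ds -o wide
kubectl -n kube-system get pods -o wide | grep flannel

(If the manifest was edited to arm64 only, these would be expected to show flannel running solely on the ARM64 master.)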
Expected: all nodes in the Ready state.
How to reproduce: just follow the steps from the tutorial. I tried 3 times and it happens every time.
The master is ARM64, the 2 nodes are AMD64.
The master and one node are in Amsterdam; the second node is in the US.
I can use kubectl taint nodes --all node-role.kubernetes.io/master- to run pods on the master, but that is not a solution. I want a real multi-node cluster to work with.
@lukaszgryglicki
It looks like the nodes are not getting flannel because they are on the amd64 architecture
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
and
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
I'm no flannel expert, but I think you should check the product documentation for how to make it work in a mixed-platform environment
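For example, one common approach with flannel v0.10.0 (a sketch based on my assumption, not an official recipe) is to keep the upstream amd64 DaemonSet and add a renamed arm64 copy, since the manifest carries a beta.kubernetes.io/arch nodeSelector:

wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml # amd64 variant (upstream default)
sed -e 's/amd64/arm64/g' -e 's/kube-flannel-ds/kube-flannel-ds-arm64/g' kube-flannel.yml > kube-flannel-arm64.yml
kubectl apply -f kube-flannel-arm64.yml # arm64 variant for the master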
That's a good point, but what about the error message - it seems really unrelated.
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It looks like some CNI config files are missing in /etc/cni/net.d, but why?
Now I'm trying a different docker, 18.03ce, as suggested on the slack channel (17.03 was suggested, but there is no 17.03 for Ubuntu 18.04).
The arch labels indeed don't match. But the next label, beta.kubernetes.io/os=linux, is the same on all 3 servers.
The same happens with Docker 18.03ce. I don't see any difference; this doesn't look like a docker problem. It looks like a CNI configuration problem.
@lukaszgryglicki
Hi,
Master: bare metal 96-core server, ARM64, 128G RAM, swap off.
Nodes (2): bare metal 48-core servers, AMD64, 256G RAM, swap off, x 2.
those are some _nice_ specs.
the way I test things is as follows - if something doesn't work with weave net, I try flannel and vice versa.
so please try weave, and if your CNI config works with it, then this is related to the CNI plugin.
while the kubeadm team supports plugins and add-ons, we usually delegate issues to their respective maintainers, because we don't have the bandwidth to handle everything.
Sure, I tried weave a few iterations ago. It ended in a container restart loop.
Now I'll try docker 17.03 to rule out a docker problem (17.03 should be very well supported).
So this is not a docker problem. On 17.03, the same:
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786 21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074 21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Now I will try weave net, as suggested in the issue, and will post the results here.
So I tried weave net and it is not working:
On the master, kubectl get nodes:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 5s v1.11.1
devstats.cncf.io NotReady <none> 12s v1.11.1
devstats.team.io NotReady master 7m v1.11.1
kubectl describe nodes (the same cni-related error, but now also on the master node):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:56 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:58 +0000 Thu, 02 Aug 2018 14:39:56 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-wwjrr 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m (x2 over 1m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:39:49 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:59 +0000 Thu, 02 Aug 2018 14:39:49 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system weave-net-2fsrf 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 20m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 1m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 1m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 14:32:14 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 14:40:56 +0000 Thu, 02 Aug 2018 14:32:07 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.9.0
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-69qnb 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-j9f5m 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 570m (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 10m (x5 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m (x6 over 10m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 8m kube-proxy, devstats.team.io Starting kube-proxy.
journalctl -xe on the master:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681 56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359 56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833 56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984 56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675 56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316 56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
kubectl get po --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-g8wzs 0/1 Pending 0 12m
kube-system coredns-78fcdf6894-tzs8n 0/1 Pending 0 12m
kube-system etcd-devstats.team.io 1/1 Running 0 12m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 12m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 12m
kube-system kube-proxy-69qnb 1/1 Running 0 12m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 12m
kube-system weave-net-2fsrf 1/2 CrashLoopBackOff 5 5m
kube-system weave-net-j9f5m 1/2 CrashLoopBackOff 6 8m
kube-system weave-net-wwjrr 1/2 CrashLoopBackOff 5 4m
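The natural next step for a CrashLoopBackOff is to pull the failing weave container's logs (standard kubectl; pod names from the list above):

kubectl -n kube-system logs weave-net-j9f5m -c weave --previous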
kubectl describe po --all-namespaces:
Name: coredns-78fcdf6894-g8wzs
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x48 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: coredns-78fcdf6894-tzs8n
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.1.3
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-jw4mv:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-jw4mv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x32 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Warning FailedScheduling 3m (x47 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) were not ready.
Name: etcd-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
etcd:
Container ID: docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
Image: k8s.gcr.io/etcd-arm64:3.2.18
Image ID: docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://127.0.0.1:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://127.0.0.1:2380
--initial-cluster=devstats.team.io=https://127.0.0.1:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379
--listen-peer-urls=https://127.0.0.1:2380
--name=devstats.team.io
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Liveness: exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-apiserver-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-apiserver:
Container ID: docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
Image: k8s.gcr.io/kube-apiserver-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--authorization-mode=Node,RBAC
--advertise-address=147.75.97.234
--allow-privileged=true
--client-ca-file=/etc/kubernetes/pki/ca.crt
--disable-admission-plugins=PersistentVolumeLabel
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 250m
Liveness: http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-controller-manager-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-controller-manager:
Container ID: docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
Image: k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
Port: <none>
Host Port: <none>
Command:
kube-controller-manager
--address=127.0.0.1
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--use-service-account-credentials=true
State: Running
Started: Thu, 02 Aug 2018 14:31:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
flexvolume-dir:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: kube-proxy-69qnb
Namespace: kube-system
Priority: 2000001000
PriorityClassName: system-node-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:32:25 +0000
Labels: controller-revision-hash=2718475167
k8s-app=kube-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID: docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
Image: k8s.gcr.io/kube-proxy-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
State: Running
Started: Thu, 02 Aug 2018 14:32:26 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
kube-proxy-token-4q6rl:
Type: Secret (a volume populated by a Secret)
SecretName: kube-proxy-token-4q6rl
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/arch=arm64
Tolerations:
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 13m kubelet, devstats.team.io Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
Normal Created 13m kubelet, devstats.team.io Created container
Normal Started 13m kubelet, devstats.team.io Started container
Name: kube-scheduler-devstats.team.io
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:31:13 +0000
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
kubernetes.io/config.source=file
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 147.75.97.234
Containers:
kube-scheduler:
Container ID: docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
Image: k8s.gcr.io/kube-scheduler-arm64:v1.11.1
Image ID: docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Running
Started: Thu, 02 Aug 2018 14:31:14 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Name: weave-net-2fsrf
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.cncf.io/147.75.78.47
Start Time: Thu, 02 Aug 2018 14:39:49 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.78.47
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:04 +0000
Finished: Thu, 02 Aug 2018 14:43:05 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:39:58 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, devstats.cncf.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, devstats.cncf.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, devstats.cncf.io Created container
Normal Started 5m kubelet, devstats.cncf.io Started container
Normal Created 5m (x4 over 5m) kubelet, devstats.cncf.io Created container
Normal Started 5m (x4 over 5m) kubelet, devstats.cncf.io Started container
Normal Pulled 5m (x3 over 5m) kubelet, devstats.cncf.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 56s (x27 over 5m) kubelet, devstats.cncf.io Back-off restarting failed container
Name: weave-net-j9f5m
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: devstats.team.io/147.75.97.234
Start Time: Thu, 02 Aug 2018 14:36:11 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.97.234
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:42:18 +0000
Finished: Thu, 02 Aug 2018 14:42:18 +0000
Ready: False
Restart Count: 6
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:36:31 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 9m kubelet, devstats.team.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 9m kubelet, devstats.team.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 9m kubelet, devstats.team.io Created container
Normal Started 9m kubelet, devstats.team.io Started container
Normal Created 8m (x4 over 9m) kubelet, devstats.team.io Created container
Normal Started 8m (x4 over 9m) kubelet, devstats.team.io Started container
Normal Pulled 8m (x3 over 9m) kubelet, devstats.team.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Warning BackOff 4m (x26 over 9m) kubelet, devstats.team.io Back-off restarting failed container
Name: weave-net-wwjrr
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: cncftest.io/147.75.205.79
Start Time: Thu, 02 Aug 2018 14:39:57 +0000
Labels: controller-revision-hash=332195524
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 147.75.205.79
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
Image: weaveworks/weave-kube:2.4.0
Image ID: docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 02 Aug 2018 14:43:16 +0000
Finished: Thu, 02 Aug 2018 14:43:16 +0000
Ready: False
Restart Count: 5
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
Image: weaveworks/weave-npc:2.4.0
Image ID: docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Aug 2018 14:40:09 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-blz79:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-blz79
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-kube:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-kube:2.4.0"
Normal Pulling 5m kubelet, cncftest.io pulling image "weaveworks/weave-npc:2.4.0"
Normal Pulled 5m kubelet, cncftest.io Successfully pulled image "weaveworks/weave-npc:2.4.0"
Normal Created 5m kubelet, cncftest.io Created container
Normal Started 5m kubelet, cncftest.io Started container
Normal Created 4m (x4 over 5m) kubelet, cncftest.io Created container
Normal Pulled 4m (x3 over 5m) kubelet, cncftest.io Container image "weaveworks/weave-kube:2.4.0" already present on machine
Normal Started 4m (x4 over 5m) kubelet, cncftest.io Started container
Warning BackOff 44s (x27 over 5m) kubelet, cncftest.io Back-off restarting failed container
kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true
I0802 14:49:02.034473 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.063828 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764 64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907 64396 round_trippers.go:414] Content-Type: application/json
I0802 14:49:02.236944 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363 64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.262919 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703 64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.275815 64396 round_trippers.go:414] Content-Length: 69
I0802 14:49:02.275850 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054 64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649 64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691 64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728 64396 round_trippers.go:393] Accept: application/json, */*
I0802 14:49:02.279765 64396 round_trippers.go:393] User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271 64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321 64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358 64396 round_trippers.go:414] Content-Type: text/plain
I0802 14:49:02.293394 64396 round_trippers.go:414] Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug 2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401 30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
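(Side note: the one hard error in the weave container log above is "Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host". A minimal sketch of a workaround, assuming the stock weave-kube deployment: Weave Net accepts an IPALLOC_RANGE setting, so installing it with a pod range outside 10.0.0.0/8 should avoid the collision. The 172.30.0.0/16 range below is just an example value.)
# reinstall weave with a pod allocation range that does not overlap host routes
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.30.0.0/16"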
So, to sum up: it is impossible to install a Kubernetes cluster with just a single master and worker node on Ubuntu 18.04.
I think there should be an installation guide for setting up k8s step by step with kubeadm on the newest Ubuntu LTS.
i think 18.04 broke things both in terms of the Docker it packages and because of systemd-resolved.
so yeah, it's very hard to write guides for every kind of distro out there and we can't really maintain them efficiently.
also, while kubeadm is the front-end here, the problem may actually not be related to kubeadm itself.
some questions:
what is in /var/lib/kubelet/kubeadm-flags.env when you run kubeadm join/init on the 3 nodes?
what does journalctl -xeu kubelet show? does this happen only on the master node - what about the others? you can dump them in a github gist or on http://pastebin.com so that I can have a look too.
master (devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
node (cncftest.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
node (devstats.cncf.io, amd64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
So I installed the master with kubeadm init on the amd64 host and tried weave net, and the result is exactly the same as when I tried on the arm64 host:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
There is some progress.
I installed the master on amd64 and a node on amd64 too. Everything worked fine.
Then I added the arm64 node and now I have:
amd64 master: Ready
amd64 node: Ready
arm64 node: NotReady: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So it looks like the flannel net plugin cannot communicate across different architectures, and arm64 cannot be used as a master: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Any suggestions on what I should do? Where should I report this? I already have a 2-node cluster (master plus an amd64 node), but I want to help solve this issue so that an any-arch master can be used with any-arch nodes out of the box.
@lukaszgryglicki
kube-flannel.yml deploys the flannel container for only one architecture. That's why on nodes with a different architecture the cni plugin doesn't start and the node never becomes Ready.
I've never tried it myself, but I think you can deploy two hacked flannel manifests with different taints (and names) to avoid messing things up, but again my suggestion is to ask the flannel people whether there are already instructions for doing this.
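(A rough sketch of that two-manifest idea, assuming the v0.10.0 manifest pins both the image tag and the nodeSelector to a single arch; the file name and DaemonSet rename below are made up for illustration:)
# derive an arm64 copy with a non-clashing DaemonSet name; the stock manifest
# already carries a nodeSelector on beta.kubernetes.io/arch, so each copy only
# schedules on matching nodes
sed -e 's/amd64/arm64/g' -e 's/name: kube-flannel-ds$/name: kube-flannel-ds-arm64/' \
    kube-flannel.yml > kube-flannel-arm64.yml
kubectl apply -f kube-flannel.yml        # amd64 nodes
kubectl apply -f kube-flannel-arm64.yml  # arm64 nodes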
But I did tweak the manifest on arm64 as suggested in the tutorial: I replaced amd64 with arm64.
So maybe I'll create an issue for flannel and paste a link to this thread.
And now, why does weave net fail on both archs with the same cni-related bug? Maybe create an issue for weave too and link this thread there as well?
@lukaszgryglicki
When you hack kube-flannel.yml for arm, it stops working on amd machines... That's why I'm guessing that deploying 2 tuned manifests, one for arm and one for amd, might solve your problem.
And now that I think of it, you might have to fix the same issue with the kube-proxy daemon set as well, but I can't test this right now, sorry.
As for the issue you have with weave, I don't have enough information. One problem could be that weave doesn't work with --pod-network-cidr=10.244.0.0/16, but going back to the initial problem, I don't actually know whether weave works out of the box on mixed platforms or not.
So I should deploy two different flannel manifests on one master, right? It doesn't matter whether the master is arm64 or amd64, right? The master should handle generating the correct-arch deployment on itself and on the nodes?
I'm not sure what you mean here:
And now that I think of, might be you should fix the same issue with kube-proxy daemon set as well, but I can't test this now, sorry
I didn't use --pod-network-cidr=10.244.0.0/16 for weave. I just used plain kubeadm init.
I used --pod-network-cidr=10.244.0.0/16 only for the flannel attempts, exactly as the docs say.
cc @luxas - I saw you created some docs about multi-arch k8s deployments, maybe you can give some feedback.
@lukasredynk
yes, so this is an arch issue after all, thanks for confirming.
let's focus on flannel here, since the weave issue seems tangential.
have a look at @luxas's workshop for context, if you haven't seen it already:
https://github.com/luxas/kubeadm-workshop

The master should handle generating the correct-arch deployment on itself and on the nodes?

it _should_, but the manifest you are downloading is not a "fat" one:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
as far as I understand, the arch types are propagated and you have to patch that with kubectl on each node (?).
looks like a "fat" manifest is in master and was added here:
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca
issue/PR:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989
my assumption is that this is the latest and greatest and what you should use:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
so tear down the cluster, give it a try, and hopefully it will work.
our CNI docs will need a bump, but that has to happen once flannel-next is released.
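(A quick way to sanity-check that the manifest from master really is the "fat" one, i.e. that it ships one DaemonSet per architecture:)
curl -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml \
  | grep 'name: kube-flannel-ds'
# expected at the time: kube-flannel-ds-amd64, kube-flannel-ds-arm64,
# kube-flannel-ds-arm, kube-flannel-ds-ppc64le, kube-flannel-ds-s390x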
OK, I'll try it after the weekend and post my results here. Thanks.
@lukaszgryglicki hi, did you manage to get this working using the new flannel manifest?
Not yet, I'll try it today.
OK, it finally worked:
root@devstats:/root# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cncftest.io Ready <none> 39s v1.11.1
devstats.cncf.io Ready <none> 46s v1.11.1
devstats.team.io Ready master 12m v1.11.1
The fat manifest from the flannel master branch helped.
Thanks, this can be closed.
Hi guys, I'm in the same situation.
I have worker nodes in the Ready state, but flannel on arm64 keeps crashing with this error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did it work for you?
any ideas?
The error looks different, but did you use the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains manifests for multiple arches.
Yes, I am:
The problem now is that the flannel container doesn't come up. :(
It works on amd64 and arm64, works for me.
Unfortunately I can't help with arm (32-bit), I don't have an arm machine available.
I'm on arm64, but thanks, I'll keep investigating...
Ohh, sorry then, I thought you were on arm.
Anyway, I'm also quite new to this, so you'll have to wait for help from the other guys.
Paste the output of kubectl describe pods --all-namespaces and possibly the output of the other commands I posted in this thread. That may help someone track down the actual issue.
Thanks @lukaszgryglicki,
this is the output of describe pods: https://pastebin.com/kBVPYsMd
@lukaszgryglicki
glad it worked in the end.
I'll document the use of the fat manifest for flannel in the docs, since I have no idea when 0.11.0 will be released.
@Leen15
the relevant part from the failing pod:
Warning FailedCreatePodSandBox 3m (x5327 over 7h) kubelet, nanopi-neo-plus2 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory
are you adding --pod-network-cidr=... which is required for flannel?
also try this guide:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923
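(For reference, the flannel-compatible init used throughout this thread; the CIDR has to match the Network value in flannel's net-conf.json:)
kubeadm init --pod-network-cidr=10.244.0.0/16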
@neolit123 yes, I found the issue: flannel didn't create the virtual network interfaces (cni and flannel0).
I don't know why, and I couldn't solve it after several hours.
I gave up and switched to swarm.
OK, understood. In that case I'm closing the issue.
thanks.
I also ran into the same problem, and found that the node couldn't pull the required images because of the GFW in China, so I pulled the images manually and it recovered fine.
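(A sketch of that manual pull using kubeadm's built-in helpers, available since v1.11; behind a blocked registry you would pull the listed images from a mirror and retag them instead:)
kubeadm config images list   # show the images this kubeadm version expects
kubeadm config images pull   # pre-pull them on the node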
I ran this command and it solved my problem:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This creates a file in the /etc/cni/net.d directory named 10-flannel.conflist. I believe kubernetes requires a network, which is what this manifest defines.
My cluster is in the following state:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h37m v1.14.1
node001 Ready <none> 3h6m v1.14.1
node02 Ready <none> 167m v1.14.1
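(To confirm on a node that the CNI config described above actually landed:)
ls /etc/cni/net.d
# 10-flannel.conflist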
Hi everyone,
I have 1 master and 2 nodes. The second node is not ready.
root@kube1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
dockerlab1 Ready <none> 3h57m v1.14.3
kube1 Ready master 4h12m v1.14.3
labserver1 NotReady <none> 22m v1.14.3
root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-72llr 1/1 Running 0 4h13m
kube-system coredns-fb8b8dccf-n9v82 1/1 Running 0 4h13m
kube-system etcd-kube1 1/1 Running 0 4h12m
kube-system kube-apiserver-kube1 1/1 Running 0 4h12m
kube-system kube-controller-manager-kube1 1/1 Running 0 4h13m
kube-system kube-flannel-ds-amd64-6q6sz 0/1 Init:0/1 0 24m
kube-system kube-flannel-ds-amd64-rshnj 1/1 Running 0 3h59m
kube-system kube-flannel-ds-amd64-xsj72 1/1 Running 0 4h1m
kube-system kube-proxy-7m8jg 1/1 Running 0 3h59m
kube-system kube-proxy-m7gdc 0/1 ContainerCreating 0 24m
kube-system kube-proxy-xgq6p 1/1 Running 0 4h13m
kube-system kube-scheduler-kube1 1/1 Running 0 4h13m
root@kube1:~# kubectl describe node labserver1
Name: labserver1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=labserver1
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 09 Jun 2019 21:03:57 +0800
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sun, 09 Jun 2019 21:28:31 +0800 Sun, 09 Jun 2019 21:03:57 +0800 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 172.31.8.125
Hostname: labserver1
Capacity:
cpu: 1
ephemeral-storage: 18108284Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1122528Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 16688594507
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1020128Ki
pods: 110
System Info:
Machine ID: 292dc4560f9309ccdd72b6935c80e8ec
System UUID: DE4707DF-5516-784A-9B41-588FCDE49369
Boot ID: 828d124c-b687-43f6-bffa-6a3e1e6e17e6
Kernel Version: 4.4.0-142-generic
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.6
Kubelet Version: v1.14.3
Kube-Proxy Version: v1.14.3
PodCIDR: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-amd64-6q6sz 100m (10%) 100m (10%) 50Mi (5%) 50Mi (5%) 25m
kube-system kube-proxy-m7gdc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (10%) 100m (10%)
memory 50Mi (5%) 50Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 45m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 45m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 45m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal Starting 25m kubelet, labserver1 Starting kubelet.
Normal NodeAllocatableEnforced 25m kubelet, labserver1 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 25m (x2 over 25m) kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal Starting 13m kubelet, labserver1 Starting kubelet.
Normal NodeHasSufficientMemory 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m kubelet, labserver1 Node labserver1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m kubelet, labserver1 Node labserver1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet, labserver1 Updated Node Allocatable limit across pods
root@kube1:~#
Please help
Hi Athir,
Please check the logs in /var/log/messages on your master node. You will find the actual error in those logs. But here are some general tips:
i. Always focus on your master node first.
ii. Install the docker engine on it and pull all the images kubernetes uses. Once everything is running, add nodes to the master. This will solve the whole problem. I've seen some articles on the internet that pull some of the images only after attaching the slave nodes; that practice causes problems.
Hi saddique164, thanks for your suggestions. Yes, as you said, I deployed another slave node yesterday and was able to join the master without any issues.
Sorry I can't help any further: I don't have ARM64 nodes anymore, I now have a 4-node bare-metal AMD64 cluster.
The /etc/cni/net.d/10-flannel.conflist file was missing the cniVersion key in its config.
Adding "cniVersion": "0.2.0" solved the problem.
I hit the same issue when I upgraded from 1.15 to v1.16.0.
flannel is not maintained very actively. I'd recommend calico or weavenet instead.
the flannel repo needed a fix.
the kubeadm guide for installing flannel was just updated, see:
https://github.com/kubernetes/website/pull/16575/files
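(If you switch to calico instead, the usual install at the time was a single manifest apply; check the current calico docs for the exact, version-specific URL:)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml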
Faced the same problem here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Worked for me.
docker: network plugin is not ready: cni config uninitialized
Reinstall docker on the NotReady node.
Worked for me.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This just did it!
I had a similar case where I was applying the network plugin before joining the workers, which left /etc/cni/net.d missing.
I re-ran the setup after joining the worker nodes using:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the config in /etc/cni/net.d was created successfully and the node showed a Ready state.
Hope it helps someone with the same problem.
Running that command on the master machine put everything in the Ready state. Thanks @saddique164.
The quickest way to add flannel to Kubernetes on the AMD64 architecture:
$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml
$ kubectl apply -f kube-flannel.yaml
I'm using kubernetes version 1.18.
I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
No file was created in /etc/cni/net.d.
The master node is NotReady, while the slaves are in the Ready state.
NOTE: This looks like a kubelet issue:
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864 17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.
try this on the master:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
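(Before flipping that flag, it may be worth confirming which driver docker itself reports, since the usual cause of this crash loop is a kubelet/docker cgroup-driver mismatch:)
docker info 2>/dev/null | grep -i 'cgroup driver'
# e.g.: Cgroup Driver: cgroupfs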
It starts up and then fails again.
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341 15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358 15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420 15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427 15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435 15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440 15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464 15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480 15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313 15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
If you look at that last line, it says no networks were found. Run this command and share the output:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-75f8564758-92ws7 1/1 Running 0 25h
coredns-75f8564758-z9xn8 1/1 Running 0 25h
kube-flannel-ds-amd64-2j4mw 1/1 Running 0 25h
kube-flannel-ds-amd64-5tmhp 0/1 Pending 0 25h
kube-flannel-ds-amd64-rqwmz 1/1 Running 0 25h
kube-proxy-6v24w 1/1 Running 0 25h
kube-proxy-jgdw7 0/1 Pending 0 25h
kube-proxy-qppnk 1/1 Running 0 25h
Run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system
if nothing comes up, run this:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system
Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused
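(Connection refused on port 10250 means the kubelet on that node is not serving at all; assuming you can SSH to 10.75.214.124, a quick check would be:)
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 50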
How many nodes are you running in the cluster? One node is causing this problem. That pod belongs to a daemonset, which runs on every node, and your control plane is not accepting requests from it. So I'd suggest you follow these steps:
This process will work.
kubectl get nodes:
NAME STATUS ROLES AGE VERSION
master NotReady master 26h v1.18.5
slave1 Ready <none> 26h v1.18.5
slave2 Ready <none> 26h v1.18.5
I tried the steps you mentioned:
This is what I get.
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
drain all the nodes except the master and focus on that. Once it's Ready, move on to adding the others.
Draining the nodes and then doing a kubeadm reset and init doesn't help. The cluster doesn't initialize afterwards.
My problem was that I was updating the hostname after the cluster was created. By doing that, it's as if the master didn't know it was the master.
I still run:
sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)
but now I run it before cluster init.