Kubeadm: kubeadm init hangs at "This might take a minute or longer if the control plane images have to be pulled"

Created on 31 Jan 2018  ·  67 comments  ·  Source: kubernetes/kubeadm

Versions

kubeadm version (use kubeadm version):

Environment:

  • Kubernetes version (use kubectl version): v1.9.2
  • Cloud provider or hardware configuration: VirtualBox
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.0 LTS (Xenial Xerus) amd64
  • Kernel (e.g. uname -a): linux 4.4.0-62-generic
  • Others: kubeadm version: v1.9.2 amd64, kubelet version: v1.9.2 amd64, kubernetes-cni version: 0.6.0-00 amd64, docker version: 17.03.2-ce

What happened?

When I try to run kubeadm init, it hangs with
xx@xx:~$ sudo kubeadm init --kubernetes-version=v1.9.2

[init] Using Kubernetes version: v1.9.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kickseed kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.41.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Then I check the kubelet log:
xx@xx:~$ sudo journalctl -xeu kubelet
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.280984   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281317   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281580   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281875   28516 pod_workers.go:186] Error syncing pod 69c12074e336b0dbbd0a1666ce05226a ("kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.380290   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.933783   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.934707   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.935921   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281024   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281352   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281634   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281938   28516 pod_workers.go:186] Error syncing pod 6546d6faf0b50c9fc6712ce25ee9b6cb ("kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.934694   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.935613   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.936669   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: W0131 14:45:05.073692   28516 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.074106   28516 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.935680   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937423   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937963   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: I0131 14:45:05.974034   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.802447   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.804242   28516 kubelet_node_status.go:82] Attempting to register node kickseed
Jan 31 14:45:06 kickseed kubelet[28516]: E0131 14:45:06.804778   28516 kubelet_node_status.go:106] Unable to register node "kickseed" with API server: Post https://172.17.41.15:6443/api/v1/nodes: dial tcp 172.17.41.15:6443: getsockopt: con

xx@xx:~$ sudo systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─11-kubeadm.conf, 10-kubeadm1.conf, 90-local-extras.conf
   Active: active (running) since Wed 2018-01-31 13:53:46 CST; 49min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 28516 (kubelet)
    Tasks: 13
   Memory: 37.8M
      CPU: 22.767s
   CGroup: /system.slice/kubelet.service
           └─28516 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cgroup-driver=cgroupfs --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --fail-swap-on=false

Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.862590   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.863474   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.621818   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.862440   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.863379   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.864424   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.255460   28516 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "kickseed" not found
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.863266   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.864238   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.865262   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
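A note on reading the logs above: the repeated "connection refused" on 172.17.41.15:6443 is only a symptom (the apiserver container never comes up), while the "i/o timeout" against https://gcr.io/v1/_ping is the actual blocker. A quick, hedged way to check both endpoints from the node (hostnames/IPs taken from the logs above; adjust for your environment):

```shell
# check_endpoint URL - prints "reachable:" or "unreachable:" for URL.
# -k skips TLS verification, -m 5 bounds the wait like the kubelet's timeout.
check_endpoint() {
  if curl -k -m 5 -sS -o /dev/null "$1" 2>/dev/null; then
    echo "reachable: $1"
  else
    echo "unreachable: $1"
  fi
}

# The kubelet's image pull times out against this registry endpoint:
check_endpoint https://gcr.io/v1/_ping
# The local apiserver refusals on 6443 clear up once the images can be pulled:
check_endpoint https://172.17.41.15:6443/healthz
```

If the first check is unreachable but other sites work, the problem is registry access (firewall, GFW, or proxy), not kubeadm itself.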

Some of the Docker images are listed as follows:
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2
gcr.io/google_containers/etcd-amd64:3.2.14
gcr.io/google_containers/pause-amd64:3.1
gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.2
gcr.io/google_containers/kubedns-amd64:1.9
gcr.io/google_containers/kube-discovery-amd64:1.0
gcr.io/google_containers/exechealthz-amd64:v1.2.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
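When gcr.io is unreachable, every image kubeadm needs has to arrive under its gcr.io name some other way. A minimal sketch of pulling from a reachable mirror and retagging (registry.example.com is a hypothetical placeholder, and the script only echoes the docker commands until DRY_RUN is unset; note the kubelet logs above pull pause-amd64:3.0, not the 3.1 shown in this list):

```shell
# Pull control-plane images from a reachable mirror and retag them to the
# gcr.io/google_containers names that the kubeadm-generated manifests expect.
MIRROR=${MIRROR:-registry.example.com/google_containers}  # placeholder mirror
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually invoke docker

# run CMD... - echo the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

for img in \
    kube-apiserver-amd64:v1.9.2 \
    kube-controller-manager-amd64:v1.9.2 \
    kube-scheduler-amd64:v1.9.2 \
    kube-proxy-amd64:v1.9.2 \
    etcd-amd64:3.2.14 \
    pause-amd64:3.0; do
  run docker pull "$MIRROR/$img"
  run docker tag "$MIRROR/$img" "gcr.io/google_containers/$img"
done
```

With the images pre-tagged locally, the kubelet finds them by name and never needs to reach gcr.io.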

What did you expect to happen?

kubeadm init should complete

How to reproduce it (as minimally and precisely as possible)?

VirtualBox with Ubuntu 16.04 and kubeadm 1.9.2

Anything else we need to know?

area/UX lifecycle/active priority/important-soon

Most helpful comment

https://github.com/kubernetes/kubernetes/issues/59680#issuecomment-364646304
Disabling selinux helped me.
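For anyone following the linked comment: "disabling selinux" on CentOS/RHEL usually means the two steps below (an immediate toggle plus the reboot-persistent config change; run as root):

```shell
# Immediate, non-persistent: put SELinux into permissive mode right now.
# (Guarded so this is a no-op on systems without SELinux tooling.)
if command -v setenforce >/dev/null 2>&1; then
  setenforce 0
fi

# Persistent across reboots: set SELINUX=permissive in the config file.
conf=/etc/selinux/config
if [ -f "$conf" ]; then
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$conf"
fi
```

Permissive mode logs denials instead of enforcing them, which is usually enough to unblock kubelet while keeping an audit trail.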

All 67 comments

The docker images listed above were pulled from my private repository before running "kubeadm init --kubernetes-version v1.9.2"; I cannot access gcr.io/google-containers directly because of the GFW.

Same issue here!

I have the same issue on CentOS 7

+1

+1

+1

+1
servers on vultr, stuck here too.

+1

+1

+1

As a workaround:

1/ create a docker registry on your kubernetes master

2/ declare your kubernetes master as gcr.io in /etc/hosts

3/ On a machine with internet access, log in to google cloud and download the image,
example:
gcloud docker -- pull gcr.io/google_containers/pause-amd64:3.0
docker save -o /tmp/pause-amd64.tar gcr.io/google_containers/pause-amd64:3.0

4/ Upload the images to your docker registry repo
docker load -i /tmp/pause-amd64.tar
docker tag gcr.io/google_containers/pause-amd64:3.0 yourdockerregistry/pause-amd64:3.0
docker push yourdockerregistry/pause-amd64:3.0

5/ On your kubernetes master acting as the gcr.io docker registry

Get the images from your docker registry repo
docker pull yourdockerregistry/pause-amd64:3.0

Push them into your local gcr.io docker registry
docker tag yourdockerregistry/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker push gcr.io/google_containers/pause-amd64:3.0

Download all the images used by kubeadm init. See /etc/kubernetes/manifests/*.yaml
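To find out exactly which images the last step has to mirror, the static-pod manifests kubeadm wrote list them all. A small sketch (path as in the kubeadm output above; `list_images` is a hypothetical helper name):

```shell
# list_images DIR - print every image referenced by the static-pod
# manifests in DIR, deduplicated. kubeadm writes them to
# /etc/kubernetes/manifests, one YAML file per control-plane component.
list_images() {
  grep -h '[[:space:]]image:' "$1"/*.yaml 2>/dev/null \
    | awk '{print $2}' | sort -u
}

list_images /etc/kubernetes/manifests
```

Each name this prints (plus the pause image, which is configured on the kubelet, not in the manifests) is one the registry workaround above must serve.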

Is this fixed in 1.9.3?

+1

+1 - This only manifests the second time I run kubeadm init. The first time, it goes through just fine. I wonder if there's some state from the first run that isn't cleaned up properly by kubeadm reset.
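If a second `kubeadm init` hangs where the first succeeded, leftover state between runs is a plausible suspect. A hedged cleanup checklist for between attempts, shown with `echo` so it reads as a dry run (remove the leading `echo` to execute the real commands as root):

```shell
# Tear down state that can survive a failed or aborted init before retrying.
# Printed rather than executed here; drop the "echo" prefix to run for real.
echo kubeadm reset                      # removes /etc/kubernetes and the static pods
echo systemctl restart docker kubelet   # drop stale container and kubelet state
echo iptables -F                        # flush filter rules left behind
echo iptables -t nat -F                 # flush NAT rules left by kube-proxy
```

On 1.9, `kubeadm reset` does not flush iptables or remove CNI config on its own, which is why the extra steps are worth trying.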

+1

centos 7, and I set the proxy in /etc/env, and it still came back as 👎

+1

Same problem here. Centos7, latest kube install (1.9.3); tried Hightower's docs and all of the kubernetes docs. etcd and flannel are running, alive and up. I used the NO_PROXY environment variable to include my external IPs so it doesn't attempt a proxied connection to local endpoints, but it never gets past this point and I get the same errors as everyone above.

+1

I have the same issue, centos 7, kubelet v1.9.3;
but it looks like the images were downloaded successfully:
docker images
gcr.io/google_containers/kube-apiserver-amd64            v1.9.3   360d55f91cbf   4 weeks ago     210.5 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.9.3   83dbda6ee810   4 weeks ago     137.8 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.9.3   d3534b539b76   4 weeks ago     62.71 MB
gcr.io/google_containers/etcd-amd64                      3.1.11   59d36f27cceb   3 months ago    193.9 MB
gcr.io/google_containers/pause-amd64                     3.0      99e59f495ffa   22 months ago   746.9 kB

I have a CentOS 7 VM here and I had already configured it with our proxy server.
I got the same timeout message, but the docker images were pulled and are up and running.
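One proxy-related gotcha worth ruling out: environment variables in /etc/environment reach your shell but not dockerd, which takes its proxy from a systemd drop-in. A sketch of the usual file (the proxy address is a placeholder; the NO_PROXY list should include the apiserver IP so in-cluster traffic is not proxied):

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
# After editing: systemctl daemon-reload && systemctl restart docker
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,172.17.41.15"
```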

I'm running into the same issue too. See the outputs and logs below for more information.

```
[root@kube01 ~]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Hostname]: hostname "kube01" could not be reached
	[WARNING Hostname]: hostname "kube01" lookup kube01 on 10.10.0.81:53: server misbehaving
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.25.123.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
```

In the meantime, while watching `docker ps` this is what I see:
***Note:*** Don't mind the length of time that the containers have been up — this is my third attempt and it's always the same.

```
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED              STATUS              PORTS               NAMES
c422b3fd67f9        gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043            "kube-apiserver --req"   About a minute ago   Up About a minute                       k8s_kube-apiserver_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_1
4b30b98bcc24        gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e   "kube-controller-mana"   2 minutes ago        Up 2 minutes                            k8s_kube-controller-manager_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
71c6505ed125        gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675            "kube-scheduler --add"   2 minutes ago        Up 2 minutes                            k8s_kube-scheduler_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
9d01e2de4686        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
7fdaabc7e2a7        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_0
a5a2736e6cd0        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
ea82cd3a27da        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_etcd-kube01_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
```

LOG OUTPUT FOR gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043

I0309 19:59:29.570990       1 server.go:121] Version: v1.9.3
I0309 19:59:29.756611       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0309 19:59:29.756680       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0309 19:59:29.760396       1 master.go:225] Using reconciler: master-count
W0309 19:59:29.789648       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0309 19:59:29.796731       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.797445       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.804841       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
I0309 19:59:32.393800       1 serve.go:89] Serving securely on [::]:6443
I0309 19:59:32.393854       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0309 19:59:32.393866       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0309 19:59:32.393965       1 controller.go:84] Starting OpenAPI AggregationController
I0309 19:59:32.393998       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0309 19:59:32.394012       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0309 19:59:32.394034       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0309 19:59:32.394057       1 naming_controller.go:274] Starting NamingConditionController
I0309 19:59:32.393855       1 crd_finalizer.go:242] Starting CRDFinalizer
I0309 19:59:32.394786       1 available_controller.go:262] Starting AvailableConditionController
I0309 19:59:32.394815       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0309 20:00:06.434318       1 trace.go:76] Trace[12318713]: "Create /api/v1/nodes" (started: 2018-03-09 19:59:32.431463052 +0000 UTC m=+2.986431803) (total time: 34.002792758s):
Trace[12318713]: [4.00201898s] [4.001725343s] About to store object in database
Trace[12318713]: [34.002792758s] [30.000773778s] END
E0309 20:00:32.406206       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E0309 20:00:32.406339       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0309 20:00:32.406342       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E0309 20:00:32.408094       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
E0309 20:00:32.415692       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
E0309 20:00:32.415818       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0309 20:00:32.415862       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.415946       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E0309 20:00:32.416029       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E0309 20:00:32.416609       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E0309 20:00:32.416684       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.420305       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0309 20:00:32.440196       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E0309 20:00:32.440403       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0309 20:00:32.448018       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E0309 20:00:32.448376       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E0309 20:00:33.395988       1 storage_rbac.go:175] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I0309 20:00:43.455564       1 trace.go:76] Trace[375160879]: "Create /api/v1/nodes" (started: 2018-03-09 20:00:13.454506587 +0000 UTC m=+44.009475397) (total time: 30.001008377s):
Trace[375160879]: [30.001008377s] [30.000778516s] END
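The trace entry above shows a `Create /api/v1/nodes` call consuming the full 30 s request timeout, so the apiserver was up but answering far too slowly (on a VirtualBox guest this usually points at etcd or disk starvation rather than image pulls). As a minimal sketch, assuming the apiserver output has been saved to a file, the slow-request traces can be pulled out like this (`apiserver.log` and the inline one-line sample are illustrative, not taken from the cluster):

```shell
# Build a one-line sample so the pipeline is self-contained; with a real
# capture, skip this step and point grep at your own apiserver.log.
cat > apiserver.log <<'EOF'
I0309 20:00:43.455564       1 trace.go:76] Trace[375160879]: "Create /api/v1/nodes" (started: 2018-03-09 20:00:13.454506587 +0000 UTC m=+44.009475397) (total time: 30.001008377s):
EOF
# List every traced request together with its total latency.
grep 'trace.go' apiserver.log \
  | grep -o '"[^"]*" (started:.*total time: [0-9.]*s'
```

Any request showing up here near 30 s is hitting the default client timeout, which matches the "unable to return a response in the time allotted" errors above.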

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e

I0309 19:51:35.248083       1 controllermanager.go:108] Version: v1.9.3
I0309 19:51:35.257251       1 leaderelection.go:174] attempting to acquire leader lease...
E0309 19:51:38.310839       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated every 2-4 s through 19:53:03 ...]
E0309 19:53:07.518362       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:53:12.968927       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated through 19:54:23 ...]
E0309 19:54:25.903217       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
[... same "connection refused" error repeated through 19:54:43 ...]
E0309 19:54:48.190961       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated through 19:56:04 ...]
E0309 19:56:09.336478       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
[... same "connection refused" error repeated through 19:56:16 ...]
E0309 19:56:22.685928       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated through 19:57:33 ...]
E0309 19:57:37.302382       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
[... same "connection refused" error repeated through 19:57:53 ...]
E0309 19:57:57.648006       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:01.607961       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:05.717138       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:08.819600       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:12.262314       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:14.327626       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:18.359683       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:20.961212       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:24.503457       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:27.099581       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:29.518623       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:32.943210       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:36.900236       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:40.567479       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:42.642410       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:45.938839       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:50.282483       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:54.086558       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:58:56.794469       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:00.604370       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:02.968978       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:05.825551       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:09.824458       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:12.383249       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:15.891164       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:19.088375       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:21.305063       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:23.366258       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:26.308481       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:32.440045       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:36.673744       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:40.049109       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:43.463730       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:46.454431       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:49.782639       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:52.964468       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:57.265527       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:01.181219       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:03.441468       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:07.324053       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:10.269835       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:12.584906       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:15.042928       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:18.820764       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:22.392476       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:24.630702       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:27.881904       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:30.123513       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:32.490088       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:34.675420       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:37.433904       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:39.819475       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 20:00:42.152164       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
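The "forbidden" errors above mean the controller manager's bootstrap RBAC objects were missing or not yet reconciled when leader election started. On a healthy cluster the API server creates this binding automatically; a sketch of the expected object is shown below for comparison only (the names are the standard bootstrap ones, not something to apply by hand unless you have confirmed it is absent):

```yaml
# Expected bootstrap RBAC binding, normally auto-created by the API server.
# The referenced ClusterRole grants, among other verbs, get/update on
# endpoints in kube-system, which leader election needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-controller-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
```

If this binding exists but the errors persist, the controller manager is likely authenticating as a different user than `system:kube-controller-manager` (e.g. stale certificates in `/etc/kubernetes/controller-manager.conf`).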

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675

W0309 19:51:34.800737       1 server.go:159] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0309 19:51:34.812848       1 server.go:551] Version: v1.9.3
I0309 19:51:34.817093       1 server.go:570] starting healthz server on 127.0.0.1:10251
E0309 19:51:34.818028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: Get https://10.25.123.11:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818279       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.25.123.11:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818346       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: Get https://10.25.123.11:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818408       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.25.123.11:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.25.123.11:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819386       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: Get https://10.25.123.11:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820217       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.25.123.11:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820659       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.25.123.11:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.821783       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.25.123.11:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:38.320455       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0309 19:51:38.329101       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0309 19:51:38.329733       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0309 19:51:38.332670       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0309 19:51:38.332707       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0309 19:51:38.332734       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0309 19:51:38.334248       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0309 19:51:38.334568       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0309 19:51:38.334594       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
[... the same nine "forbidden" list errors for user "system:kube-scheduler" (PodDisruptionBudget, Pod, ReplicationController, ReplicaSet, Node, PersistentVolumeClaim, PersistentVolume, StatefulSet, Service) repeat once per second from 19:51:39 through at least 19:51:45 ...]
E0309 19:51:45.343460       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0309 19:51:45.345969       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0309 19:51:45.347140       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0309 19:51:45.348176       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/pause-amd64:3.0

======================================================================

+1
Update:
After digging through everything I could (I'm fairly new to k8s), I finally found that kubectl describe pod -n kube-system kube-dns-<sha> showed that the virtual server I was installing on had only one CPU, and kube-dns would not start for lack of CPU. Oddly, kubectl logs pod -n kube-system kube-dns-<sha> did not show this information.

It worked after reinstalling the OS (since a reboot after installing kubeadm keeps the k8s master from starting correctly).
(sorry, I forgot to capture the output)

+1

I had the same problem; I aborted, ran reset, then the same init as before but with --apiserver-advertise-address=<my_host_public_ip_address>, and it worked.

https://github.com/kubernetes/kubernetes/issues/59680#issuecomment-364646304
Disabling SELinux helped me.

Downgrading to 1.8.10 fixed the problem for me.

+1

+1

Same problem here with v1.9.3 on Ubuntu 16.04 (no SELinux).

+1, same problem

Same problem with v1.10 on Ubuntu 16.04 on arm64.

Same problem with v1.10 on Ubuntu 16.04 on arm64 (no SELinux).

Check how many CPUs the hardware you are installing on has: 2 are required on the master for the installation, as I wrote above a little over 3 weeks ago.
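That CPU requirement is easy to verify from the shell before running kubeadm init. A minimal sketch (check_cpus is a hypothetical helper; the 2-CPU minimum is the one mentioned in this thread):

```shell
# check_cpus: fail when the machine has fewer CPUs than the 2 that
# kubeadm requires on the master. With no argument, uses nproc.
check_cpus() {
  cpus="${1:-$(nproc)}"
  if [ "$cpus" -lt 2 ]; then
    echo "insufficient CPUs: $cpus (kubeadm needs at least 2)"
    return 1
  fi
  echo "CPU count OK: $cpus"
}

check_cpus 4   # prints: CPU count OK: 4
```

On a 1-CPU VirtualBox or VMware guest this prints the warning, which matches the kube-dns failure described earlier in the thread.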

@bbruun The hardware used is https://www.pine64.org/?page_id=1491, so 4 cores, and they are detected correctly. The hardware should not be the problem, then, but thanks for the tip anyway. Maybe @qxing3 isn't using the same hardware, though...

@farfeduc that was the roadblock I ran into: several attempts in a row, reinstalling my VM each time to test the installation and get to know k8s. Getting usable logs out of the system is a hassle, and I dug everywhere I could until I got a message that there weren't enough CPUs available. I've since bought 3 Udoo x86 Ultras to run a small cluster to play with at home; at work we use slightly larger instances :-)

@bbruun I had configured 2 CPUs for my VM, but thanks for the tip anyway.

/assign @liztio

+1

+1 v1.10.0

+1 v1.10.0 and v1.10.1

+1

Interestingly, I am finding deltas depending on where I deploy. I hope to find time to explore further, but so far this is what I know. If I use my Mac/VMware Fusion and run CentOS 7 VMs, I can use kubeadm 1.8 with full success. I have never gotten v1.9 or v1.10 to work locally. However, using Digital Ocean's CentOS 7 images, I can run v1.8.x, v1.10.0 and v1.10.1 successfully; v1.9 seems to just hang for some reason. So now it is a matter of digging into the fine deltas between the two environments to discover what flips the switch. I know the kernel/patch levels match, as do the Docker engines, etc. DO installs cloud-init bits, my local VMs do not, and so on. Not trivial to work out what is different. I went as far as trying to match disk sizes (thinking a small disk might be masking an error somewhere, but I ruled that out too).

In any case, pulling the images has always worked; it is just a matter of getting the API service to respond and not keep recycling every couple of minutes on failure.

Regards,

You can refer to https://docs.docker.com/config/daemon/systemd/#httphttps-proxy to set the proxy for the Docker daemon.
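Per that Docker documentation, the proxy goes into a systemd drop-in for the Docker daemon; a minimal sketch, with a placeholder proxy address you would replace with your own:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (proxy.example.com:3128 is a placeholder; substitute your proxy)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After writing the file, run sudo systemctl daemon-reload && sudo systemctl restart docker so the daemon picks it up.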

+1, running on HypriotOS on a Raspberry Pi 3

I was able to get it working by installing v1.9.6 instead of the latest version.
So it works normally with v1.9.6 but fails with v1.10.0 and v1.10.1 on Ubuntu 16.04 on arm64 on Sopine boards.

I have the same problem on a Raspberry Pi 3 with HypriotOS. Downgrading to 1.9.7-00 worked for me as well.

+1, kubeadm v1.10.1, Raspberry Pi 3B, HypriotOS

In my case, I found that the etcd container would start and then exit with an error, which caused kubeadm init to hang and eventually time out.

To check whether this is what is biting you, run docker ps -a and check the status of the etcd container. If it is not running, check the etcd container's logs (docker logs <container-id>) and see whether it complains about being unable to bind to an address. See this issue report: https://github.com/kubernetes/kubernetes/issues/57709

The issue I just mentioned has a workaround, but make sure that is what you are hitting first.

Make sure your firewall allows inbound traffic on port 6443.

For example, if you are on Ubuntu, run ufw status to see whether it is enabled,

then ufw allow 6443 to open the port.
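The two steps above can be folded into one check. A sketch with a hypothetical port_allowed helper that parses `ufw status` output from stdin instead of touching the live firewall:

```shell
# port_allowed: read `ufw status` output on stdin and report whether an
# ALLOW rule exists for the given port (hypothetical helper).
port_allowed() {
  port="$1"
  if grep -Eq "^${port}(/tcp)?[[:space:]]+ALLOW" -; then
    echo "port ${port} is allowed"
  else
    echo "port ${port} is not allowed; try: sudo ufw allow ${port}/tcp"
    return 1
  fi
}

# Usage against the live firewall: sudo ufw status | port_allowed 6443
```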

Is it possible to list the images below so we can pull them manually through a proxy, then rerun kubeadm init?
Will that work?
Because we are in China, you know, the GFW.
And I'm new to k8s, stuck here while setting up on CentOS 7.

For people in China stuck behind THE GREAT FIREWALL

@thanch2n thanks a lot. I'll give it a try.

I added a proxy to Docker using this; the images all appear to have been downloaded already, but it still hangs at "[init] This might take a minute or longer if the control plane images have to be pulled.".

Listing the automatically pulled images below:

k8s.gcr.io/kube-apiserver-amd64            v1.10.2   e774f647e259   2 weeks ago    225 MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.2   0dcb3dea0db1   2 weeks ago    50.4 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.2   f3fcd0775c4e   2 weeks ago    148 MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   2 months ago   193 MB
k8s.gcr.io/pause-amd64                     3.1       da86e6ba6ca1   4 months ago   742 kB

I spent so much time trying to figure this out. I disabled ufw, disabled SELinux, verified that IP forwarding was enabled and that /proc/sys/net/bridge/bridge-nf-call-iptables was set to 1. Nothing seemed to fix it.

Eventually I decided to downgrade and then upgrade:

sudo apt-get -y --allow-downgrades install kubectl=1.5.3-00 kubelet=1.5.3-00 kubernetes-cni=0.3.0.1-07a8a2-00 and

curl -Lo /tmp/old-kubeadm.deb https://apt.k8s.io/pool/kubeadm_1.6.0-alpha.0.2074-a092d8e0f95f52-00_amd64_0206dba536f698b5777c7d210444a8ace18f48e045ab78687327631c6c694f42.deb

to downgrade from 1.10, and then simply

sudo apt-get -y install kubectl kubelet kubernetes-cni kubeadm

etcd kept restarting and the API server was timing out. After a while the API server would restart, complaining it could not connect. Is there a way to enable DEBUG-level logging? I am still not sure what causes this, but it works now. I would definitely like to reproduce this and troubleshoot it.

I found the reason I was stuck.
I run it in VMware and had allocated 1 GB of RAM; k8s needs at least 2 GB of RAM.
Would it be possible to add a notification about this?
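Until such a notification exists, the memory floor can be self-checked before kubeadm init. A sketch (check_ram_kb is a hypothetical helper; the 2 GB figure is the minimum cited in this thread, and /proc/meminfo reports MemTotal in kB):

```shell
# check_ram_kb: fail when total memory is under ~2 GB (2,000,000 kB).
# With no argument, reads MemTotal from /proc/meminfo.
check_ram_kb() {
  kb="${1:-$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)}"
  if [ "$kb" -lt 2000000 ]; then
    echo "insufficient RAM: ${kb} kB (k8s needs at least ~2 GB)"
    return 1
  fi
  echo "RAM OK: ${kb} kB"
}

check_ram_kb 4194304   # prints: RAM OK: 4194304 kB
```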

+1 kubeadm 1.10.2 on CentOS 7
4 GB RAM, 2 CPUs

+1 kubeadm 1.10.1 on Debian Stretch (go1.9.3) on a Hyper-V VM with 6 GB RAM and 1 vCPU...

It had worked fine in the past; I have rebuilt the cluster several times...

I tried moving to 2 vCPUs in Hyper-V; nothing changes.

+1!

+1. kubeadm 1.10.1, Debian Stretch. Worked before.

We found that with Docker 1.13.1 on CentOS 7 we ran into storage driver problems. The Docker logs showed 'readlink /var/lib/docker/overlay2/l: invalid argument'. Moving to Docker 18.03.1-ce appears to fix this, and kubeadm init no longer hangs.

I had the same problem. It turned out that etcd took the hostname of the Linux machine (somedomain.example.com), looked it up on a DNS server, got an answer for the wildcard domain (*.example.com), and tried to bind to the returned IP address instead of the apiserver-advertise-address.

There have been a number of fixes for image pre-pulling as well as for pivot timeout detection, which resolved this issue.

+1

I tried the standard way, letting kubeadm pull the images; I tried several times, then pulled the images myself, reset, and tried ignoring the errors. It still fails.

pi@master-node-001:~ $ sudo kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
pi@master-node-001:~ $ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.12.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.2
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master-node-001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master-node-001 localhost] and IPs [192.168.0.100 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-node-001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
pi@master-node-001:~ $ docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-controller-manager   v1.12.2   4bc6cae738d8   7 days ago      146MB
k8s.gcr.io/kube-apiserver            v1.12.2   8bfe044a05e1   7 days ago      177MB
k8s.gcr.io/kube-scheduler            v1.12.2   3abf5566fec1   7 days ago      52MB
k8s.gcr.io/kube-proxy                v1.12.2   328ef67ca54f   7 days ago      84.5MB
k8s.gcr.io/kube-proxy                v1.12.1   8c06fbe56458   3 weeks ago     84.7MB
k8s.gcr.io/kube-controller-manager   v1.12.1   5de943380295   3 weeks ago     146MB
k8s.gcr.io/kube-scheduler            v1.12.1   1fbc2e4cd378   3 weeks ago     52MB
k8s.gcr.io/kube-apiserver            v1.12.1   ab216fe6acf6   3 weeks ago     177MB
k8s.gcr.io/etcd                      3.2.24    e7a8884c8443   5 weeks ago     222MB
k8s.gcr.io/coredns                   1.2.2     ab0805b0de94   2 months ago    33.4MB
k8s.gcr.io/kube-scheduler            v1.11.0   0e4a34a3b0e6   4 months ago    56.8MB
k8s.gcr.io/kube-controller-manager   v1.11.0   55b70b420785   4 months ago    155MB
k8s.gcr.io/etcd                      3.2.18    b8df3b177be2   6 months ago    219MB
k8s.gcr.io/pause                     3.1       e11a8cbeda86   10 months ago   374kB
pi@master-node-001:~ $ h | grep kubectl
-bash: h: command not found
pi@master-node-001:~ $ history | grep kubectl
    9  kubectl list pods
   10  kubectl list pods
   11  kubectl --help
   12  kubectl get pods -o wide
   14  kubectl get pods -o wide
   32  h | grep kubectl
   33  history | grep kubectl
pi@master-node-001:~ $ !12
kubectl get pods -o wide
Unable to connect to the server: net/http: TLS handshake timeout
pi@master-node-001:~ $ history | grep pause
   17  docker ps -a | grep kube | grep -v pause
   35  history | grep pause
pi@master-node-001:~ $ !17
docker ps -a | grep kube | grep -v pause
41623613679e        8bfe044a05e1        "kube-apiserver --au…"   29 seconds ago      Up 14 seconds
0870760b9ea0        8bfe044a05e1        "kube-apiserver --au…"   2 minutes ago       Exited (0) 33 seconds ago   k8s_kube-apiserver_kube-apiserver-master-node-001_kube-system_1ec53f8ef96c76af95c78c809252f05c_2
c60d65fab8a7        3abf5566fec1        "kube-scheduler --ad…"   6 minutes ago       Up 5 minutes
26c58f6c68e9        e7a8884c8443        "etcd --advertise-cl…"   6 minutes ago       Up 5 minutes                k8s_etcd_etcd-master-node-001_kube-system_d01dcc7fc79b875a52f01e26432e6745_0
65546081ca77        4bc6cae738d8        "kube-controller-man…"   6 minutes ago       Up 5 minutes
pi@master-node-001:~ $ kubectl get pods -o wide
^C
pi@master-node-001:~ $ sudo reboot
Connection to 192.168.0.100 closed by remote host.
Connection to 192.168.0.100 closed.
karl@karl-PL62-7RC:~$ ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1015ms

karl@karl-PL62-7RC:~$ ssh [email protected]
ssh_exchange_identification: read: Connection reset by peer
karl@karl-PL62-7RC:~$ ssh [email protected]
[email protected]'s password:
Linux master-node-001 4.14.71-v7+ #1145 SMP Fri Sep 21 15:38:35 BST 2018 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Oct 31 21:36:13 2018
pi@master-node-001:~ $ kubectl get pods -o wide
The connection to the server 192.168.0.100:6443 was refused - did you specify the right host or port?
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[WARNING Port-10250]: port 10250 is in use
[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

   13  sudo kubeadm init --token-ttl=0
   14  kubectl get pods -o wide
   15  sudo kubeadm reset
   16  sudo kubeadm init --token-ttl=0
   17  docker ps -a | grep kube | grep -v pause
   18  kubeadm config images pull --kubernetes-version=v1.11.0
   19  sudo kubeadm reset
   20  history > notes.txt
   21  more notes.txt
   22  sudo reboot
   23  kubeadm config images list
   24  kubeadm config images pull --kubernetes-version=v1.11.0
   25  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   26  kubeadm config images pull
   27  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   28  kubeadm config images pull
   29  sudo kubeadm reset
   30  kubeadm config images pull
   31  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   32  docker images
   33  h | grep kubectl
   34  history | grep kubectl
   35  kubectl get pods -o wide
   36  history | grep pause
   37  docker ps -a | grep kube | grep -v pause
   38  kubectl get pods -o wide
   39  sudo reboot
   40  kubectl get pods -o wide
   41  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
