kubeadm: kubeadm init hangs at "This might take a minute or longer if the control plane images have to be pulled"

Created on 31 Jan 2018  ·  67 comments  ·  Source: kubernetes/kubeadm

Versions

kubeadm version (use kubeadm version):

Environment:

  • Kubernetes version (use kubectl version): v1.9.2
  • Cloud provider or hardware configuration: VirtualBox
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.0 LTS (Xenial Xerus) amd64
  • Kernel (e.g. uname -a): Linux 4.4.0-62-generic
  • Others: kubeadm version: v1.9.2 amd64, kubelet version: v1.9.2 amd64, kubernetes-cni version: 0.6.0-00 amd64, docker version: 17.03.2-ce

What happened?

When I try to run kubeadm init, it hangs with:
xx@xx:~$ sudo kubeadm init --kubernetes-version=v1.9.2

[init] Using Kubernetes version: v1.9.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kickseed kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.41.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Then I check the kubelet log:
xx@xx:~$ sudo journalctl -xeu kubelet
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.280984   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281317   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281580   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281875   28516 pod_workers.go:186] Error syncing pod 69c12074e336b0dbbd0a1666ce05226a ("kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.380290   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.933783   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.934707   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.935921   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281024   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281352   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281634   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281938   28516 pod_workers.go:186] Error syncing pod 6546d6faf0b50c9fc6712ce25ee9b6cb ("kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.934694   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.935613   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.936669   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: W0131 14:45:05.073692   28516 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.074106   28516 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.935680   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937423   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937963   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: I0131 14:45:05.974034   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.802447   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.804242   28516 kubelet_node_status.go:82] Attempting to register node kickseed
Jan 31 14:45:06 kickseed kubelet[28516]: E0131 14:45:06.804778   28516 kubelet_node_status.go:106] Unable to register node "kickseed" with API server: Post https://172.17.41.15:6443/api/v1/nodes: dial tcp 172.17.41.15:6443: getsockopt: connection refused
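Two distinct failure classes are interleaved in the journal above: pulls of the pause image timing out against gcr.io, and requests to 172.17.41.15:6443 being refused (expected while the apiserver pod cannot start, since its sandbox needs that same pause image). On a real node you would grep `journalctl -u kubelet` directly; the hypothetical excerpt inlined below just illustrates separating the two:

```shell
# Classify kubelet errors: registry pull timeouts vs. apiserver connection
# refusals. A small inlined excerpt stands in for the real journal here.
cat > /tmp/kubelet-excerpt.log <<'EOF'
E0131 14:45:03.280984 failed pulling image "gcr.io/google_containers/pause-amd64:3.0": dial tcp 172.217.6.127:443: i/o timeout
E0131 14:45:03.933783 Failed to list *v1.Pod: dial tcp 172.17.41.15:6443: getsockopt: connection refused
E0131 14:45:04.281024 failed pulling image "gcr.io/google_containers/pause-amd64:3.0": dial tcp 172.217.6.127:443: i/o timeout
EOF
echo "pull timeouts:       $(grep -c 'i/o timeout' /tmp/kubelet-excerpt.log)"
echo "connection refusals: $(grep -c 'connection refused' /tmp/kubelet-excerpt.log)"
```

If only the refusals remain after the pulls succeed, the registry problem is solved and the apiserver is simply still starting.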

xx@xx:~$ sudo systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─11-kubeadm.conf, 10-kubeadm1.conf, 90-local-extras.conf
   Active: active (running) since Wed 2018-01-31 13:53:46 CST; 49min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 28516 (kubelet)
    Tasks: 13
   Memory: 37.8M
      CPU: 22.767s
   CGroup: /system.slice/kubelet.service
           └─28516 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cgroup-driver=cgroupfs --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --fail-swap-on=false

Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.862590   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.863474   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.621818   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.862440   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.863379   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.864424   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.255460   28516 eviction_manager.go:238] eviction manager: unexpected error: failed to get node info: node "kickseed" not found
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.863266   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.864238   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.865262   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused

Some docker images are listed below:
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2
gcr.io/google_containers/etcd-amd64:3.2.14
gcr.io/google_containers/pause-amd64:3.1
gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.2
gcr.io/google_containers/kubedns-amd64:1.9
gcr.io/google_containers/kube-discovery-amd64:1.0
gcr.io/google_containers/exechealthz-amd64:v1.2.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1

What you expected to happen?

kubeadm init should complete

How to reproduce it (as minimally and precisely as possible)?

virtualbox with Ubuntu 16.04 and kubeadm 1.9.2

Anything else we need to know?

area/UX lifecycle/active priority/important-soon

All 67 comments

The docker images listed above were pulled from my private repository before running "kubeadm init --kubernetes-version v1.9.2"; I can't reach gcr.io/google-containers directly because of the GFW.

Same problem here!

I have the same problem on CentOS 7

+1

+1

+1

+1
servers on vultr, stuck here too.

+1

+1

+1

as a workaround

1/ create a docker registry on your kubernetes master

2/ declare your kubernetes master as gcr.io in /etc/hosts

3/ on a machine with internet access, log in to Google Cloud and download the image
example:
gcloud docker -- pull gcr.io/google_containers/pause-amd64:3.0
docker save -o /tmp/pause-amd64.tar gcr.io/google_containers/pause-amd64:3.0

4/ upload the images to your docker registry
docker load -i /tmp/pause-amd64.tar
docker tag gcr.io/google_containers/pause-amd64:3.0 yourdockerregistry/pause-amd64:3.0
docker push yourdockerregistry/pause-amd64:3.0

5/ on your kubernetes master, which now acts as the gcr.io docker registry:

pull the images from your docker registry
docker pull yourdockerregistry/pause-amd64:3.0

retag them into your local gcr.io registry
docker tag yourdockerregistry/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker push gcr.io/google_containers/pause-amd64:3.0

Download all the images used by kubeadm init. See /etc/kubernetes/manifests/*.yaml
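The steps above can be sketched as a single dry-run script that prints the pull/tag/push commands for every image kubeadm needs. The registry name `registry.example.com:5000` is a placeholder for your own registry, and the image list matches the v1.9.2 manifests from this issue; remove the `printf` indirection and run the commands for real on a machine that can reach gcr.io:

```shell
#!/bin/sh
# Print the docker commands needed to mirror each gcr.io image through a
# private registry (dry run). REGISTRY is a placeholder for your registry.
REGISTRY="${REGISTRY:-registry.example.com:5000}"

mirror_cmds() {
  for img in "$@"; do
    # keep the repository path, swap the gcr.io prefix for the private registry
    mirror="$REGISTRY/${img#gcr.io/}"
    printf 'docker pull %s\n' "$img"
    printf 'docker tag %s %s\n' "$img" "$mirror"
    printf 'docker push %s\n' "$mirror"
  done
}

mirror_cmds \
  gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 \
  gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2 \
  gcr.io/google_containers/kube-scheduler-amd64:v1.9.2 \
  gcr.io/google_containers/etcd-amd64:3.2.14 \
  gcr.io/google_containers/pause-amd64:3.0
```

Note that the pause image must match the version the kubelet asks for (3.0 in the logs above); mirroring pause-amd64:3.1 alone will not unblock the sandboxes.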

Is this fixed in 1.9.3?

+1

+1: this only manifests the second time I run kubeadm init. The first time it boots up fine. I'm not sure whether there is some state from the first run that isn't cleaned up properly by kubeadm reset.

+1

centos 7, and I configured the proxy in /etc/env, and then it shows 👎

+1

Same problem here. Centos7, latest kube install (1.9.3); tried Hightower's docs and all the kubernetes documentation. etcd and flannel are working, alive and up. I used the NO_PROXY environment variable to list my external IP addresses so it doesn't attempt a proxy connection to the other endpoints, but I never get past that point and get the same errors as everyone else above.
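One detail worth checking in proxy setups like this: the docker daemon does the image pulls, and it reads proxy settings from its own systemd unit, not from the calling shell's environment. A hypothetical drop-in (every host and address below is a placeholder for your own network) looks like:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf  (hypothetical example)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# keep the apiserver and other in-cluster addresses off the proxy
Environment="NO_PROXY=localhost,127.0.0.1,172.17.41.15"
```

followed by `systemctl daemon-reload && systemctl restart docker`.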

+1

I have the same problem, centos 7, kubelet v1.9.3;
but it looks like the images were downloaded successfully:
docker images
gcr.io/google_containers/kube-apiserver-amd64            v1.9.3   360d55f91cbf   4 weeks ago     210.5 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.9.3   83dbda6ee810   4 weeks ago     137.8 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.9.3   d3534b539b76   4 weeks ago     62.71 MB
gcr.io/google_containers/etcd-amd64                      3.1.11   59d36f27cceb   3 months ago    193.9 MB
gcr.io/google_containers/pause-amd64                     3.0      99e59f495ffa   22 months ago   746.9 kB

I have a CentOS 7 VM here, already configured with our proxy server.
I get the same timeout message, but the docker images were pulled and are up and running.

I'm also experiencing the same problem. See the outputs and logs below for more information.

```
[root@kube01 ~]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "kube01" could not be reached
[WARNING Hostname]: hostname "kube01" lookup kube01 on 10.10.0.81:53: server misbehaving
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.25.123.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
```

In the meantime, while watching `docker ps` this is what I see:
***Note:*** Don't mind the length of time that the containers have been up — this is my third attempt and it's always the same.

```
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED              STATUS              PORTS               NAMES
c422b3fd67f9        gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043            "kube-apiserver --req"   About a minute ago   Up About a minute                       k8s_kube-apiserver_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_1
4b30b98bcc24        gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e   "kube-controller-mana"   2 minutes ago        Up 2 minutes                            k8s_kube-controller-manager_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
71c6505ed125        gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675            "kube-scheduler --add"   2 minutes ago        Up 2 minutes                            k8s_kube-scheduler_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
9d01e2de4686        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
7fdaabc7e2a7        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_0
a5a2736e6cd0        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
ea82cd3a27da        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_etcd-kube01_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
```

LOG OUTPUT FOR gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043

I0309 19:59:29.570990       1 server.go:121] Version: v1.9.3
I0309 19:59:29.756611       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0309 19:59:29.756680       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0309 19:59:29.760396       1 master.go:225] Using reconciler: master-count
W0309 19:59:29.789648       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0309 19:59:29.796731       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.797445       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.804841       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
I0309 19:59:32.393800       1 serve.go:89] Serving securely on [::]:6443
I0309 19:59:32.393854       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0309 19:59:32.393866       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0309 19:59:32.393965       1 controller.go:84] Starting OpenAPI AggregationController
I0309 19:59:32.393998       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0309 19:59:32.394012       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0309 19:59:32.394034       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0309 19:59:32.394057       1 naming_controller.go:274] Starting NamingConditionController
I0309 19:59:32.393855       1 crd_finalizer.go:242] Starting CRDFinalizer
I0309 19:59:32.394786       1 available_controller.go:262] Starting AvailableConditionController
I0309 19:59:32.394815       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0309 20:00:06.434318       1 trace.go:76] Trace[12318713]: "Create /api/v1/nodes" (started: 2018-03-09 19:59:32.431463052 +0000 UTC m=+2.986431803) (total time: 34.002792758s):
Trace[12318713]: [4.00201898s] [4.001725343s] About to store object in database
Trace[12318713]: [34.002792758s] [30.000773778s] END
E0309 20:00:32.406206       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E0309 20:00:32.406339       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0309 20:00:32.406342       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E0309 20:00:32.408094       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
E0309 20:00:32.415692       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
E0309 20:00:32.415818       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0309 20:00:32.415862       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.415946       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E0309 20:00:32.416029       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E0309 20:00:32.416609       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E0309 20:00:32.416684       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.420305       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0309 20:00:32.440196       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E0309 20:00:32.440403       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0309 20:00:32.448018       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E0309 20:00:32.448376       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E0309 20:00:33.395988       1 storage_rbac.go:175] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I0309 20:00:43.455564       1 trace.go:76] Trace[375160879]: "Create /api/v1/nodes" (started: 2018-03-09 20:00:13.454506587 +0000 UTC m=+44.009475397) (total time: 30.001008377s):
Trace[375160879]: [30.001008377s] [30.000778516s] END

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e

I0309 19:51:35.248083       1 controllermanager.go:108] Version: v1.9.3
I0309 19:51:35.257251       1 leaderelection.go:174] attempting to acquire leader lease...
E0309 19:51:38.310839       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:41.766358       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:46.025824       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:49.622916       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:52.675648       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:55.697734       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:59.348765       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:01.508487       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:03.886473       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:06.120356       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:08.844772       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:12.083789       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:16.038882       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:18.555388       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:21.471034       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:24.236724       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:27.363968       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:30.045776       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:32.751626       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:36.383923       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:38.910958       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:41.400748       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:44.268909       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:47.640891       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:51.713420       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:54.419154       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:57.134430       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:00.942903       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:03.440586       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:07.518362       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:53:12.968927       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:16.228760       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:18.299005       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:20.681915       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:24.141874       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:28.484775       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:30.678092       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:34.107654       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:36.251647       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:39.914756       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:42.641017       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:45.058876       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:48.359511       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:51.667554       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:54.338101       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:57.357894       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:00.633504       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:03.244353       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:05.923510       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:09.817627       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:12.688349       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:16.803954       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:19.519269       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:23.668226       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:25.903217       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:30.248639       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:32.428029       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:34.962675       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:38.598370       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:41.179039       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:43.927574       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:48.190961       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:51.974141       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:55.898687       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:59.653210       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:02.094737       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:05.125275       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:09.280324       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:12.920886       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:17.272605       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:21.488182       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:23.708198       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:26.893696       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:31.121014       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:35.414628       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:38.252001       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:41.912479       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:45.621133       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:48.976244       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:52.537317       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:55.863737       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:59.682009       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:02.653432       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:04.968939       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:09.336478       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:13.488850       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:16.262967       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:22.685928       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:26.235497       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:28.442915       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:32.051827       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:35.547277       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:38.437120       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:41.007877       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:44.295081       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:46.746424       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:49.321870       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:52.831866       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:55.138333       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:57.815491       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:00.802112       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:03.848363       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:07.350593       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:10.672982       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:14.171660       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:17.923995       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:21.919624       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:23.923165       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:27.692006       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:30.654447       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:33.851703       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:37.302382       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:40.286552       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:42.358940       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:44.364982       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:46.372569       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:50.571683       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:53.988093       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:57.648006       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
... [identical "forbidden" errors repeated every few seconds until 19:59:12] ...
E0309 19:59:15.891164       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
... [identical "connection refused" errors repeated until 19:59:26] ...
E0309 19:59:32.440045       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
... [identical "forbidden" errors repeated every few seconds until 20:00:42] ...

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675

W0309 19:51:34.800737       1 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
I0309 19:51:34.812848       1 server.go:551] Version: v1.9.3
I0309 19:51:34.817093       1 server.go:570] starting healthz server on 127.0.0.1:10251
E0309 19:51:34.818028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: Get https://10.25.123.11:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818279       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.25.123.11:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818346       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: Get https://10.25.123.11:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818408       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.25.123.11:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.25.123.11:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819386       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: Get https://10.25.123.11:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820217       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.25.123.11:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820659       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.25.123.11:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.821783       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.25.123.11:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:38.320455       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0309 19:51:38.329101       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0309 19:51:38.329733       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0309 19:51:38.332670       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0309 19:51:38.332707       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0309 19:51:38.332734       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0309 19:51:38.334248       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0309 19:51:38.334568       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0309 19:51:38.334594       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
... [the same nine "forbidden" errors repeat roughly once per second from 19:51:39 through 19:51:44] ...
E0309 19:51:45.334228       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0309 19:51:45.342638       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0309 19:51:45.343460       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0309 19:51:45.345969       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0309 19:51:45.347140       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0309 19:51:45.348176       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope

====================================================================

LOG OUTPUT FOR gcr.io/google_containers/pause-amd64:3.0

====================================================================

+1
Update:
After reviewing everything I could (I'm somewhat new to k8s), I finally discovered that kubectl describe pod -n kube-system kube-dns-<sha> showed that the virtual server I was installing on only had 1 CPU, and kube-dns was not starting due to insufficient CPU. Oddly enough, kubectl logs pod -n kube-system kube-dns-<sha> did not show this information.

It worked after a reinstall of the OS (since a reboot after installing kubeadm causes the k8s master to fail to start correctly).
(sorry for forgetting to capture the output)

+1

I had the same problem; I cancelled, ran reset, and then ran the same init as before but with --apiserver-advertise-address=<my_host_public_ip_address>, and it worked.

Downgrading to 1.8.10 fixed the problem for me.

+1

+1

Same problem here with v1.9.3 on Ubuntu 16.04 (no selinux)

+1 same problem

Same problem with v1.10 on Ubuntu 16.04 on arm64.

same problem with v1.10 on Ubuntu 16.04 on arm64 (no selinux)

Check how many CPUs the hardware you are installing on has: 2 are required on the master to install, as I wrote a little over 3 weeks ago.
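A quick way to verify this before running kubeadm init (a minimal sketch; the 2-CPU / 2 GiB figures are the documented kubeadm minimums for a control-plane node):

```shell
#!/bin/sh
# Sketch: check the host against kubeadm's documented minimums
# (2 CPUs and 2 GiB of RAM on the master).
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "CPUs: $cpus, RAM: $((mem_kb / 1024)) MiB"
[ "$cpus" -ge 2 ] || echo "WARNING: kubeadm init wants at least 2 CPUs"
[ "$mem_kb" -ge $((2 * 1024 * 1024)) ] || echo "WARNING: kubeadm init wants at least 2 GiB of RAM"
```

Several of the hangs reported in this thread turned out to be exactly this: 1 CPU or 1 GB of RAM on the VM.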

@bbruun The hardware used is https://www.pine64.org/?page_id=1491 so 4 cores, and they are detected correctly as such. The hardware shouldn't be the problem, then. But thanks for the tip anyway. Perhaps @qxing3 isn't using the same hardware...

@farfeduc that was the obstacle I ran into: several attempts in a row, reinstalling my VM each time to test the installation and get familiar with k8s. Getting usable logs out of the system is a mess, and I tried to pull them from everywhere I could until I finally got a message that there were not enough CPUs available. I've now bought 3 Udoo x86 Ultras to run a small cluster to play with at home, alongside work where we use somewhat larger instances :-)

@bbruun I configured 2 CPUs for my VM; thanks for the suggestion anyway.

/assign @liztio

+1

+1 v1.10.0

+1 v1.10.0 and 1.10.1

+1

Interestingly, I'm finding deltas depending on where I deploy. I hope to find time to explore further, but so far I know this: if I use my Mac/VMware Fusion and run CentOS 7 VMs, I can use kubeadm 1.8 with full success. I never got v1.9 or v1.10 to work locally. However, using CentOS 7 images on Digital Ocean, I can run v1.8.x, v1.10.0, and v1.10.1 successfully; v1.9 seems to "just depend" for some reason. So now it's about digging into the fine deltas between the two environments to discover what flips the switch. I know the kernel/patch levels match, as do the Docker engines, etc. Digital Ocean DOES install cloud-init stuff, my local VMs don't, and so on. It's not trivial to figure out what's different. I went as far as trying to match disk sizes (thinking a smaller disk might be masking some error somewhere, but I ruled that out too).

In every case, pulling the images has always worked; it's just getting the API service to respond and not keep recycling every couple of minutes when it fails.

Regards,

You can check https://docs.docker.com/config/daemon/systemd/#httphttps-proxy to configure a proxy for the docker daemon.
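The approach those Docker docs describe is a systemd drop-in for the docker service. A sketch (the proxy address is a placeholder you must replace with your own), saved as /etc/systemd/system/docker.service.d/http-proxy.conf:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# NOTE: proxy.example.com:3128 is a placeholder -- substitute your own proxy.
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After writing the file, run `sudo systemctl daemon-reload && sudo systemctl restart docker` so the daemon picks up the new environment.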

+1 Running HypriotOS on a Raspberry Pi 3

I was able to get it working by installing v1.9.6 instead of the latest version.
So it works normally with v1.9.6 but fails with v1.10.0 and v1.10.1 on Ubuntu 16.04 on arm64 on sopine boards.

I have the same problem on a Raspberry Pi 3, HypriotOS. Downgrading to 1.9.7-00 also worked for me.

+1, kubeadm v1.10.1, Raspberry Pi 3B, HypriotOS

In my case, I discovered that the etcd container was starting and then exiting with an error, and this was causing kubeadm init to hang and eventually time out.

To check whether this is what's biting you, run docker ps -a and check the status of the etcd container. If it's not running, check the etcd container logs ( docker logs <container-id> ) and see whether it complains that it can't bind to an address. See this issue report: https://github.com/kubernetes/kubernetes/issues/57709

The issue I just mentioned has a workaround, but make sure that's actually what you're hitting first.
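That etcd check can be sketched as a small script (assumes the Docker container runtime; it prints a message instead of failing when docker or the container is absent):

```shell
#!/bin/sh
# Sketch: find the etcd container (running or exited) and show its status and recent logs.
if command -v docker >/dev/null 2>&1; then
  # --filter name=etcd matches kubelet-created etcd containers; take the newest one.
  cid=$(docker ps -a --filter name=etcd --format '{{.ID}} {{.Status}}' | head -n1)
  if [ -n "$cid" ]; then
    echo "etcd container: $cid"
    docker logs --tail 50 "${cid%% *}" 2>&1   # look for bind/listen errors here
  else
    echo "no etcd container found"
  fi
else
  echo "docker not installed"
fi
```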

Make sure your firewall allows inbound traffic on 6443.

For example, if you're on Ubuntu, run ufw status to see whether it's enabled.

Then ufw allow 6443 to open the port.
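A quick way to test reachability of the API server port from another machine (a sketch; 127.0.0.1 and 6443 are placeholder defaults, substitute your control-plane address):

```shell
#!/bin/sh
# Sketch: check whether the kube-apiserver port is reachable over TCP.
host="${1:-127.0.0.1}"   # placeholder: your control-plane address
port="${2:-6443}"
# bash's /dev/tcp pseudo-device attempts a TCP connect; timeout bounds the wait.
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port on $host is open"
else
  echo "port $port on $host is closed or filtered"
fi
```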

Is it possible to list the images below, so we can pull them manually through a proxy and then run kubeadm again?
Will that work?
Because we're in China, you know, the GFW.
And I'm new to k8s, stuck here while setting up on CentOS 7.

For people in China who are behind THE GREAT FIREWALL

@thanch2n thank you very much. I'm going to try it.

I added a proxy to docker, using this; it seems all the images have already been downloaded, but it still got stuck at "[init] This might take a minute or longer if the control plane images have to be pulled".

Here is the list of automatically pulled images:

k8s.gcr.io/kube-apiserver-amd64            v1.10.2   e774f647e259   2 weeks ago    225 MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.2   0dcb3dea0db1   2 weeks ago    50.4 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.2   f3fcd0775c4e   2 weeks ago    148 MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   2 months ago   193 MB
k8s.gcr.io/pause-amd64                     3.1       da86e6ba6ca1   4 months ago   742 kB

I spent a lot of time trying to solve this. I disabled ufw, turned off selinux, made sure IP forwarding was on and that /proc/sys/net/bridge/bridge-nf-call-iptables was set to 1. Nothing seemed to solve the problem.

Finally I decided to downgrade and then upgrade again:

sudo apt-get -y --allow-downgrades install kubectl=1.5.3-00 kubelet=1.5.3-00 kubernetes-cni=0.3.0.1-07a8a2-00 and

curl -Lo /tmp/old-kubeadm.deb https://apt.k8s.io/pool/kubeadm_1.6.0-alpha.0.2074-a092d8e0f95f52-00_amd64_0206dba536f698b5777c7d210444a8ace18f48e045ab78687327631c6c694f42.deb

to downgrade from 1.10, and then just

sudo apt-get -y install kubectl kubelet kubernetes-cni kubeadm

Etcd was restarting and the API server was timing out. After a while the API server restarts and complains that it can't connect. Is there a way we can turn on DEBUG-level logging? Not sure what causes this. But it's working now. I'd definitely like to reproduce this and fix it.

I figured out the reason I was stuck.
I'm running it on VMware and allocated 1 GB of RAM; k8s needs at least 2 GB of RAM.
Is there any chance of adding a notification about this?

+1 kubeadm 1.10.2 on CentOS 7
4 GB RAM, 2 CPUs

+1 kubeadm 1.10.1 on Debian Stretch (go1.9.3) in a Hyper-V VM with 6 GB RAM and 1 vCPU...

it had worked fine in the past, as I rebuilt the cluster many times...

I tried switching to 2 vCPUs in Hyper-V; nothing changes.

+1!

+1. kubeadm 1.10.1, Debian Stretch. Worked before.

We found that with docker 1.13.1 on CentOS 7 we saw problems with the storage driver. The Docker logs showed 'readlink /var/lib/docker/overlay2/l: invalid argument'. Moving to docker 18.03.1-ce seems to resolve this issue, and kubeadm init no longer hangs.

I had the same problem. It turned out that etcd picked up the hostname of the Linux machine (somedomain.example.com), looked it up on a DNS server, got an answer for the wildcard domain (*.example.com), and tried to bind to the returned IP address instead of the apiserver-advertise-address.
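To see whether this wildcard-DNS trap applies to you, check what the machine's hostname actually resolves to and compare it with the address you pass via --apiserver-advertise-address (a minimal sketch):

```shell
#!/bin/sh
# Sketch: show which address the local hostname resolves to; on a network
# with a wildcard DNS record, this may not be the machine's own address.
hn=$(hostname -f 2>/dev/null || hostname)
echo "hostname: $hn"
# getent consults /etc/hosts and DNS, the same way most daemons do.
getent hosts "$hn" || echo "no DNS/hosts entry found for $hn"
```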

There have been a number of fixes for image pre-pulling, as well as for pivot timeout detection, so closing this issue.

+1

I tried the standard way, letting kubeadm pull the images down; I tried many times, then pulled the images myself, rebooted, tried ignoring the errors; it always fails.

pi@master-node-001:~ $ sudo kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
pi@master-node-001:~ $ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.12.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.2
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master-node-001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master-node-001 localhost] and IPs [192.168.0.100 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-node-001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
pi@master-node-001:~ $ docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-controller-manager   v1.12.2   4bc6cae738d8   7 days ago      146 MB
k8s.gcr.io/kube-apiserver            v1.12.2   8bfe044a05e1   7 days ago      177 MB
k8s.gcr.io/kube-scheduler            v1.12.2   3abf5566fec1   7 days ago      52 MB
k8s.gcr.io/kube-proxy                v1.12.2   328ef67ca54f   7 days ago      84.5 MB
k8s.gcr.io/kube-proxy                v1.12.1   8c06fbe56458   3 weeks ago     84.7 MB
k8s.gcr.io/kube-controller-manager   v1.12.1   5de943380295   3 weeks ago     146 MB
k8s.gcr.io/kube-scheduler            v1.12.1   1fbc2e4cd378   3 weeks ago     52 MB
k8s.gcr.io/kube-apiserver            v1.12.1   ab216fe6acf6   3 weeks ago     177 MB
k8s.gcr.io/etcd                      3.2.24    e7a8884c8443   5 weeks ago     222 MB
k8s.gcr.io/coredns                   1.2.2     ab0805b0de94   2 months ago    33.4 MB
k8s.gcr.io/kube-scheduler            v1.11.0   0e4a34a3b0e6   4 months ago    56.8 MB
k8s.gcr.io/kube-controller-manager   v1.11.0   55b70b420785   4 months ago    155 MB
k8s.gcr.io/etcd                      3.2.18    b8df3b177be2   6 months ago    219 MB
k8s.gcr.io/pause                     3.1       e11a8cbeda86   10 months ago   374 kB
pi@master-node-001:~ $ h | grep kubectl
-bash: h: command not found
pi@master-node-001:~ $ history | grep kubectl
    9  kubectl list pods
   10  kubectl list
   11  kubectl --help
   12  kubectl get pods -o wide
   14  kubectl get pods -o wide
   32  h | grep kubectl
   33  history | grep kubectl
pi@master-node-001:~ $ !12
kubectl get pods -o wide
Unable to connect to the server: net/http: TLS handshake timeout
pi@master-node-001:~ $ history | grep pause
   17  docker ps -a | grep kube | grep -v pause
   35  history | grep pause
pi@master-node-001:~ $ !17
docker ps -a | grep kube | grep -v pause
41623613679e   8bfe044a05e1   "kube-apiserver --au…"   29 seconds ago   Up 14 seconds               k8s_kube-apiserver_kube-apiserver-master-node-001_kube-system_1ec53f8ef96c76af95c78c809252f05c_3
0870760b9ea0   8bfe044a05e1   "kube-apiserver --au…"   2 minutes ago    Exited (0) 33 seconds ago   k8s_kube-apiserver_kube-apiserver-master-node-001_kube-system_1ec53f8ef96c76af95c78c809252f05c_2
c60d65fab8a7   3abf5566fec1   "kube-scheduler --ad…"   6 minutes ago    Up 5 minutes                k8s_kube-scheduler_kube-scheduler-master-node-001_kube-system_ee7b1077c61516320f4273309e9b4690_0
26c58f6c68e9   e7a8884c8443   "etcd --advertise-cl…"   6 minutes ago    Up 5 minutes                k8s_etcd_etcd-master-node-001_kube-system_d01dcc7fc79b875a52f01e26432e6745_0
65546081ca77   4bc6cae738d8   "kube-controller-man…"   6 minutes ago    Up 5 minutes                k8s_kube-controller-manager_kube-controller-manager-master-node-001_kube-system_07e19bc9fea1626927c12244604bdb2f_0
pi@master-node-001:~ $ kubectl get pods -o wide
^C
pi@master-node-001:~ $ sudo reboot
Connection to 192.168.0.100 closed by remote host.
Connection to 192.168.0.100 closed.
karl@karl-PL62-7RC:~$ ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1015ms

karl@karl-PL62-7RC:~$ ssh [email protected]
ssh_exchange_identification: read: Connection reset by peer
karl@karl-PL62-7RC:~$ ssh [email protected]
[email protected]'s password:
Linux master-node-001 4.14.71-v7+ #1145 SMP Fri Sep 21 15:38:35 BST 2018 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Oct 31 21:36:13 2018
pi@master-node-001:~ $ kubectl get pods -o wide
The connection to the server 192.168.0.100:6443 was refused - did you specify the right host or port?
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

   13  sudo kubeadm init --token-ttl=0
   14  kubectl get pods -o wide
   15  sudo kubeadm reset
   16  sudo kubeadm init --token-ttl=0
   17  docker ps -a | grep kube | grep -v pause
   18  kubeadm config images pull --kubernetes-version=v1.11.0
   19  sudo kubeadm reset
   20  history > notes.txt
   21  more notes.txt
   22  sudo reboot
   23  kubeadm config images list
   24  kubeadm config images pull --kubernetes-version=v1.11.0
   25  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   26  kubeadm config images pull
   27  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   28  kubeadm config images pull
   29  sudo kubeadm reset
   30  kubeadm config images pull
   31  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
   32  docker images
   33  h | grep kubectl
   34  history | grep kubectl
   35  kubectl get pods -o wide
   36  history | grep pause
   37  docker ps -a | grep kube | grep -v pause
   38  kubectl get pods -o wide
   39  sudo reboot
   40  kubectl get pods -o wide
   41  sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
