Kubeadm: kubeadm init hangs at "This might take a minute or longer if the control plane images have to be pulled"

Created on 31 Jan 2018  ·  67 comments  ·  Source: kubernetes/kubeadm

Versions

kubeadm version (use `kubeadm version`):

Environment:

  • Kubernetes version (use `kubectl version`): v1.9.2
  • Cloud provider or hardware configuration: VirtualBox
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.0 LTS (Xenial Xerus) amd64
  • Kernel (e.g. `uname -a`): Linux 4.4.0-62-generic
  • Others: kubeadm version: v1.9.2 amd64, kubelet version: v1.9.2 amd64, kubernetes-cni version: 0.6.0-00 amd64, docker version: 17.03.2-ce

What happened?

When I try to run kubeadm init, it hangs with:
xx@xx:~$ sudo kubeadm init --kubernetes-version=v1.9.2

[init] Using Kubernetes version: v1.9.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kickseed kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.41.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Then I check the kubelet log:
xx@xx:~$ sudo journalctl -xeu kubelet
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.280984   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281317   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281580   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.281875   28516 pod_workers.go:186] Error syncing pod 69c12074e336b0dbbd0a1666ce05226a ("kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-kickseed_kube-system(69c12074e336b0dbbd0a1666ce05226a)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.380290   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.933783   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.934707   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:03 kickseed kubelet[28516]: E0131 14:45:03.935921   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281024   28516 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281352   28516 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281634   28516 kuberuntime_manager.go:647] createPodSandbox for pod "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.281938   28516 pod_workers.go:186] Error syncing pod 6546d6faf0b50c9fc6712ce25ee9b6cb ("kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-kickseed_kube-system(6546d6faf0b50c9fc6712ce25ee9b6cb)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 172.217.6.127:443: i/o timeout"
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.934694   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.935613   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:04 kickseed kubelet[28516]: E0131 14:45:04.936669   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: W0131 14:45:05.073692   28516 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.074106   28516 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.935680   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937423   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: E0131 14:45:05.937963   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:45:05 kickseed kubelet[28516]: I0131 14:45:05.974034   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.802447   28516 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 31 14:45:06 kickseed kubelet[28516]: I0131 14:45:06.804242   28516 kubelet_node_status.go:82] Attempting to register node kickseed
Jan 31 14:45:06 kickseed kubelet[28516]: E0131 14:45:06.804778   28516 kubelet_node_status.go:106] Unable to register node "kickseed" with API server: Post https://172.17.41.15:6443/api/v1/nodes: dial tcp 172.17.41.15:6443: getsockopt: con

xx@xx:~$ sudo systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─11-kubeadm.conf, 10-kubeadm1.conf, 90-local-extras.conf
   Active: active (running) since Wed 2018-01-31 13:53:46 CST; 49min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 28516 (kubelet)
    Tasks: 13
   Memory: 37.8M
      CPU: 22.767s
   CGroup: /system.slice/kubelet.service
           └─28516 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cgroup-driver=cgroupfs --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --fail-swap-on=false

Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.862590   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:17 kickseed kubelet[28516]: E0131 14:43:17.863474   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.621818   28516 event.go:209] Unable to write event: 'Patch https://172.17.41.15:6443/api/v1/namespaces/default/events/kickseed.150ecf46afb098b7: dial tcp 172.17.41.15:6443: getsockopt: connection refused' (may retry after sleeping)
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.862440   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.863379   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:18 kickseed kubelet[28516]: E0131 14:43:18.864424   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.255460   28516 eviction_manager.go:238] eviction manager: unexpected error: failed to get node info: node "kickseed" not found
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.863266   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.17.41.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.864238   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://172.17.41.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkickseed&limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused
Jan 31 14:43:19 kickseed kubelet[28516]: E0131 14:43:19.865262   28516 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://172.17.41.15:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.41.15:6443: getsockopt: connection refused

Some of the docker images are listed below:
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2
gcr.io/google_containers/etcd-amd64:3.2.14
gcr.io/google_containers/pause-amd64:3.1
gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.2
gcr.io/google_containers/kubedns-amd64:1.9
gcr.io/google_containers/kube-discovery-amd64:1.0
gcr.io/google_containers/exechealthz-amd64:v1.2.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8
gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1

What you expected to happen?

kubeadm init should complete

How to reproduce it (as minimally and precisely as possible)?

VirtualBox with Ubuntu 16.04 and kubeadm 1.9.2

Anything else we need to know?

area/UX lifecycle/active priority/important-soon

Most helpful comment

https://github.com/kubernetes/kubernetes/issues/59680#issuecomment-364646304
disabling selinux helped me.
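For reference, "disabling SELinux" on CentOS/RHEL usually amounts to the following commands (a sketch only; this is the commonly cited workaround, not an official kubeadm recommendation):

```shell
# Switch SELinux to permissive mode for the current boot
sudo setenforce 0

# Persist the change across reboots by editing the standard config file
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Verify the current mode
getenforce
```

Permissive mode keeps SELinux logging denials without blocking anything, which is a less drastic change than `SELINUX=disabled`.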

All 67 comments

The docker images listed above were pulled from my private repository before running "kubeadm init --kubernetes-version v1.9.2"; I can't reach gcr.io/google-containers directly because of the GFW.

Same problem here!

I have the same problem on CentOS 7

+1

+1

+1

+1
servers on vultr, stuck here too.

+1

+1

+1

As a workaround:

1/ Create a docker registry on your kubernetes master

2/ Declare your kubernetes master as gcr.io in /etc/hosts

3/ On a machine with internet access, log in to Google Cloud and download the images.
Example:
gcloud docker -- pull gcr.io/google_containers/pause-amd64:3.0
docker save -o /tmp/pause-amd64.tar gcr.io/google_containers/pause-amd64:3.0

4/ Upload the images to your docker registry:
docker load -i /tmp/pause-amd64.tar
docker tag gcr.io/google_containers/pause-amd64:3.0 yourdockerregistry/pause-amd64:3.0
docker push yourdockerregistry/pause-amd64:3.0

5/ On your kubernetes master (which now stands in for the gcr.io docker registry):

Pull the images from your docker registry:
docker pull yourdockerregistry/pause-amd64:3.0

Tag them back into the local gcr.io names:
docker tag yourdockerregistry/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker push gcr.io/google_containers/pause-amd64:3.0

Download all the images used by kubeadm init. See /etc/kubernetes/manifests/*.yaml
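The steps above can be sketched as a single script. This is only an illustration: the registry name `myregistry.local:5000` and the image list are assumptions taken from the manifests mentioned in this thread; adjust both to your environment.

```shell
#!/bin/sh
set -e

# Assumed private registry reachable from the kubernetes master (hypothetical name)
REGISTRY=myregistry.local:5000

# Images used by kubeadm init for v1.9.2 (check /etc/kubernetes/manifests/*.yaml for your versions)
IMAGES="pause-amd64:3.0 kube-apiserver-amd64:v1.9.2 kube-controller-manager-amd64:v1.9.2 kube-scheduler-amd64:v1.9.2 etcd-amd64:3.2.14"

# On the machine WITH internet access: pull from gcr.io, re-tag, push to the private registry
for img in $IMAGES; do
    gcloud docker -- pull "gcr.io/google_containers/$img"
    docker tag "gcr.io/google_containers/$img" "$REGISTRY/google_containers/$img"
    docker push "$REGISTRY/google_containers/$img"
done

# On the kubernetes master: pull from the private registry and tag back to the gcr.io names,
# so the kubelet finds the images locally and never needs to reach gcr.io
for img in $IMAGES; do
    docker pull "$REGISTRY/google_containers/$img"
    docker tag "$REGISTRY/google_containers/$img" "gcr.io/google_containers/$img"
done
```

The key trick is the final `docker tag`: the kubelet looks up images by their gcr.io name, and a locally present image with that exact name and tag satisfies the lookup without any network pull.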

Is it fixed in 1.9.3?

+1

+1 - This only manifests the second time I run kubeadm init. The first run goes through fine. I'm not sure whether some state from the first run isn't being cleaned up properly by kubeadm reset.

+1

CentOS 7; I set the proxy in /etc/environment and it still ends up like this 👎

+1

Same problem here. CentOS 7, latest kube install (1.9.3); I tried the Hightower docs and all the Kubernetes docs. etcd and flannel are running, alive and up. I used the NO_PROXY env variable to list my external IPs so it doesn't attempt a proxied connection to them; however, it never gets that far and I get the same errors as everyone else above.
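One point worth noting for the proxy cases: the docker daemon does the actual image pulling, and it does not read /etc/environment; it needs its own proxy settings via a systemd drop-in. A rough sketch (the proxy address and exclusion list below are placeholders; the apiserver IP and service CIDR must match your cluster):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# Never proxy localhost, the apiserver address, or the cluster service CIDR,
# otherwise in-cluster traffic is sent to the proxy and kubeadm init stalls
Environment="NO_PROXY=localhost,127.0.0.1,172.17.41.15,10.96.0.0/12"
```

After writing the file, `sudo systemctl daemon-reload && sudo systemctl restart docker` for it to take effect.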

+1

I have the same problem, CentOS 7, kubelet v1.9.3;
but it looks like the images were downloaded successfully:
docker images
gcr.io/google_containers/kube-apiserver-amd64            v1.9.3   360d55f91cbf   4 weeks ago     210.5 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.9.3   83dbda6ee810   4 weeks ago     137.8 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.9.3   d3534b539b76   4 weeks ago     62.71 MB
gcr.io/google_containers/etcd-amd64                      3.1.11   59d36f27cceb   3 months ago    193.9 MB
gcr.io/google_containers/pause-amd64                     3.0      99e59f495ffa   22 months ago   746.9 kB

I have a CentOS 7 VM here, already configured with our proxy server.
I got the same timeout message, but the docker images were pulled and are running.

I'm running into the same issue as well. See the outputs and logs below for more information.

```
[root@kube01 ~]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Hostname]: hostname "kube01" could not be reached
	[WARNING Hostname]: hostname "kube01" lookup kube01 on 10.10.0.81:53: server misbehaving
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.25.123.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
```

In the meantime, while watching `docker ps` this is what I see:
***Note:*** Don't mind the length of time that the containers have been up — this is my third attempt and it's always the same.

```
CONTAINER ID        IMAGE                                                                                                                               COMMAND                  CREATED              STATUS              PORTS               NAMES
c422b3fd67f9        gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043               "kube-apiserver --req"   About a minute ago   Up About a minute                       k8s_kube-apiserver_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_1
4b30b98bcc24        gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e      "kube-controller-mana"   2 minutes ago        Up 2 minutes                            k8s_kube-controller-manager_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
71c6505ed125        gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675               "kube-scheduler --add"   2 minutes ago        Up 2 minutes                            k8s_kube-scheduler_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
9d01e2de4686        gcr.io/google_containers/pause-amd64:3.0                                                                                            "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-controller-manager-kube01_kube-system_d556d9b8ccdd523a5208b391ca206031_0
7fdaabc7e2a7        gcr.io/google_containers/pause-amd64:3.0                                                                                            "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-apiserver-kube01_kube-system_3ff6faac27328cf290a026c08ae0ce75_0
a5a2736e6cd0        gcr.io/google_containers/pause-amd64:3.0                                                                                            "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_kube-scheduler-kube01_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
ea82cd3a27da        gcr.io/google_containers/pause-amd64:3.0                                                                                            "/pause"                 3 minutes ago        Up 2 minutes                            k8s_POD_etcd-kube01_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
```

LOG OUTPUT FOR gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043

I0309 19:59:29.570990       1 server.go:121] Version: v1.9.3
I0309 19:59:29.756611       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0309 19:59:29.756680       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0309 19:59:29.760396       1 master.go:225] Using reconciler: master-count
W0309 19:59:29.789648       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0309 19:59:29.796731       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.797445       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0309 19:59:29.804841       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:29 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] listing is available at https://10.25.123.11:6443/swaggerapi
[restful] 2018/03/09 19:59:30 log.go:33: [restful/swagger] https://10.25.123.11:6443/swaggerui/ is mapped to folder /swagger-ui/
I0309 19:59:32.393800       1 serve.go:89] Serving securely on [::]:6443
I0309 19:59:32.393854       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0309 19:59:32.393866       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0309 19:59:32.393965       1 controller.go:84] Starting OpenAPI AggregationController
I0309 19:59:32.393998       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0309 19:59:32.394012       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0309 19:59:32.394034       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0309 19:59:32.394057       1 naming_controller.go:274] Starting NamingConditionController
I0309 19:59:32.393855       1 crd_finalizer.go:242] Starting CRDFinalizer
I0309 19:59:32.394786       1 available_controller.go:262] Starting AvailableConditionController
I0309 19:59:32.394815       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0309 20:00:06.434318       1 trace.go:76] Trace[12318713]: "Create /api/v1/nodes" (started: 2018-03-09 19:59:32.431463052 +0000 UTC m=+2.986431803) (total time: 34.002792758s):
Trace[12318713]: [4.00201898s] [4.001725343s] About to store object in database
Trace[12318713]: [34.002792758s] [30.000773778s] END
E0309 20:00:32.406206       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E0309 20:00:32.406339       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0309 20:00:32.406342       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E0309 20:00:32.408094       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
E0309 20:00:32.415692       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
E0309 20:00:32.415818       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0309 20:00:32.415862       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.415946       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E0309 20:00:32.416029       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E0309 20:00:32.416609       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E0309 20:00:32.416684       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
E0309 20:00:32.420305       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0309 20:00:32.440196       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E0309 20:00:32.440403       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0309 20:00:32.448018       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E0309 20:00:32.448376       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E0309 20:00:33.395988       1 storage_rbac.go:175] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I0309 20:00:43.455564       1 trace.go:76] Trace[375160879]: "Create /api/v1/nodes" (started: 2018-03-09 20:00:13.454506587 +0000 UTC m=+44.009475397) (total time: 30.001008377s):
Trace[375160879]: [30.001008377s] [30.000778516s] END

====================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e

I0309 19:51:35.248083       1 controllermanager.go:108] Version: v1.9.3
I0309 19:51:35.257251       1 leaderelection.go:174] attempting to acquire leader lease...
E0309 19:51:38.310839       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:41.766358       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:46.025824       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:49.622916       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:52.675648       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:55.697734       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:51:59.348765       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:01.508487       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:03.886473       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:06.120356       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:08.844772       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:12.083789       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:16.038882       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:18.555388       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:21.471034       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:24.236724       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:27.363968       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:30.045776       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:32.751626       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:36.383923       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:38.910958       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:41.400748       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:44.268909       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:47.640891       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:51.713420       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:54.419154       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:52:57.134430       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:00.942903       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:03.440586       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:07.518362       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:53:12.968927       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:16.228760       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:18.299005       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:20.681915       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:24.141874       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:28.484775       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:30.678092       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:34.107654       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:36.251647       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:39.914756       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:42.641017       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:45.058876       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:48.359511       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:51.667554       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:54.338101       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:53:57.357894       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:00.633504       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:03.244353       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:05.923510       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:09.817627       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:12.688349       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:16.803954       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:19.519269       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:23.668226       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:25.903217       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:30.248639       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:32.428029       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:34.962675       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:38.598370       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:41.179039       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:43.927574       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:54:48.190961       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:51.974141       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:55.898687       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:54:59.653210       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:02.094737       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:05.125275       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:09.280324       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:12.920886       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:17.272605       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:21.488182       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:23.708198       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:26.893696       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:31.121014       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:35.414628       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:38.252001       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:41.912479       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:45.621133       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:48.976244       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:52.537317       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:55.863737       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:55:59.682009       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:02.653432       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:04.968939       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:09.336478       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:13.488850       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:16.262967       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:56:22.685928       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:26.235497       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:28.442915       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:32.051827       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:35.547277       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:38.437120       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:41.007877       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:44.295081       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:46.746424       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:49.321870       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:52.831866       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:55.138333       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:56:57.815491       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:00.802112       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:03.848363       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:07.350593       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:10.672982       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:14.171660       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:17.923995       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:21.919624       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:23.923165       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:27.692006       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:30.654447       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:33.851703       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:57:37.302382       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:40.286552       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:42.358940       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:44.364982       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:46.372569       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:50.571683       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:53.988093       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:57:57.648006       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated every 2–4 s from 19:58:01 through 19:59:09 ...]
E0309 19:59:12.383249       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0309 19:59:15.891164       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
[... same "connection refused" error repeated through 19:59:23 ...]
E0309 19:59:26.308481       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.25.123.11:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:59:32.440045       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
[... same "forbidden" error repeated every 2–4 s through 20:00:39 ...]
E0309 20:00:42.152164       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"

======================================================================

LOG OUTPUT FOR gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675

W0309 19:51:34.800737       1 server.go:159] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0309 19:51:34.812848       1 server.go:551] Version: v1.9.3
I0309 19:51:34.817093       1 server.go:570] starting healthz server on 127.0.0.1:10251
E0309 19:51:34.818028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: Get https://10.25.123.11:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818279       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.25.123.11:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818346       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: Get https://10.25.123.11:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.818408       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.25.123.11:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819028       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.25.123.11:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.819386       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: Get https://10.25.123.11:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820217       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.25.123.11:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.820659       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.25.123.11:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:34.821783       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.25.123.11:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.25.123.11:6443: getsockopt: connection refused
E0309 19:51:38.320455       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0309 19:51:38.329101       1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0309 19:51:38.329733       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0309 19:51:38.332670       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0309 19:51:38.332707       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0309 19:51:38.332734       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0309 19:51:38.334248       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0309 19:51:38.334568       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0309 19:51:38.334594       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
[... the same nine "forbidden" errors (PodDisruptionBudget, Pod, ReplicationController, ReplicaSet, Node, PersistentVolumeClaim, PersistentVolume, StatefulSet, Service) repeat roughly once per second through 19:51:45 ...]
E0309 19:51:45.348176       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope

=====================================================================

LOG OUTPUT FOR gcr.io/google_containers/pause-amd64:3.0

=====================================================================

+1
Update:
After digging into everything I could (I'm fairly new to k8s), I finally discovered that kubectl describe pod -n kube-system kube-dns-<sha> showed the virtual server I was installing on had only 1 CPU, and kube-dns was not starting due to insufficient CPU. Interestingly, kubectl logs pod -n kube-system kube-dns-<sha> did not show that information.

It worked after a reinstall of the operating system (since rebooting after installing kubeadm caused the k8s master to fail to initialize correctly).
(sorry for forgetting to capture the output)

+1

I had the same problem; I cancelled, ran reset, then ran the same init as before but with --apiserver-advertise-address=<my_host_public_ip_address> - and it worked.

https://github.com/kubernetes/kubernetes/issues/59680#issuecomment-364646304
disabling selinux helped me.

downgrading to 1.8.10 fixed the problem for me.

+1

+1

Same problem here with v1.9.3 on Ubuntu 16.04 (no selinux)

+1 same problem

Same problem with v1.10 on Ubuntu 16.04 on arm64.

same problem with v1.10 on Ubuntu 16.04 on arm64 (no selinux)

Check the number of CPUs on the hardware you are installing on - at least 2 are required on the master, as I wrote above a little over 3 weeks ago.

@bbruun The hardware used is https://www.pine64.org/?page_id=1491, so 4 cores, and they are correctly detected as such. The hardware should not be the problem, then. Thanks for the tip anyway. Maybe @qxing3 is not using the same hardware...

@farfeduc That was the obstacle I hit: several attempts in a row, reinstalling my virtual machine each time to test the installation and get to know k8s. Getting usable logs out of the system is a hassle, and I tried to get them from everywhere I could until I finally got a message about insufficient CPU. I have now bought 3 Udoo x86 Ultras to run a small cluster to play with at home, alongside work, where we use somewhat larger instances :-)

@bbruun I configured 2 CPUs for my virtual machine, thanks for the tip anyway.

/assign @liztio

+1

+1 v1.10.0

+1 v1.10.0 and 1.10.1

+1

Interestingly, I am seeing deltas depending on where I deploy. I hope to find time to explore further, but so far this is what I know. If I use my Mac/VMware Fusion and run CentOS 7 VMs, I can use kubeadm 1.8 with complete success. I have never gotten v1.9 or v1.10 to work locally. However, using CentOS 7 images on Digital Ocean, I can run v1.8.x, v1.10.0, and v1.10.1 successfully; v1.9 seems to "just hang" for some reason. So now it is a matter of digging through the fine deltas between the two environments to figure out what flips the switch. I know the kernel/patch levels match, as do the Docker engines, etc. DO installs cloud-init stuff, my local VMs do not, and so on. It is not trivial to figure out what is different. I even went as far as matching the disk sizes (thinking a small disk might be masking some error somewhere, but I ruled that out too).

In all cases, pulling the images always worked; it is just getting the API service to respond and not keep recycling every couple of minutes when it fails.

Regards,

You can refer to https://docs.docker.com/config/daemon/systemd/#httphttps-proxy to set the proxy for the docker daemon.

+1 Running HypriotOS on a Raspberry Pi 3

I managed to get it working by installing v1.9.6 instead of the latest version.
So it works fine with v1.9.6 but fails with v1.10.0 and v1.10.1 on Ubuntu 16.04 on arm64 on sopine boards.

I have the same problem on a Raspberry Pi 3 with HypriotOS. Downgrading to 1.9.7-00 also worked for me.

+1, kubeadm v1.10.1, raspberry pi 3b, HypriotOS

In my case, I discovered that the etcd container was starting and exiting with an error, and that was causing kubeadm init to hang and eventually time out.

To check whether this is biting you, run docker ps -a and check the status of the etcd container. If it is not running, check the etcd container's logs ( docker logs <container-id> ) and see whether it is complaining about being unable to bind to an address. See this issue report: https://github.com/kubernetes/kubernetes/issues/57709

The issue I just mentioned has a workaround, but make sure that is what you are hitting first.
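The check described above can be scripted; a minimal sketch, where the `name=etcd` filter and `--tail 20` are illustrative choices rather than anything from the original report, and the whole thing is guarded so it is a no-op on hosts without docker:

```shell
# Find the etcd static-pod container and report its state; if present,
# scan the tail of its logs for "bind" complaints like kubernetes#57709.
if command -v docker >/dev/null 2>&1; then
  cid=$(docker ps -a --filter name=etcd --format '{{.ID}}' 2>/dev/null | head -n1)
  if [ -n "$cid" ]; then
    docker inspect --format 'etcd container state: {{.State.Status}}' "$cid"
    docker logs --tail 20 "$cid" 2>&1 | grep -i bind || true
  else
    echo "no etcd container found"
  fi
  checked=yes
else
  echo "docker not available on this host"
  checked=no
fi
```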

Make sure your firewall is allowing inbound traffic on 6443.

For example, if you are using Ubuntu, run ufw status to see whether it is enabled.

Then ufw allow 6443 to open the port.
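The two commands above can be combined into a small guarded sketch that does nothing on hosts without ufw (the allow rule needs root):

```shell
# Check ufw and open the Kubernetes API server port 6443 if possible.
if command -v ufw >/dev/null 2>&1; then
  ufw status || true          # is the firewall active?
  ufw allow 6443/tcp || true  # open the API server port (needs root)
  result="ufw handled"
else
  result="ufw not installed; nothing blocking 6443 via ufw"
fi
echo "$result"
```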

is it possible to list the images below and pull them manually through a proxy, then run kubeadm init again?
will that work?
because we are in China, you know, the GFW.
And I am new to k8s, stuck here while setting it up on centos7.

@tanch2n thanks a lot. I will try it.

I added a proxy to docker, using this ; the images all appear to have been downloaded already, but it still hangs at "[init] This might take a minute or longer if the control plane images have to be pulled.".

the automatically pulled images are listed below.

k8s.gcr.io/kube-apiserver-amd64 v1.10.2 e774f647e259 2 weeks ago 225 MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.2 0dcb3dea0db1 2 weeks ago 50.4 MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.2 f3fcd0775c4e 2 weeks ago 148 MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 2 months ago 193 MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 4 months ago 742 kB
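One common workaround behind the GFW is to pull the same tags from a reachable mirror and retag them as k8s.gcr.io images so kubeadm finds them in the local cache. A sketch, printed as a dry run so nothing is actually pulled; `registry.example.com/google_containers` is a placeholder for a mirror you can actually reach, and the tag list just mirrors the images above:

```shell
# Dry-run sketch: remove the echos to actually pull and retag.
MIRROR="registry.example.com/google_containers"  # hypothetical mirror
for img in kube-apiserver-amd64:v1.10.2 kube-scheduler-amd64:v1.10.2 \
           kube-controller-manager-amd64:v1.10.2 etcd-amd64:3.1.12 \
           pause-amd64:3.1; do
  echo "docker pull ${MIRROR}/${img}"
  echo "docker tag ${MIRROR}/${img} k8s.gcr.io/${img}"
done
```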

I spent a lot of time trying to figure this out. I disabled ufw, turned off selinux, made sure IP forwarding was enabled, and that /proc/sys/net/bridge/bridge-nf-call-iptables was set to 1. Nothing seemed to fix it.

Finally, I decided to downgrade and then upgrade.

sudo apt-get -y --allow-downgrades install kubectl=1.5.3-00 kubelet=1.5.3-00 kubernetes-cni=0.3.0.1-07a8a2-00 and

curl -Lo /tmp/old-kubeadm.deb https://apt.k8s.io/pool/kubeadm_1.6.0-alpha.0.2074-a092d8e0f95f52-00_amd64_0206dba536f698b5777c7d210444a8ace18f48e045ab78687327631c6c694f42.deb

to downgrade from 1.10, and then just

sudo apt-get -y install kubectl kubelet kubernetes-cni kubeadm

Etcd was restarting and the API server was timing out. After a while the api-server would restart, complaining about being unable to connect. Is there a way to enable DEBUG-level logging? Not sure what causes it, but it is working now. I would definitely like to reproduce this and troubleshoot it.

I found the reason I got stuck.
I am running it in vmware and allocated 1G of RAM; k8s needs at least 2G of RAM.
Is there any chance of adding a notification about this?
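Until kubeadm warns about this itself, the two limits reported in this thread (2 CPUs and about 2 GB of RAM on the master) can be checked up front. A minimal sketch for Linux hosts, with the thresholds taken from the comments above:

```shell
# Warn if the machine is below the minimums reported in this thread.
cpus=$(nproc)
mem_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)
[ "$cpus" -ge 2 ] || echo "WARN: masters need >= 2 CPUs, found $cpus"
[ "$mem_mb" -ge 2048 ] || echo "WARN: k8s needs >= 2 GB RAM, found ${mem_mb} MB"
echo "checked: ${cpus} CPU(s), ${mem_mb} MB RAM"
```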

+1 kubeadm 1.10.2 on CentOS 7
4 GB RAM, 2 CPUs

+1 kubeadm 1.10.1 on Debian Stretch (go1.9.3) in a Hyper-V VM with 6 GB RAM and 1 vCPU...

it worked fine in the past, as I have regenerated the cluster many times...

Tried switching to 2 vCPUs in Hyper-V; nothing changes.

+1!

+1. kubeadm 1.10.1, Debian Stretch. Worked before.

We found that with docker 1.13.1 on CentOS 7 we saw problems with the storage driver. The Docker logs showed 'readlink /var/lib/docker/overlay2/l: invalid argument'. Moving to docker 18.03.1-ce seems to fix this, and kubeadm init no longer hangs.
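To see whether the storage-driver problem above applies, checking which driver docker reports is a quick first step; a sketch, guarded for hosts without docker or with the daemon stopped:

```shell
# Report the docker storage driver ('overlay2' with docker 1.13.x on
# CentOS 7 is the combination reported as problematic above).
if command -v docker >/dev/null 2>&1; then
  driver=$(docker info --format '{{.Driver}}' 2>/dev/null)
  driver=${driver:-unavailable}   # daemon may not be running
  echo "docker storage driver: $driver"
  # also worth scanning the daemon logs for the telltale error, e.g.:
  #   journalctl -u docker | grep 'readlink /var/lib/docker/overlay2'
else
  driver="docker-missing"
  echo "docker not installed"
fi
```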

I had the same problem. It turned out that etcd picked up the linux machine's hostname (somedomain.example.com), looked it up on a DNS server, got an answer for the wildcard domain (*.example.com), and tried to bind to the returned IP address instead of the apiserver-advertise-address.
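To see whether this hostname/wildcard-DNS trap applies, check what the node's hostname resolves to; if that is not the address you pass via --apiserver-advertise-address, pinning the name in /etc/hosts is one workaround. A sketch; the 192.168.0.100 address in the comment is only an example:

```shell
# Show what the local hostname resolves to; on a wildcard-DNS network
# this may return an unexpected (non-local) address.
host=$(hostname)
echo "hostname: $host"
getent hosts "$host" || echo "no resolver entry for $host"
# Workaround sketch (run as root): pin the name to the intended IP, e.g.
#   echo "192.168.0.100 $host" >> /etc/hosts
```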

There have been a number of fixes for pre-pulling, as well as pivot timeout detection; closing this issue.

+1

I tried the standard way, letting kubeadm pull the images down; I tried several times, then pulled the images myself, reset, tried ignoring the errors - it always fails.

pi@master-node-001:~ $ sudo kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
pi@master-node-001:~ $ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.12.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.12.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.2
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master-node-001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master-node-001 localhost] and IPs [192.168.0.100 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-node-001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
pi@master-node-001:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-controller-manager v1.12.2 4bc6cae738d8 7 days ago 146 MB
k8s.gcr.io/kube-apiserver v1.12.2 8bfe044a05e1 7 days ago 177 MB
k8s.gcr.io/kube-scheduler v1.12.2 3abf5566fec1 7 days ago 52 MB
k8s.gcr.io/kube-proxy v1.12.2 328ef67ca54f 7 days ago 84.5 MB
k8s.gcr.io/kube-proxy v1.12.1 8c06fbe56458 3 weeks ago 84.7 MB
k8s.gcr.io/kube-controller-manager v1.12.1 5de943380295 3 weeks ago 146 MB
k8s.gcr.io/kube-scheduler v1.12.1 1fbc2e4cd378 3 weeks ago 52 MB
k8s.gcr.io/kube-apiserver v1.12.1 ab216fe6acf6 3 weeks ago 177 MB
k8s.gcr.io/etcd 3.2.24 e7a8884c8443 5 weeks ago 222 MB
k8s.gcr.io/coredns 1.2.2 ab0805b0de94 2 months ago 33.4 MB
k8s.gcr.io/kube-scheduler v1.11.0 0e4a34a3b0e6 4 months ago 56.8 MB
k8s.gcr.io/kube-controller-manager v1.11.0 55b70b420785 4 months ago 155 MB
k8s.gcr.io/etcd 3.2.18 b8df3b177be2 6 months ago 219 MB
k8s.gcr.io/pause 3.1 e11a8cbeda86 10 months ago 374 kB
pi@master-node-001:~ $ h | grep kubectl
-bash: h: command not found
pi@master-node-001:~ $ history | grep kubectl
9 kubectl pods list
10 kubectl list pods
11 kubectl --help
12 kubectl get pods -o wide
14 kubectl get pods -o wide
32 h | grep kubectl
33 history | grep kubectl
pi@master-node-001:~ $ !12
kubectl get pods -o wide
Unable to connect to the server: net/http: TLS handshake timeout
pi@master-node-001:~ $ history | grep pause
17 docker ps -a | grep kube | grep -v pause
35 history | grep pause
pi@master-node-001:~ $ !17
docker ps -a | grep kube | grep -v pause
41623613679e 8bfe044a05e1 "kube-apiserver --au…" 29 seconds ago Up 14 seconds k8s_kube-apiserver_kube-apiserver-master-node-001_kube-system_1ec53f8ef96c76af95c78c809252f05c_3
0870760b9ea0 8bfe044a05e1 "kube-apiserver --au…" 2 minutes ago Exited (0) 33 seconds ago k8s_kube-apiserver_kube-apiserver-master-node-001_kube-system_1ec53f8ef96c76af95c78c809252f05c_2
c60d65fab8a7 3abf5566fec1 "kube-scheduler --ad…" 6 minutes ago Up 5 minutes k8s_kube-scheduler_kube-scheduler-master-node-001_kube-system_ee7b1077c61516320f4273309e9b4690_0
26c58f6c68e9 e7a8884c8443 "etcd --advertise-cl…" 6 minutes ago Up 5 minutes k8s_etcd_etcd-master-node-001_kube-system_d01dcc7fc79b875a52f01e26432e6745_0
65546081ca77 4bc6cae738d8 "kube-controller-man…" 6 minutes ago Up 5 minutes k8s_kube-controller-manager_kube-controller-manager-master-node-001_kube-system_07e19bc9fea1626927c12244604bdb2f_0
pi@master-node-001:~ $ kubectl get pods -o wide
^C
pi@master-node-001:~ $ sudo reboot
Connection to 192.168.0.100 closed by remote host.
Connection to 192.168.0.100 closed.
karl@karl-PL62-7RC:~$ ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1015ms

karl@karl-PL62-7RC:~$ ssh [email protected]
ssh_exchange_identification: read: Connection reset by peer
karl@karl-PL62-7RC:~$ ssh [email protected]
[email protected]'s password:
Linux master-node-001 4.14.71-v7+ #1145 SMP Fri Sep 21 15:38:35 BST 2018 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Oct 31 21:36:13 2018
pi@master-node-001:~ $ kubectl get pods -o wide
The connection to the server 192.168.0.100:6443 was refused - did you specify the right host or port?
pi@master-node-001:~ $ sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[WARNING Port-10250]: Port 10250 is in use
[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

13 sudo kubeadm init --token-ttl=0
14 kubectl get pods -o wide
15 sudo kubeadm reset
16 sudo kubeadm init --token-ttl=0
17 docker ps -a | grep kube | grep -v pause
18 kubeadm config images pull --kubernetes-version=v1.11.0
19 sudo kubeadm reset
20 history > notes.txt
21 more notes.txt
22 sudo reboot
23 kubeadm config images list
24 kubeadm config images pull --kubernetes-version=v1.11.0
25 sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
26 kubeadm config images pull
27 sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
28 kubeadm config images pull
29 sudo kubeadm reset
30 kubeadm config images pull
31 sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
32 docker images
33 h | grep kubectl
34 history | grep kubectl
35 kubectl get pods -o wide
36 history | grep pause
37 docker ps -a | grep kube | grep -v pause
38 kubectl get pods -o wide
39 sudo reboot
40 kubectl get pods -o wide
41 sudo kubeadm init --token-ttl=0 --ignore-preflight-errors=all
