error marking master: timed out waiting for the condition
https://github.com/kubernetes/kubeadm/issues/1092
https://github.com/kubernetes/kubeadm/issues/937
https://github.com/kubernetes/kubeadm/issues/1087
https://github.com/kubernetes/kubeadm/issues/715
https://github.com/kubernetes/kubernetes/issues/45727
/kind bug
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
kubectl version --kubeconfig=/etc/kubernetes/admin.conf
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Ubuntu Xenial on bare metal
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Kernel (uname -a):
Linux testymaster1 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
kubeadm init --config /etc/kubernetes/kubeadmcfg.yaml
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [testymaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.238 10.0.23.238 127.0.0.1 10.0.23.241 10.0.23.242 10.0.23.243 10.0.23.238 10.0.23.239 10.0.23.240 10.0.23.244 10.0.23.245 10.0.23.246]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.506258 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node testymaster1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node testymaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
I expected the master to initialize without problems.
https://gist.github.com/joshuacox/4505fbeceb2e394900a24c3cae14131c
run the above like so:
bash etcd-test6.sh 10.0.0.6 10.0.0.7 10.0.0.8
at this point you should have a healthy etcd cluster running on three hosts
then, on a separate host (10.0.0.9), run the steps detailed here:
https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd
with this configuration:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.0.6'
- '10.0.0.7'
- '10.0.0.8'
- '10.0.0.9'
controlPlaneEndpoint: "10.0.0.9"
etcd:
  external:
    endpoints:
    - https://10.0.0.6:2379
    - https://10.0.0.7:2379
    - https://10.0.0.8:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
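Note that the caFile/certFile/keyFile paths above assume the etcd CA and the apiserver-etcd client pair were already copied from the first etcd node onto this host, as the guide linked above has you do before kubeadm init; a rough sketch of that copy step (assuming root ssh access between the hosts):
# copy the etcd PKI from the first etcd node (10.0.0.6) to the master (10.0.0.9)
ssh root@10.0.0.9 "mkdir -p /etc/kubernetes/pki/etcd"
scp -3 root@10.0.0.6:/etc/kubernetes/pki/etcd/ca.crt root@10.0.0.9:/etc/kubernetes/pki/etcd/ca.crt
scp -3 root@10.0.0.6:/etc/kubernetes/pki/apiserver-etcd-client.crt root@10.0.0.9:/etc/kubernetes/pki/apiserver-etcd-client.crt
scp -3 root@10.0.0.6:/etc/kubernetes/pki/apiserver-etcd-client.key root@10.0.0.9:/etc/kubernetes/pki/apiserver-etcd-client.key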
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cbc9036b0675 51a9c329b7c5 "kube-apiserver --..." 23 minutes ago Up 23 minutes k8s_kube-apiserver_kube-apiserver-testymaster1_kube-system_c55b3dd53dd51e69d2acd3a6aa486e32_0
aeebe73a2c98 d6d57c76136c "kube-scheduler --..." 23 minutes ago Up 23 minutes k8s_kube-scheduler_kube-scheduler-testymaster1_kube-system_ee7b1077c61516320f4273309e9b4690_0
58fc131c3b50 15548c720a70 "kube-controller-m..." 23 minutes ago Up 23 minutes k8s_kube-controller-manager_kube-controller-manager-testymaster1_kube-system_690790d9ba49d9118c24c004854af4db_0
4f628d299b8e k8s.gcr.io/pause:3.1 "/pause" 23 minutes ago Up 23 minutes k8s_POD_kube-scheduler-testymaster1_kube-system_ee7b1077c61516320f4273309e9b4690_0
2fe08cdd58c9 k8s.gcr.io/pause:3.1 "/pause" 23 minutes ago Up 23 minutes k8s_POD_kube-controller-manager-testymaster1_kube-system_690790d9ba49d9118c24c004854af4db_0
85638811980c k8s.gcr.io/pause:3.1 "/pause" 23 minutes ago Up 23 minutes k8s_POD_kube-apiserver-testymaster1_kube-system_c55b3dd53dd51e69d2acd3a6aa486e32_0
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 20-etcd-service-manager.conf
Active: active (running) since Sun 2018-11-11 20:09:14 UTC; 19min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 4731 (kubelet)
Tasks: 60
Memory: 40.2M
CPU: 59.614s
CGroup: /system.slice/kubelet.service
└─4731 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Nov 11 20:27:54 testymaster1 kubelet[4731]: I1111 20:27:54.434903 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:27:58 testymaster1 kubelet[4731]: I1111 20:27:58.434922 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:00 testymaster1 kubelet[4731]: I1111 20:28:00.737709 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:10 testymaster1 kubelet[4731]: I1111 20:28:10.788482 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:18 testymaster1 kubelet[4731]: I1111 20:28:18.434933 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:20 testymaster1 kubelet[4731]: I1111 20:28:20.828593 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:30 testymaster1 kubelet[4731]: I1111 20:28:30.877710 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:40 testymaster1 kubelet[4731]: I1111 20:28:40.924675 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:50 testymaster1 kubelet[4731]: I1111 20:28:50.974638 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:29:01 testymaster1 kubelet[4731]: I1111 20:29:01.024980 4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
journalctl -xeu kubelet
https://gist.github.com/joshuacox/3c0b4aa2b66d1172067a32e6e064f948
docker logs cbc9036b0675
the kube-apiserver container logs:
https://gist.github.com/joshuacox/ab29412c1653e2b1fd2fa06cdd0ae2e2
/assign @timothysc
/assign @rdodev @liztio
I fixed this issue by disabling etcd TLS.
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "10.20.0.13" # node 2 ip addr
- "10.20.0.14" # node 3 ip addr
controlPlaneEndpoint: "lb.xxx.yyy:6443"
etcd:
  external:
    endpoints:
    - http://10.20.0.11:2379
    - http://10.20.0.13:2379
    - http://10.20.0.14:2379
    #caFile: /etc/kubernetes/pki/etcd/ca.crt
    #certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    #keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  # This CIDR is a calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
docker 18.06.1-ce
k8s v1.12.2
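Worth noting that this workaround only applies if the etcd members themselves serve plain HTTP on 2379 (and it gives up etcd transport security). Roughly, the relevant client-URL flags on each etcd member would look like this; a sketch for one member only, with the address assumed from the endpoints above:
# sketch: run etcd without client TLS so the http:// endpoints above work
etcd --name etcd-0 \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://10.20.0.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.20.0.11:2379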
@joshuacox one thing to try:
controlPlaneEndpoint: "10.0.0.9:PORT"
Or, alternatively, if you're on 1.12, try InitConfig + ClusterConfig? For example:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: PUBLICIP
  bindPort: PORT
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: HOSTNAME
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
# all the relevant SAN hosts here
certificatesDir: /etc/kubernetes/pki
clusterName: CLUSTER_NAME
controlPlaneEndpoint: ""
etcd:
  ##etcd config here
kubernetesVersion: KUBE_VERSION
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
@rdodev which port? 6443?
And for InitConfig + ClusterConfig, does everything go in one file, e.g. /etc/kubernetes/kubeadmcfg.yaml? And does that go on the masters or on the etcd hosts? Or maybe just the initial master?
Hey @joshuacox
Yes, first, to triage with minimal changes, just add :6443 to that config parameter and run kubeadm init.
If that still doesn't work, then yes, take that snippet, substitute the pertinent variables into a kubeadm-config.yaml, and run kubeadm init --config pointing at it.
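i.e. against the config posted above, the minimal change would be something like this (assuming the default apiserver port, as confirmed above):
controlPlaneEndpoint: "10.0.0.9:6443"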
though I haven't duplicated the whole bare-metal run yet, I can quickly provision a fresh cluster on KVM hosts.
cat /etc/kubernetes/kubeadmcfg.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
    endpoints:
    - https://10.0.23.214:2379
    - https://10.0.23.219:2379
    - https://10.0.23.215:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
kubeadm init --config /etc/kubernetes/kubeadmcfg.yaml
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 21.507034 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node extetcdmaster1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node extetcdmaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
I'll try with the init_cluster config next.
same results with the init_cluster configuration
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 10.0.23.210
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: extetcdetcd1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
    endpoints:
    - https://10.0.23.214:2379
    - https://10.0.23.219:2379
    - https://10.0.23.215:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[WARNING Hostname]: hostname "extetcdetcd1" could not be reached
[WARNING Hostname]: hostname "extetcdetcd1" lookup extetcdetcd1 on 10.0.23.1:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdetcd1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.006916 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node extetcdetcd1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node extetcdetcd1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
@joshuacox however, the warning reveals
[WARNING Hostname]: hostname "extetcdetcd1" could not be reached
[WARNING Hostname]: hostname "extetcdetcd1" lookup extetcdetcd1 on 10.0.23.1:53: no such host
Which would explain the inability to initialize the master.
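A quick way to check that angle (a sketch, using the admin kubeconfig kubeadm has already written at this point): the [markmaster] step patches the Node object whose name matches the node name kubeadm is using, so if the kubelet registered under a different name, or never registered at all, the step times out.
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
journalctl -u kubelet | grep -i "register node"   # which name is the kubelet actually registering with? (grep pattern is a guess; adjust as needed)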
@rdodev that's new with the init cluster configuration, but it might explain some of the successful clusters in the past. i.e. my Google Fiber router eventually learns the VM names and will return DNS answers if the VMs exist long enough for whatever event it is that triggers the router to learn the name for that particular MAC address. Spinning up a fresh cluster exposes this problem. I was under the impression that Kubernetes had its own internal DNS.
@joshuacox to clarify: so the master is on your home network and the etcd servers are somewhere else? Maybe I misunderstood the scenario.
@rdodev they are all VMs on my home network and can communicate with each other just fine, still waiting on the Google router to learn the hostnames. I guess I need to set up an internal DNS server, or assign them publicly available hostnames that resolve to internal addresses. But that seems like overkill for just a test cluster.
@joshuacox instead of DNS for the etcd cluster, can you just use IPs?
@rdodev I'm not sure where that gets defined. Is it the name: line in the init cluster config?
@rdodev that was a mistake, it should in fact not have been extetcdetcd1 but rather extetcdmaster1; correcting that still leads to the same failure at the taint step:
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.005968 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node extetcdmaster1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node extetcdmaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
and the corrected configuration:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 10.0.23.210
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: extetcdmaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
    endpoints:
    - https://10.0.23.214:2379
    - https://10.0.23.219:2379
    - https://10.0.23.215:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
@joshuacox it's not clear to me from your original post, did you set up the external etcd cluster using these instructions? https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/
yes, I converted them into a single script:
https://gist.github.com/joshuacox/4505fbeceb2e394900a24c3cae14131c
additionally, I'm integrating them into kubash, of which I have a branch here
Both let me repeat the whole procedure very quickly with something like:
kubash yaml2cluster -n testy ~/.kubash/examples/testy-cluster.yaml && kubash -n testy -y provision && kubash -n testy --verbosity=105 etcd_ext
or, instead of the last step, using the smaller bash script:
tar zcf - scripts/etcd-test.sh| ssh [email protected] 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.0.6 10.0.0.7 10.0.0.8'
or, for even less typing:
scripts/tester extetcd
will tear down the extetcd cluster
and build it from scratch and run the extetcd method.
@joshuacox Thanks very much for all the setup info. Let me dig into this / repro and I'll get back to you.
@joshuacox are you on the K8s Slack? It might be easier for quick back-and-forth.
Not sure if this is helpful or related, but I hit this same issue using Kubernetes on Clear Linux with VMs created using virt-manager. The problem was that the hostname was not resolving.
nslookup myhostname
would not resolve.
Adding the hosts to /etc/hosts and making sure nsswitch.conf uses it did not help.
The DNS server (dnsmasq) that handles the VMs had to provide the resolution. Once name resolution worked correctly, by making sure the upstream DNS server resolved the hostnames, things started working.
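For anyone triaging the same thing, a minimal pre-check of the resolution path the kubelet will use (a sketch; run on the node itself):
hostname
getent hosts "$(hostname)"   # resolution via nsswitch (/etc/hosts first, then DNS)
nslookup "$(hostname)"       # the DNS-only path, which is what mattered in the dnsmasq case above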
@mcastelino not entirely unrelated, especially given the hostname discussion above. It's worth noting that in my situation I'm using a bridged network, and it's the router that provides resolution in my home setup, not KVM/libvirt/virt-manager's dnsmasq.
just to make sure all the certificates work and the network is good, I ran the docker test command from the primary master that failed at the marking step
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.214:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.215:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.219:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
does this look like a permission problem? here are the logs from a scheduler container running on a master instance after it fails to mark itself as master:
E1118 15:24:08.851240 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1118 15:24:08.852972 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:178: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1118 15:24:08.853795 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1118 15:24:08.855062 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1118 15:24:09.847470 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1118 15:24:09.848437 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1118 15:24:09.849649 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1118 15:24:09.850702 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1118 15:24:09.851748 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1118 15:24:09.852623 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1118 15:24:09.854439 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1118 15:24:09.855419 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:178: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1118 15:24:09.856799 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1118 15:24:09.857736 1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I1118 15:24:11.719441 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I1118 15:24:11.819795 1 controller_utils.go:1034] Caches are synced for scheduler controller
I1118 15:24:11.819966 1 leaderelection.go:187] attempting to acquire leader lease kube-sys
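Those "forbidden" messages are normally just the scheduler starting before the apiserver's bootstrap RBAC roles exist, and the last lines show leader election proceeding, so they are probably not the root cause. A hedged way to confirm the permissions settled (assuming admin.conf is present on that master):
kubectl --kubeconfig=/etc/kubernetes/admin.conf get clusterrolebinding system:kube-scheduler
kubectl --kubeconfig=/etc/kubernetes/admin.conf auth can-i list nodes --as=system:kube-scheduler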
finally have a successful method here:
prepare the etcd nodes by running this script on the primary etcd node:
https://gist.github.com/joshuacox/9df2a029b04e63443b62c2824cf5fb95
tar zcf - scripts/etcd-test.sh| ssh [email protected] 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.23.218 10.0.23.219 10.0.23.220';
and then, to initialize a master, this script can be run on any host that has keyed ssh access to the master and the primary etcd node
https://gist.github.com/joshuacox/f0f0b25e51df5638f3778d80d4af8c63
bash scripts/final_master.sh 10.0.23.215 10.0.23.218
EDIT: leaving this open while I do some testing to make sure it isn't anomalous
I've repeated this a few times now on bare metal and in VMs
we have plans to improve the way etcd is handled and how an HA setup is created, removing some of the manual steps. This is on the roadmap for future releases.
finally have a successful method here:
prepare the etcd nodes by running this script on the primary etcd node:
https://gist.github.com/joshuacox/9df2a029b04e63443b62c2824cf5fb95
tar zcf - scripts/etcd-test.sh| ssh [email protected] 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.23.218 10.0.23.219 10.0.23.220';
and then, to initialize a master, this script can be run on any host that has keyed ssh access to the master and the primary etcd node
https://gist.github.com/joshuacox/f0f0b25e51df5638f3778d80d4af8c63
bash scripts/final_master.sh 10.0.23.215 10.0.23.218
EDIT: leaving this open while I do some testing to make sure it isn't anomalous
I'm having a similar problem initializing a cluster with kubeadm. Can you explain further how you resolved it? All the other tickets related to this problem have been closed and point to this one.
With a working external etcd cluster, my kubeadm configuration is as follows:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- 127.0.0.1
- kubernetes.default
- kubernetes.default.svc.cluster.local
- kubeapi-lb.example.com
controlPlaneEndpoint: "kubeapi-lb.example.com:6443"
etcd:
  external:
    endpoints:
    - https://10.9.2.60:2379
    - https://10.9.3.67:2379
    - https://10.9.2.33:2379
    caFile: /etcd/kubernetes/pki/etcd/ca.pem
    certFile: /etcd/kubernetes/pki/etcd/client.pem
    keyFile: /etcd/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: "10.100.0.1/24"
bootstrapTokens:
- groups:
  - "system:bootstrappers:kubeadm:default-node-token"
  token: "redacted"
  ttl: "0"
  usages:
  - signing
  - authentication
clusterName: "data-nva"
nodeRegistration:
  name: "kubemaster-01"
  criSocket: "/var/run/dockershim.sock"
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
failure
# /usr/bin/kubeadm init --config kubeadm-config.yaml
...
...
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.040906 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node $mynode as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node $mynode as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
environment details
# docker --version
Docker version 18.06.1-ce, build e68fc7a215d7133c34aa18e3b72b4a21fd0c6136
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
# cat /etc/*release*
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Amazon Linux release 2 (Karoo)
cpe:2.3:o:amazon:amazon_linux:2
@blieberman
me too!
You can use kubeadm init --config ..... -v265 to see some logs
Don't forget to test the master's connection to the etcd stack:
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.219:2379 cluster-health
here is the final script I used to provision etcd, master, and node:
https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca
or, in a more robust way, my complete script:
Hello, I'm hitting a similar problem with k8s version 1.13.4
k8s-c3-lb - 10.10.10.76
k8s-c3-e1 - 10.10.10.90
k8s-c3-e2 - 10.10.10.91
k8s-c3-e3 - 10.10.10.92
k8s-c3-m1 - 10.10.10.93
k8s-c3-m2 - 10.10.10.94
k8s-c3-m3 - 10.10.10.95
k8s-c3-w1 - 10.10.10.96
k8s-c3-w2 - 10.10.10.97
k8s-c3-w3 - 10.10.10.98
root@k8s-c3-m1:~# docker --version
Docker version 18.06.1-ce, build e68fc7a
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
root@k8s-c3-m1:~#
root@k8s-c3-lb:~# cat nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}
error_log /var/log/nginx/error.log info;
stream {
    upstream k8s-c3 {
        server 10.10.10.93:6443;
        server 10.10.10.94:6443;
        server 10.10.10.95:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-c3;
    }
}
root@k8s-c3-lb:~#
root@k8s-c3-e1:~# cat kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "10.10.10.90"
    peerCertSANs:
    - "10.10.10.90"
    extraArgs:
      initial-cluster: k8s-c3-e1=https://10.10.10.90:2380,k8s-c3-e2=https://10.10.10.91:2380,k8s-c3-e3=https://10.10.10.92:2380
      initial-cluster-state: new
      name: k8s-c3-e1
      listen-peer-urls: https://10.10.10.90:2380
      listen-client-urls: https://10.10.10.90:2379
      advertise-client-urls: https://10.10.10.90:2379
      initial-advertise-peer-urls: https://10.10.10.90:2380
root@k8s-c3-e1:~#
root@k8s-c3-m1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.10.10.90:2379 cluster-health
member 2855b88ffd64a219 is healthy: got healthy result from https://10.10.10.91:2379
member 54861c1657ba1b20 is healthy: got healthy result from https://10.10.10.92:2379
member 6fc6fbb1e152a287 is healthy: got healthy result from https://10.10.10.90:2379
cluster is healthy
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# cat /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "10.10.10.90"
  - "10.10.10.91"
  - "10.10.10.92"
  - "10.10.10.76"
controlPlaneEndpoint: "10.10.10.76:6443"
etcd:
  external:
    endpoints:
    - https://10.10.10.90:2379
    - https://10.10.10.91:2379
    - https://10.10.10.92:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
root@k8s-c3-m1:~#
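Given that the wait-control-plane step below probes https://10.10.10.76:6443/healthz through the nginx load balancer, one quick sanity check once the static pods are up is to compare the direct and load-balanced paths (a sketch, using the addresses from the config above):
# directly against the local apiserver on m1
curl -k https://10.10.10.93:6443/healthz
# through the nginx stream load balancer
curl -k https://10.10.10.76:6443/healthz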
root@k8s-c3-m1:~# kubeadm init --config /root/kubeadmcfg.yaml -v 256
I0304 14:52:28.103162 1391 initconfiguration.go:169] loading configuration from the given file
I0304 14:52:28.107089 1391 interface.go:384] Looking for default routes with IPv4 addresses
I0304 14:52:28.107141 1391 interface.go:389] Default route transits interface "eth0"
I0304 14:52:28.107440 1391 interface.go:196] Interface eth0 is up
I0304 14:52:28.107587 1391 interface.go:244] Interface "eth0" has 1 addresses :[10.10.10.93/24].
I0304 14:52:28.107695 1391 interface.go:211] Checking addr 10.10.10.93/24.
I0304 14:52:28.107724 1391 interface.go:218] IP found 10.10.10.93
I0304 14:52:28.107759 1391 interface.go:250] Found valid IPv4 address 10.10.10.93 for interface "eth0".
I0304 14:52:28.107791 1391 interface.go:395] Found active IP 10.10.10.93
I0304 14:52:28.107979 1391 version.go:163] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable.txt
I0304 14:52:29.493555 1391 feature_gate.go:206] feature gates: &{map[]}
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
I0304 14:52:29.494477 1391 checks.go:572] validating Kubernetes and kubeadm version
I0304 14:52:29.494609 1391 checks.go:171] validating if the firewall is enabled and active
I0304 14:52:29.506263 1391 checks.go:208] validating availability of port 6443
I0304 14:52:29.506767 1391 checks.go:208] validating availability of port 10251
I0304 14:52:29.507110 1391 checks.go:208] validating availability of port 10252
I0304 14:52:29.507454 1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0304 14:52:29.507728 1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0304 14:52:29.507959 1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0304 14:52:29.508140 1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0304 14:52:29.508316 1391 checks.go:430] validating if the connectivity type is via proxy or direct
I0304 14:52:29.508504 1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.508798 1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.509053 1391 checks.go:104] validating the container runtime
I0304 14:52:29.749661 1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:29.778962 1391 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0304 14:52:29.779324 1391 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0304 14:52:29.779573 1391 checks.go:644] validating whether swap is enabled or not
I0304 14:52:29.779818 1391 checks.go:373] validating the presence of executable ip
I0304 14:52:29.780044 1391 checks.go:373] validating the presence of executable iptables
I0304 14:52:29.780251 1391 checks.go:373] validating the presence of executable mount
I0304 14:52:29.780465 1391 checks.go:373] validating the presence of executable nsenter
I0304 14:52:29.780674 1391 checks.go:373] validating the presence of executable ebtables
I0304 14:52:29.780925 1391 checks.go:373] validating the presence of executable ethtool
I0304 14:52:29.781018 1391 checks.go:373] validating the presence of executable socat
I0304 14:52:29.781221 1391 checks.go:373] validating the presence of executable tc
I0304 14:52:29.781415 1391 checks.go:373] validating the presence of executable touch
I0304 14:52:29.781647 1391 checks.go:515] running all checks
I0304 14:52:29.838382 1391 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I0304 14:52:29.838876 1391 checks.go:613] validating kubelet version
I0304 14:52:29.983771 1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:30.011507 1391 checks.go:208] validating availability of port 10250
I0304 14:52:30.011951 1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/etcd/ca.crt
I0304 14:52:30.012301 1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.crt
I0304 14:52:30.012360 1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.key
I0304 14:52:30.012408 1391 checks.go:685] validating the external etcd version
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0304 14:52:30.238175 1391 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.4
I0304 14:52:30.378446 1391 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.4
I0304 14:52:30.560185 1391 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.4
I0304 14:52:30.745876 1391 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.4
I0304 14:52:30.930200 1391 checks.go:833] image exists: k8s.gcr.io/pause:3.1
I0304 14:52:31.096902 1391 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6
I0304 14:52:31.097108 1391 kubelet.go:71] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0304 14:52:31.256217 1391 kubelet.go:89] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0304 14:52:31.530165 1391 certs.go:113] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-c3-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.93 10.10.10.76 127.0.0.1 10.10.10.90 10.10.10.91 10.10.10.92 10.10.10.76]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Using existing etcd/ca keyless certificate authority
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0304 14:52:33.267470 1391 certs.go:113] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0304 14:52:33.995630 1391 certs.go:72] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0304 14:52:34.708619 1391 kubeconfig.go:92] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0304 14:52:35.249743 1391 kubeconfig.go:92] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0304 14:52:35.798270 1391 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0304 14:52:36.159920 1391 kubeconfig.go:92] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0304 14:52:36.689060 1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.701499 1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0304 14:52:36.701545 1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.703214 1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0304 14:52:36.703259 1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.704327 1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0304 14:52:36.704356 1391 etcd.go:97] [etcd] External etcd mode. Skipping the creation of a manifest for local etcd
I0304 14:52:36.704377 1391 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0304 14:52:36.705892 1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0304 14:52:36.707216 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:36.711008 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 3 milliseconds
I0304 14:52:36.711030 1391 round_trippers.go:444] Response Headers:
I0304 14:52:36.711077 1391 request.go:779] Got a Retry-After 1s response for attempt 1 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:37.711365 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:37.715841 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 4 milliseconds
I0304 14:52:37.715880 1391 round_trippers.go:444] Response Headers:
I0304 14:52:37.715930 1391 request.go:779] Got a Retry-After 1s response for attempt 2 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:38.716182 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:38.717826 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 1 milliseconds
I0304 14:52:38.717850 1391 round_trippers.go:444] Response Headers:
I0304 14:52:38.717897 1391 request.go:779] Got a Retry-After 1s response for attempt 3 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:39.718135 1391 round_trippers.go:419] curl -k -v -XGET -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:39.719946 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 1 milliseconds
I0304 14:52:39.719972 1391 round_trippers.go:444] Response Headers:
I0304 14:52:39.720022 1391 request.go:779] Got a Retry-After 1s response for attempt 4 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:40.720273 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:40.722069 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 1 milliseconds
I0304 14:52:40.722093 1391 round_trippers.go:444] Response Headers:
I0304 14:52:40.722136 1391 request.go:779] Got a Retry-After 1s response for attempt 5 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:41.722440 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:41.724033 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 1 milliseconds
I0304 14:52:41.724058 1391 round_trippers.go:444] Response Headers:
I0304 14:52:41.724103 1391 request.go:779] Got a Retry-After 1s response for attempt 6 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:42.724350 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:52.725613 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s in 10001 milliseconds
I0304 14:52:52.725683 1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.226097 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.720051 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 493 milliseconds
I0304 14:52:53.720090 1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.720103 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:53.720115 1391 round_trippers.go:447] X-Content-Type-Options: nosniff
I0304 14:52:53.720125 1391 round_trippers.go:447] Content-Length: 879
I0304 14:52:53.720135 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.720197 1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
I0304 14:52:53.726022 1391 round_trippers.go:419] curl -k -v -XGET -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.739616 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 13 milliseconds
I0304 14:52:53.739690 1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.739705 1391 round_trippers.go:447] X-Content-Type-Options: nosniff
I0304 14:52:53.739717 1391 round_trippers.go:447] Content-Length: 858
I0304 14:52:53.740058 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.740083 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:53.740342 1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.226068 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.232126 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 6 milliseconds
I0304 14:52:54.232149 1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.232161 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:54.232172 1391 round_trippers.go:447] X-Content-Type-Options: nosniff
I0304 14:52:54.232182 1391 round_trippers.go:447] Content-Length: 816
I0304 14:52:54.232192 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.232234 1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.726154 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.734050 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds
I0304 14:52:54.734091 1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.734111 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.734129 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:54.734146 1391 round_trippers.go:447] X-Content-Type-Options: nosniff
I0304 14:52:54.734163 1391 round_trippers.go:447] Content-Length: 774
I0304 14:52:54.734250 1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.226158 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.231693 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 5 milliseconds
I0304 14:52:55.231734 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.231754 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.231772 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:55.231789 1391 round_trippers.go:447] X-Content-Type-Options: nosniff
I0304 14:52:55.231805 1391 round_trippers.go:447] Content-Length: 774
I0304 14:52:55.231998 1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.726404 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.733705 1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 200 OK in 7 milliseconds
I0304 14:52:55.733746 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.733766 1391 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0304 14:52:55.733792 1391 round_trippers.go:447] Content-Length: 2
I0304 14:52:55.733809 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.733888 1391 request.go:942] Response Body: ok
[apiclient] All control plane components are healthy after 19.026898 seconds
I0304 14:52:55.736342 1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:55.738400 1391 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0304 14:52:55.741686 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.751480 1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.751978 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.752324 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.752367 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.752586 1391 round_trippers.go:447] Content-Length: 1423
I0304 14:52:55.752696 1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - 127.0.0.1\n - 10.10.10.90\n - 10.10.10.91\n - 10.10.10.92\n - 10.10.10.76\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n external:\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n endpoints:\n - https://10.10.10.90:2379\n - https://10.10.10.91:2379\n - https://10.10.10.92:2379\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n dnsDomain: cluster.local\n podSubnet: \"\"\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n k8s-c3-m1:\n advertiseAddress: 10.10.10.93\n bindPort: 6443\n k8s-c3-m2:\n advertiseAddress: 10.10.10.94\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.756813 1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - 127.0.0.1\n - 10.10.10.90\n - 10.10.10.91\n - 10.10.10.92\n - 10.10.10.76\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n external:\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n endpoints:\n - https://10.10.10.90:2379\n - https://10.10.10.91:2379\n - https://10.10.10.92:2379\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n dnsDomain: cluster.local\n podSubnet: \"\"\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n k8s-c3-m1:\n advertiseAddress: 10.10.10.93\n bindPort: 6443\n k8s-c3-m2:\n advertiseAddress: 10.10.10.94\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.757443 1391 round_trippers.go:419] curl -k -v -XPOST -H "Content-Type: application/json" -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:55.913083 1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 155 milliseconds
I0304 14:52:55.913243 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.913271 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.913290 1391 round_trippers.go:447] Content-Length: 218
I0304 14:52:55.913335 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.913438 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm-config","kind":"configmaps"},"code":409}
I0304 14:52:55.914863 1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - 127.0.0.1\n - 10.10.10.90\n - 10.10.10.91\n - 10.10.10.92\n - 10.10.10.76\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n external:\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n endpoints:\n - https://10.10.10.90:2379\n - https://10.10.10.91:2379\n - https://10.10.10.92:2379\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n dnsDomain: cluster.local\n podSubnet: \"\"\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n k8s-c3-m1:\n advertiseAddress: 10.10.10.93\n bindPort: 6443\n k8s-c3-m2:\n advertiseAddress: 10.10.10.94\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.915123 1391 round_trippers.go:419] curl -k -v -XPUT -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.923538 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I0304 14:52:55.924120 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.924437 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.924810 1391 round_trippers.go:447] Content-Length: 1423
I0304 14:52:55.925107 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.925521 1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - 127.0.0.1\n - 10.10.10.90\n - 10.10.10.91\n - 10.10.10.92\n - 10.10.10.76\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n external:\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n endpoints:\n - https://10.10.10.90:2379\n - https://10.10.10.91:2379\n - https://10.10.10.92:2379\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n dnsDomain: cluster.local\n podSubnet: \"\"\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n k8s-c3-m1:\n advertiseAddress: 10.10.10.93\n bindPort: 6443\n k8s-c3-m2:\n advertiseAddress: 10.10.10.94\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.926346 1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.926823 1391 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:55.946643 1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 19 milliseconds
I0304 14:52:55.947026 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.947441 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.947798 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.948105 1391 round_trippers.go:447] Content-Length: 298
I0304 14:52:55.948447 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:55.949132 1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.949653 1391 round_trippers.go:419] curl -k -v -XPUT -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.960370 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config 200 OK in 10 milliseconds
I0304 14:52:55.960920 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.961216 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.961507 1391 round_trippers.go:447] Content-Length: 464
I0304 14:52:55.961789 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.962002 1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Anodes-kubeadm-config","uid":"51a356c9-3e69-11e9-8dd7-0050569c544c","resourceVersion":"559","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.964418 1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.965022 1391 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:55.983782 1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 18 milliseconds
I0304 14:52:55.983847 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.983890 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.983920 1391 round_trippers.go:447] Content-Length: 312
I0304 14:52:55.983948 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.984007 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:55.984330 1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.984464 1391 round_trippers.go:419] curl -k -v -XPUT -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.994138 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.994193 1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.994497 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:55.994878 1391 round_trippers.go:447] Content-Length: 678
I0304 14:52:55.995094 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.995377 1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Anodes-kubeadm-config","uid":"51a61bf8-3e69-11e9-8dd7-0050569c544c","resourceVersion":"560","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:56.001421 1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:56.002891 1391 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
I0304 14:52:56.005261 1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n anonymous:\n enabled: false\n webhook:\n cacheTTL: 2m0s\n enabled: true\n x509:\n clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n mode: Webhook\n webhook:\n cacheAuthorizedTTL: 5m0s\n cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n imagefs.available: 15%\n memory.available: 100Mi\n nodefs.available: 10%\n nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.005580 1391 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:56.026664 1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 20 milliseconds
I0304 14:52:56.026763 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.026798 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.026852 1391 round_trippers.go:447] Content-Length: 228
I0304 14:52:56.026931 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.027084 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubelet-config-1.13","kind":"configmaps"},"code":409}
I0304 14:52:56.027551 1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n anonymous:\n enabled: false\n webhook:\n cacheTTL: 2m0s\n enabled: true\n x509:\n clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n mode: Webhook\n webhook:\n cacheAuthorizedTTL: 5m0s\n cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n imagefs.available: 15%\n memory.available: 100Mi\n nodefs.available: 10%\n nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.027830 1391 round_trippers.go:419] curl -k -v -XPUT -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13'
I0304 14:52:56.036853 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 8 milliseconds
I0304 14:52:56.036900 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.037253 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.037291 1391 round_trippers.go:447] Content-Length: 2133
I0304 14:52:56.037554 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.037755 1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13","uid":"51a9de57-3e69-11e9-8dd7-0050569c544c","resourceVersion":"561","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n anonymous:\n enabled: false\n webhook:\n cacheTTL: 2m0s\n enabled: true\n x509:\n clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n mode: Webhook\n webhook:\n cacheAuthorizedTTL: 5m0s\n cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n imagefs.available: 15%\n memory.available: 100Mi\n nodefs.available: 10%\n nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.038255 1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.038523 1391 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:56.052414 1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 13 milliseconds
I0304 14:52:56.052512 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.052572 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.052603 1391 round_trippers.go:447] Content-Length: 296
I0304 14:52:56.052685 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.052955 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:56.053398 1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.053646 1391 round_trippers.go:419] curl -k -v -XPUT -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13'
I0304 14:52:56.061599 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13 200 OK in 7 milliseconds
I0304 14:52:56.061691 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.061723 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.061779 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.061808 1391 round_trippers.go:447] Content-Length: 467
I0304 14:52:56.061917 1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Akubelet-config-1.13","uid":"51abee39-3e69-11e9-8dd7-0050569c544c","resourceVersion":"562","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.062370 1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.062564 1391 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:56.076620 1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 13 milliseconds
I0304 14:52:56.076664 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.076902 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.076938 1391 round_trippers.go:447] Content-Length: 310
I0304 14:52:56.077092 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.077299 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:56.077657 1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.077940 1391 round_trippers.go:419] curl -k -v -XPUT -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" -H "Content-Type: application/json" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13'
I0304 14:52:56.084893 1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13 200 OK in 6 milliseconds
I0304 14:52:56.084937 1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.085395 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:52:56.085635 1391 round_trippers.go:447] Content-Length: 675
I0304 14:52:56.085675 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.086357 1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Akubelet-config-1.13","uid":"51ad932c-3e69-11e9-8dd7-0050569c544c","resourceVersion":"563","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.086694 1391 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-c3-m1" as an annotation
...
I0304 14:53:16.587525 1391 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:53:16.597510 1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 9 milliseconds
I0304 14:53:16.597872 1391 round_trippers.go:444] Response Headers:
I0304 14:53:16.597909 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:53:16.598117 1391 round_trippers.go:447] Content-Length: 188
I0304 14:53:16.598141 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:53:16 GMT
I0304 14:53:16.598332 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
[kubelet-check] Initial timeout of 40s passed.
...
I0304 14:53:17.111508 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
I0304 14:54:56.095649 1391 round_trippers.go:419] curl -k -v -XGET -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:54:56.101815 1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 6 milliseconds
I0304 14:54:56.101895 1391 round_trippers.go:444] Response Headers:
I0304 14:54:56.101926 1391 round_trippers.go:447] Content-Type: application/json
I0304 14:54:56.101945 1391 round_trippers.go:447] Content-Length: 188
I0304 14:54:56.101996 1391 round_trippers.go:447] Date: Mon, 04 Mar 2019 14:54:56 GMT
I0304 14:54:56.102074 1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
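For context on what kubeadm is waiting for here: the upload-config/kubelet phase keeps polling the API for the Node object "k8s-c3-m1" so it can attach the CRI socket annotation mentioned in the [patchnode] line above, and the repeated 404s mean the kubelet never registered that node. A rough manual equivalent of the check (a sketch, not kubeadm's actual code path) would be:
# keeps returning NotFound while the node is unregistered
kubectl --kubeconfig /etc/kubernetes/admin.conf get node k8s-c3-m1
# the annotation kubeadm is trying to write once the node shows up
kubectl --kubeconfig /etc/kubernetes/admin.conf annotate node k8s-c3-m1 kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock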
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6e9af6f2585 dd862b749309 "kube-scheduler --ad…" 28 minutes ago Up 28 minutes k8s_kube-scheduler_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_1
76bcca06bb0c 40a817357014 "kube-controller-man…" 28 minutes ago Up 28 minutes k8s_kube-controller-manager_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_1
74c9b34ec00d fc3801f0fc54 "kube-apiserver --au…" About an hour ago Up About an hour k8s_kube-apiserver_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
e68bbbc0967e k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
0d6e0d0040cf k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_0
29f7974ae280 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 20-etcd-service-manager.conf
Active: active (running) since Mon 2019-03-04 14:52:31 GMT; 55min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 1512 (kubelet)
Tasks: 17
Memory: 42.1M
CPU: 2min 45.226s
CGroup: /system.slice/kubelet.service
└─1512 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Mar 04 15:46:26 k8s-c3-m1 kubelet[1512]: I0304 15:46:26.450692 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:27 k8s-c3-m1 kubelet[1512]: I0304 15:46:27.566498 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:36 k8s-c3-m1 kubelet[1512]: I0304 15:46:36.519582 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:46 k8s-c3-m1 kubelet[1512]: I0304 15:46:46.621611 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.566111 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.568601 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.706182 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:06 k8s-c3-m1 kubelet[1512]: I0304 15:47:06.778864 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:16 k8s-c3-m1 kubelet[1512]: I0304 15:47:16.852441 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:26 k8s-c3-m1 kubelet[1512]: I0304 15:47:26.893380 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# journalctl -xeu kubelet
Mar 04 15:38:53 k8s-c3-m1 kubelet[1512]: I0304 15:38:53.135568 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:03 k8s-c3-m1 kubelet[1512]: I0304 15:39:03.215031 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:13 k8s-c3-m1 kubelet[1512]: I0304 15:39:13.290469 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:23 k8s-c3-m1 kubelet[1512]: I0304 15:39:23.367081 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:26 k8s-c3-m1 kubelet[1512]: I0304 15:39:26.566426 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:33 k8s-c3-m1 kubelet[1512]: I0304 15:39:33.431954 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:33 k8s-c3-m1 kubelet[1512]: I0304 15:39:33.566201 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:43 k8s-c3-m1 kubelet[1512]: I0304 15:39:43.498836 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:53 k8s-c3-m1 kubelet[1512]: I0304 15:39:53.570568 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:03 k8s-c3-m1 kubelet[1512]: I0304 15:40:03.655276 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:08 k8s-c3-m1 kubelet[1512]: I0304 15:40:08.566616 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:13 k8s-c3-m1 kubelet[1512]: I0304 15:40:13.756879 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:23 k8s-c3-m1 kubelet[1512]: I0304 15:40:23.821072 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:33 k8s-c3-m1 kubelet[1512]: I0304 15:40:33.904937 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:34 k8s-c3-m1 kubelet[1512]: I0304 15:40:34.566237 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:41 k8s-c3-m1 kubelet[1512]: I0304 15:40:41.566373 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:43 k8s-c3-m1 kubelet[1512]: I0304 15:40:43.980238 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:54 k8s-c3-m1 kubelet[1512]: I0304 15:40:54.049829 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:04 k8s-c3-m1 kubelet[1512]: I0304 15:41:04.120501 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:14 k8s-c3-m1 kubelet[1512]: I0304 15:41:14.188172 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:24 k8s-c3-m1 kubelet[1512]: I0304 15:41:24.257331 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:29 k8s-c3-m1 kubelet[1512]: I0304 15:41:29.566046 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:34 k8s-c3-m1 kubelet[1512]: I0304 15:41:34.336272 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:44 k8s-c3-m1 kubelet[1512]: I0304 15:41:44.421498 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:51 k8s-c3-m1 kubelet[1512]: I0304 15:41:51.566118 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:54 k8s-c3-m1 kubelet[1512]: I0304 15:41:54.510862 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:04 k8s-c3-m1 kubelet[1512]: I0304 15:42:04.602424 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:11 k8s-c3-m1 kubelet[1512]: I0304 15:42:11.566156 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:14 k8s-c3-m1 kubelet[1512]: I0304 15:42:14.672348 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:24 k8s-c3-m1 kubelet[1512]: I0304 15:42:24.739645 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:34 k8s-c3-m1 kubelet[1512]: I0304 15:42:34.809602 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:44 k8s-c3-m1 kubelet[1512]: I0304 15:42:44.569874 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:44 k8s-c3-m1 kubelet[1512]: I0304 15:42:44.878417 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:54 k8s-c3-m1 kubelet[1512]: I0304 15:42:54.949520 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:57 k8s-c3-m1 kubelet[1512]: I0304 15:42:57.566517 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:05 k8s-c3-m1 kubelet[1512]: I0304 15:43:05.031910 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:15 k8s-c3-m1 kubelet[1512]: I0304 15:43:15.131797 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:25 k8s-c3-m1 kubelet[1512]: I0304 15:43:25.199036 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:29 k8s-c3-m1 kubelet[1512]: I0304 15:43:29.566339 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:35 k8s-c3-m1 kubelet[1512]: I0304 15:43:35.311614 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:45 k8s-c3-m1 kubelet[1512]: I0304 15:43:45.376789 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:55 k8s-c3-m1 kubelet[1512]: I0304 15:43:55.452387 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:57 k8s-c3-m1 kubelet[1512]: I0304 15:43:57.566088 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:05 k8s-c3-m1 kubelet[1512]: I0304 15:44:05.502619 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:15 k8s-c3-m1 kubelet[1512]: I0304 15:44:15.582590 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:24 k8s-c3-m1 kubelet[1512]: I0304 15:44:24.567123 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:25 k8s-c3-m1 kubelet[1512]: I0304 15:44:25.622999 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:35 k8s-c3-m1 kubelet[1512]: I0304 15:44:35.669595 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:45 k8s-c3-m1 kubelet[1512]: I0304 15:44:45.742763 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:49 k8s-c3-m1 kubelet[1512]: I0304 15:44:49.566491 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:55 k8s-c3-m1 kubelet[1512]: I0304 15:44:55.812636 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:58 k8s-c3-m1 kubelet[1512]: I0304 15:44:58.566265 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:05 k8s-c3-m1 kubelet[1512]: I0304 15:45:05.890388 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:15 k8s-c3-m1 kubelet[1512]: I0304 15:45:15.971426 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:26 k8s-c3-m1 kubelet[1512]: I0304 15:45:26.043344 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.117636 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.566338 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:46 k8s-c3-m1 kubelet[1512]: I0304 15:45:46.190995 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:51 k8s-c3-m1 kubelet[1512]: I0304 15:45:51.566093 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:56 k8s-c3-m1 kubelet[1512]: I0304 15:45:56.273010 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:06 k8s-c3-m1 kubelet[1512]: I0304 15:46:06.346175 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:16 k8s-c3-m1 kubelet[1512]: I0304 15:46:16.384087 1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
root@k8s-c3-m1:~#
Any ideas?
I just released kubash 1.13.4 and tested both the stacked and extetcd methods using 1.13.4. I'd be glad to gather any other information from a running cluster if you want.
Hi, I updated the kubeadm output after discovering that my load balancer, which runs in docker, was not configured to use the host network_mode. I'm not sure whether that matters, but better safe than sorry.
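For reference, "using the host network_mode" here means running the load-balancer container directly on the host network so it can bind the VIP port itself; a minimal sketch (assuming an HAProxy-based balancer and a locally supplied haproxy.cfg, neither of which is spelled out in this thread) would be:
docker run -d --name apiserver-lb --network host --restart always \
  -v /root/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.9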
@rdodev @timothysc any idea what my problem is here? Should I open a new issue for this?
@joshuacox can you share what exactly fixed your problem?
@hreidar I fixed it by changing the final script to this:
https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca
I suggest you script everything so you can reproduce the error consistently; then, if we can reproduce your error as well, we are much more likely to pinpoint the problem.
OK, here are the steps I have written down so far...
# note! - docker needs to be installed on all nodes (it is on my 16.04 template VMs)
# install misc tools
apt-get update && apt-get install -y apt-transport-https curl
# install required k8s tools
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
# turn off swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# create systemd config for kubelet
cat << _EOF_ > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
_EOF_
# reload systemd and start kubelet
systemctl daemon-reload
systemctl restart kubelet
### on all etcd nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
# create clusterConfig for etcd
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${ETCDINFO[$HOSTNAME]}"
    peerCertSANs:
    - "${ETCDINFO[$HOSTNAME]}"
    extraArgs:
      initial-cluster: ${ETCDNAMES[0]}=https://${ETCDIPS[0]}:2380,${ETCDNAMES[1]}=https://${ETCDIPS[1]}:2380,${ETCDNAMES[2]}=https://${ETCDIPS[2]}:2380
      initial-cluster-state: new
      name: ${HOSTNAME}
      listen-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
      listen-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
      advertise-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
      initial-advertise-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
EOF
### run only on one etcd node (k8s-c3-e1)
# generate the main certificate authority (creates two files in /etc/kubernetes/pki/etcd/)
kubeadm init phase certs etcd-ca
# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml
# copy cert files from k8s-c3-e1 to the other etcd nodes
scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[1]}: && \
ssh -t ubuntu@${ETCDIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"
scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[2]}: && \
ssh -t ubuntu@${ETCDIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"
# copy cert files from k8s-c3-e1 to the master nodes
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[0]}: && \
ssh -t ubuntu@${MASTERIPS[0]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[1]}: && \
ssh -t ubuntu@${MASTERIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[2]}: && \
ssh -t ubuntu@${MASTERIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"
### run on the other etcd nodes
# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml
# create the etcd static pod manifest
kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
### run only on one etcd node (k8s-c3-e1)
# check if cluster is running
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 cluster-health
### run on all master nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}"; done | sort)
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}"; done | sort)
VIP=10.10.10.76
# create ClusterConfiguration for master nodes
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "${ETCDIPS[0]}"
  - "${ETCDIPS[1]}"
  - "${ETCDIPS[2]}"
  - "${VIP}"
controlPlaneEndpoint: "${VIP}:6443"
etcd:
  external:
    endpoints:
    - https://${ETCDIPS[0]}:2379
    - https://${ETCDIPS[1]}:2379
    - https://${ETCDIPS[2]}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
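# before running kubeadm init it can help to confirm that this master can reach
# the external etcd cluster with the client credentials it was given; a hedged
# check against etcd's /health endpoint, assuming the cert paths from the config
curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key /etc/kubernetes/pki/apiserver-etcd-client.key \
  https://${ETCDIPS[0]}:2379/health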
### run only on the first master node (k8s-c3-m1)
# init the first master node
service kubelet stop && \
kubeadm init --config /root/kubeadmcfg.yaml
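# if the init hangs or times out here, the places I look first (my own
# troubleshooting habit, not from the official docs) are the kubelet journal
# and the control-plane containers started by the kubelet
journalctl -u kubelet --no-pager -n 50
docker ps -a --filter name=k8s_ --format '{{.Names}}\t{{.Status}}'
# only useful if the apiserver container actually exists
docker logs $(docker ps -q --filter name=k8s_kube-apiserver) 2>&1 | tail -n 30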
... and I'm stuck at the master init step :-)
Is this an external etcd setup? Why aren't you including that flag?
https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca#file-final_node-sh-L42
I wasn't aware it existed. Is this the exact command?
kubeadm init --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml,ExternalEtcdVersion --config /etc/kubernetes/kubeadmcfg.yaml
That gives me an error:
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
are you sure this system is clean? those ports being in use indicate you already have a (partially?) running cluster
EDIT: maybe
kubeadm reset
further EDIT: it also looks like that was old code, and you are correct about the command; at least in the docs it now seems they've implemented a switch to the external block
in the JSON. And indeed, my own code in kubash no longer has those flags. I have a 1.13.4 cluster running using this line to implement my kubeadm init,
which in fact doesn't have any of those flags.
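# for completeness, a typical clean-up sequence before retrying the init would be
# something like this (a sketch only; adjust to your CNI and environment)
kubeadm reset -f
# optionally clear leftover CNI config and iptables state before re-running kubeadm init
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X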
You're right, I forgot to reset, but I am using the external block in my manifest. I'm trying to follow the official documentation as closely as I can, but I'm stuck on initializing a master node, as shown in my previous posts.
What is this error telling me?
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
Is kubeadm unable to talk to docker via /var/run/dockershim.sock?
It looks like it is trying to annotate a node that does not exist.
@joshuacox, what does your docker configuration look like? Which cgroup driver are you using for docker and the kubelet?
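# for reference, both sides can be checked like this (the kubelet file paths
# assume the defaults that kubeadm writes; adjust if your setup differs)
docker info 2>/dev/null | grep -i 'cgroup driver'
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml 2>/dev/null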
docker info
Containers: 19
Running: 17
Paused: 0
Stopped: 2
Images: 31
Server Version: 17.03.3-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 103
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6c463891b1ad274d505ae3bb738e530d1df2b3c7
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-142-generic
Operating System: Ubuntu 16.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 2.119 GiB
Name: thalhalla-master1
ID: JYXS:H6MM:FFLN:ILI3:2LRX:WOKR:AUC6:VTJH:A5W6:DGPD:WPO3:6KNF
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
OK, that looks like a similar setup, but I'll try your docker version.
root@k8s-c3-m1:~# docker info
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 7
Server Version: 18.09.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 27
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e6b3f5632f50dbc4e9cb6288d911bf4f5e95b18e
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-112-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859GiB
Name: k8s-c3-m1
ID: EQ42:4KQG:5Z42:GQ67:OUU5:SPUA:P6VB:OM7P:S5XF:VLER:5DZI:DU4S
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support
No luck. It looks like etcd is empty, and the only resource I can list from the k8s API after this failed step is a ClusterIP
root@k8s-c3-m1:~# kubectl get all --kubeconfig /etc/kubernetes/admin.conf
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
root@k8s-c3-m1:~#
I think I need to open a new issue to try to get a developer to look at this. The information in the logs isn't making any sense to me.