Kubernetes: x509 cert issues after kubeadm init

Created on 1 Jul 2017  ·  28 Comments  ·  Source: kubernetes/kubernetes

BUG REPORT: (I think?)

What happened:

I ran the following steps on Ubuntu 16.04:

  1. sudo apt-get update
  2. sudo apt-get upgrade
  3. sudo su
  4. kubeadm reset
  5. kubeadm init --token [redacted] --apiserver-advertise-address=192.168.13.1 --pod-network-cidr=10.244.0.0/16
  6. exit
  7. mkdir -p $HOME/.kube
  8. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  9. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  10. kubectl get nodes

Upon doing this, I receive:

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

I've tried uninstalling kubectl, kubeadm and kubelet a couple of times (even with --purge), and no matter what I do, it (kubeadm 1.7) doesn't generate a working admin.conf. However, when I run the following:

curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://192.168.13.1:6443

and get:

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1beta1",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/extensions/third-party-resources",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/ui",
    "/ui/",
    "/version"
  ]
}
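
(For reference, one way to check whether the kubeconfig in use embeds the same CA as the one on disk — a minimal sketch, assuming the default kubeadm paths:)

  # Extract the CA embedded in the kubeconfig and compare it with the on-disk CA.
  grep 'certificate-authority-data' $HOME/.kube/config | awk '{print $2}' | base64 -d > /tmp/kubeconfig-ca.crt
  diff /tmp/kubeconfig-ca.crt /etc/kubernetes/pki/ca.crt && echo "CA matches" || echo "CA differs (stale kubeconfig)"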

What you expected to happen:

After initializing the master via kubeadm init, I expected to be able to use kubectl to install a network plugin; since it x509's, I cannot do that.

Environment:

  • Kubernetes version (use kubectl version): 1.7
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.2 LTS
  • Kernel (e.g. uname -a): Linux radium-control 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Labels: area/kubeadm, sig/cluster-lifecycle

Most helpful comment

do you have $KUBECONFIG pointing to /etc/kubernetes/kubelet.conf?

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes

All 28 comments

@carldanley There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

_Note: method (1) will trigger a notification to the team. You can find the team list here and label list here_

/sig cluster-lifecycle

Unsure if this helps, but I had the same and realised I was using the old setup guide, copying /etc/kubernetes/admin.conf into ~/.kube/admin.conf and setting $KUBECONFIG=$HOME/.kube/admin.conf. I cleared the environment variable and kubectl defaults back to using ~/.kube/config.
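
Clearing the stale variable so kubectl falls back to the default location looks like this (a minimal sketch):

  unset KUBECONFIG
  kubectl get nodes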

I'm also seeing this using kubeadm v1.7 - it's preventing nodes from joining the cluster

Same error for my installation. Tried with v1.6.5 and v1.6.7; it works fine there.

Same problem here.


(kubeadm init seems okay)

ns2 ~ # kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] WARNING: no supported init system detected, skipping checking for services
[preflight] WARNING: no supported init system detected, skipping checking for services
[preflight] WARNING: no supported init system detected, skipping checking for services
[preflight] WARNING: socat not found in system path
[preflight] No supported init system detected, won't ensure kubelet is running.
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [ns2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 ip_of_my_server]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 36.004283 seconds
[token] Using token: 62af23.9fba33a48799d425
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token [some string] [ip_of_my_server]:6443

(kubeadm join seems okay, too)

h1 ~ # kubeadm join --token [some string] [ip_of_my_server]:6443 --skip-preflight-checks 
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.254:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.254:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.254:6443"
[discovery] Successfully established connection with API Server "192.168.0.254:6443"
[bootstrap] Detected server version: v1.7.3
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

(but kubectl get nodes fails)

byungnam2@ns2 ~ $ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

do you have $KUBECONFIG pointing to /etc/kubernetes/kubelet.conf?

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes

@liggitt After I set the $KUBECONFIG to /etc/kubernetes/kubelet.conf, now it gives me a timeout error.

ns2 ~ # ./kubernetes/kubernetes/server/bin/kubectl get nodes
Error from server (ServerTimeout): the server cannot complete the requested operation at this time, try again later (get nodes)

And now I want to know where that $KUBECONFIG value came from, because there is no such statement in the manual I'm referencing.

From the output of the node join command:

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
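
If it helps, you can check which kubeconfig kubectl is actually using and which server it points at (a quick sketch, assuming a standard kubectl install):

  echo $KUBECONFIG
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'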

Encountered same problem while playing with kubeadm.

After running kubeadm init and kubeadm reset a few times, the kubelet fails to communicate with the apiserver because the certificate is signed by an unknown authority (visible in the kubelet logs). kubeadm init also blocks forever.

After removing /run/kubernetes/ manually, everything comes back. Maybe kubeadm reset has problems cleaning up certificates?
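
The cleanup that worked here looks roughly like this (a sketch based on the steps above; double-check the paths before deleting anything):

  sudo kubeadm reset
  sudo rm -rf /run/kubernetes        # certificates left behind by a previous init
  rm -rf $HOME/.kube                 # stale admin kubeconfig from a previous init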

/area kubeadm

I am on kubeadm 1.8 and this problem still occurs.

ubuntu@ip-172-31-9-157:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@ip-172-31-9-157:~$
ubuntu@ip-172-31-9-157:~$
ubuntu@ip-172-31-9-157:~$ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
ubuntu@ip-172-31-9-157:~$
ubuntu@ip-172-31-9-157:~$
ubuntu@ip-172-31-9-157:~$

I manually checked /var/run/kubernetes. It was cleaned when I ran kubeadm reset. Not sure what the actual problem is.

ATTENTION: "To start using your cluster, you need to run (as a regular user)"

[root@master1 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

[root@master1 ~]# su - regular_user

[regular_user@master1 ~]$ mkdir -p $HOME/.kube
[regular_user@master1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[regular_user@master1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[regular_user@master1 ~]$ kubectl get nodes
NAME                 STATUS     ROLES     AGE   VERSION
master1.virti.corp   NotReady   master    6m    v1.8.1
master2.virti.corp   NotReady             4m    v1.8.1

@jeffbr13 Thanks. It works.

Please update the docs with this workaround.

If you run kubeadm reset and then kubeadm init again, and you ever ran the following as root, you need to run it again (as root) to pick up the new config:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can still run kubectl as root.

I found that if you ever run "sudo kubeadm reset", you'll need to remove your .kube dir to clear the cached config.
After that you can follow @petersonwsantos' steps, as sketched below.
Also, be sure to set KUBECONFIG to whatever you (re)name your config file, e.g. $HOME/.kube/config.
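
Something like this (a sketch of the steps above; back up rather than delete, in case you still need the old config):

  mv $HOME/.kube $HOME/.kube.bak
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=$HOME/.kube/config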

Thanks friend, it's true.

Configured with the following lines, kubectl get nodes works:

root:~/k8s# cat 04-config.sh
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chmod 777 $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/kubelet.conf
export KUBECONFIG=/home/ubuntu/.kube/config
kubectl get nodes

This is likely because you have a multi-master setup and have generated /etc/kubernetes/pki/ca.* on each of the masters, rather than copying them from the first master to the rest.
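
Copying the shared certificates from the first master to the others looks roughly like this (a sketch; "master2" is a hypothetical hostname and the file list assumes the default kubeadm PKI layout):

  scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/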

I found the solution in the Kubernetes documentation.
While following the documentation, don't forget to re-create the .kube directory with:
mkdir -p $HOME/.kube

because the troubleshooting steps have you move the existing directory out of the way first:
mv $HOME/.kube $HOME/.kube.bak

https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/

For others that might have this issue: you might want to try moving the /root/.kube folder to a backup location if it exists, and retry. It's very possible that a cached root copy of the config is being used that is no longer valid, since you run kubeadm with sudo.
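
For example, as root (a minimal sketch of the steps above):

  mv /root/.kube /root/.kube.bak
  mkdir -p /root/.kube
  cp -i /etc/kubernetes/admin.conf /root/.kube/config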

My problem was custom certificates I had created while following the KubeEdge Getting Started guide. Not messing around with SSL and KubeEdge made it work.

ATTENTION: "To start using your cluster, you need to run (as a regular user)"

[root@master1 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

[root@master1 ~]# su - regular_user

[regular_user@master1 ~]$ mkdir -p $HOME/.kube
[regular_user@master1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[regular_user@master1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

This works, except that I had to set my KUBECONFIG again since it had been changed:

export KUBECONFIG=$HOME/.kube/config

[regular_user@master1 ~]$ kubectl get nodes
NAME                 STATUS     ROLES     AGE   VERSION
master1.virti.corp   NotReady   master    6m    v1.8.1
master2.virti.corp   NotReady             4m    v1.8.1

do you have $KUBECONFIG pointing to /etc/kubernetes/kubelet.conf?

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes

That works for me, thanks a lot.

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes

It works for me.

ATTENTION: "To start using your cluster, you need to run (as a regular user)"

[root@master1 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

[root@master1 ~]# su - regular_user

[regular_user@master1 ~]$ mkdir -p $HOME/.kube
[regular_user@master1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[regular_user@master1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[regular_user@master1 ~]$ kubectl get nodes
NAME                 STATUS     ROLES     AGE   VERSION
master1.virti.corp   NotReady   master    6m    v1.8.1
master2.virti.corp   NotReady             4m    v1.8.1

This worked!

After kubeadm init you have to remove the $HOME/.kube folder and create a new one:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
