Kubeadm: certificate signed by unknown authority -- Corporate Network / Proxy

Created on 22 Jun 2018  ·  4 Comments  ·  Source: kubernetes/kubeadm

Is this a request for help?

YES

What keywords did you search in kubeadm issues before filing this one?

x509: certificate signed by unknown authority -- INSIDE CORPORATE NETWORK


Is this a BUG REPORT or FEATURE REQUEST?

bug report

Versions

kubeadm version (use kubeadm version): 1.10.4
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a):
    Linux kubem1 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

kubeadm init throws the error x509: certificate signed by unknown authority.

What did you expect to happen?

kubeadm init should complete without any errors.

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?

I have configured the proxy in the following files:

.bash_profile
/etc/environment
/etc/systemd/system/docker.service.d/http-proxy.conf

/etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://:@:8080"
Environment="HTTPS_PROXY=https://:@:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.169.150.123"

/etc/environment

export http_proxy="http://:@:8080"
export https_proxy="https://:@:8080"
export HTTP_PROXY="http://:@:8080"
export HTTPS_PROXY="https://:@:8080"
export no_proxy="10.169.150.123,127.0.0.1,localhost"

In .bash_profile

export KUBECONFIG=/etc/kubernetes/admin.conf
export http_proxy="http://:@:8080"
export https_proxy="https://:@:8080"
export HTTP_PROXY="http://:@:8080"
export HTTPS_PROXY="https://:@:8080"
export no_proxy="10.169.150.123,127.0.0.1,localhost"
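Note that the kubeadm preflight check further down warns that traffic to the service CIDR 10.96.0.0/12 would go through the proxy. It may help to exclude the cluster-internal ranges in no_proxy as well; a sketch (10.96.0.0/12 is the default kubeadm service CIDR; the pod CIDR 10.244.0.0/16 is an assumption and depends on the CNI plugin chosen):

```shell
# Exclude cluster-internal ranges from the proxy in addition to the
# node IP. Go-based tools (kubeadm, docker) understand CIDR entries
# in NO_PROXY; some tools such as curl do not.
export NO_PROXY="localhost,127.0.0.1,10.169.150.123,10.96.0.0/12,10.244.0.0/16"
export no_proxy="$NO_PROXY"
echo "$NO_PROXY"
```

The same list would need to go into the docker systemd drop-in shown above, followed by a daemon-reload and docker restart.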

Opened the necessary ports:

cat /etc/sysconfig/iptables

-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2379:2380 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10251 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10252 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10255 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

and reloaded the firewall.

Also checked after disabling the firewall entirely.

Disabled SELinux.

Commented out KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"

# Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

If I run kubeadm init passing the version explicitly:

[root@kubem1 ~]# kubeadm init --kubernetes-version=v1.10.4
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "https://****". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubem1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.169.150.123]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubem1.****] and IPs [10.169.150.123]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

In the error log:

Jun 22 04:31:34 kubem1.**** kubelet[7275]: E0622 04:31:34.942572 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:35 kubem1.**** kubelet[7275]: E0622 04:31:35.888104 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://10.169.150.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:35 kubem1.**** kubelet[7275]: E0622 04:31:35.888256 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://10.169.150.123:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:35 kubem1.**** kubelet[7275]: E0622 04:31:35.943992 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:36 kubem1.**** kubelet[7275]: E0622 04:31:36.889648 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://10.169.150.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:36 kubem1.**** kubelet[7275]: E0622 04:31:36.891490 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://10.169.150.123:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:36 kubem1.**** kubelet[7275]: E0622 04:31:36.945185 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:37 kubem1.**** kubelet[7275]: E0622 04:31:37.890407 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://10.169.150.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:37 kubem1.**** kubelet[7275]: E0622 04:31:37.891696 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://10.169.150.123:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:37 kubem1.**** kubelet[7275]: E0622 04:31:37.946023 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:38 kubem1.**** kubelet[7275]: E0622 04:31:38.121910 7275 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "kubem1.****" not found
Jun 22 04:31:38 kubem1.**** kubelet[7275]: E0622 04:31:38.892292 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://10.169.150.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:38 kubem1.**** kubelet[7275]: E0622 04:31:38.894157 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://10.169.150.123:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused
Jun 22 04:31:38 kubem1.**** kubelet[7275]: E0622 04:31:38.947002 7275 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.****&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused

Also edited /etc/resolv.conf:

[root@kubem1 ~]# cat /etc/resolv.conf

# Generated by NetworkManager

domain <>
search <>
nameserver <>
nameserver <>

nameserver 8.8.8.8
nameserver 8.8.4.4
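With the Google resolvers only appended as fallbacks, a quick sanity check is whether the registry hostname resolves at all through these nameservers (assumes nslookup from the bind-utils package is installed):

```shell
# Resolve the registry host. If this times out or fails while other
# hostnames resolve fine, the 8.8.8.8/8.8.4.4 fallbacks are likely
# blocked by the corporate network, and the internal nameservers do
# not know external names.
nslookup k8s.gcr.io
```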

Should I add any entry in this file?

Should I import any certificate?

I am behind a corporate proxy.

Also tried the following:

kubeadm reset
systemctl daemon-reload
systemctl restart docker.service
systemctl stop kubelet.service

The images below could not be pulled through docker:

docker pull k8s.gcr.io/kube-apiserver-amd64:v1.10.3
docker pull k8s.gcr.io/kube-controller-manager-amd64:v1.10.3
docker pull k8s.gcr.io/kube-scheduler-amd64:v1.10.3
docker pull k8s.gcr.io/etcd-amd64:3.1.12
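If the proxy performs TLS interception (common on corporate networks), the "x509: certificate signed by unknown authority" error usually means the proxy's CA certificate is not in the host trust store. A sketch for CentOS 7, assuming TLS interception is in fact the cause; corp-proxy-ca.crt is a placeholder name, and the actual CA certificate has to come from the network team:

```shell
# Place the corporate proxy's CA certificate (placeholder file name)
# into the CentOS/RHEL trust anchors and rebuild the shared bundle.
cp corp-proxy-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

# Restart docker so image pulls use the updated CA bundle.
systemctl restart docker
```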

Labels: kind/bug, priority/important-longterm

All 4 comments

Here I can see spec.nodeName=kubem1.*****************, which appears to be an edited out FQDN. On the other hand, in your DNS configuration, you don't seem to have domain and search directives.

Are Google DNS servers even allowed in your network? You may have to contact your local network admin for further information on that.

Furthermore, if docker pull is failing for you, it may be due to DNS problems. What is the error docker pull fails with?

k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.169.150.123:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubem1.*****************&limit=500&resourceVersion=0: dial tcp 10.169.150.123:6443: getsockopt: connection refused

I edited that out for posting on GitHub: 'spec.nodeName=kubem1.*****'.

Google DNS is not allowed in our network.

docker pull works for the rest of the images, but pulling k8s.gcr.io/kube-apiserver-amd64:v1.10.3 does not.
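One way to see the actual TLS failure, independent of docker, is to fetch the registry endpoint through the proxy with verbose output and inspect the certificate chain the proxy presents (standard curl flags; uses the proxy settings from the environment):

```shell
# -v prints the TLS handshake, including issuer and subject of the
# certificate presented for k8s.gcr.io. An issuer naming the proxy
# vendor instead of a public CA confirms TLS interception.
curl -v https://k8s.gcr.io/v2/ 2>&1 | grep -i -E 'issuer|subject|SSL'
```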

Please let me know if I need to make any modifications to the config file.

/assign @liztio

Closing this issue due to lack of solid reproducer instructions.
Please reopen if there is still an issue.
