Kubeadm: port 10251 and 10252 are in use

Created on 7 Jul 2017 · 13 comments · Source: kubernetes/kubeadm

Version info

kubeadm v1.6.5

Reproduce

On the master server: (1) kubeadm init; (2) kubeadm reset; (3) kubeadm init again, which fails with "Port 10251 is in use" and "Port 10252 is in use".

kubeadm init --token abcdef.1234567890abcdef --kubernetes-version v1.6.5
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.5
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
    Port 10251 is in use
    Port 10252 is in use

Solution

I checked which processes were listening on those ports. It looks like kubeadm reset failed to stop the kube-scheduler and kube-controller-manager.

$ netstat -lnp | grep 1025
tcp6       0      0 :::10251                :::*                    LISTEN      4366/kube-scheduler
tcp6       0      0 :::10252                :::*                    LISTEN      4353/kube-controlle
$ kill 4366
$ kill 4353

After killing them, I was able to initialize the Kubernetes cluster.
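
If you prefer not to read PIDs out of netstat by hand, here is a rough sketch of the same cleanup using fuser (assuming the psmisc package is installed; the two ports are the ones from the error above):

# Show which processes are holding the scheduler and controller-manager ports
$ sudo fuser -v 10251/tcp 10252/tcp
# Kill them in one step (fuser -k sends SIGKILL by default)
$ sudo fuser -k 10251/tcp 10252/tcp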

kubeadm init --token abcdef.1234567890abcdef --kubernetes-version v1.6.5
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.5
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [xxx.xxx.xxx.xxx kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [xxx.xxx.xxx.xxx]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 16.281203 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 5.501873 seconds
[token] Using token: abcdef.1234567890abcdef
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token abcdef.1234567890abcdef xxx.xxx.xxx.xxx:6443

In short

I've attached the solution above for anyone who runs into the same issue.
Could you check whether this is a bug?

All 13 comments

Thanks for filing the issue!

I don't think it's really a bug. Somehow you/kubeadm didn't clean up correctly, since the kube-scheduler and kube-controller-manager containers were still running. Or Docker restarted them somehow.
Or you started the controller-manager and scheduler yourself.
Or you ^C'd kubeadm reset a bit too early...

If it happens often, please reopen, but I don't think there's anything to fix here.
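
A quick way to check that theory before running init again (a rough sketch; the process names are the standard kubeadm control-plane ones):

# See whether the scheduler and controller-manager survived the previous reset
$ docker ps | grep -E 'kube-scheduler|kube-controller-manager'
# If so, clean up properly and start over
$ sudo kubeadm reset
$ sudo kubeadm init --token abcdef.1234567890abcdef --kubernetes-version v1.6.5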

@luxas Fair point. I'll keep an eye on it. If something in my configuration turns out to be causing the issue, I'll report it.

Followed your tips, but was still having the same issue. Then I realized minikube was running, which had to be stopped since it uses the same ports, but it lists the process as "localkube".
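
For the minikube case, a small sketch of how to confirm that localkube is holding the ports and then free them (assuming ss from iproute2 and the minikube CLI are available):

# localkube shows up as the listener on 10251/10252 instead of kube-scheduler/kube-controller-manager
$ sudo ss -tlnp | grep -E ':10251|:10252'
# Stop minikube so the ports are released, then reset any partial kubeadm state
$ minikube stop
$ sudo kubeadm reset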

Like @luxas said, this worked. Probably the best solution, as nothing else worked.
$ sudo kubeadm reset

I got into this situation by downgrading Docker and then trying to run minikube. kubeadm reset resolved the issue, as @luxas suggested.

kubeadm reset fixed the issue

docker ps and docker inspect etcd1 showed that the etcd container was holding the relevant ports, which is why sudo kubeadm init failed.
So I ran: docker kill etcd1
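
Roughly the same idea, sketched with standard docker commands (the container name etcd1 is just the one from this comment):

# List any leftover etcd containers from a previous setup
$ docker ps -a --filter name=etcd
# Force-remove the stale container so it releases its ports
$ docker rm -f etcd1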

There are some other issues left when initializing the Kubernetes cluster (SSH, kernel cgroups config, ...), which essentially come down to knowing what Linux version/architecture you are on, but those may be cleared up by the requirements details.

kubeadm reset fixed the issue for me too.

I had the same problem with minikube start.
I solved it with the following steps (a cleanup sketch follows below):
1. docker stop $(docker ps -a -q)
2. Use the --extra-config parameter of minikube start, e.g.: minikube start --kubernetes-version=1.17.2 --vm-driver=none kubelet.ignore-preflight-errors kubeadm.ignore-preflight-errors

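A sketch of that cleanup, plus the heavier-handed alternative of wiping the old minikube state entirely before starting again (assumes the minikube CLI is installed):

# Stop (and optionally remove) every container left over from the previous attempt
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
# Or throw away the old minikube state and start fresh
$ minikube delete
$ minikube start --kubernetes-version=1.17.2 --vm-driver=none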

Hi everyone,
I'm trying to install Kubernetes on an Ubuntu VM, but unfortunately I'm facing an issue despite kubeadm reset.
root@KVM:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out # Save output for future review
W0719 22:06:28.075574 15363 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ahmed-kvm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster] and IPs [x.x.x.x x.x.x.x]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ahmed-kvm localhost] and IPs [x.x.x.x x.x.x.x]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ahmed-kvm localhost] and IPs [x.x.x.x x.x.x.x ]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0719 22:06:31.223537 15363 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0719 22:06:31.224263 15363 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Thanks for your help
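
The preflight warning about the "cgroupfs" Docker cgroup driver near the top of that output is the usual suspect when the kubelet never comes up. A rough sketch of the checks kubeadm suggests, plus the commonly documented cgroup-driver fix (the daemon.json path and contents are assumptions about a typical Docker setup, adjust for yours):

# Check the kubelet first - it is usually the component failing here
$ systemctl status kubelet
$ journalctl -xeu kubelet
# Look for crashed control-plane containers
$ docker ps -a | grep kube | grep -v pause
$ docker logs CONTAINERID
# Align Docker's cgroup driver with the recommended "systemd" driver:
# put {"exec-opts": ["native.cgroupdriver=systemd"]} into /etc/docker/daemon.json,
# then restart Docker and retry the init
$ sudo systemctl restart docker
$ sudo kubeadm reset
$ sudo kubeadm init --config=kubeadm-config.yaml --upload-certs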

For anyone facing this issue, check if you have microk8s installed and remove it. That was my issue.
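
If microk8s was installed via snap (its usual install method), a sketch of the cleanup would be:

# Check whether microk8s is present
$ snap list | grep microk8s
# Remove it so it stops binding the scheduler/controller-manager ports, then reset kubeadm state
$ sudo snap remove microk8s
$ sudo kubeadm reset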

sudo kubeadm reset

solved mine also
