Kubernetes: Running Kubernetes Locally via Docker - `kubectl get nodes` returns `The connection to the server localhost:8080 was refused - did you specify the right host or port?`

Created on 1 Apr 2016 · 56 comments · Source: kubernetes/kubernetes

Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.

Steps taken:

  • export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
  • copy-paste the docker run command
  • download the appropriate kubectl binary and put it on PATH (which kubectl works)
  • (optionally) setup the cluster
  • run kubectl get nodes

In short, no magic. I am running this locally on Ubuntu 14.04 with Docker 1.10.3. If you need more information, let me know.
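(For context: this error usually means kubectl found no kubeconfig at all and fell back to its compiled-in default of localhost:8080. A quick way to see which config file kubectl will pick up, as a simplified sketch of its lookup order; the real client also merges multiple ':'-separated KUBECONFIG paths:)

```shell
# Mimic kubectl's config lookup order (simplified sketch; the real client
# also merges multiple ':'-separated KUBECONFIG paths)
kubeconfig_path() {
  if [ -n "$KUBECONFIG" ]; then
    printf '%s\n' "$KUBECONFIG"
  elif [ -f "$HOME/.kube/config" ]; then
    printf '%s\n' "$HOME/.kube/config"
  else
    printf '%s\n' "(none: kubectl falls back to localhost:8080)"
  fi
}
kubeconfig_path
```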

Labels: kind/support

Most helpful comment

You can solve this with "kubectl config":

$ kubectl config set-cluster demo-cluster --server=http://master.example.com:8080
$ kubectl config set-context demo-system --cluster=demo-cluster
$ kubectl config use-context demo-system
$ kubectl get nodes
NAME                 STATUS    AGE
master.example.com   Ready     3h
node1.example.com    Ready     2h
node2.example.com    Ready     2h
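(For reference, those three commands just write entries into ~/.kube/config; the resulting file looks roughly like the sketch below, written to /tmp here for illustration. The server address is the example's, not a real cluster.)

```shell
# Roughly what the three `kubectl config` commands above write into
# ~/.kube/config (sketch; written to /tmp for illustration)
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://master.example.com:8080
  name: demo-cluster
contexts:
- context:
    cluster: demo-cluster
  name: demo-system
current-context: demo-system
EOF
cat /tmp/demo-kubeconfig
```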

All 56 comments

Have the same issue with version 1.2.0 +1

@xificurC @jankoprowski Have you checked whether the apiserver is running?

Please take a look at our troubleshooting guide:
http://kubernetes.io/docs/troubleshooting/

If you still need help, please ask on Stack Overflow.

apiserver failed with:

F0421 14:28:55.140493 1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory

I also hit this problem, but my apiserver has not failed; all the processes (apiserver, controller-manager, scheduler, kubelet and kube-proxy) are running normally. My Docker version is 1.11.2. Does anyone know how to resolve this?

I have hit this problem too. Since I need Kubernetes 1.2.2, I use Docker to deploy it, and the same problem happens: the apiserver is down. Logs here:

I0725 08:56:20.440089       1 genericapiserver.go:82] Adding storage destination for group batch
W0725 08:56:20.440127       1 server.go:383] No RSA key provided, service account token authentication disabled
F0725 08:56:20.440148       1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory

The apiserver fails and I cannot deploy Kubernetes. Does anyone know about this?

Try using --server to specify your master:
kubectl --server=16.187.189.90:8080 get pod -o wide

Hello, I'm getting the following error on CentOS 7. How can I solve this issue?

[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

You can solve this with "kubectl config":

$ kubectl config set-cluster demo-cluster --server=http://master.example.com:8080
$ kubectl config set-context demo-system --cluster=demo-cluster
$ kubectl config use-context demo-system
$ kubectl get nodes
NAME                 STATUS    AGE
master.example.com   Ready     3h
node1.example.com    Ready     2h
node2.example.com    Ready     2h

In my case I just had to remove ~/.kube/config, which was left over from a previous attempt.

Hi,
I still hit this problem with:
kubernetes-master-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-ansible-0.6.0-0.1.gitd65ebd5.el7.noarch
kubernetes-client-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-1.4.0-0.1.git87d9d8d.el7.x86_64

If I configure KUBE_API_ADDRESS with the value below:
KUBE_API_ADDRESS="--insecure-bind-address=10.10.10.xx"
I hit this error, though it works if I pass "--server=10.10.10.xx:8080" on the command line.

If I configure KUBE_API_ADDRESS with the value below:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
it works fine.
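(That makes sense: binding the insecure port to 0.0.0.0 means it also answers on localhost, which is kubectl's default target of localhost:8080. A sketch of the relevant fragment of /etc/kubernetes/apiserver, assuming the layout used by those RPM packages:)

```shell
# /etc/kubernetes/apiserver (fragment; sketch assuming the RPM layout).
# Binding to 0.0.0.0 means the insecure port also answers on localhost,
# which is where kubectl connects by default.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
```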

I was trying to get the status from a remote system using Ansible and was facing the same issue.
I tried this and it worked:
kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide

Similar to @sumitkau, I solved my problem by pointing kubectl at a different kubeconfig location:
kubectl --kubeconfig /etc/kubernetes/admin.conf get no
You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and that works too, but I don't know whether it's good practice or not!

Update the entry in /etc/kubernetes/apiserver (on the master server):
KUBE_API_PORT="--port=8080"
then do a systemctl restart kube-apiserver.

If this happens on GCP, the following will most likely resolve the issue:

gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project

Thanks to @mamirkhani. I solved this error.
However, I just found this in the "kubeadm init" output:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.

I had the same problem. After creating a cluster via the web GUI in Google Cloud and trying to run kubectl, I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

All you have to do is fetch the kubectl config for your cluster, which will be stored in $HOME/.kube/config:

$ gcloud container clusters get-credentials guestbook2
Fetching cluster endpoint and auth data.
kubeconfig entry generated for guestbook2.

Now kubectl works just fine

kubectl expects ~/.kube/config as the filename for its configuration.

The quick fix that worked for me was to create a symbolic link:

ln -s ~/.kube/config.conjure-canonical-kubern-e82 ~/.kube/config

N.B. This was for a "conjure-up kubernetes" deployment.

This issue confused me for a week; it seems to be working for me now. If you hit this issue, first of all you need to know which node it happens on.

If it is the master node, make sure all of the Kubernetes pods are running with:
kubectl get pods --all-namespaces

mine looks like this:

kube-system   etcd-kubernetes-master01                      1/1   Running   2   6d
kube-system   kube-apiserver-kubernetes-master01            1/1   Running   3   6d
kube-system   kube-controller-manager-kubernetes-master01   1/1   Running   2   6d
kube-system   kube-dns-2425271678-3kkl1                     3/3   Running   6   6d
kube-system   kube-flannel-ds-brw34                         2/2   Running   6   6d
kube-system   kube-flannel-ds-psxc8                         2/2   Running   7   6d
kube-system   kube-proxy-45n1h                              1/1   Running   2   6d
kube-system   kube-proxy-fsn6f                              1/1   Running   2   6d
kube-system   kube-scheduler-kubernetes-master01            1/1   Running   2   6d

If they are not, verify that you have these files in your /etc/kubernetes/ directory:
admin.conf, controller-manager.conf, kubelet.conf, manifests, pki, scheduler.conf. If you do, copy the admin config into place as a normal user (not root):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then see if kubectl version works. If it still does not, follow the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, tear down your cluster, and rebuild your master.

If it happens on a (worker) node, make sure you have the files
kubelet.conf, manifests, pki
in /etc/kubernetes/, and that in kubelet.conf the server field points to your master's IP (the same setting as in the master node's admin.conf).
If you don't have kubelet.conf, that is probably because you haven't run the command to join the node to your master:
kubeadm join --token f34tverg45ytt34tt 192.168.1.170:6443
You get this command (and token) after your master node is built.

After logging in as a normal user on the (worker) node, you probably won't see a config file in ~/.kube. Create the folder, copy admin.conf from your master node into ~/.kube/ on the node as config (as a normal user), then try kubectl version. It works for me.

While I know there might be multiple reasons for failure here, in my case removing ~/.kube/cache helped immediately.

I had this issue. This solution worked for me:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

If you don't have admin.conf, please install kubeadm.
Then remove ~/.kube/cache:

rm -rf ~/.kube/cache

You need to switch context.
kubectl config use-context docker-for-desktop

Hi team,

We need to install SAP Vora, for which Kubernetes and Docker are prerequisites. We have installed the Kubernetes master, kubectl, and Docker, but when we check:

kubectl cluster-info

kubectl cluster-info dump

2018-05-09 06:47:57.905806 I | proto: duplicate proto type registered: google.protobuf.Any
2018-05-09 06:47:57.905997 I | proto: duplicate proto type registered: google.protobuf.Duration
2018-05-09 06:47:57.906019 I | proto: duplicate proto type registered: google.protobuf.Timestamp
The connection to the server 10.x.x.x:6443 was refused - did you specify the right host or port?

When we check systemctl status kubelet -l:

kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
Active: failed (Result: start-limit) since Wed 2018-05-09 04:17:21 EDT; 2h 28min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2513 ExecStart=/usr/bin/hyperkube kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_INITIAL_ARGS $KUBELET_ARGS (code=exited, status=203/EXEC)
Main PID: 2513 (code=exited, status=203/EXEC)

We have performed the settings below:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

but it did not help. Can anyone assist?

regards
karthik

I had this issue. This solution worked for me:
export KUBECONFIG=/etc/kubernetes/admin.conf

Hi mapsic,

I exported the above env var, but it did not help; I am getting the same error.

2018-05-15 04:27:32.221744 I | proto: duplicate proto type registered: google.protobuf.Any
2018-05-15 04:27:32.221912 I | proto: duplicate proto type registered: google.protobuf.Duration
2018-05-15 04:27:32.221936 I | proto: duplicate proto type registered: google.protobuf.Timestamp
The connection to the server 10.x.x.x:6443 was refused - did you specify the right host or port

I have also changed the kubelet service and config file entries, still no luck.

Regards
karthik

Hi Karthik,

Maybe you can check the logs of the kubelet service.

I am getting the above error:

[admin ~]$ kubectl cluster-info
Kubernetes master is running at https://xxxxx:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server xxxxx:6443 was refused - did you specify the right host or port?
[admin~]$ kubectl cluster-info dump
The connection to the server xxxx:6443 was refused - did you specify the right host or port?

Getting same error while using kubectl get pods --all-namespaces

Hello, I'm getting the following error on CentOS 7. How can I solve this issue?

[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I think you have not installed kubeadm. Install kubeadm first and then try.

Ok will do that


You must run these commands first -

[user@k8s-master ~]# mkdir -p $HOME/.kube
[user@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Running on macOS High Sierra, I solved this by enabling the Kubernetes support built into Docker itself.


I am getting issue as "The connection to the server 10.0.48.115:6443 was refused - did you specify the right host or port?"

kubectl version

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 10.0.48.115:6443 was refused - did you specify the right host or port?

Can anyone help with this? Appreciated.

Deleted the old config from ~/.kube and then restarted docker (for macos) and it rebuilt the config folder. All good now when I do 'kubectl get nodes'.

Delete the minikube VM and its config files, then reinstall minikube (v0.25.2); other versions may have pitfalls.

$ minikube delete
$ rm -rf ~/.minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Use below command. It worked for me.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Thanks! this worked!

In my case, I had rebooted the Kubernetes master node, and on restart the swap partition was re-enabled by default.

  1. sudo systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 90-local-extras.conf
   Active: activating (auto-restart) (Result: exit-code) since 금 2018-04-20 15:27:00 KST; 6s ago
     Docs: http://kubernetes.io/docs/
  Process: 17247 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 17247 (code=exited, status=255)
  2. sudo swapon -s
Filename    type        size    Used    priority
/dev/sda6   partition   950267  3580    -1
  3. sudo swapoff /dev/sda6

  4. sudo systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-01-14 08:28:56 -05; 15min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 7018 (kubelet)
    Tasks: 25 (limit: 3319)
   CGroup: /system.slice/kubelet.service
           └─7018 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes
  5. kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   47h   v1.13.2
k8snode1    Ready    <none>   45h   v1.13.2
k8snode2    Ready    <none>   45h   v1.13.2
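(Note that swapoff only lasts until the next reboot; to keep kubelet happy permanently you also want to comment the swap entry out of /etc/fstab. A sketch, demonstrated on a scratch file; on a real node you would run the sed against /etc/fstab with sudo, after sudo swapoff -a:)

```shell
# swapoff is temporary; commenting swap out of /etc/fstab keeps it off
# across reboots. Demonstrated here on a scratch file for safety; on a
# real node, target /etc/fstab (with sudo) after `sudo swapoff -a`.
printf '%s\n' \
  'UUID=abcd / ext4 errors=remount-ro 0 1' \
  '/dev/sda6 none swap sw 0 0' > /tmp/fstab.demo
sed -i '/\bswap\b/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```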

Not having run this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

caused the problem.

ip route add default via xxx.xxx.xxx.xxx on k8s master

$ kubectl apply -f Deployment.yaml
unable to recognize "Deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "Deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.


It works. Quite simple. If you are using desktop software, better find the solution from the preference setting first. haha.

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.


tks

Well, it may sound stupid, but maybe you didn't install minikube to run your cluster locally.

Try reinstalling minikube if you have one, or try using kubectl proxy --port=8080.

Ok, on docker for Mac (v 2.0.5.0) there are TWO settings that both need to be toggled.


Make sure all the containers are removed:

docker rm -f $(docker ps -aq)

After you make sure all the containers have been removed, restart kubelet

systemctl restart kubelet

[mayuchau@cg-id .kube]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I am getting above error. I tried above mentioned solutions but it didn't work for me.


Issue resolved after verifying the permissions of /var/run/docker.sock on the master node.

Here is how I resolved it:

  1. Make sure kubectl is installed. Check it using:
    gcloud components list
    If not, install kubectl first.

  2. Go to your project's Kubernetes engine console on gcloud platform.

  3. There, connect to the cluster in which your project resides. It will give you a command that you have to run in your local command prompt/terminal. For example, it will look like:

gcloud container clusters get-credentials <Cluster_Name> --zone <Zone> --project <Project_Id>

After a successful run of this command you would be able to run:
kubectl get nodes


Thanks!!!
This reminded me that I didn't have an export in my ~/.bashrc for the KUBECONFIG variable.
Adding that fixed my issue!

E.g.:
### ADD in ~/.bashrc
export KUBECONFIG=$HOME/.kube/eksctl/clusters/serv-eks-dev
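(Worth knowing: KUBECONFIG can hold several paths separated by ':', and kubectl merges them, so a per-cluster file like the eksctl one above can coexist with ~/.kube/config. Both paths in the sketch below are illustrative:)

```shell
# KUBECONFIG accepts several ':'-separated files and kubectl merges them,
# so a per-cluster config can coexist with the default ~/.kube/config.
# Both paths here are illustrative - substitute your own.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/eksctl/clusters/serv-eks-dev"
echo "$KUBECONFIG"
```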

One possible cause of this problem: the current context in the kube config was deleted by some tool, and no current context remains.

check with:

kubectl config get-contexts

and if there is no current context, make one current with:

kubectl config use-context <context name>
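(If kubectl itself is unusable, you can also spot this condition by inspecting the kubeconfig file directly; the giveaway is an empty current-context. A sketch against a demo file, path hypothetical:)

```shell
# A kubeconfig whose current-context was cleared; kubectl then falls back
# to localhost:8080 and reports exactly this "connection refused" error.
# Demo file only - a real one lives at ~/.kube/config.
printf '%s\n' 'apiVersion: v1' 'kind: Config' 'current-context: ""' > /tmp/kubeconfig.demo
if grep -q 'current-context: ""' /tmp/kubeconfig.demo; then
  echo "no current context set"
fi
```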

I faced similar issue which was resolved with
export KUBECONFIG=/etc/kubernetes/admin.conf

If it helps anyone (I came here via a Google search on the error): my Docker Desktop for Mac had Kubernetes disabled by default. Ticking Enable Kubernetes and then Apply & Restart sorted out the error.

On macOS: I am running Kubernetes locally via Docker, specifically https://k3d.io/. Post-installation, once the cluster is created, executing kubectl cluster-info returns:
The connection to the server 0.0.0.0:51939 was refused - did you specify the right host or port? Does anyone have any pointers on this issue?

PS: Docker and docker-machine were installed via Homebrew

What does kubectl config get-contexts return @navkmurthy?

navkmurthy$ k3d cluster create -p 5432:30080@agent[0] -p 9082:30081@agent[0] --agents 3 --update-default-kubeconfig
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating node 'k3d-k3s-default-agent-0'
INFO[0002] Creating node 'k3d-k3s-default-agent-1'
INFO[0003] Creating node 'k3d-k3s-default-agent-2'
INFO[0004] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0024] Cluster 'k3s-default' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info
navkmurthy$ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 0.0.0.0:53706 was refused - did you specify the right host or port?

@paulmwatson

navkmurthy$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE

navkmurthy$
