Kubernetes: 1.6.1 The connection to the server localhost:8080 was refused

Created on 19 Apr 2017  ·  48 Comments  ·  Source: kubernetes/kubernetes

Kubernetes version v1.6.1

Environment:

  • arm64 cavium thunder x:
  • Ubuntu 16.04.2 LTS
  • 4.4.0-72-generic

What happened:
Initialized Kubernetes with
kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16
then tried
kubectl taint nodes --all node-role.kubernetes.io/master-
and got this:
The connection to the server localhost:8080 was refused - did you specify the right host or port?

or this

# kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

or

# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/arm64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Most helpful comment

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
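
For reference, newer kubeadm releases print equivalent steps that place the kubeconfig under $HOME/.kube/config, which kubectl picks up without exporting KUBECONFIG:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config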

All 48 comments

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Great, thanks. It worked.

I reproduced the same error when doing the Udacity tutorial Scalable Microservices with Kubernetes (https://classroom.udacity.com/courses/ud615), at the "Using Kubernetes" point in Part 3 of the lesson.

Launch a Single Instance:

kubectl run nginx --image=nginx:1.10.0

Error:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

How I resolved the error (a gcloud command sketch follows these steps):

Log in to Google Cloud Platform

Navigate to Container Engine (Google Cloud Platform, Container Engine)

Click CONNECT on the cluster

Use the login credentials to access cluster [NAME] in your terminal

Proceeded with work!
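
For reference, the CONNECT button essentially hands you a gcloud command of this form (the cluster name, zone, and project below are placeholders):

    # Fetch cluster credentials and write them into ~/.kube/config
    gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project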

On trying the command
kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"

I am getting the following error. What could be the reason?

_The connection to the server localhost:8080 was refused - did you specify the right host or port?_

The solution from @csarora worked for me.

Hi,
I'm getting this error; can anyone help me with it?
kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/arm64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I didn't have admin.conf
Did I miss something?

admin.conf must come out of thin air.

DC/OS works much better out of the box; this is as painful as any cloud console. Yuck!

Until 1.8, kubelet.conf worked for me @Rukeith @jeffhoffman13;
now it is missing for some reason...

I'm having this issue after installing via gcloud on Travis CI.

I'm having this issue. I can't find admin.conf.

I can't find admin.conf. What should I do now? Please help me.

@kensupermen @MSKPV @Rukeith

The admin.conf is generated when you run the init command and not the join command.

Try running with sudo:
sudo kubectl ...

@italojs as I said... the admin.conf is generated when you run the init command and not the join command, at least when I messed with it. You can type as much sudo as you want; joining a cluster won't generate the admin.conf.
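
If you still want to run kubectl from a node that only ran kubeadm join, one common workaround is to copy the master's admin.conf over (this sketch assumes root SSH access to the master; "k8s-master" is a placeholder hostname):

    # Run on the joined node
    mkdir -p $HOME/.kube
    scp root@k8s-master:/etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config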

Create a .kube folder and, inside it, a symlink to the directory that holds the (k8s-related) YAML files and PEM file, plus the right symlink inside the .kube folder to the right YAML file; that should solve the problem...

If you are using minikube, then try
$ minikube delete
then
$ minikube start

Hello all. I need your help. I have installed kubectl and minikube on my Mac but neither is working.

When I run minikube start I get an error "Segmentation fault: 11"

When I run kubectl get nodes I get an error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

Please can you help me fix this issue?

vim /etc/hosts
127.0.0.1 localhost
modify to:
10.0.0.8 localhost

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Thanks, it worked!

Please help me; while installing a node I am getting this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?

There is a configuration issue: if you have set up Kubernetes as root and are trying to execute kubectl commands as a different user, this error will occur.
To resolve this issue, simply run the commands below:
root@devops:~# cp -r .kube/ /home/ubuntu/

root@devops:~# chown -R ubuntu:ubuntu /home/ubuntu/.kube

root@devops:~# su ubuntu

ubuntu@devops:~$ kubectl get pod -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
cron 1/1 Running 0 2h 10.244.0.97 devops

To those who can't find admin.conf: hopefully this is relevant to your flavor of Linux, but I typically use:
updatedb
locate admin.conf

I was able to find the file this way, hope it helps you as well!
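
If updatedb/locate is not available on your system, a plain find does the same job:

    # Search the whole filesystem for admin.conf
    sudo find / -name admin.conf 2>/dev/null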

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

This should be moved into the docs, no? Missing in the setup, AFAIK.

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

This worked, thanks 💯

I executed these commands right after the following one for generating the certificates and token for adding nodes to this master later on:

kubeadm init --pod-network-cidr=10.244.0.0/16  --apiserver-advertise-address $MASTER_IP

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

This seems like a very interesting track to take to get the connection to the master accepted rather than refused. It could be added to some root manual. Thanks @csarora.

I got this error on kubectl get all because there was no cluster created. After creating a cluster with gcloud container clusters create, the error went away and the kubectl command worked.
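
A minimal sketch of that flow (the cluster name and zone are placeholders; creating the cluster also writes the kubectl credentials):

    gcloud container clusters create my-cluster --zone us-central1-a
    kubectl get all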

http://localhost:8080/

@kopollo

http://localhost:8080/ -> http://localhost.support/:8080 ???

This looks like spam/phishing. Can someone remove this comment?

I used a k8s cluster installed by Rancher, but I did not install kubeadm. How can I generate admin.conf?

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Thanks! That fixed my issue

The problem I encountered is as follows:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Because the slave node is missing the configuration file "config".
Solution:
master:

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf  $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config

slave:
Copy the file from the remote master node and rename it to config:

    mkdir -p $HOME/.kube/
    scp root@master:/etc/kubernetes/admin.conf $HOME/.kube/config

If you're using minikube, then you need to start minikube, and then it will be all right.

$ minikube start

Once it's up and running, check the kubectl version

$ kubectl version

Hope this helps

cp /etc/kubernetes/admin.conf /root/.kube/config

My issue was that I was using the root account; I switched back to a regular user and executed the command, which fixed it.

If you are creating a cluster with more than 1 node (using kubeadm, k8s, ...), @SunHarvey's solution works. The admin.conf file is created only on the master node, because that is where we execute the kubeadm init command. So we have to copy its contents to the slave nodes.

Someone might want to correct this page that directs the install user to check the kubectl install directly... https://kubernetes.io/docs/tasks/tools/install-kubectl/

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

It is still showing the same error when I run kubectl version.

If you are trying to run it on a VM, then:

  1. delete the current minikube profile by running minikube delete
  2. start minikube again with flags minikube start --vm-driver=none

Try checking your /etc/kubernetes/manifests/kube-apiserver.yaml to see whether "insecure-port" is set to 8080; in my case that was the reason.
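
A quick way to check, assuming a kubeadm-style static pod manifest for the API server:

    # Look for the insecure-port flag in the API server manifest
    grep insecure-port /etc/kubernetes/manifests/kube-apiserver.yaml
    # "--insecure-port=0" means the plain HTTP port on 8080 is disabled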

This happened to me because my .kube/config file had wrong indentation (due to manual editing).
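
One quick sanity check for a hand-edited kubeconfig is to let kubectl parse it; indentation errors show up immediately:

    # Prints the merged config, or a parse error if ~/.kube/config is malformed
    kubectl config view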

For example, if you intend to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube installed first; then re-run the commands stated above.

kube documentation about this issue

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

It worked, thanks.

What's the solution to this? I'm getting

chown: cannot access '/home/travis/.kube/config': No such file or directory on Travis when following the instructions listed above...

I am running kubelet in standalone mode.
I manually created my /var/lib/kubelet/config.yaml

$ cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
    anonymous:
        enabled: true
    webhook:
        enabled: false
authorization:
    mode: AlwaysAllow
clusterDNS:
    - 127.0.0.53
clusterDomain: cluster.local
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodPath: /etc/kubernetes/manifests
enableControllerAttachDetach: false

Manually created my static pod file and put it in staticPodPath.
It created the pods as expected.

I did not run "kubeadm init". There is no API server either. Hence no "admin.conf".

Running any kubectl command fails with "The connection to the server localhost:8080 was refused".

But I need to create secrets, which are required to pull images from the registry.

Any tips on how I could accomplish this?

Did you run the commands below after kubeadm init?

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I am getting this error:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo: error while loading shared libraries: libpam.so.0: cannot open shared object file: No such file or directory

Maybe we need to run: minikube start
I had the same error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

root@book:/home/user# su - user
user@book:~$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.2",
    "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
    "gitTreeState": "clean",
    "buildDate": "2020-09-16T13:41:02Z",
    "goVersion": "go1.15",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
The connection to the server localhost:8080 was refused - did you specify the right host or port?


user@book:~$ minikube start
😄  minikube v1.13.1 on Ubuntu 20.04
✨  Automatically selected the virtualbox driver
💿  Downloading VM boot image ...
    > minikube-v1.13.1.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.13.1.iso: 173.91 MiB / 173.91 MiB  100.00% 2.41 MiB p/s 1m12s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.19.2 preload ...
    > preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🔥  Deleting "minikube" in virtualbox ...
🤦  StartHost failed, but will try again: creating host: create: creating: /usr/bin/VBoxManage storagectl minikube --name SATA --add sata --hostiocache on failed:
VBoxManage: error: Storage controller named 'SATA' already exists
VBoxManage: error: Details: code VBOX_E_OBJECT_IN_USE (0x80bb000c), component SessionMachine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "AddStorageController(Bstr(pszCtl).raw(), StorageBus_SATA, ctl.asOutParam())" at line 1078 of file VBoxManageStorageController.cpp

🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" by default


user@book:~$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.2",
    "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
    "gitTreeState": "clean",
    "buildDate": "2020-09-16T13:41:02Z",
    "goVersion": "go1.15",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.2",
    "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
    "gitTreeState": "clean",
    "buildDate": "2020-09-16T13:32:58Z",
    "goVersion": "go1.15",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

Great, thanks. It worked.

Thanks a lot, it worked for me as well.
