helm init --service-account tiller
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
Output of `helm version`:
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller
Output of `kubectl version`:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.):
AWS / Kops
What's the output of `kubectl -n kube-system get pods`?
`helm init` only checks that the deployment manifest was submitted to Kubernetes. If you want to check whether tiller is live and ready, use `helm init --wait`. :)
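To make "live and ready" concrete, the tiller pod's STATUS can be read straight off `kubectl get pods`. A minimal sketch; the pod name suffix `-abcde` is made up, and the here-doc merely stands in for live `kubectl -n kube-system get pods` output:

```shell
pods=$(cat <<'EOF'
tiller-deploy-55bfddb486-abcde   1/1   Running   0   5m
kube-dns-86f4d74b45-t8pq8        3/3   Running   0   11m
EOF
)
# Print the STATUS column (3rd field) of the tiller pod's row.
status=$(printf '%s\n' "$pods" | awk '/^tiller-deploy/ {print $3}')
echo "$status"   # Running
```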
I'm getting the `Error: could not find tiller` message too, using Kubernetes under Docker for Desktop (Mac).
helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller
Running `kubectl -n kube-system get pods` on context `docker-for-desktop` gives me:
etcd-docker-for-desktop 1/1 Running 1 8m
kube-apiserver-docker-for-desktop 1/1 Running 1 8m
kube-controller-manager-docker-for-desktop 1/1 Running 1 8m
kube-dns-86f4d74b45-t8pq8 3/3 Running 0 11m
kube-proxy-d6c4q 1/1 Running 0 9m
kube-scheduler-docker-for-desktop 1/1 Running 1 8m
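Note that the listing above contains no tiller pod at all, which is exactly why `helm version` has nothing to connect to. A sketch of distinguishing "tiller missing" from "tiller broken"; the here-doc stands in for live `kubectl -n kube-system get pods` output, copied from the listing above:

```shell
pods=$(cat <<'EOF'
etcd-docker-for-desktop                      1/1   Running   1   8m
kube-dns-86f4d74b45-t8pq8                    3/3   Running   0   11m
kube-proxy-d6c4q                             1/1   Running   0   9m
EOF
)
# If no line starts with "tiller-deploy", the deployment/replica set is the
# next thing to inspect rather than the pod itself.
if printf '%s\n' "$pods" | grep -q '^tiller-deploy'; then
  echo "tiller pod exists: check its STATUS column and events"
else
  echo "no tiller pod: describe the tiller-deploy deployment and replica set"
fi
```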
$ helm init --wait
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Just hangs... I hit ctrl-c after 2 minutes
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-59558d567-6qgbv 1/1 Running 0 7d
coredns-59558d567-s6w7t 1/1 Running 0 7d
dns-controller-b76dfc754-f9vlj 1/1 Running 0 7d
etcd-server-events-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 3 7d
etcd-server-events-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 0 7d
etcd-server-events-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 0 7d
etcd-server-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 3 7d
etcd-server-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 0 7d
etcd-server-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 0 7d
kube-apiserver-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 1 7d
kube-apiserver-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 1 7d
kube-apiserver-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 1 7d
kube-controller-manager-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 0 7d
kube-controller-manager-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 0 7d
kube-controller-manager-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-1-103.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-2-175.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-3-115.us-west-2.compute.internal 1/1 Running 0 7d
kube-proxy-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 0 7d
kube-scheduler-ip-10-132-1-49.us-west-2.compute.internal 1/1 Running 0 7d
kube-scheduler-ip-10-132-2-171.us-west-2.compute.internal 1/1 Running 0 7d
kube-scheduler-ip-10-132-3-80.us-west-2.compute.internal 1/1 Running 0 7d
Interesting. What about `kubectl -n kube-system get deployments`? Maybe there's something wrong where new pods aren't getting scheduled. Check the status of that deployment and see if something's up.
If I run `helm init --wait` on my simple Docker for Desktop k8s setup, it just hangs with no output.
$ helm init --wait
$HELM_HOME has been configured at ~/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Running `kubectl -n kube-system get deployments` gives:
kubectl -n kube-system get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-dns 1 1 1 1 10h
tiller-deploy 1 0 0 0 10h
$ kubectl -n kube-system get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
coredns 2 2 2 2 7d
dns-controller 1 1 1 1 7d
tiller-deploy 1 0 0 0 7d
Sorry about this. Can you both try `kubectl -n kube-system describe deployment tiller-deploy`? You'll likely get more information on why a pod is not being scheduled. If not, you can try debugging the replica set that the Kubernetes deployment deployed (hehe :smile:).
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-replication-controllers
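The drill-down can also be scripted. A sketch, assuming the replica set name appears in a `NewReplicaSet:` line as shown in this thread; the here-doc stands in for live `kubectl -n kube-system describe deployment tiller-deploy` output:

```shell
desc=$(cat <<'EOF'
NewReplicaSet:   tiller-deploy-55bfddb486 (0/1 replicas created)
EOF
)
# Extract the replica set name from the "NewReplicaSet:" line.
rs=$(printf '%s\n' "$desc" | sed -n 's/^NewReplicaSet:[[:space:]]*\([^ ]*\).*/\1/p')
echo "$rs"   # tiller-deploy-55bfddb486
# On a live cluster you would then run:
#   kubectl -n kube-system describe replicaset "$rs"
```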
`kubectl -n kube-system describe deployment tiller-deploy` returns:
kubectl -n kube-system describe deployment tiller-deploy
Name: tiller-deploy
Namespace: kube-system
CreationTimestamp: Tue, 25 Sep 2018 23:36:14 +0100
Labels: app=helm
name=tiller
Annotations: deployment.kubernetes.io/revision=2
Selector: app=helm,name=tiller
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=helm
name=tiller
Service Account: tiller
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.10.0
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
Command:
/tiller
--listen=localhost:44134
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
ReplicaFailure True FailedCreate
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created)
Events: <none>
And the replica set? Basically, go down the list in that doc and see if you find anything useful.
Thanks! I got the same error. Upon describing the replica set, it gave the error that I had not created the service account. I deleted the tiller deployment, created the service account, then reran, and it worked.
Closing as a cluster issue, not a helm issue.
The Kubernetes cluster works great, I have numerous services running under it. What doesn't work is helm/tiller.
I used `kubectl -n kube-system delete deployment tiller-deploy` and `kubectl -n kube-system delete service/tiller-deploy`. Then `helm --init` worked. I was missing removing the service previously.
@mabushey's solution works!
@mabushey's solution works, but with `helm init` instead of `helm --init`.
I came across @psychemedia's issue as well. After running `kubectl -n kube-system describe deployment tiller-deploy` I had the same output. And if you read @psychemedia's output carefully, it says:
...
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
ReplicaFailure True FailedCreate
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created)
Events: <none>
The important bit is `ReplicaFailure True FailedCreate` and the following `NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created)`.
To find the problem, he should have run
`kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486`
(or just `kubectl describe replicaset tiller-deploy-55bfddb486`, depending on whether the namespace is set... you can find it by listing all _replicasets_ with `kubectl get replicaset --all-namespaces`).
The reason why the _replicaset_ wasn't created should have been listed there under `Events:`.
I actually had the same issue running in a different namespace than `kube-system`.
See https://github.com/helm/helm/issues/3304#issuecomment-468997006
NOTICE: This ticket shouldn't be closed, as there is no published solution to this issue, just a selfish overstatement that the few members of this thread deduced from the ReplicaFailure status and acknowledged tacitly to each other, but never provided explicitly in the log. No reproduction/solution steps were published.
This issue was originally closed because there was no steps provided to reproduce the original issue. @mabushey's solution in https://github.com/helm/helm/issues/4685#issuecomment-433209134 appears to fix the issues he was having with his cluster, but without a series of steps to reproduce the issue, we cannot identify what causes this situation to occur in the first place, and therefore we closed it as a solved support ticket with no actionable resolution.
It's been 6 months since this issue was opened so I doubt we'll be able to figure out the exact steps to reproduce @mabushey and @psychemedia's environment. However, If you can reliably reproduce the issue, please feel free to respond here with your steps so we can better understand how this bug occurs and provide a better solution (or better yet, identify a fix to address the issue). We can then re-open this issue to determine if a patch can be provided.
If you're continuing to have issues and @mabushey's solution in https://github.com/helm/helm/issues/4685#issuecomment-433209134 does not work for you, please open a new support ticket referencing this issue.
@bacongobbler
The problem occurs when Tiller is created without a proper service account. This happens for two reasons: (a) the `helm init` script does not create one, as it certainly should; (b) the namespace in question mismatches an existing service account definition.
To get around it, you must first run `helm delete` and then create an rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
If you require a different namespace, make sure it matches your later Tiller installation and that the cluster-admin role exists (it usually does!).
Then
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller --history-max 200
And you're good to go.
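As a rough sanity check afterwards (a sketch only): the AVAILABLE column for tiller-deploy should read 1 once the pod is scheduled. The here-doc below stands in for live `kubectl -n kube-system get deployments` output, copied from the broken state shown earlier in this thread:

```shell
deploys=$(cat <<'EOF'
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tiller-deploy   1         0         0            0           7d
EOF
)
# Pick the AVAILABLE column (5th field) of the tiller-deploy row.
avail=$(printf '%s\n' "$deploys" | awk '$1 == "tiller-deploy" {print $5}')
echo "$avail"   # 0 here means tiller never came up; describe the replica set next
```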
So I tried to create the service account as described by @datascienceteam01 -- that succeeded. Then did the `helm init --service-account` (etc.) -- that seemed to succeed too.
But the deployment seems to just... spin. No events, notably:
$ kubectl -n kube-system describe deployment tiller-deploy
Name: tiller-deploy
Namespace: kube-system
CreationTimestamp: Sun, 28 Apr 2019 10:26:24 -0700
Labels: app=helm
name=tiller
Annotations: <none>
Selector: app=helm,name=tiller
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=helm
name=tiller
Service Account: tiller
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.13.1
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 200
Mounts: <none>
Volumes: <none>
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>
Sorry to resurrect a dead thread, and the symptoms look a little different. My config is kind of a frankenconfig: Docker Desktop running on Windows 10, helm installed under the Ubuntu shell (Windows Subsystem for Linux). Kubernetes seems happy; I can do all the usual kubectl stuff. I'm just having some problems getting `helm init` to work.
Any thoughts on how to troubleshoot?
I'm going to try the helm init under windows (if I can figure out how to install helm in windows!) if I can't figure it out under the Ubuntu bash shell, but I'd really like to make sure it's working under the Linux shell, because that's my "real" dev environment.
Also, sorry for deving on Windows. Right now, at least, I have no other options :)
This issue has been a pain for a long time. I'm thinking of moving off Helm. Sigh! Every time, my pipelines fail because of this.
@Tenseiga Could you please elaborate on the issue so that we can help you? Maybe give us info like `helm version` output, `kubectl version` output, and also check anything relating to the tiller pod logs, tiller deployment, and tiller replica set using `kubectl describe`. We will try our best to fix it!
@tomcanham you could help here too, to reproduce the issue, if you are still facing it!
Any update on this?
`helm init` (without any additional options) was the ticket for me; it installed/set up tiller. All is well after that.
This issue happens to me every time I try to switch from cluster to cluster using `config set-context`: Kubernetes changes context just fine, but helm does not; instead it emits `Error: could not find tiller`, and when I try `helm init` I get `Warning: Tiller is already installed in the cluster`.
If I change the context back, helm works again. Not sure if relevant, but the cluster it's working on is PKS and the one it's not working on is EKS.
After a LOT of beating my head against a wall, I figured out why I was seeing this issue... According to the AWS documentation for EKS HERE, you set the TILLER_NAMESPACE environment variable to `tiller`. This was causing the helm binary to deploy tiller into the `tiller` namespace (go figure).
After unsetting that variable and re-deploying, all was well...
You can also override those settings with command line args documented HERE
HTH
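The behavior described above can be illustrated with plain shell parameter expansion. This is only a sketch of the fallback; helm's actual lookup is in Go, but the precedence matches: an exported TILLER_NAMESPACE wins, otherwise kube-system is used.

```shell
# With the variable unset, the default namespace applies.
unset TILLER_NAMESPACE
echo "${TILLER_NAMESPACE:-kube-system}"   # kube-system

# With the variable exported (as the EKS docs suggested), it silently wins.
TILLER_NAMESPACE=tiller
echo "${TILLER_NAMESPACE:-kube-system}"   # tiller
```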
Why is this closed if so many people are having issues with this?
I've opened a question on stackoverflow: https://stackoverflow.com/questions/57906429/helm-init-says-tiller-is-already-on-cluster-but-its-not
Have you all tried:
kubectl apply -f tiller.yaml
helm init --service-account tiller --upgrade
tiller.yaml:
kind: ServiceAccount
apiVersion: v1
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
This is part of my `up.sh` script for starting my dev cluster from scratch. The `--upgrade` flag was necessary to allow it to be executed multiple times. I believe the original error about not being able to find tiller is related to it being installed but the `tiller-deploy-*` pod not being found in `kube-system`.
Worked for me by following https://helm.sh/docs/using_helm/#tiller-and-role-based-access-control
Just create the yaml and run the command
The point is, the error is misleading. THAT is the issue in my eyes.
I get the below error:
kubeflow@masternode:~$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /home/kubeflow/.helm.
Error: error installing: the server could not find the requested resource
kubeflow@masternode:~$
Appreciate any help.
kubectl -n kube-system delete deployment tiller-deploy
It works, but with a small change: use `helm init`.