Helm: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Created on 12 Nov 2017  ·  30 Comments  ·  Source: helm/helm

When installing a helm package, I got the following error:

[root@k8s-master3 ~]# helm install --name nginx stable/nginx-ingress
Error: release nginx failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Here is my helm version:

[root@k8s-master3 ~]# helm version
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}

And my kubectl version:

[root@k8s-master3 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.1-alicloud", GitCommit:"19408ab2a1b736fe97a9d9cf24c6fb228f23f12f", GitTreeState:"clean", BuildDate:"2017-10-19T04:05:24Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Any help will be appreciated, thanks a lot!

Labels: question, support


All 30 comments

It seems that you have encountered a problem related to privileges.
You could enable RBAC when deploying the chart:

$ helm install --name nginx --set rbac.create=true stable/nginx-ingress

@flyer103

It still does not work.

Same problem here. Enabling rbac does not help.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-10T13:17:12Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ helm install --name my-hdfs-namenode hdfs-namenode-k8s
Error: release my-hdfs-namenode failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Help would really be appreciated!

What you need to do is grant tiller (via the default service account) access to install resources in the default namespace. See https://github.com/kubernetes/helm/blob/master/docs/service_accounts.md
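For reference, a minimal sketch of that grant, binding cluster-admin to the account Tiller is running as in the error above (the binding name here is illustrative, and cluster-admin is far broader than strictly necessary):

kubectl create clusterrolebinding default-tiller-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default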

Hi, @bacongobbler
Thanks for the help. I followed your instructions above and did the following things:
First of all, I reset tiller:

helm reset --force

After doing this, I created an RBAC yaml file:

[root@k8s-master3 ~]# cat rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: default

And then initialized tiller:

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.7.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

However, tiller did not install successfully:

[root@k8s-master3 ~]# helm version
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Error: cannot connect to Tiller

And the deployments in the kube-system namespace look like this:

[root@k8s-master3 ~]# kubectl get deployments --all-namespaces
NAMESPACE     NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ci            jenkins                    1         1         1            1           5d
default       redis-master               1         1         1            0           4d
kube-system   default-http-backend       1         1         1            1           5d
kube-system   heapster                   1         1         1            1           5d
kube-system   kube-dns                   1         1         1            1           5d
kube-system   kubernetes-dashboard       1         1         1            1           5d
kube-system   monitoring-influxdb        1         1         1            1           5d
kube-system   nginx-ingress-controller   1         1         1            1           5d
kube-system   tiller-deploy              1         0         0            0           9m

Any ideas about how to solve this problem?
Thanks in advance!
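A quick way to see why tiller-deploy stays at DESIRED 1 / CURRENT 0 is to inspect the deployment and its ReplicaSet events; the likely cause here is that the ServiceAccount was created in default while the deployment runs in kube-system. A sketch:

# With a missing ServiceAccount, the ReplicaSet events typically report
# something like: serviceaccount "tiller" not found
kubectl -n kube-system describe deploy tiller-deploy
kubectl -n kube-system describe rs -l app=helm,name=tiller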

@noprom try this:

delete the deployment of tiller manually

create this RBAC config for tiller:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

run delete (yes delete) on that rbac config
run create again
then run helm init --upgrade to replace

you should not have any more errors.
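Put together as shell commands, those steps look roughly like this (a sketch; the tiller-rbac.yaml filename is an assumption for wherever the config above is saved):

kubectl -n kube-system delete deployment tiller-deploy

kubectl delete -f tiller-rbac.yaml --ignore-not-found
kubectl create -f tiller-rbac.yaml

# --service-account (used elsewhere in this thread) makes the new
# deployment run as the tiller account
helm init --upgrade --service-account tiller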

@innovia
Great! Thanks, I've solved this problem.
Thanks a lot!

Happy to help :)

@innovia
Fantastic post!😄

Thanks!

The above doesn't work. Still getting:

namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

That's because you don't have permission to deploy tiller; add a service account for it:

kubectl --namespace kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller-cluster-rule \
 --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl --namespace kube-system patch deploy tiller-deploy \
 -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 

Console output:

serviceaccount "tiller" created
clusterrolebinding "tiller-cluster-rule" created
deployment "tiller-deploy" patched

Then run the commands below to verify:

helm list
helm repo update
helm install --name nginx-ingress stable/nginx-ingress
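Note that serviceAccount is the legacy pod spec field; the current field is serviceAccountName, so on newer clusters an equivalent patch would be:

kubectl --namespace kube-system patch deploy tiller-deploy \
 -p '{"spec":{"template":{"spec":{"serviceAccountName":"tiller"}}}}'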

@ykfq Thanks a ton, it works! But every time we deploy on a new cluster, we need to do this? What an inconvenience!

@antran89
If you use the official tiller installation instructions, you'll have to do so:

  • Create a ServiceAccount for tiller
  • Bind a role to the ServiceAccount created above (the cluster-admin role is needed)
  • Make a ClusterRoleBinding for the ServiceAccount
  • Patch the deployment created by helm init

So, there is another way to make it easier: install via a yaml file:

vim tiller.yaml

apiVersion: v1
kind: Service
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
spec:
  ports:
  - name: tiller
    port: 44134
    protocol: TCP
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
  annotations:
    deployment.kubernetes.io/revision: "5"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.8.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: tiller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-cluster-rule
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Then create the resources:

kubectl create -f tiller.yaml

Make sure to check your service.
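A quick verification sketch:

kubectl -n kube-system get svc tiller-deploy
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version   # should report both Client and Server once tiller is up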

The above yaml content was exported from a running cluster, using these commands:

kubectl -n kube-system get svc tiller-deploy -o=yaml
kubectl -n kube-system get deploy tiller-deploy -o=yaml
kubectl -n kube-system get sa tiller -o=yaml
kubectl -n kube-system get clusterrolebinding tiller-cluster-rule -o=yaml

This yaml hasn't been tested yet; if you have any questions, leave a comment.

@ykfq I don't like the idea of giving Tiller full cluster-admin privileges, but nothing else worked for me. I tried following this example, trying to restrict Tiller to acting only in the namespaces I allow.

But always ran into this issue (was deploying Concourse):

Error: release concourse failed: namespaces "concourse" is forbidden: User "system:serviceaccount:tiller-system:tiller-user" cannot get namespaces in the namespace "concourse": Unknown user "system:serviceaccount:tiller-system:tiller-user"

Any ideas on how to make that specific example work? I changed some parameters around; the entire YAML with the RBAC resources was this one:

apiVersion: v1
kind: Namespace
metadata:
  name: tiller-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller-user
  namespace: tiller-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: tiller-system
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: tiller-system
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
  name: concourse
---
apiVersion: v1
kind: Namespace
metadata:
  name: concourse-main
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-role
  namespace: concourse
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-namespace-role
  namespace: concourse
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["namespaces"]
  verbs: ["*"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-main-role
  namespace: concourse-main
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-main-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-main-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-namespace-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-namespace-role
  apiGroup: rbac.authorization.k8s.io

helm init --upgrade --service-account tiller
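One way to debug a restricted setup like this is to impersonate the service account and ask the API server which permissions actually apply (a sketch):

kubectl auth can-i get namespaces --namespace concourse \
  --as system:serviceaccount:tiller-system:tiller-user
kubectl auth can-i create deployments --namespace concourse \
  --as system:serviceaccount:tiller-system:tiller-user

# Also worth checking: the RoleBindings above have no metadata.namespace,
# so kubectl creates them in the current context's namespace rather than
# next to the Roles they reference.
kubectl get rolebindings --all-namespaces | grep tiller-concourse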

@brunoban helm v3 will remove tiller, so from what I understood, permissions will be those of the user who applies the release.

@innovia Oh... I did not know that. Gonna try to get up to speed now then. Thanks!

then run helm init --upgrade to replace

@innovia Where to put the rbac config file?

@cjbottaro did you read the post I wrote, How to set up helm and tiller per namespace?

I don't follow your question, can you please re-explain?

@innovia Nevermind, I figured it out. Just had to run

kubectl create -f tiller.yaml
helm init --upgrade --service-account tiller

this worked for me:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

I'm following the official Helm documentation for "Deploy Tiller in a namespace, restricted to deploying resources only in that namespace". Here is my bash script:

Namespace="$1"

kubectl create namespace "$Namespace"
kubectl create serviceaccount "tiller-$Namespace" --namespace "$Namespace"
kubectl create role "tiller-role-$Namespace" \
    --namespace "$Namespace" \
    --verb='*' \
    --resource='*.,*.apps,*.batch,*.extensions'
kubectl create rolebinding "tiller-rolebinding-$Namespace" \
    --namespace "$Namespace" \
    --role="tiller-role-$Namespace" \
    --serviceaccount="$Namespace:tiller-$Namespace"

Running helm upgrade gives me the following error:

Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

Is there a bug in the official documentation? Have I read it wrong?
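One detail worth double-checking (an assumption, since the full helm init invocation isn't shown): with a namespaced Tiller, both init and every later client call must target that namespace, otherwise helm talks to a Tiller in kube-system running as the default service account, which is what the error above suggests. A sketch, with illustrative release and chart names:

helm init --service-account "tiller-$Namespace" --tiller-namespace "$Namespace"
helm upgrade my-release ./my-chart --tiller-namespace "$Namespace"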

What was the full command for helm init? Can you please open a separate ticket for this?

@bacongobbler Moved issue here https://github.com/helm/helm/issues/4933

The above doesn't work. Still getting:

namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Follow the command below:

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

What you need to do is grant tiller (via the default service account) access to install resources in the default namespace. See https://github.com/kubernetes/helm/blob/master/docs/service_accounts.md

The file name is now rbac.md and the link is at https://github.com/helm/helm/blob/master/docs/rbac.md.

That's because you don't have permission to deploy tiller; add a service account for it:

kubectl --namespace kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller-cluster-rule \
 --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl --namespace kube-system patch deploy tiller-deploy \
 -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 

Console output:

serviceaccount "tiller" created
clusterrolebinding "tiller-cluster-rule" created
deployment "tiller-deploy" patched

Then run the commands below to verify:

helm list
helm repo update
helm install --name nginx-ingress stable/nginx-ingress

It would be great if the tiller installation docs were updated with these precise instructions.
I had the following yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: ""
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

If I'm correct, I was missing the tiller deployment in this yaml?

helm init --upgrade --service-account tiller

The above command fixes this issue; I highly recommend trying this step first :)
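To confirm the deployment was actually switched over, the field can be read back (a sketch):

kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'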

