Helm: Helm init fails on Kubernetes 1.16.0

Created on 6 Sep 2019  ·  83 Comments  ·  Source: helm/helm

Output of helm version: v2.14.3
Output of kubectl version: client: v1.15.3, server: v1.16.0-rc.1
Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud Kubernetes Service

$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource

$ helm init --debug --service-account tiller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
. 
.
.

Looks like helm is trying to create tiller Deployment with: apiVersion: extensions/v1beta1
According to: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16
that is no longer supported.

bug

Most helpful comment

The following sed works for me:

helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -

The issue with @mattymo's solution (using kubectl patch --local) is that it seems not to work when its input contains multiple resources (here a Deployment and a Service).

All 83 comments

We've avoided updating tiller to apps/v1 in the past due to the complexity of having helm init --upgrade reconcile both extensions/v1beta1 and apps/v1 tiller Deployments. It looks like once we start supporting Kubernetes 1.16.0 we will have to handle that case and migrate to the newer apiVersion going forward.

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector": {"app":"helm","name":"tiller"}}}'

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

Yes, but his crazy sed hacks I can copy & paste, whereas helm init --override "apiVersion"="apps/v1" just does not work. OK, the sed hack does not work either.

The current workaround seems to be something like this:

helm init --output yaml > tiller.yaml
and update the tiller.yaml:

  • change the apiVersion to apps/v1
  • add the selector field
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
....

The following sed works for me:

helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -

The issue with @mattymo's solution (using kubectl patch --local) is that it seems not to work when its input contains multiple resources (here a Deployment and a Service).

Kubernetes 1.16.0 was released yesterday: 9/18/2019.
Helm is broken on this latest Kubernetes release unless the above workaround is used.

When will this issue be fixed, and when will Helm 2.15.0 be released?

If you want to use one less sed :)
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Thanks!

Today I met the same issue, so I changed it myself. I changed the apiVersion to apps/v1 and added the selector part; as of now it performs great. Below is my yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

@jbrette you are my hero! I was struggling with the selector stanza.

Today I met the same issue, so I changed it myself. I changed the apiVersion to apps/v1 and added the selector part; as of now it performs great. Below is my yaml file: [...]

How do I change it? Can you describe it in more detail?

Today I met the same issue, so I changed it myself. I changed the apiVersion to apps/v1 and added the selector part; as of now it performs great. Below is my yaml file: [...]

@gm12367 How do I change it? Can you describe it in more detail?

For example, you can use helm init --service-account tiller --tiller-namespace kube-system --debug to print the YAML-format manifests; the --debug option will do this.

@gm12367 Yes, I can see the printed manifests, but that's just output. So, with what command can I change the output?

@gm12367 I want to change it to apps/v1 and add the selector part

@puww1010 I just redirected the output to a file and then used VIM to change it. The commands below are for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml

If your Go environment is set up and you can't wait until the PR that fixes this issue ([Helm init compatible with Kubernetes 1.16] #6462) is merged, you can always do the following:

Build

mkdir -p ${GOPATH}/src/k8s.io
cd ${GOPATH}/src/k8s.io 
git clone -b kube16 https://github.com/keleustes/helm.git
cd helm
make bootstrap build

Test:

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
/bin/helm init --wait --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.3
Creating /home/xxx/.helm
Creating /home/xxx/.helm/repository
Creating /home/xxx/.helm/repository/cache
Creating /home/xxx/.helm/repository/local
Creating /home/xxx/.helm/plugins
Creating /home/xxx/.helm/starters
Creating /home/xxx/.helm/cache/archive
Creating /home/xxx/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/xxx/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

```bash
kubectl get deployment.apps/tiller-deploy -n kube-system -o yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-09-22T01:01:11Z"
  generation: 1
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "553"
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/tiller-deploy
  uid: 124001ca-6f31-417e-950b-2452ce70f522
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-09-22T01:01:23Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-09-22T01:01:11Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: ReplicaSet "tiller-deploy-568db6b69f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```

@jbrette Still having the same issue after following your instruction

@jbrette Still having the same issue after following your instruction

Looks like you typed "helm" instead of "./bin/helm", so you are using the old version of the binary.

After a successful init you won't be able to install a chart package from the repository until you replace extensions/v1beta1 in it as well.
Here is how to adapt any chart from the repository for k8s v1.16.0.
The example is based on prometheus chart.

git clone https://github.com/helm/charts
cd charts/stable

Replace extensions/v1beta1 with policy/v1beta1 for PodSecurityPolicy:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: policy/v1beta1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+PodSecurityPolicy/ {print FILENAME}' {} +`

NetworkPolicy apiVersion is handled well by _helpers.tpl for those charts where it is used.

Replace extensions/v1beta1 and apps/v1beta2 with apps/v1 in Deployment, StatefulSet, ReplicaSet, and DaemonSet:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`
sed -i 's@apiVersion: apps/v1beta2@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`

Create a new package:

helm package ./prometheus
Successfully packaged chart and saved it to: /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Install it:
helm install /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Based on https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

P.S. For some charts with dependencies you might need to use helm dependency update and replace the dependent tgz files with patched ones, if applicable.
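
For that dependency case, a rough sketch (using prometheus as the example chart, as above; the paths are illustrative) might look like this:

helm dependency update ./prometheus   # pulls the dependency .tgz files into ./prometheus/charts/
# If a pulled dependency still ships the removed apiVersions, unpack it, apply the same sed
# replacements, re-run `helm package` on it, and drop the patched .tgz back into ./prometheus/charts/.
helm package ./prometheus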

Getting the same error when running helm init --history-max 200

Output:

$HELM_HOME has been configured at /Users/neil/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller

This branch works: https://github.com/keleustes/helm/tree/kube16. You can build the branch yourself. I also uploaded the binary here for your convenience: https://s3-us-west-2.amazonaws.com/bin.cryptexlabs.com/github.com/keleustes/helm/kube16/darwin/helm. One caveat: you have to use the canary image flag (helm init --canary-image) since the build is unreleased.

You should not need the canary image to make this work. I would suggest using helm init --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.3 as @jbrette mentioned earlier if you want to try out #6462.

I'd recommend users try one of the workarounds provided earlier before trying out a PR anyway; that way, they can continue to use Helm 2.14.3 instead of a custom dev branch that's still under review.

When I run the command it deploys, but after that I can't see it in the pods and it says Error from server (NotFound): pods "tiller-deploy" not found

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

deployment.apps/tiller-deploy created
service/tiller-deploy created

But when I do kubectl get pods --all-namespaces I can't see the pod:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-559hw 1/1 Running 0 23h
kube-system coredns-5644d7b6d9-xcmjd 1/1 Running 0 23h
kube-system etcd-fmp 1/1 Running 0 24h
kube-system kube-apiserver-fmp 1/1 Running 0 24h
kube-system kube-controller-manager-fmp 1/1 Running 1 24h
kube-system kube-flannel-ds-amd64-ffx2g 1/1 Running 0 23h
kube-system kube-proxy-lfvrz 1/1 Running 0 24h
kube-system kube-scheduler-fmp 1/1 Running 0 24h

kubectl get all --all-namespaces | grep tiller
kube-system service/tiller-deploy ClusterIP xxx 44134/TCP 2m52s
kube-system deployment.apps/tiller-deploy 0/1 0 0 2m54s
kube-system replicaset.apps/tiller-deploy-77855d9dcf 1 0 0 2m54s

@DanielIvaylov I think you don't have the tiller service account. Please create it, and then the deployment will create the tiller pod too.

Thanks!

@DanielIvaylov I think you don't have the tiller service account. Please create it, and then the deployment will create the tiller pod too.

Thanks!

Sorry, I'm new; I thought that this was going to start it. How do I start the tiller service account?

@DanielIvaylov

kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
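
If you would rather keep this as a manifest than as two imperative commands, a minimal equivalent sketch (same kube-system namespace and cluster-admin binding as above) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF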

I added the below flag to the API server (/etc/kubernetes/manifests/kube-apiserver.yaml), which temporarily re-enabled those deprecated APIs.

--runtime-config=apps/v1beta1=true,apps/v1beta2=true,extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true

This fixed helm v2.

For people on Windows, we were able to install/upgrade tiller via PowerShell like so:

$(helm init --output yaml) -replace "extensions/v1beta1","apps/v1"

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector": {"app":"helm","name":"tiller"}}}'

For me the kubectl patch is hanging, and there are no messages in the /var/log/syslog file.

The service accounts already exist:

kubeflow@masternode:~$ kubectl --namespace kube-system create sa tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
kubeflow@masternode:~$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller" already exists

Can you please advise?

Executing the below:

export PATH=$PATH:/usr/local/bin
which helm
which tiller
helm install \
--name nfs-client-provisioner \
--set nfs.server=10.0.0.4 \
--set nfs.path=/nfsroot \
--set storageClass.name=nfs \
--set storageClass.defaultClass=true \
stable/nfs-client-provisioner

returns:

/usr/local/bin/helm
/usr/local/bin/tiller
Error: could not find tiller

I'd appreciate any help, as this is now a showstopper.

@cyrilthank It seems the tiller error is because there is no tiller deployment running; try running this command to install tiller:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

helm version -s should return the server (tiller) version if it's up and running properly.

Thank you, sir.
You have been helpful in enabling us to proceed with kubeflow to the next step.

OK, now I think I've got this issue
https://github.com/kubeflow/kubeflow/issues/4184
back as a blocker.

I'd appreciate it very much if you could help with advice on how I may get some help on https://github.com/kubeflow/kubeflow/issues/4184

@cyrilthank try the steps provided above: https://github.com/helm/helm/issues/6374#issuecomment-533853888
You need to replace the deprecated api versions with the new ones

kubeflow_workaround_and_error_traces.txt

Thank you, sir, for your patient replies, and especially for keeping this issue open.

Sorry about this, but it looks like I am doing something wrong in the workaround steps.

I'd appreciate it if you could review the steps in the attached traces and advise me on what I am doing wrong.

@cyrilthank you just need to run the sed commands against your kubeflow yamls to replace the old api extension with the new one (no need to deploy prometheus at all 😆). Sorry if I didn't express myself well enough.
The fix is basically replacing extensions/v1beta1 with apps/v1 in your kubeflow deployment yamls.

Ah, so I did a dumb copy :(

my KFAPP=/nfsroot/kf-poc

I still seem to be getting a few messages and the final error.

Can you please help, as I depend on you now to move to the next step on kubeflow.

kubeflow_workaround_sed_and_error_traces.txt

@cyrilthank you just need to run the sed commands against your kubeflow yamls to replace the old api extension with the new one (no need to deploy prometheus at all 😆). Sorry if I didn't express myself well enough.
The fix is basically replacing extensions/v1beta1 with apps/v1 in your kubeflow deployment yamls.

Also replace apps/v1beta2 with apps/v1.

https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
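
A rough sketch of that bulk replacement, with ${KFAPP} standing for your kubeflow app directory (GNU sed assumed). Note that a blanket replace is only correct for Deployment/DaemonSet/ReplicaSet/StatefulSet; Ingress, NetworkPolicy and PodSecurityPolicy move to different API groups, as described in the chart-patching comment above:

find "${KFAPP}" \( -name '*.yaml' -o -name '*.yml' \) -exec sed -i \
  -e 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@g' \
  -e 's@apiVersion: apps/v1beta2@apiVersion: apps/v1@g' {} +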

Thanks much @uniuuu for your help
Can you please advise where/how to obtain the yaml files referenced in https://github.com/helm/helm/files/3662328/kubeflow_workaround_sed_and_error_traces.txt

https://github.com/helm/helm/issues/6374#issuecomment-533840097
https://github.com/helm/helm/issues/6374#issuecomment-533185074

I am requesting this since, after making the sed changes, we still encountered the errors referenced above.

Can you please advise if the step

kubectl convert -f <file> --output-version <group>/<version>

needs to be executed for each and every yaml file in the KFAPP location, including .kache

If you have applied the workaround mentioned above when working with helm init and still get the following error when trying things like helm version, it's because the tiller deployment cannot be found.

Error: could not find tiller

You need to run kubectl get events --all-namespaces | grep -i tiller to know why it's not ready.

For example, my issue is simply as below, because I don't need serviceaccount "tiller" with microk8s.

microk8s.kubectl get events --all-namespaces | grep -i tiller
kube-system    23m         Warning   FailedCreate                   replicaset/tiller-deploy-77855d9dcf            Error creating: pods "tiller-deploy-77855d9dcf-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found

So I did the workaround without the service account:

- helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
+ helm init spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

@cyrilthank I removed your comment because it isn't relevant to the discussion involved here. Please continue to follow up in kubeflow/kubeflow#4184. Thanks!

helm init spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Slight correction:

helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

+1

@puww1010 I just redirected the output to a file and then used VIM to change it. The commands below are for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml

I tried doing this. After editing the file in VIM, I use the kubectl apply command, but it doesn't seem to do anything. When I run helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml again, or helm init --output yaml, the changes haven't been applied. Has anyone else experienced this?

If you want to use one less sed :)
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Thanks!

I just upgraded our k8s, faced this problem, and used the above solution. It creates the deployment, but the replicaset fails, and this is what I get from kubectl describe -n kube-system replicasets.apps tiller-deploy-77855d9dcf:

Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  41s (x14 over 82s)  replicaset-controller  Error creating: pods "tiller-deploy-77855d9dcf-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found

Where can I find a yaml file to create that serviceaccount?

@DanielIvaylov

kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

This solved my problem!

#6462 has been merged and will be available in the next release (2.15.0). For now, feel free to use the workarounds provided above or use the canary release.

Thanks everyone!

The canary image still produces the same error; perhaps it doesn't have this merge yet.

@puww1010 I just redirected the output to a file and then used VIM to change it. The commands below are for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml

I tried doing this. After editing the file in VIM, I use the kubectl apply command, but it doesn't seem to do anything. When I run helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml again, or helm init --output yaml, the changes haven't been applied. Has anyone else experienced this?

Yeah, happening for me as well.

You might need to change the image location to gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.3; the gcr.io location seems to be blocked.

You might need to change the image location to gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.3; the gcr.io location seems to be blocked.

While a completely valid issue, that one is slightly orthogonal to the issue discussed here: gcr.io is blocked in China, unfortunately. See https://github.com/helm/charts/issues/14607 for more information.

We were able to fix this issue by rolling back the Kubernetes version to 1.15.4.

Thanks @UmamaheshMaxwell for sharing this.

Can you please share the steps you used to roll back the Kubernetes version?

@cyrilthank if it's minikube, minikube config set kubernetes-version v1.15.4

Thanks @UmamaheshMaxwell for sharing this.

Can you please share the steps you used to rollback the kubernetes version?

@cyrilthank we have been using our own VMs (Ubuntu 18+); below are the steps to install k8s version 1.15.4.

  1. kubeadm reset
  2. sudo apt-get install kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
  3. sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --apiserver-advertise-address=x.x.x.x --apiserver-cert-extra-sans=x.x.x.x --kubernetes-version "1.15.4"

--pod-network-cidr=10.244.10.0/16 - flannel
--apiserver-advertise-address=x.x.x.x - private IP of your VM (Master)
--apiserver-cert-extra-sans=x.x.x.x - public IP of your VM (Master). (This is required if you are trying to access your Master from your local machine.)

Note: Follow the below link to set up a kubeconfig file for a self-hosted Kubernetes cluster (http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/)

Let me know if you still have any questions.

@cyrilthank if it's minikube, minikube config set kubernetes-version v1.15.4

Thanks @MrSimonEmms, mine is not minikube; I think I will have to go with @UmamaheshMaxwell's steps.

Thanks @UmamaheshMaxwell for sharing this.
Can you please share the steps you used to rollback the kubernetes version?

@cyrilthank we have been using our own VMs (Ubuntu 18+); below are the steps to install k8s version 1.15.4.

kubeadm reset
sudo apt-get install kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --apiserver-advertise-address=x.x.x.x --apiserver-cert-extra-sans=x.x.x.x --kubernetes-version "1.15.4"

--pod-network-cidr=10.244.10.0/16 - flannel
--apiserver-advertise-address=x.x.x.x - private IP of your VM ( Master)
--apiserver-cert-extra-sans=x.x.x.x - Public IP of your VM ( Master) ( This is required, if you are trying to access your Master from your local machine.
Note: Follow the below link to set up a kubeconfig file for a self-hosted Kubernetes cluster (http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/)
Let me know if you still have any questions.

Thanks @UmamaheshMaxwell for your patient reply.

I have an existing Kubernetes 1.16 setup; can you please confirm whether I can try running these steps on it?

Thanks @UmamaheshMaxwell for sharing this.
Can you please share the steps you used to rollback the kubernetes version?
@cyrilthank we have been using our own VMs (Ubuntu 18+); below are the steps to install k8s version 1.15.4.
kubeadm reset
sudo apt-get install kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --apiserver-advertise-address=x.x.x.x --apiserver-cert-extra-sans=x.x.x.x --kubernetes-version "1.15.4"
--pod-network-cidr=10.244.10.0/16 - flannel
--apiserver-advertise-address=x.x.x.x - private IP of your VM ( Master)
--apiserver-cert-extra-sans=x.x.x.x - Public IP of your VM ( Master) ( This is required, if you are trying to access your Master from your local machine.
Note: Follow the below link to set up a kubeconfig file for a self-hosted Kubernetes cluster (http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/)
Let me know if you still have any questions.

Thanks @UmamaheshMaxwell for your patient reply

I have an existing kubernetes 1.16 setup can you please confirm if I can try running these steps?

Yeah @cyrilthank, we had Kubernetes 1.16.1 too, but we had to roll it back to 1.15.4. Below are the steps if you want to set it up from scratch.

VM's OS Version

Distributor ID: Ubuntu
Description:    Ubuntu 18.04.3 LTS
Release:    18.04
Codename:   bionic

Clean up Kubernetes

kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*   
sudo apt-get autoremove  
sudo rm -rf ~/.kube

Set up Kubernetes (_both Master and Node_):
https://www.howtoforge.com/tutorial/how-to-install-kubernetes-on-ubuntu/
(You'd better automate the steps suggested in the above link as much as you can, to save time.)

Let me know if you still need any further help. Happy journey with the rollback :), I hope it goes smoothly for you.

You might need to change the image location to gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.3; the gcr.io location seems to be blocked.

While a completely valid issue, that one is slightly orthogonal to the issue discussed here: gcr.io is blocked in China, unfortunately. See helm/charts#14607 for more information.

I'm not in China but in the US; I think my VPN blocked that site. Anyway, I followed all the steps outlined in this thread and couldn't get it working until I tried to pull the image manually and saw it was not responding. Just something else to try in case someone else gets stuck at the same spot as me.

I'm also getting the error:

$ helm init
$HELM_HOME has been configured at C:\Users\user\.helm.
Error: error installing: the server could not find the requested resource

I'm trying a solution proposed in this issue, particularly this one. However, after modifying the tiller.yaml file accordingly, I'm not able to update the configuration. I'm trying the following command in order to apply the changes/update the configuration:

$ kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy configured
service/tiller-deploy configured

But then, if I run:

$ helm init --output yaml > tiller2.yaml

The tiller2.yaml file shows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:

Basically, the changes are not reflected. So I assume that I'm not updating the configuration properly. What would be the correct way to do it?


EDIT: I managed to get it running. I'm using Minikube, and in order to get it running, first I downgraded the Kubernetes version to 1.15.4.

minikube delete
minikube start --kubernetes-version=1.15.4

Then, I was using a proxy, so I had to add Minikube's IP to the NO_PROXY list: 192.168.99.101 in my case. See: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

Note: After some further testing, perhaps the downgrade is not necessary, and maybe all I was missing was the NO_PROXY step. I added all 192.168.99.0/24, 192.168.39.0/24 and 10.96.0.0/12 to the NO_PROXY setting and now it seems to work fine.
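
A minimal sketch of that proxy setup (the exact ranges depend on your minikube network; these are the ones mentioned above):

export NO_PROXY="$NO_PROXY,192.168.99.0/24,192.168.39.0/24,10.96.0.0/12"
export no_proxy="$NO_PROXY"
minikube start --kubernetes-version=1.15.4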

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

It worked for me, thank you so much.

As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old API is deprecated and eventually removed.

The v1.16 release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

NetworkPolicy (in the extensions/v1beta1 API group)
    Migrate to use the networking.k8s.io/v1 API, available since v1.8. Existing persisted data can be retrieved/updated via the networking.k8s.io/v1 API.
PodSecurityPolicy (in the extensions/v1beta1 API group)
    Migrate to use the policy/v1beta1 API, available since v1.10. Existing persisted data can be retrieved/updated via the policy/v1beta1 API.
DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the extensions/v1beta1 and apps/v1beta2 API groups)
    Migrate to use the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API.

The v1.20 release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

Ingress (in the extensions/v1beta1 API group)
    Migrate to use the networking.k8s.io/v1beta1 API, serving Ingress since v1.14. Existing persisted data can be retrieved/updated via the networking.k8s.io/v1beta1 API.

What to Do

  • Change YAML files to reference the newer APIs
  • Update custom integrations and controllers to call the newer APIs
  • Update third party tools (ingress controllers, continuous delivery systems) to call the newer APIs

Refer to: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
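
If you want a quick way to spot manifests that still reference the removed API versions, a rough check (GNU grep assumed; the pattern is only a sketch) is:

grep -R --include='*.yaml' --include='*.yml' -nE 'apiVersion: (extensions/v1beta1|apps/v1beta1|apps/v1beta2)' .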

As a helm n00b who is using minikube, I was able to get around this issue by setting a kubernetes version like so:

$ minikube delete
$ minikube start --kubernetes-version=1.15.4

Hope it helps!

@PierreF I used your solution (https://github.com/helm/helm/issues/6374#issuecomment-533186177) with k8s v1.16.1 and helm v2.15.0, and tiller is not working.

Readiness probe failed: Get http://10.238.128.95:44135/readiness: dial tcp 10.238.128.95:44135: connect: connection refused

@joshprzybyszewski-wf I used the following commands:

minikube start --memory=16384 --cpus=4 --kubernetes-version=1.15.4
kubectl create -f istio-1.3.3/install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller
helm install istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
helm install istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system

And now I get:

Error: validation failed: [unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3", unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3", unable to recognize "": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "handler" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "handler" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2"]

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector": {"app":"helm","name":"tiller"}}}'

You missed adding matchLabels under selector.

I was forwarded to @jbrette's solution. This is what I got when I ran it:

error: error parsing STDIN: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

This has been fixed in Helm 2.16.0.

I was forwarded to @jbrette's solution. This is what I got when I ran it:

error: error parsing STDIN: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

Check the yaml files: in most cases the referenced line has {} or [] and still has other things defined under it, which causes the error. In most cases the issue is within values.yaml; otherwise, check the templates section of the chart.

Just a side note on @PierreF's and @mihivagyok's solutions: those did not work for me when I used private helm repos.

$ helm repo add companyrepo https://companyrepo
Error: Couldn't load repositories file (/home/username/.helm/repository/repositories.yaml).

I guess that happens because helm init is not actually run; it just generates the yaml file. I fixed that by running helm init -c as an extra step.
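
A rough sketch of the combined flow (the override/sed one-liner is the same one used earlier in this thread; the repo name and URL are placeholders):

helm init --client-only   # same as helm init -c: sets up $HELM_HOME and repositories.yaml without touching the cluster
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
helm repo add companyrepo https://companyrepo   # private repos work once repositories.yaml exists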

In k8s v1.16.6, the helm init output requires spec.selector, FYI.

current workaround seems to be something like this:

helm init --output yaml > tiller.yaml
and update the tiller.yaml:

  • change to apps/v1
  • add the selector field
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
....

It works. Since Kubernetes changed the apiVersion for Deployment to apps/v1, the one thing that needs to change is that we need to add selector.matchLabels to the spec.

Another workaround can be to use helm 3, which does not use tiller.
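
A rough sketch of that route (the release and chart names are just examples; the stable repo URL is the one shown in the helm init output above):

helm repo add stable https://kubernetes-charts.storage.googleapis.com   # helm 3 ships with no default repo
helm install my-prometheus stable/prometheus                            # no tiller involved; the release name comes first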

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Hi, while trying this I am getting this:

jenkins@jenkin:~/.kube$ helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Command 'kubectl' not found, but can be installed with:

snap install kubectl
Please ask your administrator.

jenkins@jenkin:~/.kube$

Output of helm version: v2.14.3
Output of kubectl version: client: v1.15.3, server: v1.16.0-rc.1
Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud Kubernetes Service

$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource

$ helm init --debug --service-account tiller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
. 
.
.

Looks like helm is trying to create tiller Deployment with: apiVersion: extensions/v1beta1
According to: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16
that is no longer supported.

I am getting this error; how can I solve it?

root@jenkin:~# helm init --service-account tiller
$HELM_HOME has been configured at /root/.helm.
Error: error installing: unknown (post deployments.extensions)
root@jenkin:~#

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector": {"app":"helm","name":"tiller"}}}'

I am getting this error:

jenkins@jenkin:~/.helm$ helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Command 'kubectl' not found, but can be installed with:

snap install kubectl
Please ask your administrator.

jenkins@jenkin:~/.helm$

Workaround, using jq:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl create -f -

Workaround, using jq:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl create -f -

You can't update a resource with kubectl create.

@ikarlashov It's easy enough to replace 'create' with 'apply'. The one-liner above presumes one hasn't tried creating the resources yet.
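
For reference, the same jq one-liner with apply instead of create (assuming the resources may already exist) would be:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl apply -f -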
