Helm: Error: UPGRADE FAILED: no resource with the name "anything_goes" found

Created on 19 Dec 2017 · 72 Comments · Source: helm/helm

Hi,

We are constantly hitting a problem that manifests itself as, for example, Error: UPGRADE FAILED: no resource with the name "site-ssl" found. These errors can appear after any innocuous update to a template.
Could you please help me understand the problem? What causes these messages to appear?

I've been unsuccessful in triaging the issue further; it may happen at any time, and I haven't found a pattern yet.

Perhaps there is a problem with how we deploy? helm upgrade hmmmmm /tmp/dapp-helm-chart-20171219-20899-1ppm74grrwrerq --set global.namespace=hmm --set global.env=test --set global.erlang_cookie=ODEzMTBlZjc5ZGY5NzQwYTM3ZDkwMzEx --set global.tests=no --set global.selenium_tests=no --namespace hmm --install --timeout 300

Helm: v2.7.2, v2.6.2; Kubernetes: v1.7.6, v1.8.5. I've tried every possible combination of these four versions; none of them works.

bug

Most helpful comment

Completely removing release from Helm via helm delete release works, but it is not a viable solution.

Why can't Helm just overwrite whatever is currently installed? Aren't we living in a declarative world with Kubernetes?

All 72 comments

Completely removing release from Helm via helm delete release works, but it is not a viable solution.

Why can't Helm just overwrite whatever is currently installed? Aren't we living in a declarative world with Kubernetes?

Just got the same thing... it's quite new for me and seems to be a new issue. Deleting the resource will fix it.
v2.7.2 with Kubernetes 1.7.7.
Pretty sure it worked before...

I had this problem - it was due to a PersistentVolume that I'd created. To resolve it, I deleted the PV and PVC, ran helm upgrade XXX XXX, and it worked fine. Probably still something that should be investigated, as the PV did exist.
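In case it saves someone a search, the cleanup amounted to roughly the following (resource names are hypothetical, XXX being the release/chart placeholders from above):

kubectl delete pvc my-claim        # remove the claim first
kubectl delete pv my-volume        # then the volume it bound to
helm upgrade XXX XXX               # retry the upgrade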

I get the feeling it might be related to a bad PV... but then the error is quite misleading!
Also some weird logs from Tiller... it seems to be working on 2 versions at the same time...

just tried with 2.7.1 with no luck...

[main] 2017/12/21 15:30:48 Starting Tiller v2.7.1 (tls=false)
[main] 2017/12/21 15:30:48 GRPC listening on :44134
[main] 2017/12/21 15:30:48 Probes listening on :44135
[main] 2017/12/21 15:30:48 Storage driver is ConfigMap
[main] 2017/12/21 15:30:48 Max history per release is 0
[tiller] 2017/12/21 15:30:55 preparing update for xxx
[storage] 2017/12/21 15:30:55 getting deployed release from "xxx" history
[tiller] 2017/12/21 15:30:56 copying values from xxx (v65) to new release.
[storage] 2017/12/21 15:30:56 getting last revision of "xxx"
[storage] 2017/12/21 15:30:56 getting release history for "xxx"
[tiller] 2017/12/21 15:30:59 rendering helm-xxx chart using values
2017/12/21 15:30:59 info: manifest "helm-xxx/templates/scheduler-deploy.yaml" is empty. Skipping.
2017/12/21 15:30:59 info: manifest "helm-xxx/templates/recomposer-deploy.yaml" is empty. Skipping.
2017/12/21 15:31:00 info: manifest "helm-xxx/templates/recomposer-pvc.yaml" is empty. Skipping.
2017/12/21 15:31:00 info: manifest "helm-xxx/templates/scheduler-pvc.yaml" is empty. Skipping.
2017/12/21 15:31:00 info: manifest "helm-xxx/templates/scheduler-secret.yaml" is empty. Skipping.
2017/12/21 15:31:00 info: manifest "helm-xxx/templates/recomposer-secret.yaml" is empty. Skipping.
[tiller] 2017/12/21 15:31:09 creating updated release for xxx
[storage] 2017/12/21 15:31:09 creating release "xxx.v80"
[tiller] 2017/12/21 15:31:09 performing update for xxx
[tiller] 2017/12/21 15:31:09 executing 0 pre-upgrade hooks for xxx
[tiller] 2017/12/21 15:31:09 hooks complete for pre-upgrade xxx
[tiller] 2017/12/21 15:31:11 preparing update for xxx
[storage] 2017/12/21 15:31:11 getting deployed release from "xxx" history
[storage] 2017/12/21 15:31:11 getting last revision of "xxx"
[storage] 2017/12/21 15:31:11 getting release history for "xxx"
[tiller] 2017/12/21 15:31:18 rendering helm-xxx chart using values
2017/12/21 15:31:18 info: manifest "helm-xxx/templates/scheduler-secret.yaml" is empty. Skipping.
2017/12/21 15:31:18 info: manifest "helm-xxx/templates/scheduler-pvc.yaml" is empty. Skipping.
2017/12/21 15:31:19 info: manifest "helm-xxx/templates/scheduler-deploy.yaml" is empty. Skipping.
[kube] 2017/12/21 15:31:28 building resources from updated manifest
[tiller] 2017/12/21 15:31:46 creating updated release for xxx
[storage] 2017/12/21 15:31:46 creating release "xxx.v81"
[tiller] 2017/12/21 15:31:47 performing update for xxx
[tiller] 2017/12/21 15:31:47 executing 0 pre-upgrade hooks for xxx
[tiller] 2017/12/21 15:31:47 hooks complete for pre-upgrade xxx
[kube] 2017/12/21 15:31:49 checking 7 resources for changes
[kube] 2017/12/21 15:31:49 Looks like there are no changes for Secret "xxx-helm-xxx-nginx-secret"
[kube] 2017/12/21 15:31:50 Looks like there are no changes for Secret "xxx-application-secret"
[kube] 2017/12/21 15:31:50 Looks like there are no changes for Secret "azure-secret"
[kube] 2017/12/21 15:31:51 Looks like there are no changes for ConfigMap "xxx-helm-xxx-nginx-config"
[kube] 2017/12/21 15:31:51 Looks like there are no changes for ConfigMap "xxx-application-config"
[kube] 2017/12/21 15:31:51 Looks like there are no changes for Service "xxx-application-svc"
[kube] 2017/12/21 15:31:51 Looks like there are no changes for StatefulSet "xxx-application"
[tiller] 2017/12/21 15:31:51 executing 0 post-upgrade hooks for xxx
[tiller] 2017/12/21 15:31:51 hooks complete for post-upgrade xxx
[storage] 2017/12/21 15:31:51 updating release "xxx.v65"
[tiller] 2017/12/21 15:31:51 updating status for updated release for xxx
[storage] 2017/12/21 15:31:51 updating release "xxx.v80"
[kube] 2017/12/21 15:31:57 building resources from updated manifest
[kube] 2017/12/21 15:32:10 checking 11 resources for changes
[kube] 2017/12/21 15:32:10 Looks like there are no changes for Secret "xxx-helm-xxx-nginx-secret"
[tiller] 2017/12/21 15:32:10 warning: Upgrade "xxx" failed: no resource with the name "xxx-recomposer-secret" found
[storage] 2017/12/21 15:32:10 updating release "xxx.v65"
[storage] 2017/12/21 15:32:10 updating release "xxx.v81"

Seems it gets confused doing two releases at the same time...

just reapplied the same config twice...

[tiller] 2017/12/21 15:50:46 preparing update for xxx
[storage] 2017/12/21 15:50:46 getting deployed release from "xxx" history
[storage] 2017/12/21 15:50:46 getting last revision of "xxx"
[storage] 2017/12/21 15:50:46 getting release history for "xxx"
[tiller] 2017/12/21 15:50:50 rendering helm-xxx chart using values
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/scheduler-pvc.yaml" is empty. Skipping.
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/recomposer-deploy.yaml" is empty. Skipping.
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/scheduler-secret.yaml" is empty. Skipping.
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/scheduler-deploy.yaml" is empty. Skipping.
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/recomposer-secret.yaml" is empty. Skipping.
2017/12/21 15:50:50 info: manifest "helm-xxx/templates/recomposer-pvc.yaml" is empty. Skipping.
[tiller] 2017/12/21 15:50:58 creating updated release for xxx
[storage] 2017/12/21 15:50:58 creating release "xxx.v85"
[tiller] 2017/12/21 15:50:59 performing update for xxx
[tiller] 2017/12/21 15:50:59 executing 0 pre-upgrade hooks for xxx
[tiller] 2017/12/21 15:50:59 hooks complete for pre-upgrade xxx
[kube] 2017/12/21 15:51:13 building resources from updated manifest
[kube] 2017/12/21 15:51:22 checking 7 resources for changes
[kube] 2017/12/21 15:51:22 Looks like there are no changes for Secret "xxx-helm-xxx-nginx-secret"
[kube] 2017/12/21 15:51:23 Looks like there are no changes for Secret "xxx-application-secret"
[kube] 2017/12/21 15:51:23 Looks like there are no changes for Secret "azure-secret"
[kube] 2017/12/21 15:51:23 Looks like there are no changes for ConfigMap "xxx-helm-xxx-nginx-config"
[kube] 2017/12/21 15:51:23 Looks like there are no changes for ConfigMap "xxx-application-config"
[kube] 2017/12/21 15:51:24 Looks like there are no changes for Service "xxx-application-svc"
[kube] 2017/12/21 15:51:24 Deleting "xxx-recomposer-secret" in xxx...
[kube] 2017/12/21 15:51:24 Failed to delete "xxx-recomposer-secret", err: secrets "xxx-recomposer-secret" not found
[kube] 2017/12/21 15:51:24 Deleting "xxx-recomposer-config" in xxx...
[kube] 2017/12/21 15:51:24 Failed to delete "xxx-recomposer-config", err: configmaps "xxx-recomposer-config" not found
[kube] 2017/12/21 15:51:24 Deleting "xxx-recomposer-pv" in ...
[kube] 2017/12/21 15:51:24 Failed to delete "xxx-recomposer-pv", err: persistentvolumes "xxx-recomposer-pv" not found
[kube] 2017/12/21 15:51:24 Deleting "xxx-recomposer-pvc" in xxx...
[kube] 2017/12/21 15:51:24 Failed to delete "xxx-recomposer-pvc", err: persistentvolumeclaims "xxx-recomposer-pvc" not found
[kube] 2017/12/21 15:51:24 Deleting "xxx-recomposer" in xxx...
[kube] 2017/12/21 15:51:24 Using reaper for deleting "xxx-recomposer"
[kube] 2017/12/21 15:51:24 Failed to delete "xxx-recomposer", err: deployments.extensions "xxx-recomposer" not found
[tiller] 2017/12/21 15:51:24 executing 0 post-upgrade hooks for xxx
[tiller] 2017/12/21 15:51:24 hooks complete for post-upgrade xxx
[storage] 2017/12/21 15:51:24 updating release "xxx.v68"
[tiller] 2017/12/21 15:51:24 updating status for updated release for xxx
[storage] 2017/12/21 15:51:24 updating release "xxx.v85"
[storage] 2017/12/21 15:51:25 getting last revision of "xxx"
[storage] 2017/12/21 15:51:25 getting release history for "xxx"
[kube] 2017/12/21 15:51:38 Doing get for Secret: "xxx-helm-xxx-nginx-secret"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/Secret/xxx-helm-xxx-nginx-secret
[kube] 2017/12/21 15:51:39 Doing get for Secret: "xxx-application-secret"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/Secret/xxx-application-secret
[kube] 2017/12/21 15:51:39 Doing get for Secret: "azure-secret"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/Secret/azure-secret
[kube] 2017/12/21 15:51:39 Doing get for ConfigMap: "xxx-helm-xxx-nginx-config"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/ConfigMap/xxx-helm-xxx-nginx-config
[kube] 2017/12/21 15:51:39 Doing get for ConfigMap: "xxx-application-config"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/ConfigMap/xxx-application-config
[kube] 2017/12/21 15:51:39 Doing get for Service: "xxx-application-svc"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/Service/xxx-application-svc
[kube] 2017/12/21 15:51:39 Doing get for StatefulSet: "xxx-application"
[kube] 2017/12/21 15:51:39 get relation pod of object: xxx/StatefulSet/xxx-application

might be related to #2941

As said in the other thread, one of the ways to fix the issue was to delete the buggy configmaps... seems to do it for me currently...

That is all fine and dandy, until the time comes when you have to delete something critical from a production namespace. Which, coincidentally, happened to me just now. :c

I've faced the issue as well when upgrading a release that has multiple revisions with DEPLOYED status. I had to fix it by deleting the corresponding configmaps.

Same problem. Everything was just fine yesterday and I did multiple upgrades. Today I just added a new yaml with service and deployment block separated with --- and the upgrade failed.

The interesting thing is, helm created the service and then complained about it (and didn't do the deployment).
I commented out the service and just ran upgrade with the deployment block - it worked. However, helm didn't delete the service - which it should have as it's removed from the yaml file.

Update: I manually deleted the service, uncommented it from the yaml and ran the upgrade - this time it worked like a charm!

I was having this exact error. It looks like the issue is related to templates with multiple API objects similar to what @amritb saw. In my case, I had a template that had multiple API objects that could be toggled on and off similar to:

{{ if .Values.enabled }}
---
...

Breaking that into its own template file and cleaning up the orphaned objects that helm created and forgot about resolved the issue for me. It sounds like there is a bug in how helm gets previous config if the number of objects per template changes between releases.
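For anyone cleaning up the same way, a rough sketch of the orphan cleanup (namespace, names, and the release label are hypothetical and assume the chart labels its objects with release: <release-name>):

kubectl -n my-namespace get deploy,svc,secret,configmap -l release=my-release   # see what actually exists for the release
kubectl -n my-namespace delete secret my-release-orphaned-secret                # delete only the objects Helm no longer tracks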

Adding another datapoint: I appear to be having the exact same issue as @awwithro. We're using a jinja loop to create multiple cronjobs via a template, and when a new upgrade caused this loop to fill in an additional cronjob, we ran into the bug. Seemed to trigger #2941 as well (or possibly one bug causes the other), and deleting the zombie configmaps fixes it.

Just ran into this, even without using any configmaps.

Some extra color for anyone who may be stuck:
I was running into this when introducing new subcharts and objects to my release. I was able to solve by checking every object type that was being added, and deleting any existing objects that would cause a naming collision.

This seems to be in line with others' evidence that deletion is the only way to solve right now 😕

Also running across this =\

I also needed to delete affected resources. Not good for a production environment =_(

I'm seeing something I think is similar. The problem appears to be that helm upgrade does not --reuse-values from the previous deploy. If I specify the same set of values on the command line as the initial installation did, then helm upgrade works. Dunno if this helps (or if anyone else can confirm it).
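For clarity, the two invocations I'm comparing look roughly like this (release and value names are made up; --reuse-values is the upgrade flag I mean):

helm upgrade my-release ./chart --namespace my-ns --set global.env=test   # re-specifying the original values works
helm upgrade my-release ./chart --namespace my-ns --reuse-values          # asking Tiller to merge in the previous release's values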

Like @amritb, after I manually deleted the object that helm initially failed on, it succeeded after the next upgrade. I did not experience #2941.

The same problem using helm 2.8.0. Kubernetes versions client=v1.8.6 and server=v1.8.5-gke.0.

$ helm upgrade bunny ./app --debug
[debug] Created tunnel using local port: '54274'

[debug] SERVER: "127.0.0.1:54274"

Error: UPGRADE FAILED: no ConfigMap with the name "bunny-proxy-config" found

But the configmap exists in $ kubectl get configmap. If I manually delete the configmap, it works, but next time it fails again.

Here is the configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "proxy.fullname" . }}-config
  # namespace: {{ .Release.Namespace }} # I've tried adding and removing it
  labels: # labels are the same as labels from $ kubectl describe configmap bunny-proxy-config
    app: {{ template "proxy.name" . }}
    chart: {{ template "proxy.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  asd: qwe

I deleted the release and re-installed it again. Also, I was using API version extensions/v1beta1 for the deployment; I've changed it to apiVersion: apps/v1beta2. I don't know if this helped or not.
Also, I'm currently running tiller locally.

Right now seems like everything is working.

This is really easy to reproduce; it happens if there is an error in the manifest.

Say we have resource1 and resource2, and resource2 depends on the first. When we upgrade the release, resource1 is created (e.g. PV & PVC), but resource2 fails. After this, only deleting resource1 helps, as helm always reports a problem on upgrade (PersistentVolume with name ... not found).

We had the same issue (the resource that got us was Secrets). Removing the new secrets and re-deploying fixed it.

Do note that because of the failures, we now have 11 different releases when we do helm list, 10 FAILED ones and 1 DEPLOYED. That's not expected, right? Same issue as here it seems: https://github.com/kubernetes/helm/issues/2941

This has pretty much made helm unusable for regular production deploys for us :( We're currently investigating doing things like passing --dry-run to helm and piping it to kubectl apply... Since this seems to affect only a subset of users, am unsure what it is that we are doing wrong :(
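Roughly what we're experimenting with, sketched with made-up names (helm template does the client-side rendering, so Tiller's release bookkeeping is bypassed entirely; not saying this is a good idea):

helm template ./chart --name my-release --namespace my-ns -f values.yaml \
  | kubectl apply --namespace my-ns -f -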

After tailing the tiller logs, I found that tiller was trying to update an old release at the same time:

[storage] 2018/02/14 18:25:40 updating release "s2osf.v10"
[storage] 2018/02/14 18:25:40 updating release "s2osf.v44"

Deleting the old configmap for s2osf.v10 and then upgrading worked.

Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

Having the same issue as @binoculars:

[storage] 2018/02/15 10:20:50 updating release "control.v136"
[storage] 2018/02/15 10:20:50 updating release "control.v226"

Causing weird problems with UPGRADE FAILED: no Secret with the name "foobar" found.
I even tried deleting this secret, which then caused errors on some configmap instead, and on the 3rd run it once again complained about the previous secret.

This might have been triggered after upgrading from helm 2.7.x to 2.8.1.


Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}

If your last resort is deleting the old release, there might be a less destructive workaround; see my comment https://github.com/kubernetes/helm/issues/3513#issuecomment-366918019

Basically, find that old revision in the logs and manually edit the configmap where tiller stores the deployed status. There should not be two revisions with DEPLOYED status, afaik.

Found a new solution to this problem.

kubectl -n kube-system edit cm name_of_your_release.v2, where v2 is the latest revision number marked as FAILED in helm list. You might also want to edit one of the DEPLOYED releases and change its status to SUPERSEDED, so that we won't have two deployed releases at the same time.
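To make that concrete, a small sketch assuming the default ConfigMap storage driver in kube-system and a hypothetical release called my-release (Tiller labels each revision's ConfigMap with NAME, VERSION and STATUS):

kubectl -n kube-system get configmap -l "OWNER=TILLER,NAME=my-release" --show-labels   # one ConfigMap per revision, status in the labels
kubectl -n kube-system edit configmap my-release.v2                                    # fix the revision whose status is wrong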

@zuzzas this is what I referred to. Worked for me as well

@balboah the problem is that we've got only one deployment in DEPLOYED state, but if it's not of the latest ones (who are marked as FAILED, in most scenarios), it'll still crash. The problem seems unrelated to have two or more deployments in DEPLOYED state in most of our cases.

@zuzzas do you have many releases in the same namespace or only one? Once I had a problem with two releases updating the same objects; they will conflict with each other.

If it's only one, how many failures do you have until the deployed version? How did you verify that only one is deployed?

We believe this has been fixed (moving forward) via #3539. Please re-open if we happen to be wrong. :)

Thank you everyone for your work on this!

Note that this has not been fixed for existing charts in this state; you'll still need to remove the old releases that are in state DEPLOYED for things to work again. @balboah just prevented the case where you can get into the "multiple releases marked as DEPLOYED" state. :)

Hm, I still get this issue on Helm 2.8.2 (not the latest, but I tried with 2.9.0 and it gives the same error.) Usually deleting the offending resource manually can fix it, though often it cascades into multiple resources that all need deletion before it successfully upgrades.

I have a bit of a large helm chart with nested dependencies; might that be it?

I'm seeing the same issue with clusterrolebinding. I added the new resource to my chart and upgrade as well as upgrade --install would fail with Error: UPGRADE FAILED: no ClusterRoleBinding with the name "test-clusterrolebinding" found

I'm experiencing the same issue as @ramyala with ClusterRole. The ClusterRole exists, but creating the RoleBinding fails with that error.

On Helm 2.9.1 I have encountered the same issue:

helm upgrade --install --namespace my-namespace my-stack stack
Error: UPGRADE FAILED: no ConfigMap with the name "my-stack-my-app" found

While I see this ConfigMap on my cluster.

I experience this issue if I have multiple resources with hooks in one file.

+1, this is happening again with 2.9.1. Please reopen.

Re-labeling this as a bug. We're not sure what caused this regression to occur but if anyone can provide steps on how to reproduce this bug on 2.9.1 that would be most appreciated.

@bacongobbler

I am seeing this too when trying to deploy a new Ingress in my helm chart. I am admittedly new to Ingress but it seems like it's correct based on all the examples and I've been doing other helm/k8s stuff for a couple months.

I already deployed the helm chart stable/nginx-ingress so the controller is present. The error seems to suggest it's trying to find the one I'm trying to create. Here is the command I'm running:

helm upgrade some-existing-release-name -i --set imageTag=$TAG-$BUILD_NUMBER --namespace=default ./deploy/helm where deploy/helm contains my chart manifests.

Error: UPGRADE FAILED: no Ingress with the name "my-ingress" found

yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  labels:
    app: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: {{ $.Values.appDomain }}
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-app
          servicePort: 80
      - path: /api/*
        backend:
          serviceName: api
          servicePort: 8080

UPDATE
I removed the /* from both of my paths and it no longer gave an error when trying to upgrade/install. Maybe that's just not valid syntax.

Hi,
Here are the steps that introduced the issue in my env:

  1. I had a helm chart deployed and upgraded several times.
  2. Created new CronJob object in the chart and upgraded again - the cron job was created successfully.
  3. All next upgrades are failing with the reported error "Error: UPGRADE FAILED: no CronJob with the name “cronjob-name” found"

I'm also seeing the issue when I add a Secret which didn't exist earlier. I tried adding a "db-credentials" secret, which led to:

Error: UPGRADE FAILED: no Secret with the name "db-credentials" found

potentially relevant fix: #4146

if anyone running into this error could test that PR and see if it fixes this, then we'd be able to confirm that it's likely a regression in the k8s API and move forward with that fix. Thanks!

I can't 100% confirm if this will always reproduce, but I've noticed this tends to happen in the following situation:

  1. I upgrade a Helm chart, including a new resource
  2. That upgrade fails, but the resource was created as part of the failed upgrade
  3. All subsequent upgrades fail

If I do a helm rollback to the last successful deploy and then try re-upgrading, it does seem to work.
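In case it helps, the sequence I mean, with a made-up release name and revision number:

helm history my-release            # find the last revision whose status is DEPLOYED
helm rollback my-release 41        # 41 = that last good revision
helm upgrade my-release ./chart    # then retry the upgrade that failed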

It seems very easy to reproduce manually, without intentionally trying to upgrade a chart with harmful changes (for example, modifying immutable Job objects):

  1. Take some chart and deploy it (but omit one resource, let's say a Service)
  2. Add omitted resource manually (for example, with "kubectl create"), but with the name corresponding to the release
  3. Add the omitted resource back to the chart and then try to upgrade it; helm should report "UPGRADE FAILED: no <resource> with the name <name> found"

The steps are different but still the root cause seems the same. Correct me if I'm wrong on the assumption, but it seems to me that the last DEPLOYED release's revision does not have information about the particular resource, either because it was added "outside" Helm (manually, for example) or because the latest upgrade failed at some step (say, on upgrading an immutable Job) while still deploying other objects and then recording them in the FAILED revision (but without any trace in the DEPLOYED revision of what is expected, as anything else would mean changing the history). On the next run, Tiller's kube client sees the resources on the cluster, meaning they should already be deployed and thus recorded; it checks the latest DEPLOYED revision (the FAILED revision does not seem to be consulted at all), does not see them listed there, and so reports the error.

@bacongobbler I tested #4146 with a custom tiller image and it did fix this issue! For others who are looking for a solution, you can apply the patch in the issue on current master and compile:

make bootstrap build docker-build

You will have to upload the tiller image to your repo and reinstall tiller to your cluster. I was able to get away with a force reset and re-install without destroying the current releases.

$GO_HOME/src/k8s.io/helm/bin/helm init -i gcr.io/my-repo/tiller:1 --service-account tiller

thank you @ramyala for testing the fix! I'll mention it in the dev call tomorrow and see if any of the other core maintainers see any edge cases that may come up with the patch. If not let's merge.

So I found a few bugs with #4146 that make it an undesirable PR to move forward with. I reported my findings between master, #4146, and #4223 here: https://github.com/kubernetes/helm/pull/4223#issuecomment-397413568

@adamreese and I managed to identify the underlying bug that causes this particular error, and go through the different scenarios and edge cases with each of the proposed PRs. If anyone else could confirm my findings or find other edge cases, that would be much appreciated!

Oh, and something I failed to mention: because the cluster's in an inconsistent state, this can easily be worked around by manually intervening and deleting the resource that the error reports as "not found". Following the example I demonstrated in https://github.com/kubernetes/helm/pull/4223#issuecomment-397413568:

><> helm fetch --untar https://github.com/kubernetes/helm/files/2103643/foo-0.1.0.tar.gz
><> helm install ./foo/
...
><> vim foo/templates/service.yaml
><> kubectl create -f foo/templates/service.yaml
service "foo-bar" created
><> helm upgrade $(helm last) ./foo/
Error: UPGRADE FAILED: no Service with the name "foo-bar" found
><> kubectl delete svc foo-bar
service "foo-bar" deleted
><> helm upgrade $(helm last) ./foo/
Release "riotous-echidna" has been upgraded. Happy Helming!
...
><> kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
foo-bar      ClusterIP   10.104.143.52   <none>        80/TCP    3s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   1h

In the interest of keeping things all together, I'm going to close this as a duplicate of #1193 since the two tickets are identical. Please report any findings there so we can all work off a single ticket. Thanks!

Warning: this information is kind of sketchy and I can't make sense of it, but just in case this is useful to somebody, I worked around this problem by changing my service selector from:

selector:
    app: {{ template "mything.name" . }}

to

selector:
    app: mything

Perhaps there is some sort of issue with using a variable in this context?

Try helm delete RELEASE_NAME --purge
and install it again.

I am hitting this issue too. I tried adding a subchart with a deployment to my chart; it succeeded when upgraded with helm upgrade chart chart-1.0.1.tgz just the first time. After that, when I tried helm upgrade chart chart-1.0.1.tgz, it failed with the error Error: UPGRADE FAILED: no Deployment with name "subchart-deployment" found

Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

The helm tiller logs just log the same error. Is anyone else experiencing this too?

Same problem. Everything was just fine yesterday and I did multiple upgrades. Today I just added a new yaml with service and deployment block separated with --- and the upgrade failed.

The interesting thing is, helm created the service and then complained about it (and didn't do the deployment).
I commented out the service and just ran upgrade with the deployment block - it worked. However, helm didn't delete the service - which it should have as it's removed from the yaml file.

Update: I manually deleted the service, uncommented it from the yaml and ran the upgrade - this time it worked like a charm!

Hi, I also have this problem and I cannot solve it. Could you give me some pointers?

Just confirming that I am witnessing the same issue, and the cause is also as indicated earlier.

Added a new secret, referenced it in a volume (invalid syntax). Upgrade failed, subsequent upgrades failed with the error as above.

Listing secrets showed it had been created. Manually deleted the secret and the upgrade went through successfully.

Same, @thedumbtechguy. I run into this issue routinely. It's especially fun when Helm decides you need to delete _all_ your secrets, configmaps, roles, etc. Upgrading becomes a game of whack-a-mole with an ever-increasing list of arguments to kubectl delete. I should have thrown in the towel on this sisyphean task months ago, but it's too late for that now. Sure hope this and the dozens of similar issues can be fixed!

I've been using helm for one week and already faced everything outlined
here https://medium.com/@7mind_dev/the-problems-with-helm-72a48c50cb45

A lot needs fixing here.


I experienced the same with Helm v2.10. I already had a chart deployed, added another configMap to the chart. It reported that the deployment failed because it couldn't find configMap "blah". I did

helm upgrade <NAME> chart --debug --dry-run

to verify the configMap was indeed being rendered; it was. I checked the configMaps in the cluster and found it there. Deleted the blah configMap, re-ran the upgrade, and it worked.

https://github.com/helm/helm/pull/5460 should better clarify the error message going forward.

Fair point.

$ helm upgrade linting-unicorn testrail                                                                                                                                                                                        
Error: UPGRADE FAILED: no ConfigMap with the name "linting-unicorn-testrail-php-config" found

Keep up the good work helm team.

In case this is a big deal to anyone else, I thought I'd point out that https://github.com/helm/helm/pull/4871 should fix these issues.

Note that it appears it still hasn't been approved by the Helm team. Plus, there were some concerns about the automatic deletion of resources. Just mentioning it in case anyone wants to build it from source and give it a try.

Having the same issue and only workaround seems to be helm delete --purge release and install again!

I ran into the same issue. @fbcbarbosa it looks like it was merged 2 weeks ago. It should hopefully be a part of the next release 2.14.0.

Having the same issue and only workaround seems to be helm delete --purge release and install again!

A less destructive option is doing a helm rollback to the /current/ version (i.e. by 0 steps). I cannot guarantee success, but for us so far, it has always unwedged things successfully.

Is there any idea whether this is going to be in the next release, and if so, when it is coming?

#5460 was merged 2 months ago, which means it should be in Helm 2.14.0.

I fixed the issue by:

  1. deleting the resources that "helm upgrade" complained about (it says they are not found, but they actually can be found). Don't delete the whole release, otherwise, if you are in production, you will be completely screwed.
  2. redoing helm upgrade. This time "Happy Helming" should show up. :)

We ran into this issue in PROD when a requirement of our umbrella helm chart added a configmap based on a conditional. For us, the workaround was to:

helm rollback <some revision that's acceptable>
helm upgrade <desired version>

For us, a simple rollback to the current revision has always worked:

helm ls
helm rollback <NAME> <current REVISION>

@tobypeschel do you have any idea how your fix works?
