helm upgrade --install no longer works

Created on 28 Nov 2017  ·  57 Comments  ·  Source: helm/helm

As of helm v2.7.1, after updating tiller, running helm upgrade with the --install flag no longer works. The following error is displayed: Error: UPGRADE FAILED: "${RELEASE}" has no deployed releases. Downgrading to v2.7.0 or v2.6.2 does not produce the error.

All 57 comments

I thought I was experiencing the same problem, but it turned out I just had an old deleted (but not purged) release hanging around. Check helm list -a, and if your release is there, helm delete --purge releasename. helm upgrade -i is working successfully on 2.7.2 for me.
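
In command form, that workaround looks roughly like this (releasename and the chart path are placeholders):

$ helm list -a                          # shows releases in every state, including DELETED
$ helm delete --purge releasename       # remove the leftover deleted-but-not-purged release
$ helm upgrade -i releasename ./chart   # upgrade --install now runs as a fresh install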

This is a side-effect of fixing issues around upgrading releases that were in a bad state. https://github.com/kubernetes/helm/pull/3097 was the PR that fixed this issue. Is there an edge case here that we failed to catch?

Check helm list -a as @tcolgate mentioned. Explaining how to reproduce it would also be helpful to determine whether it's an uncaught edge case or a bug.

Also having the same problem, along with duplicate release names:

$>helm ls -a|grep ingress
nginx-ingress               9           Thu Nov 30 11:33:06 2017    FAILED      nginx-ingress-0.8.2         kube-ingress
nginx-ingress               11          Thu Nov 30 11:37:58 2017    FAILED      nginx-ingress-0.8.2         kube-ingress
nginx-ingress               12          Thu Nov 30 11:38:50 2017    FAILED      nginx-ingress-0.8.2         kube-ingress
nginx-ingress               8           Thu Nov 30 11:31:27 2017    FAILED      nginx-ingress-0.8.2         kube-ingress
nginx-ingress               10          Thu Nov 30 11:33:53 2017    FAILED      nginx-ingress-0.8.2         kube-ingress
$>helm diff nginx-ingress ./nginx-ingress
Error: "nginx-ingress" has no deployed releases

When you were upgrading, what message was displayed?

same error as the diff above, but an install would say it was already installed.

I mean in the previous upgrade attempts that ended up in a FAILED status. I want to know how we get into the situation where all releases are in a failed state.

Ohh, the duplicate release name deployments? That I'm not sure about; I get it quite often. Sometimes they are all in a DEPLOYED state, sometimes a mix of FAILED and DEPLOYED. We use a CI/CD Jenkins server that deploys on every PR merge, so we run several helm upgrades a day, typically changing only a container tag. Usually the duplicates are just annoying when looking at what's deployed; this was the first time we had a hard issue with them, and normally we don't upgrade the ingress controller as we were doing in this case.

I seem to have ended up with something similar, I see a few duplicates in my releases lists:

$ helm ls
NAME                      REVISION    UPDATED                     STATUS      CHART                           NAMESPACE
.....
front-prod                180         Tue Dec  5 17:28:22 2017    DEPLOYED    front-1                         prod
front-prod                90          Wed Sep 13 14:36:06 2017    DEPLOYED    front-1                         prod 
...

All of them seem to be in a DEPLOYED state, but it could well be that a previous upgrade failed at some point, as I have hit that bug several times. I am still on K8S 1.7, so have not updated to helm 2.7 yet.

Same issue, can't upgrade over FAILED deploy

Same here using 2.7.2. The first attempt at the release failed. Then when I tried upgrade --install, I got the error "Error: UPGRADE FAILED: "${RELEASE}" has no deployed releases".

Same problem here with 2.7.2, helm upgrade --install fails with:

Error: UPGRADE FAILED: "APPNAME" has no deployed releases

If the release is entirely purged with helm del --purge APPNAME then a subsequent upgrade --install works ok.

I'm experiencing the same problem. Combined with #3134, that leaves no option for automated idempotent deployments without some scripting to work around it.

@winjer I just tried deleting with --purge, and for me it didn't work, although the error changed:
/ # helm upgrade foo /charts/foo/ -i --wait
Error: UPGRADE FAILED: "foo" has no deployed releases
/ # helm delete --purge foo
release "foo" deleted
/ # helm upgrade foo /charts/foo/ -i --wait
Release "foo" does not exist. Installing it now.
Error: release foo failed: deployments.extensions "foo-foo-some-service-name" already exists

@prein This is because you have a service that is not "owned" by helm, but already exists in the cluster. The behaviour you are experiencing seems correct to me. The deploy cannot succeed because helm would have to "take ownership" of an API object that it did not own before.

It does make sense to be able to upgrade a FAILED release, if the new manifest is actually correct and doesn't conflict with any other resources in the cluster.

I'm also seeing this behavior on a release called content:

helm upgrade --install --wait --timeout 300 -f ./helm/env/staging.yaml --set image.tag=xxx --namespace=content content ./helm/content
Error: UPGRADE FAILED: no resource with the name "content-content" found
helm list | grep content
content                         60          Mon Dec 25 06:02:38 2017    DEPLOYED    content-0.1.0                   content           
content                         12          Tue Oct 10 00:02:24 2017    DEPLOYED    content-0.1.0                   content           
content                         37          Tue Dec 12 00:42:42 2017    DEPLOYED    content-0.1.0                   content           
content                         4           Sun Oct  8 05:58:44 2017    DEPLOYED    k8s-0.1.0                       content           
content                         1           Sat Oct  7 22:29:24 2017    DEPLOYED    k8s-0.1.0                       content           
content                         61          Mon Jan  1 06:39:21 2018    FAILED      content-0.1.0                   content           
content                         62          Mon Jan  1 06:50:41 2018    FAILED      content-0.1.0                   content           
content                         63          Tue Jan  2 17:05:22 2018    FAILED      content-0.1.0                   content           

I will have to delete this to be able to continue to deploy, let me know if there is anything I can do to help debug this.
(I think we should rename the issue, as it is more about the duplicates?)
(we also run 2.7.2)

I actually have another duplicate release on my cluster; if you have any commands for me to run to help debug that, let me know!

just upgraded to tiller 2.7.2 and we're seeing the same thing. helm delete RELEASE_NAME followed by helm upgrade RELEASE_NAME . fails with Error: UPGRADE FAILED: "RELEASE_NAME" has no deployed releases. upgrade is the intended way to restore a deleted (but not purged) release, correct?

Looks like you can restore the release by rolling back to the deleted version.
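
Roughly, assuming the latest revision number is taken from helm list -a (the revision 3 below is only an illustration):

$ helm list -a RELEASE_NAME      # note the latest revision of the deleted release
$ helm rollback RELEASE_NAME 3   # roll back to that revision so the release is no longer DELETED
$ helm upgrade RELEASE_NAME .    # subsequent upgrades work again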

Seeing the same issue with v2.7.2; it fails when there is no previous successfully deployed release.
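
For what it's worth, that state is easy to reproduce; the chart name and the cause of the failure below are hypothetical:

$ helm install --name myapp --wait --timeout 60 ./mychart   # first-ever install fails, e.g. pods never become ready
$ helm list -a myapp                                        # shows only revision 1 in FAILED state, nothing DEPLOYED
$ helm upgrade --install myapp ./mychart
Error: UPGRADE FAILED: "myapp" has no deployed releases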

Also seeing two potential versions of this issue:


in CI:

+ helm upgrade --install --wait api-feature-persistent-data . --values -
+ cat
WARNING: Namespace doesn't match with previous. Release will be deployed to default
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: "api-feature-persistent-data" has no deployed releases

on my local machine:

(both in my OSX bash and in a gcloud/kubectl container)

+ helm upgrade --install --wait api-feature-persistent-data . --values -
+ cat
WARNING: Namespace doesn't match with previous. Release will be deployed to default
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: no PersistentVolumeClaim with the name "api-feature-persistent-data-db" found

The warnings are normal for our chart.
The errors are interesting because one of our subcharts has a pvc.yaml in it.

helm del --purge <release> does mitigate the problem.
This does make our CI pipeline difficult to upgrade.
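
One way we could keep the pipeline automated is to guard the upgrade with a purge that only fires when the release exists but no revision has ever reached DEPLOYED. This is only a sketch and is still destructive for failed-only releases; the release name and chart path are placeholders:

# purge only when the release exists in some state but has never been DEPLOYED
if helm list -a "^myrelease$" | grep -q myrelease && \
   ! helm history myrelease | grep -q DEPLOYED; then
  helm delete --purge myrelease
fi
helm upgrade --install --wait myrelease ./mychart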

@adamreese what is the final resolution for this issue? We're on 2.8 and we still cannot upgrade a previously FAILED release with the change to helm list.

In particular, we're running into the following type of issues:

  • deploy a release, OK
  • upgrade --install --wait, but maybe there's a small bug and --wait doesn't succeed (e.g., liveness probes never make it up)
  • after fixing the chart, upgrade --install --wait fails with xxx has no deployed releases

Deleting/purging is not desirable or acceptable here: the release may have multiple resources (pods, load balancers) that are not affected by the one resource that won't go up. In previous Helm versions, upgrade --install allowed us to only patch the change that broke the full release without having to remove all the resources.

Helm is the owner of all resources involved at all times here -- the release is only marked FAILED because --wait did not see all resources reach a good state within the timeout. I assume the same will happen if a pod is a bit too slow to start, and in many similar cases.

@peay see #3353 for follow-up discussion.

Thanks -- that clears it up. Actually realized we were only hitting it when we had no successful release to begin with. In that case, purge is a fine workaround.

This still happens if the install fails.
You need to purge and try again.

If you get UPGRADE FAILED: "API" has no deployed releases, then you need to manually purge:
helm delete --purge API
and it will work.

As of helm 2.9 you can also perform helm upgrade --install --force so there's no need to purge. For previous releases:

helm delete API
helm install --replace --name API ./mychart

@bacongobbler I'm still confused about this behavior.
Would you be able to answer https://github.com/kubernetes/helm/pull/3597#issuecomment-382843932 when you have time?

Thanks for your work on this.

Sure. I'm AFK at the moment but I can answer later this evening. Fully understand the confusion and I'll try to answer your questions as best as I can. It's just crazy over at the office prepping other stuff for KubeCon EU. :)

I'm open to help hack on this and/or improve docs when we're out there.
Let's definitely meet up :+1:

@bacongobbler does this address #3353 and negate the bunch of code I've written as part of #4004?

In my case helm upgrade --install --force did a delete --purge and after that an install.

Is this the expected behaviour? I almost lost 2 months of work because of this. When did force start to mean delete?

^ I had some conversations with folks at kubecon and found that quite a few teams are pinned to v2.7.0 because of this behavior change.

I agree that upgrade --install should never ever be destructive, even with whatever --force could ever mean.

@bacongobbler, sorry I wasn't able to meet up when we were out in CPH.
Is there documentation behind the rationale for changing the behavior to not allow upgrading a failed release?
The old behavior seems much more desirable than what we have now.

See the second comment in https://github.com/kubernetes/helm/issues/3353 for background context on why we had to make that change

I'm really curious to hear what is the proposed path going forward. We cannot back out #3097 because of the problems demonstrated in #3353, so I'd love to hear what the community thinks is the right path forward to fix this issue. We can back out #3597 but from what I've heard there's no good solution going forward to fix the helm upgrade --install problem. :disappointed:

I know we're working on re-working the apply logic for Helm 3 but that's a long way out

Thanks for linking that @bacongobbler :)
Your suggestion here sounds like a reasonable approach:

it might be valuable to not perform a diff when no successful releases have been deployed. The experience would be the same as if the user ran helm install for the very first time in the sense that there would be no "current" release to diff against. I'd be a little concerned about certain edge cases though. @adamreese do you have any opinions on this one?

This would allow us to back out #3597 since the only failure case (nothing to diff against) would be addressed.
This makes upgrade --install non-destructive again and more similar to kubectl apply.

Intuitively, that is what I would expect an upgrade --force to do: not a diff-and-patch operation, but simply applying the complete template, ignoring what is in place at the moment. I can't really think of any technical reason why this would not be possible, but I am not familiar with the inner workings of Helm either.
It can still be a dangerous operation, but anyone using a --force flag normally accepts a certain risk by forcing updates. That said, I would argue one does not expect it to delete and recreate your deployment, with potential downtime.

I've read through the discussions, but I'm still not clear on how to have an idempotent upgrade --install command (or sequence of commands).

With the current stable version, how can I achieve this in an automated script? I need to be able to deploy non-interactively without using delete --purge, even if a previous attempt failed.

As for future plans, this is the behavior I originally expected from upgrade --install:

  • Install if no previous installations were made
  • Upgrade a previously successful installation
  • Replace a previously failed installation
  • If the installation fails, the old resources should still be in place, with no downtime where possible
  • No destructive ops (such as the automatic delete --purge mentioned above)

In my personal opinion, no extra flags should be required for this behavior. This is how package managers generally behave. Unless I misunderstood the requirements, I don't think a --force flag, for example, is necessary.

Has there been any updates regarding this? What is the proper way of reliably running an upgrade on an existing installation without having to run a purge if something fails?

@MythicManiac FWIW:
I still have our teams pinned on v2.7.0 because of this behavior.
We don't seem to have any issues with resources upgrading and deleting when they are supposed to using helm upgrade --install with this version.

We also have this problem. It's very annoying that I need to delete K8s services and related AWS ELBs because helm has no deployed releases. The package manager is great but this issue leads to production downtime which is not good.

As a very hacky workaround, if the problem with the original deploy is resolvable (e.g. a preexisting service), doing a rollback to the original failed release can work.

@tcolgate, thank you! I just fixed another problem (https://github.com/helm/helm/issues/2426#issuecomment-427388715) with your workaround and will try to test it for exist ELBs when I am deploying a new chart next week over existing resources.

Doing a rollback to the original failed release can work.

@tcolgate, I just tested, and no, it doesn't work in the case of a first deploy:

$ helm upgrade --wait --timeout 900 --install myproject charts/myproject/myproject-1.1.1.tgz
14:53:18 Release "myproject" does not exist. Installing it now.
14:53:18 Error: release myproject failed: deployments.apps "myproject" already exists

$ helm list
NAME            REVISION    UPDATED                     STATUS      CHART               APP VERSION NAMESPACE
myproject       1           Mon Oct  8 11:53:18 2018    FAILED      myproject-1.1.1                 default

$ helm rollback myproject 1
Error: "myproject" has no deployed releases

I am curious: if Helm can't deploy a chart over existing resources, then why does helm delete remove exactly those resources?

@thomastaylor312, we faced this issue ~as well as https://github.com/helm/helm/issues/2426~ (update: I found the real root cause for 2426) with helm 2.11.0. Do you think they should be reopened?

I found this thread after hitting Error: UPGRADE FAILED: "xxx-service" has no deployed releases, even though the release was visible in helm ls -a.

I decided to see if it was an issue because of an incorrect --set value, and --debug --dry-run --force actually STILL deleted my running pod ... my expectation was that a dry run action would NEVER modify cluster resources.

It did work though, and the service could be redeployed afterwards, but we experienced downtime.

my expectation was that a dry run action would NEVER modify cluster resources.

This is a fair expectation -- we should make that... not happen

I believe that was patched in https://github.com/helm/helm/pull/4402 but it'd be nice if someone were to check. Sorry about that!

same problem after upgrade to 2.11.0

Why is this closed? Do we have a proper way to handle this now?

@gerbsen, there isn't a way around this with current versions of Helm that is non-destructive.
We still use Helm 2.7.0 for everything in my org. It is a very old version that has other bugs, but it does not have this issue.

Just had helm upgrade --install --force do a delete --purge and destroy my pvc/pv without telling me (on recycling). I had several failed releases, so it was in a state where it was running in Kubernetes, but helm thought there were no running releases. Not good times at all.

@notque after losing all Grafana dashboards twice, I've started doing backups before doing any kind of upgrade; having this kind of risk removes all the benefits of using helm.

For those who are seeking help, the issue was gone for me after upgrading helm to v2.15.2.

Still seeing this issue on 2.16.0

Why is it still closed? 2.16.1 is still affected

@nick4fake I think it's a duplicate of https://github.com/helm/helm/issues/5595
