Helm: Add 'helm install --app-version' command/versioning a chart against an app version

Created on 22 Feb 2018  ·  118 Comments  ·  Source: helm/helm

As far as I'm aware, helm install currently only supports the --version flag for specifying which chart version to install.

I'm unsure how the 'appVersion' field in a Chart.yaml is supposed to be used, but it seems generally useful to add support for versioning your application against a specific version (or version set) of a Chart.

Am I misusing the appVersion field here? Should I instead be constantly building my chart to be backward compatible with previous versions? Otherwise, how can I indicate to my users which chart version to specify when running helm install if they want a particular application version? (This becomes even more complex when you consider that a user can also change the version deployed with something like --set image.tag, which often results in a version change of the application.)
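
To make the ask concrete, here is the difference (the chart name is a placeholder, and the second command is the hypothetical one being requested; it does not exist today):

$ helm install stable/myapp --version 1.2.0      # works today: selects chart version 1.2.0
$ helm install stable/myapp --app-version 2.3.1  # requested: select/record app version 2.3.1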

feature

Most helpful comment

I do not understand this discussion at all. Why is giving people the option to use the app version in whatever way they think fits them best such a big issue?

In its current form, it would be best for me not to have this APP VERSION at all. It only brings confusion to people in our project. We have >80 services using the same Helm chart, and because it is not possible to easily change the APP VERSION in helm upgrade -i ..., I can see that all of our applications will stay at 1.0 here forever. And I do not plan to repackage an already packaged chart just to change the app version. Why should I complicate my CI to fit your design???

I also see that I will just have to tell everyone not to use helm list, as it won't be useful for them. To check which version of our applications they have, they will need to use something else.

I was optimistic at the start of reading this conversation, but after getting to the end and seeing how you discuss this and how you fight to force users into your way of thinking, I have lost hope :(.

All 118 comments

I just ran into this as well. I normally want the image tag to be specified at the time of packaging the chart, but for debugging the app I wanted to do an install with a different tag.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Bringing this up again as it was raised in https://github.com/kubernetes/charts/pull/5919

Copying over parts of my recent comment:


We specifically choose to keep the minor version of our chart aligned with the minor application version (although patch version numbers can, and do, drift apart).

This is because we may add a new flag to a new release of cert-manager, and adding support for it to the Helm chart would break compatibility with older releases of cert-manager, as they do not support the flag. This is a pretty fundamental question around Helm chart versioning in general IMO, and one we don't have a good story for.

I know it isn't recommended to try to align appVersion with the chart version, but this way a user knows they can use Helm chart version 0.3.x with any cert-manager 0.3.x release, and chart version 0.4.x with cert-manager 0.4.x. Compatibility is defined within the minor version.

/remove-lifecycle stale

I'd like to bring this back up for discussion.

Overall we haven't seen a compelling case for re-versioning the charts for our internal apps when all that is changing is the image tag used by some components. When upgrading a release, the appVersion field seems like the right place for this information.

Copying over my original proposal referenced above:


Our current workflow for deploying Helm charts involves Ansible tasks that call the helm upgrade CLI command, and it would be nice to be able to pass a flag that sets the appVersion when revising a release for the same chart version.

It may be a little weird conceptually because an appVersion is associated with a chart rather than a release, but in our case we are just updating the image tag used by some containers, and our workflow hasn't yet come to incorporate chart versions and/or chart repositories. This may change in the future, but for now I don't see any issue with adding an --app-version flag on install and upgrade, as the field is purely informational.

When developing our own internal applications, the app itself changes a lot more than the chart that deploys it. Typically our Continuous Deployment command is a helm upgrade with nothing more than --set imageTag=<new_version> (obviously, used elsewhere in the chart to set the container version). If we replaced that with --app-version, it would give us another visual point in helm ls to see which version of the code is deployed along with the version of the chart.

To make this more visible in the meantime, I have standardized on setting a metadata tag of imageTag that gets set to the imageTag passed on install/upgrade. This allows me to use the K8s dashboard or easily create Grafana dashboards with imageTag displayed, but it requires me to leave the command line and go mousey-clicky.
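
For concreteness, our CD step today looks like the first command below, and the wish is for something like the second (names are placeholders; the --app-version flag on upgrade is the proposal, not an existing option):

$ helm upgrade my-service ./service-chart --set imageTag=1.4.2
$ helm upgrade my-service ./service-chart --set imageTag=1.4.2 --app-version 1.4.2   # proposed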

Any news about this?

Thanks

Any update on this? It seems the PR from @Eraac does what is requested. As @TD-4242 mentioned, we also run helm upgrade --set imageTag=<imagetag>, but this doesn't update the APP VERSION listed in the output of helm ls. Being able to --set app-version or --set version would allow us to run helm upgrade such that helm ls correctly shows the version that is deployed.

Any updates?

Aaaaany time soon would be lovely!

It would be very useful.

Would also love the ability to set app version at install time as we use common charts to deploy applications

+1

+1

It would be very helpful

Requesting the same

Please stop the spam with the +1s. There is already a PR (https://github.com/helm/helm/pull/4961) and people are discussing the proposal. The last reply was even 2 days ago.

Hi @filipre

There is already a PR (#4961) and people are discussing the proposal.

According to that PR, this was moved here:
https://github.com/helm/helm/pull/5492
This is the follow-up PR to #4961, and it looks like we're waiting for a review before it can be merged ...

@filipre Could you please tell us what's happening with the PR? It looks like there's been no movement for half a month.

This would be very useful. I ran into something where I need to pin to version 0.6.0 of an app, and the chart version has no relation to the app version.

I agree, I also think this would be very useful. Any updates?

Just hit this issue while writing a Helm chart that we plan on reusing for many applications. Given the lack of progress on solving this problem in a simple way (i.e. with a flag to set the app version on install), I've come up with an alternative that should work for now. It's super simple really: just do a helm fetch first with the --untar option, then helm package with the --app-version flag that does exist there, then proceed to install that local chart.

It's not ideal, but the end result in helm list is correct, and it's very simple to do this on a CI server. Would love to just have --app-version be available on helm install and helm upgrade though.
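
Spelled out as commands, that workaround looks something like this on a CI server (repo, chart name, and versions are placeholders; the chart version here is 0.1.0, so that's what helm package names the archive):

$ helm fetch myrepo/myapp --untar           # pull the published chart and unpack it locally
$ helm package ./myapp --app-version 1.2.3  # repackage locally, stamping the real app version
$ helm install ./myapp-0.1.0.tgz            # install the local .tgz; don't push it anywhere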

The summary of the discussion in #5492 was that a command that wraps the logic of helm package and helm install would solve the use case originally described in this issue.

In other words, you can work around this by running the following:

$ helm package myapp --app-version 1.0.0
$ helm install myapp-1.0.0.tgz

(Moving comment from recently closed PR here - so it doesn't end up in Nirvana)

Here are my 2 cents on this:
Assuming we have a helm chart version X deploying a service with appVersion Y.

The Helm chart, at version X, describes the infrastructure inside Kubernetes that hosts a service at appVersion Y.

During initial development, both X and Y will change regularly. However, at some point X will be more or less stable while Y continues to change (unless Y has some new infrastructure requirements, which most likely happens far less often over Y's development cycle).

With the approach proposed in this ticket, one could take a stable Helm chart package at version X to deploy appVersion Y, Y+1, Y+N, etc.

However, not allowing this flag to be overridden during helm install or upgrade, and instead only in e.g. package, would effectively tie both X and Y together, forcing me to always create a new X+1 for Y+1. This seems unnecessary to me and would result in a ton of Helm packages that effectively haven't changed apart from referencing a new appVersion. From my perspective, a version of an application and a version of the infrastructure hosting that application are related, but should or could still be independently versioned. How it's done should be left to the respective development teams.

Summary:

This approach definitely works, but it also results in a lot of unnecessary Helm packages where only the AppVersion has changed:
$ helm package myapp --app-version 1.0.0
$ helm install myapp-1.0.0.tgz

Yeah, but I suppose it's not much of a problem if you take the approach I mentioned above. Push your chart to a chart repo with no app version set (or 0.0.0 or something), then when you want to use it, use helm fetch, then package it with the right app version, then use the local .tgz file and don't push that chart. That way your chart repo stays clean and you only have the actual chart changes represented there.

Yes, it would work. In this case one could never directly consume the deployment artifact (e.g. by installing directly from the Helm repository), but would always have to send it through an additional step that mutates the artifact.

Where it was argued that the Chart.yaml should be immutable, I argue that the deployment artifact should be.

_Summarising my thoughts from https://github.com/helm/helm/pull/5492#issuecomment-520029902_

There is a problem in how the community interprets packages, charts, and versions. @BenjaminSchiborr, I hope this will make sense to you.

Chart - the source code of your release, like the source code of your app. It consists of templates and code files.
Package - a build of your release, an artifact, like a binary built from your source code. It consists of fixed production versions: both the chart version and the app version.
Release - a build, deployed with a specified configuration.

There is no way you can make a Release out of a Chart. It just does not work this way!

Before deploying your app to a stage, you need to have a chart. Then you need to package your app with the chart, fixing both versions, using helm package. This results in a package deployable on any stage. Then you install this package on, for example, the QA stage, promoting it to UA and then to Production, using helm install.

This is the way any package-oriented software works.

The Confusion

helm install needs a source to install. The source can be:

  1. A package name, available from the registry
  2. A package file path, if the package is already downloaded or created
  3. A package URL, if it is available over HTTP
  4. A chart directory path

The 4th approach feels like the black sheep here, don't you think? This is why people confuse Package and Chart, and this is the root of the problem.

Reasoning

There are two kinds of apps in the wild:

Big/medium - where we have the time, money, and resources to set up detailed, granular flows for better introspection and quality guarantees.
Small - a microservice, pet project, PoC, low-cost project, a project without DevOps knowledge, or even just testing out a Helm development process.

With small projects, you don't have the time or the need to create or deal with packaging. You want to write some template files and deploy them with one command!

This is why helm install allows such usage, acting as helm package & helm install in one. But it does not provide the full helm package capabilities, such as --app-version.

helm install could subsume helm package & helm install, but that would make helm install a mess: hell for support, testing, and establishing good practices.

Proposal 1

Simplify helm install. It should accept only packages, to simplify the codebase and tests, and to make it opinionated, which makes Helm easier to understand.

Proposal 2 - helm run

Introduce a new command: helm run. A command which should just work. Ideal for small apps, or maybe even medium and big ones.

It should combine helm package and helm install, providing capabilities from both commands, excluding the ones that make no sense in such a use case (a sketch of a possible invocation follows the list below).

helm package creates a build, as go build does. go run lets you start the app without a separate build step, so helm run looks like a solid name here.

Additional things to consider:

  • Should it use upgrade --install instead?
  • Should it have --atomic enabled by default?
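
Purely as a sketch of the proposal (no such command exists in Helm today, and all names are hypothetical), an invocation might look like:

$ helm run my-release ./mychart --app-version 2.3.1 --set image.tag=2.3.1   # package + install in one step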

@iorlas,
your comments make sense. However, you're assuming that there can only be one final package which inherently ties together both the infrastructure version and the software version, whereas I'm assuming I have a package for my software version and a package for my infrastructure and want to tie them down in a release (e.g. through a desired-state configuration that references the software version and the infrastructure version).

I do not see why a development process should be forced into that pattern when it could also be left to the responsible development groups to decide whether they want to tie together the infrastructure and software versions at the helm package level or later. Currently a deployment process always has to produce new Helm packages, even when only the software version changes. This results in thousands of useless packages in a repository for me.

I'm fine if there's some long-term advantage to this. I just do not see it.

@BenjaminSchiborr Okay, let me break it down a bit.

You can have your Chart (== infrastructure), which has version X.
You can have your application, version Y.

The way Helm works right now, it ties together the infrastructure and application versions at the helm package step. Then you tie that to a k8s namespace, producing a Release.

So the formula is: package(infra + app) + k8s = Release

What you really want is to skip this middle step and tie all 3 components together in one step - the Release. Like this: infra + app + k8s = Release. Am I correct?

This is what helm run will do, on the surface. Under the hood, it will be the same.

But... I feel you may be missing the point of Helm. Although any tool can be used however its user pleases, there is always an idea which influences the community and creates a "way", so that a telescope won't end up as a hammer with the ability to brew beer.

Let me try to describe how it should be used; it would be awesome to then see it from your perspective.

Helm itself was created to abstract away k8s templating and deployment, and to tie together application and infrastructure with their dependencies: instead of manually rewriting templates, applying them to K8s, and then providing a new image tag, you need only one command - helm upgrade. Like an MSI or deb package.

When you need to install a new app version or downgrade it, you are supposed to use Helm for it. Instead of managing the app, you manage the whole package. It would be a pain in the bass to roll back the app version and the infrastructure separately when something goes wrong - I've been there, and wouldn't suggest it to anyone.

So it is the right thing to have many packages in your registry, since the package is not the infrastructure - the package is the app, because the app does not mean a thing in K8s without the infrastructure.

If your problem is that you have too many packages in the repo, I would suggest using build artifacts instead of repositories. I do it like that in CI: build the app, push to the Docker registry, create the package, and save it as an artifact for the release. CircleCI, Travis, and Azure Pipelines all support attaching files to a build as artifacts. Can you do the same?
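
As a sketch, that CI flow could look like the following (registry, chart name, versions, and the artifact directory are all assumptions specific to my setup):

$ docker build -t registry.example.com/myapp:2.3.1 .
$ docker push registry.example.com/myapp:2.3.1
$ helm package ./chart --version 1.2.0 --app-version 2.3.1   # bake both versions into the package
$ cp myapp-1.2.0.tgz "$CI_ARTIFACT_DIR/"                     # keep it as a build artifact, not in a chart repo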

Maybe I'm missing the point of Helm. Maybe Helm has missed a point here. I think this ticket is about evaluating just that. And personally - also about expanding my horizons :)

But yeah, in an abstract way what you are saying is correct. I don't want the software version coupled to the Helm package/release, so essentially it is infra + app + k8s = Release. In the same way, I do not want properties of my software version tied to my Helm package/release (apart from maybe a sane default that I can override).

Regarding the example you provide further down: I don't see how it shows that this approach is problematic. You would still use Helm to roll back or roll forward. If the infrastructure changes, you use a changed Helm chart version. If the software version changes, you use a different appVersion. If a parameter changes, you use a different parameter. It would always be a single Helm call per service.

Can you elaborate on this?

If your problem is that you have too many packages in the repo, I would suggest using build artifacts instead of repositories.

What I was referring to was too many packaged Helm charts (that already include an appVersion). Think of it as one Helm chart version that's stable and an appVersion that changes hundreds of times a day. So, daily, a few hundred packaged Helm charts per service in a repository that is later consumed by automation.

(My pipelines generally look the same as yours: build app --> Docker image (results in appVersion) --> package chart (with updated appVersion) --> push to repository.)

I think this ticket is about evaluating just that.

For sure! In my opinion, we already have too many levels of abstraction, so having Helm here is a bit overwhelming 😄 Also, there are k8s operators, which were created for some (maybe most?) of the problems Helm solves. But that's a topic for another time, hehe.

Think of it as one Helm chart version that's stable and an appVersion that changes hundreds of times a day. So, daily, a few hundred packaged Helm charts per service in a repository that is later consumed by automation.

Yup, it definitely feels like too much, but it is intended. You have a build run; it should produce some artifacts, which you then save and use to deploy to a stage. How can we have a build run which does not produce a build result? Should we generate the build when we deploy? That would be really wrong, although some CI pipelines do that for JS builds.

We have the same problem with Docker: each build generates a new Docker image, which goes into the Docker registry. We need to save it - how are we supposed to deploy it otherwise?

Of course, we can docker save to save space in the registry and sweep it up later with a build-artifact retention policy. But likewise we can helm package it and keep it as a file.

But I definitely see your point: we could have one "installer" which accepts an app version. Since such an installer carries the infrastructure, you can keep it the same, just changing the app version. Looks neat and simple, but there is a problem.

The app itself does not make sense in a k8s environment without the infrastructure.

What if your app relies on some infrastructure? A basic example: a ConfigMap.

What if you then need to roll back the app?
You would need to downgrade the app, but then you need to downgrade the infrastructure too.

What if you then need to roll back the infrastructure?
The previous infrastructure does not have a clue which app version you need to install, since it is not tied to it. So you would need to remember which app supports which infrastructure and set it manually.

Really, that would be hell. And it is hell right now when you don't have Helm. And that is not wrong - but in that case, you have little reason to use Helm.

we already have too many levels of abstraction
All nice and simple 😉

I think your last point is very convincing:

The previous infrastructure does not have a clue which app version you need to install, since it is not tied to it
This is definitely a problem if you decouple these things and have the Helm package/release as the source of truth.

However, for many people this is probably not the case. There is an orchestration layer on top of Helm (yay, another layer of abstraction) that ties multiple Helm charts (and appVersions) together (think Helmsman, Harness, or the like). And that layer itself is also versioned. In that case what you describe is no longer an issue, because you would not revert to an older version of a Helm chart, but to an older version of the orchestrating layer (which makes sense of app and infrastructure).

But with Helm alone, yes, 100% a problem 💣. I think that's the reason the idea was to allow overriding the appVersion explicitly while disallowing it by default.

One thing I like about coupling the chart version and the application version together is that it becomes clear which application version belongs to which chart. If you need to redeploy one specific version, you don't have to remember which application version was compatible with which chart version. Because they are linked together, you simply refer to the right chart version and you can be sure that application and chart match. I think this is basically what @iorlas described, right?

In the end, the chart version will act as a "super"-version:

  • any change (regardless of whether it changed the application or the infrastructure, i.e. the chart) will result in a new chart version; i.e., a new application version implies that the chart version also changed
  • a new chart version does not imply that the application version changed

For documentation purposes (and maybe other fancy ideas) you could introduce another "only-chart" version yourself that refers only to the chart definition itself.

@filipre
Yup-yup! That's how Helm is supposed to work, based on the current architecture design decisions. As I see it.

The problem is that it feels weird sometimes - too much setup, and the questionable idea of having app and infra tied together. So, is it the right approach? That's the question.

@BenjaminSchiborr

However, for many people this is probably not the case

For sure! Even all of k8s and containerising could be too much.

Let me try to break it down from a slightly different perspective, looking for the problem Helm was created to solve:

  • An infrastructure instance is how the whole product works: SaaS instances, VM pools, k8s setups, app versions (including databases, observability tools, routers, sidecars, and the product app instances)
  • An infrastructure instance needs one source of truth. For this we now have utilities like Terraform.
  • Too many things in one folder = hard. We need to decompose it. Hello Terraform modules.
  • Both platform and application in one folder = hard. We need to decompose. Hello platform and containers. K8s steps in.

    1. So, Terraform modules can manage the platform, including creation of an empty, ready-to-use container layer

    2. K8s manages the containers, allowing basic resources to be created using YAML

  • Many YAML files for many K8s apps (including databases and such) = hard. Split them into a folder per app.

    1. So there we have folders like PostgreSQL, Redis, MyPetShop, each of which has YAML files for the resources we have. And each needs its app version set in order to be applied to K8s.

  • Hello Helm - an instrument which lets you set up these folders (called Charts), and more: apply them together, roll back.
  • The Chart looks solid. Let's reuse it by supporting variables. Now a Chart is not an Infrastructure, but an Infrastructure template.
  • The Chart looks awesome. Let's share it with friends. Each time we update the chart, we push it to a file repo with an index.

So it all feels awesome, without packages at all: you apply this Chart, then provide an application version, and that should be it.

But a problem arises: no one wants to remember which Chart is needed for which app version, or which Chart was updated to provide a new configuration value for which app version.

At the end of the day, all we want is to "set up myApp version 1.4.2 as a K8s app", which encapsulates all the risks, dependencies, and changes into one artifact - an application installer, which is app versions + hooks + setup logic + the infrastructure to connect it all. This is why we have such things as MSI, deb, RPM, and even npm, Go modules, pip, and gems.

This is where the Package comes onto the scene. And a Package, by definition, needs to be created as an installable release within the CI/CD flow, so we can send it to a registry and/or install it on our system (the k8s cluster).

And no project is different. When we helm install a Chart directly, without a package, we do the same thing - we just create the package at a different step. Instead of building it during the app build process, we build it at the release step. We still tie together the infrastructure and app versions. Implicitly.

The funny thing is:

  • Dependency updated = update the Infrastructure template (Chart) version
  • App updated = generate a fork, a subset of the Infrastructure template (Chart) - a Package, with its own version

Still, k8s operators should be projected onto the current problems, so there should be only one instrument, which works like operators but provides an easy release process, as Helm does.

Any thoughts? We may be creating something new, and better, here.

What you're describing makes a lot of sense for applications that are meant to be used by other people on infra that you don't have control over. However, in enterprise scenarios making packages becomes busywork: we might have dozens of cookie-cutter microservices, which are deployed into shared or cookie-cutter environments, and by virtue of the CI/CD pipeline definition living in the repo itself (think azure-pipelines.yaml), the "package version" is just a build produced from a particular version of the master branch. Meaning, I don't really need to store the "package" anywhere - my build will produce the same package, with the same bits and the same variables used in configmaps, etc. In scenarios like this, I'll be revving the Helm chart only when the service infra changes, which happens quite rarely. Helm is in this picture because 1) I already have to use it to deploy some pieces of infra (e.g., nginx), 2) I don't have to reinvent the wheel templating k8s YAML.

@wasker

Let's project this onto, for example, Docker. A Docker image is a package too: it ties binaries together with an OS image = infrastructure. I believe the reason to make a Docker image each time we make a build is the same as the reason to make a Helm package.

If one has no need to abstract everything into Docker images, one does not need Docker and can live with a plain VM.

So, if we project Docker usage onto Helm, using Helm as an infrastructure instrument only would be like using Docker only to create an initial image, then updating that image on the k8s host itself by sending new binaries. That is bad - the same kind of bad as using Helm and not repackaging each time.

Anyway, I think we went down the wrong path. Does anybody use Helm and then update images manually? I believe we have 3 general use cases:

  1. helm package chart -> helm install package
  2. helm install chart
  3. helm install -> kubectl set image

@wasker Which one is yours? I believe not the 3rd one. Even though it is a real separation of infrastructure configuration and application versioning, it would be nasty to work with, since it would mean that when you need to update the infrastructure, you lose all the versions. You would need to update them in the Chart manually, or kubectl set image for each deployment.

So we are speaking about the second one, helm install chart, "without the packaging". So Helm is always in the picture. The problem is that the package is still built, but at runtime - when we deploy our app. So the CI build is implicitly in charge of package creation whenever we need to deploy.

And if we project this onto Go, such a practice looks like shipping the source code and running it with go run inside Docker, instead of building it and using the binary.

So, the real reason to skip the packaging step is to simplify the whole picture for the engineer. Is it?

This is where we can start talking. Here https://github.com/helm/helm/issues/3555#issuecomment-529022699 is my proposal: add helm run and model it on go run.

If we really need to split infrastructure and app versioning, that would mean using Helm only to update/seed the infrastructure. Even though I would like to see a way of doing this, I can't see one which wouldn't add headaches on updates. We could ignore the current deployments' versions and such... but I feel it is so wrong that it would be a waste of time to create.

Let's project this onto, for example, Docker. A Docker image is a package too: it ties binaries together with an OS image = infrastructure. I believe the reason to make a Docker image each time we make a build is the same as the reason to make a Helm package.

I guess the issue is that if you're making a new Docker image, it's because something within that image has changed. In the scenario being described here, the contents of the packaged Helm Chart haven't changed aside from a single line - the app version. This does affect the end result, but it doesn't change how the Helm chart on its own behaves. It will do the same thing, in the same way, just with different values - the Helm Chart as an entity on its own hasn't changed in the slightest as a result of that app version changing; only what is released at the end of it has.

You could draw parallels here with the ability to use configuration with Docker images. You pass environment variables into a Docker image, which affects how it runs at runtime, and you don't rebuild the image to change those variables. The contents of the image haven't changed, but the end result has - a very similar situation, but in that case the behaviour is considered desirable and normal.

And if we project this onto Go, such a practice looks like shipping the source code and running it with go run inside Docker, instead of building it and using the binary. [...] So, the real reason to skip the packaging step is to simplify the whole picture for the engineer. Is it?

Not in my view. Realistically the argument here is whether or not people consider the app version "part of the chart", and also whether they consider a Helm Chart to be distinct from the Docker images that are deployed as a result of the chart. My view on this is what I've mentioned above. It's like taking a compiled Go binary, in a Docker image, and running it with some different environment variables.

That being said, the arguments that have been made for repackaging a Helm Chart with a new application version and using the Chart Version as some sort of "super version" are compelling (namely for the advantage of always having a compatible app version deployed with a chart - provided that application version isn't customisable via values).

My question is - _why not support both approaches?_ There are pros and cons to each. Fundamentally, not supporting this only makes some perfectly valid workflows more difficult. For example, using Flux CD and its Helm Operator. If you have a shared Helm Chart (i.e. because you have a certain type of service that you deploy many of, and they share many of the same characteristics), then to get useful helm list output you have to have a new Helm Chart for each app, and each release also needs its own Helm Chart. This alone complicates pipelines, because if the Chart could be shared, it could have its own pipeline that only ran when the Chart was updated, and the application pipelines wouldn't even need to run a single Helm command (provided Flux CD added support for a new app version flag on install/upgrade).

My question is - why not support both approaches?

That's exactly what I am thinking.

In my case the "super version" is not the Helm chart but another layer that uses a plethora of Helm charts. For me a single Helm chart is meaningless, as it describes only a small service among many others. Only together do they form a meaningful release.
Thus, in my case the "super version" is the summary of all of those releases together (which is how it is actually versioned).

Still, there is an argument for having the Helm chart itself as the descriptive "super version".

Back to @seeruk's point: Why not support both?

It may be helpful for the current debate to get an outside voice. For a little bit of context, I have been using helm for a sum total of _11 days._ I think this gives me a unique perspective to add because I haven't been involved in any advanced learning. Everything I've gleaned has come from documentation and experimentation.

How I View Helm

Up until reading this current debate about Helm installing packages rather than charts, I had believed that Helm is mainly an interface for describing related Kubernetes resources. This belief comes mainly from the Helm documentation, which says this:

Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.

For context, the current stable helm docs also state:

A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.

So now there is some confusion! The Helm docs say clearly that "a chart is a helm package," but if that's the case, then why on earth does helm install accept non-packaged Chart repos?

It's this behavior that has influenced my current view of what helm is and how it's supposed to work:

Helm acts as a mapper between the structure of _what_ is going into the cluster and _what properties_ those things have.

So now the question is: "What is Helm deploying?"

What is Helm Deploying?

When I run helm install release-name ./local_chart I expect helm to render all the chart templates locally with the values specified (either through defaults or overrides) and push the rendered versions into Kubernetes. I also expect Helm to keep the previously deployed Kubernetes objects in the event that I roll back. This concept of "a collection of rendered templates" (which contains some metadata) is a release and is a package. All of those resource definitions (even if they didn't change) need to be in _their state described in the bundle_ for the release to exist (or be rolled back to).

From this, I surmise that helm only ever truly deploys packages. It seems to be the only semantically correct thing that you can say; however, the argument about how these packages are distributed seems to be the root cause of this debate in practice. Specifically, "does upgrading or changing app version constitute a new package?"

By _my personal semantics_ the answer to this question is yes. Going by the argument that you wouldn't bump the version number unless something changed, you would only need to adjust the version number of your application if some underlying properties changed. This would probably involve pulling a different docker image from a registry, or setting a feature flag through an environment variable, or any number of different practices that can be used to change the behavior of some code artifact.

It is because of this that I've started to clean up our registries and never deploy from :latest except in development. Using a "meta-tag" instead of a release tag from a docker image makes tying a given deployment to a given code base impossible. We learned this one the hard way (but thankfully in test, and not in prod).

Which Pattern Should be Used?

This is already opinionated by Helm: packages.

Given that this pattern is the enforced pattern even if it's not 100% evident, it seems logically consistent that an --appVersion flag be provided. Answering the "why" of this is probably more important than anything else so let me wrap up my contribution with that answer.

Why support --appVersion?

Let's take a look at a special case of deployment:

A company has an application with two major versions. Some of this company's clients have not committed to upgrading to the newest major version of this application and are using the older of the two. Because of paid development contracts, live development still takes place on the old major version... but the product is the "same." The infrastructure to deploy for both versions of this application is the same; however, the app version will be drastically different between these deployments.

What is this company to do?

  1. Make two separate, almost identical, helm charts which only differ in appVersion?
  2. Use one helm chart but constantly update appVersion flopping back and forth between major app versions?
  3. Override the appVersion with a flag (currently unsupported) leading to potential developer error on the command line?
  4. Rescope appVersion out of Chart.yaml and into values.yaml?

Proposal 1 introduces slightly more overhead than the other proposals, but also has the benefit of keeping the charts for these application versions separate if they diverge. It has a clear use case and would probably be adopted in many instances of this problem.

Proposal 2 has less overhead than proposal 1 but introduces high variability into a chart. What happens if you go to run helm install release-name https://remote-repo.com/chart and the most up-to-date version of the chart is the wrong version of the application? Whoops. Probably not the best approach.

Proposal 3 is what we're currently discussing. I personally dislike the option but only because I feel like it's the wrong solution to the problem. Does it make the appVersion configurable? Sure. But it also has the same problem that you run into when you run helm install release-name https://remote-repo/chart: The metadata is ephemeral and ONLY maintained by Helm.

I'm actually pretty shocked that no one has offered Proposal 4 or something like it yet. It puts the appVersion into a state where it can be overridden (enabling something resembling helm run), can be contained in the package generated by helm package, and truly untangles the concept of an application version from a chart version while keeping the concept of appVersion coupled to a Helm deployment (the appVersion has to live somewhere, right?).

I hope that this was helpful. 👀 this PR.

@jrkarnes : In a sense 4) has already been proposed and is in use as a workaround by many people (see here https://github.com/helm/helm/pull/5492#issuecomment-517255692 ). You can use something like this in your Chart templates:

{{ default .Values.appVersion .Chart.AppVersion }}

This would allow you to use the appVersion in the Chart.yaml as a default and override it with something in the values.yaml (which can itself be overridden during install/upgrade calls). The downside is that when doing e.g. a helm ls, it would show you either no appVersion or an incorrect one.
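
For what it's worth, note that Sprig's default function takes the fallback as its first argument, so for a values override to win over Chart.yaml, the equivalent pipe form (as used later in this thread) is clearer. A minimal sketch of how it could look in a template (image and value names are placeholders):

# templates/deployment.yaml (excerpt)
spec:
  containers:
    - name: myapp
      # --set appVersion=1.2.3 wins; .Chart.AppVersion is the fallback
      image: "myrepo/myapp:{{ .Values.appVersion | default .Chart.AppVersion }}"

The downside stands, though: helm ls will still report whatever appVersion was baked in at packaging time.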

@BenjaminSchiborr Thanks for letting me know about this. As I said, I've been inside the helm workspace for a very limited time so any knowledge is good knowledge at this point in time for me.

I think my fourth proposal was slightly misunderstood. Rather than having something such as {{ default .Values.appVersion .Chart.AppVersion }}, you would use {{ .Values.Helm.AppVersion }}, and values.yaml holds the appVersion instead of Chart.yaml.
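
A minimal sketch of that rescoping, with hypothetical file contents (note Chart.yaml's appVersion field is simply omitted):

# Chart.yaml - no appVersion at all
name: myapp
version: 1.2.0

# values.yaml - the app version lives here instead, so --set appVersion=4.1.0 just works
appVersion: 4.0.0

Templates would then read it via {{ .Values.appVersion }} (or a nested key like .Values.Helm.AppVersion, as above).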

@jrkarnes
That's what I'm thinking right now. Like, why should the app version be treated as a unique snowflake? It is a value of the chart.

The reasoning behind this is easy: everything is part of the infrastructure. So the infra has a version. Why two versions?

But I'm too busy to wrap my head around the side cases and projections. In general, though, that is the question: why do we need an app version when, in a nutshell, it is all infrastructure? Or: can we use the Chart version as the infrastructure version when the Chart is an infra template only, and as the app version when the infra includes the app version?

I'll think about it a little more

@jrkarnes
That's what I'm thinking right now. Like, why should the app version be treated as a unique snowflake? It is a value of the chart.

The reasoning behind this is easy: everything is part of the infrastructure. So the infra has a version. Why two versions?

Fundamentally it makes sense to keep the chart version separate from the application version. A quick example is probably the best way to prove that this is the case.

Let's say that you have an application deployed at ver 4.0.0 running on your chart at ver 1.1.0. During your operations you realize that you're going to need to start running a cron task for this application. Rather than writing the CronJob object and applying it to the cluster yourself, you realize that other people who run this chart will probably need the cron task as well... so you add it into your chart. Your chart has now progressed to ver 1.2.0, but no change to the application that the chart manages has taken place; it is still at ver 4.0.0.

The inverse is also applicable, and is already the subject of debate in this PR.

But I'm too busy to wrap my head around the side cases and projections. In general, though, that is the question: why do we need an app version when, in a nutshell, it is all infrastructure? Or: can we use the Chart version as the infrastructure version when the Chart is an infra template only, and as the app version when the infra includes the app version?

I'll think about it a little more

Rather than thinking about a side case or projection, think of things like MySQL, which has three widely used and supported engine versions out there: [5.6, 5.7, 8.0]. To deploy a MySQL instance into a cluster you will always have:

  • A Pod (or Pods) running an instance of MySQL of the chosen version
  • A Service which allows kube-dns resolution to the pod (or pods if running in HA)
  • A PV (or PVs) for the pods to write their data into, with accompanying PVC(s)

The chart for deploying MySQL 5.6, 5.7, or 8.0 should be essentially the same for all the engine (application) versions. The only real difference is the application version and the Docker image (which is probably tagged semantically according to the engine version).

I see what you mean about questioning the "need" for an app version. I think that comes down to developer or operations convenience when running helm ls or helm inspect.

+1 to @jrkarnes' last post. There's a lot of value in keeping chart and app versions as separate concepts, precisely because the chart version is the "infrastructure version".

If I'm publishing the chart for others to consume, it becomes part of the infrastructure for projects that take a dependency on it. However, if I never intend my application to be consumed by others, all I care about is revving my own chart version from time to time, when the chart itself changes; otherwise I just need my app version to be in line with the CI/CD output. In this flavor of usage the chart revs relatively rarely, yet the app version revs on every CI occurrence. My code repo maintains the relationship between the code and the infra version that this version of the code is meant to run in. In other words, instead of rolling back my deployment with helm install last-known-good-chart-version, I simply rerun my CD pipeline with a pointer to the last known good commit ID.

@iorlas I've read your proposal for helm run and I have no issues with it. While I think it's not necessary to have the install/run dichotomy, if it puts the Helm maintainers' minds at ease about making the app version mutable, I'm OK with that. :)

@iorlas Have you had a chance to think about what you would like to do with this proposal?

I don't think I understand how the workaround involving {{ default .Values.appVersion .Chart.AppVersion }} functions. I'm getting this error:

Error: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.appVersion | default \"0.0.1\"":interface {}(nil)}

Here's my Chart.yaml:

name: demo-helm
version: 0.0.1
appVersion: {{ .Values.appVersion | default "0.0.1" }}
home: http://example.com
description: Demo Helm

@IRobL You need to put this snippet into templates/deployment.yaml, where the version is used. Files like Chart.yaml and values.yaml are not treated as templates.

@jrkarnes I'm not a maintainer, so the final word will be with the other folks, I guess. Anyway, I was quite busy during the last few weeks. In the upcoming week I'll be reevaluating our current approach to managing Helm and its packages.

We use the approach I described:

  • The Helm Chart is part of the app repository
  • The application build produces:

    • Docker image -> Docker registry

    • Static files -> CDN service

    • Helm package -> CI storage

  • So the Helm package is the main artifact, binding all the application artifacts together
  • On deployment, we install said package

Current concerns:

  • Complexity of the build process.

    • In addition to the Docker image and static files, an extra file (the Helm package) is generated

    • What are the key reasons we need anything besides the file resulting from go build/make install?

    • What is the cost?

    • Why apply it?

    • When to apply it?

  • Build duration

    • Even if we don't need to deploy it, we still waste some time and money. 2-5 seconds. Not much, but meaningless work is meaningless.

  • Complexity of infrastructure template updates

    • When the Chart is updated, the values should be updated as well

    • The Chart is in one repo and the values in another; each update to the values means a little headache

Such reevaluation could lead to additional simplifications and ideas. We'll see :)
I'll send an update closer to next Friday.

Ah, I see why that trick wouldn't work now that you've pointed it out for me, thanks. Here's a trick that I think works fine for many people using Helm 2.x, assuming you're comfortable wrapping your helm tool with these kinds of overhead pieces:

APP_VERSION=0.0.7
sed -i.bak "s/^appVersion:.*\$/appVersion: $APP_VERSION/" helm/Chart.yaml
helm install --name helm_demo helm/

@IRobL

In general, if you’re wrapping a deployment with sed it means that a templating engine isn’t doing what you need it to, which is the entire point of this discussion.

When GitLab was in its infancy and didn't have Helm support, we literally sed-ed values into replacement targets in a handcrafted manifest file.

It’s a bad practice for something like this and I would urge you to get away from it if at all possible.


@jrkarnes, when you say bad practice, are you saying that needing to wrap the helm command with scripting is undesirable? If so then I agree completely. I'm not saying I'm against adding an --app-version flag at all; on the contrary, I think it would be a very convenient addition to helm. Clearly, based on this PR, I'm not the only person using helm who wants to keep the appVersion consistent with the actual build being deployed. I happen to be developing a re-usable build pipeline library which wraps various build tools together to produce reproducible builds - this is a common practice for large tech organizations. In my use case, the pipeline library builds the docker containers, publishes them, and ultimately deploys them via helm from the application's build pipeline (for instance, consider the app version 1.0.1-pr-3.1 for the first build of the third PR of an app, which is potentially a pre-release of version 1.0.1).

Even after this issue is worked around in my company's build pipeline library, I would definitely feel more comfortable having an --app-version switch built in to Helm; it just feels like a more flexible way of handling deployments. I mean, why should an external system or an engineer have to go into the Chart file and update the YAML before every deployment if it can be automated by a build platform that can't mess up the numbers accidentally? From my perspective, the appVersion functionality would either need to be abandoned entirely by my organization, or the "sloppy" sed workaround would need to be added to our pipeline library code, so I thought I'd share it for anyone else who is solutioning around this problem.

@IRobL
In general, any utility should be self-sufficient at its own level of abstraction: it should abstract away the problem it solves by providing a sufficient API. Helm is no exception. So if you feel the need to customise how it behaves, you should first ask: does this go in line with the architecture and design principles, or am I maybe missing something?

This is why this PR wasn't resolved so easily. The fix is obvious, but it is not in line with Helm's design. That's why a few temporary solutions have been provided.

And you are right about the app-version flag: you should be able to provide it, and then Helm should handle everything on its own.

Can I ask you a question? How do you utilise Helm in your product? When do you use helm install, and how exactly do you use it? Have you considered using helm package?

I took another look at helm package last night. I wasn't really sold on it. sed has been around for a very long time and is very stable. All these tiller/package/install subcommands are relatively new, and less stable. To help articulate my point: months ago, I decided "sure, tiller could work out" even though I had seen someone's plugin that bypassed the need for Tiller. I regarded the plugin as something less mainstream, but I have been kicking myself ever since. Had I trusted the plugin, I would be in a far better position than I am now. The maintainers of Helm have even confirmed that they agree it was an unmaintainable design, and it will be going away in future releases.

I think it would be a mistake on my part to use helm package for doing this simple sed operation. What is your use case for package anyway? I feel as though the whole concept of helm package misses the point of web 2.0/version-control releases by packaging into a binary zip in the first place, when modern technology groups have been leveraging the power of tagging to achieve the same process in a leaner and more auditable way.

For my use case, I'm enabling app developers to codify their containerized applications and deploy them in a maintainable way, so minimizing overhead (ops/Tiller system admins, redundant release artifact management, etc.) is of chief importance. I think my usage more closely follows the Unix philosophy of using a tool for what it does best, and then switching to other tools (e.g. sed) where appropriate. I don't think you'll ever find one tool that does everything perfectly for you, but if you're happy with your current workflow, don't let me dissuade you from following your own philosophy.

@IRobL

when you say bad practice, are you saying that needing to wrap the helm command with scripting is undesirable?

Yes. This precisely.

In my use case, the pipeline library builds the docker containers, publishes them, and ultimately deploys them via helm from the application's build pipeline

This is exactly what we are doing as well.

Even after this issue is worked around in my company's build pipeline library, I would definitely feel more comfortable having an --app-version switch built in to Helm

I would take this a step further and say that having the appVersion be a property of the Chart.yaml file is probably incorrect. If the value can be changed on the fly, it shouldn't be in what is considered to be an "immutable value set." I believe I advocated for the same thing in a previous comment.

All these tiller/package/install subcommands are relatively new, and less stable.

FWIW, Tiller is not going to be a thing in Helm 3. You alluded to this later in your post; however, I'm just reiterating this because it does show that the helm package syntax of creating a "kubernetes binary" and shipping it to Tiller is probably a bad practice.

I think it would be a mistake on my part to use helm package for doing this simple sed operation. What is your use case for package anyway?

I can probably advocate for the Helm team on this one. The sense I get is that helm was meant to be a method for application developers to specify how to run their application correctly inside Kubernetes. That's why application providers run their own Helm repos, which you can add in order to download a given version of their deployment. The Helm team probably saw the application code & the infrastructure code as intertwined, because their intended target production teams were not going to be using Helm in daily workflows like we do in CI/CD. Example: we use helm upgrade 130 times a day on average. I don't think that was ever the intended use.

It was probably a lot more common for people to say, "I just want to install mysql into kubernetes" and helm was a (relatively) easy way of doing that for people who knew little about kubernetes and were just playing around with it.

Thus, helm package was ultimately intended to be consumed by _that audience_. The tool is definitely seeing a lot more use in realms where the team (I think) either didn't believe it would be picked up, or never intended it to be used the way it is.

I think my usage more closely follows the Unix philosophy of using a tool for what it does best, and then switching to other tools (e.g. sed) where appropriate.

I'm basically treating Helm like awk with a bunch of kubectl apply -fs after it. It's just much cleaner to have an automated tool take care of values to avoid human error.

Sounds like you and I have a lot of the same values and may be doing a lot of similar things.

@IRobL

tiller

For me, Tiller is not acceptable, since it adds one more exposure point and additional security risks, and, most importantly, does nothing but create one more way to apply YAML files, with a different API. Tiller was designed to secure and align the process of applying Helm packages, but it carries so many risks and so much additional software (and versioning!). That's why Helm 3 does not use it.

I think it would be a mistake on my part to use helm package for doing this simple sed operation.

I think you are missing my point. Let me try again. What is sed made for? To transform a stream of data. It should abstract away the problem of transformations, giving you an API and a result for any given input.

What if you wrote a script where your sed command does not work (i.e. you have a mistake in your regex)? Would you conclude that sed does not work? Would you try to understand why sed doesn't work on its own, or would you add one more pipe with a Perl script?

The same goes for every solution: it should provide an API, take input, and produce output, abstracting one problem away. You know, Unix style.

Projecting this onto Helm: it is designed to version your release and push it into K8s, and it allows you to customise the configuration using templates. So, you are observing a problem: you need to provide a version. Helm provides a simple mechanism to manage versions and an easy way to customise how your build works. So why not try to understand how it works, instead of adding a workaround with additional software?

@jrkarnes

Yes, we're both approaching helm with similar interests in mind. I hadn't really realized that the root of the package command was intertwined with the mistakes made with tiller; thank you for sharing these insights with me!

I was actually reviewing the history of why this feature isn't just added, and saw two arguments as to why it couldn't be. One was that since it's already defined in package, they shouldn't also have to define it in install/upgrade. I have sympathy for that; it sounds like a tech-debt problem, and there's no such thing as publicly used software without tech debt. The other reason was that the Chart.yaml file is metadata and shouldn't be updated. That struck me as odd... as people develop helm charts, surely they update that file manually as things change, so it isn't immutable itself. It's easier for me to view the Chart.yaml file as a way of feeding parameters into the helm binary as it builds the deployment objects, which in contrast are actually immutable.

What's your build platform btw? The pipeline code I'm writing is written for Jenkins as a Global Pipeline Library.

@IRobL The key problem is: you are looking at Helm as a deployment script. But Helm is not like that. Helm is an abstraction layer. Helm takes all your artifacts and applies them as one unit of work onto K8s as a platform.

Helm is a packager. Helm is designed to ease deployment. It creates "installer" out of your artifacts, so you can "install" it onto your OS - K8s.

app-version in install has nothing to do with tech debt. It is not needed at all when you want to install or upgrade. The same goes for Chart.yaml. It should not be changed at all, since it is a default configuration file which contains the version of the actual Chart, and the chart is not your software. You are just using it wrong.

From that standpoint, why won't you consider using package? Does it look too complex for you, or what?

Been out of the loop on this issue for a little while, but I've seen this sort of point crop up a few times:

Helm is a packager. Helm is designed to ease deployment. It creates "installer" out of your artifacts, so you can "install" it onto your OS - K8s.

Fundamentally, Helm _does not in any way_ create an installer. It does not create a "binary". It does not create something similar to a ".deb" file or similar. It creates an archive of some templates of Kubernetes manifests, with some default and/or preset values. Your actual software does not live in that Helm Chart. It isn't packaged with it. It's not guaranteed to be immutable.

I think it's fair to say that in most cases, your Helm Chart is going to change a _lot_ less than the software you're deploying via your Chart is.

This is the fundamental reason (IMO at least) for --app-version to be available on helm install and helm upgrade. Why should you have to package your chart again if literally nothing has changed?

I see a Helm Chart as a versioned description of Kubernetes manifests, describing a set of Kubernetes components that will successfully run an application, and that's all I see it as. If those instructions need to change, that's when I'd like to update my chart - not every time my application changes and I only need to update an image version (which you often set via values anyway).

Take a look at Flux for example, and how their Helm Operator works. You can have it automatically update an image tag value - that doesn't update the Chart, just the image tag that's being deployed.

It creates an archive of some templates of Kubernetes manifests, with some default and/or preset values.

But a .deb file is the same set of configuration files, commands, manifests and/or preset values. Same as an MSI installer or even, which is the closer one, an ebuild in the Gentoo emerge package system. Also, same as Brew packages.

So what is Helm if not a package manager for K8s? What is the difference you see?

It's not guaranteed to be immutable.

Why not? If you mutate a package after it is generated, that is wrong. If you supply additional options during the install/upgrade process, that is intended, like in all packaging systems.

I see a Helm Chart as a versioned description of Kubernetes manifests

You already have one - Git. So, why would you need Helm?

I think it's fair to say that in most cases, your Helm Chart is going to change a lot less than the software you're deploying via your Chart is.
This is the fundamental reason (IMO at least) for --app-version to be available on helm install and helm upgrade. Why should you have to package your chart again if literally nothing has changed?

In this design, appVersion should not be treated as an attribute of the Helm package build. It should be treated as a configuration parameter, in values.

Take a look at Flux for example, and how their Helm Operator works. You can have it automatically update an image tag value - that doesn't update the Chart, just the image tag that's being deployed.

In this case you will lose the coupling of app infrastructure manifests and app version, since changing the image tag won't trigger a new helm upgrade (correct me if the Flux guys are doing it another way). In that case, you'll have Helm as a configuration template. In that case, you don't need --app-version at all.

But a .deb file is the same set of configuration files, commands, manifests and/or preset values. Same as an MSI installer or even, which is the closer one, an ebuild in the Gentoo emerge package system. Also, same as Brew packages.

Your description here for .deb and .msi packages is missing one key component - the actual thing that is being installed. If you go look at the contents of a .deb file, you'll find the built software - _THE_ software that is going to be installed. Generally speaking (always in the case of .deb?) the application being deployed is intrinsically linked, and part of that package (not the case with brew).

Brew packages are different, and not really comparable in the same way. Brew is actually much more similar to Helm currently though, as it is just the instructions on how it should be installed, and where the source / package should be downloaded from.

To be absolutely clear here; a Helm Chart _is not tied intrinsically to a specific application version_, and does not contain the artefact that is being deployed (i.e. the Docker image). It only contains a reference to it, and the value behind that reference can even change (i.e. you can push to the same Docker tag, if you did so wish). So no matter what, a Helm Chart is not a packaged version of an application, and it's not strictly linked to a specific version of application either.

You need only to go look at the stable charts repo for an example. How many applications let you override the image being used via values? (A _lot_)

So what is Helm if not a package manager for K8s? What is the difference you see?

It's a tool that facilitates templating Kubernetes manifests, and easily distributing and installing them. The key here is that that is all Helm deals with - Kubernetes manifests.

This all comes back to my main point - if those manifests change, or the templating needs to change for those manifests for whatever reason, then _that_ is when a Helm Chart needs to really be changed.

The main complication I see is that there are 2 use-cases:

  • Deploying third-party applications.
  • Deploying first-party applications.

In the case of third-party applications, as a Helm consumer it's desirable for a Helm Chart to be released with each new application version. One key difference here is with the frequency of releases. It's likely that a third-party Helm Chart for something like MySQL or whatever won't change several times a day. In this case you also don't want to accidentally use an old version of a chart with a new version of software - a mistake that's much easier to make with software and charts you haven't written yourself.

In the case of first-party applications, you may have a standard way of deploying a class of applications. At Icelolly for example we write and deploy our Go services all in pretty much the same way. To that end, we are actually able to use a single chart right now for all of our Go services deployed in Kubernetes (we use the helm package workaround right now). If the approach we take to deploying our own applications changes, we'll update the chart. We version our chart with SemVer, so the applications that aren't updated won't be affected until we want to update them.

Just on this note; our go-service chart was last updated about a month ago. During that time we've probably had tens to hundreds of deployments - all without that chart changing.

In one case, you just want simplicity. In the other case you want control, and ease of management.

In this case you will lose the coupling of app infrastructure manifests and app version, since changing the image tag won't trigger a new helm upgrade (correct me if the Flux guys are doing it another way). In that case, you'll have Helm as a configuration template. In that case, you don't need --app-version at all.

Flux will actually change the values it uses to upgrade the chart, and then run the upgrade with the new image value. Your point about losing the coupling of infrastructure manifests and app version still stands. The point I'm arguing is that it's actually desirable for that to be the case in some use-cases. You're right though, in this use-case, I don't need --app-version - it wouldn't be used because it doesn't exist right now. If it did though, maybe Flux could use it. In that case, it'd actually be helpful.

helm list is a useful command. Being able to see which application versions are deployed is indeed still useful. For our applications currently installed via Helm with the helm package approach we only set the application version (via --app-version on helm package) so that the output of helm list is useful. That's why if we could set it on helm install|upgrade it'd just be simpler for us. We wouldn't have to fetch the chart and repackage it just to change the version.
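For concreteness, that fetch-and-repackage workaround looks roughly like this (a sketch; chart and release names are made up, the .tgz filename follows the chart version, and helm fetch became helm pull in Helm 3):

# Fetch the published chart, repackage it purely to stamp the app version,
# then upgrade using the freshly built archive.
helm fetch private-repo/go-service --untar
helm package go-service --app-version "${BUILD_TAG}"
helm upgrade --install my-service ./go-service-0.1.0.tgz \
  --set image.tag="${BUILD_TAG}"

The extra fetch/package round-trip exists only to set the APP VERSION shown by helm list; nothing about the chart itself changes.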

In fact, helm list and handling rollbacks are probably the only reasons we're using Helm at all for first-party software.

Your description here for .deb and .msi packages is missing one key component - the actual thing that is being installed.

"Install" is a process of setting up necessary facilities(folders, configurations, binaries, db fetching, data refreshing/migration) on the target platform.

deb handles all of that. So does the Helm package. What do you mean, the Helm package is not "actually installed"?

If you go look at the contents of a .deb file, you'll find the built software - THE software that is going to be installed.

False. Sometimes you'll find the software itself. Sometimes you'll find some pieces of the software. Sometimes you'll find nothing but a set of scripts to fetch such software. So, the key point here: it does not matter, since Linux and K8s are platforms to host a given application, accepting one universal application format. And image names and configuration parameters are the pieces of the package.

Brew is actually much more similar to Helm currently though, as it is just the instructions on how it should be installed, and where the source / package should be downloaded from.

Exactly. Are you trying to convince me Brew is not a package manager?

To be absolutely clear here; a Helm Chart _is not tied intrinsically to a specific application version ...
How many applications let you override the image being used via values? (A lot)

You are absolutely right. Helm could be no more than a handy templating engine for k8s templates. I don't have a problem with such software existing: it helps a bit, but won't change modern delivery practice.

The problem is, Helm is more than a templating engine. It is a package manager, with all the benefits and downsides. And in an ecosystem where a package manager exists, it is bad practice to have something managed by anything other than the package manager. Worse - working w/o a package manager at all.

I see the reasons behind making app version a package argument for these packages. And I see the reasons you guys have for not making packages. The problem is, it is an outdated, complex and harder-to-manage approach. Funny thing: the cost is small, but the gain is awesome.

The point I'm arguing is that it's actually desirable for that to be the case in some use-cases.

Yup, this is the central point: is it desirable for any product? If yes, is it the right thing to do?

Your argument is that a Helm Chart rarely changes, so why should we pack it together on each release? I agree with you, it feels redundant. But then again, we still package some old source files, old side-cars (if the chart consists of multiple apps), old configs, an old Dockerfile.

So the question is, if we package the whole Chart as an artifact on each build, what is the gain? For a Dockerfile it is obvious (though it surely wasn't obvious when containerisation first appeared on the market). For source files too.

Back in the day we had the clearest possible delivery mechanism: upload only the changed files over FTP. Now we have so many things. We need to decide what is good, why it is good, and who should use it. I'm not sure I would be happy with Helm dealing with both approaches at the same time - too complex.

Deploying third-party applications.

I would be sooooo happy if I could install any PSQL/MySQL version using a Helm chart alone. It would be so much easier to maintain legacy projects and introduce the infrastructure to newbies. It would even be easier to be notified about Chart updates. Why do we have so many tar.gz files, one for each release of the binaries, but can't have the same set of tar.gz files for Helm packages?

@iorlas I've just read through this and the rejected PR and you make some very good points. You've convinced me that I need to start packaging up my helm charts as another artifact of my build/release.

But I would like to mention that I didn't even know helm had a package command and I'm guessing I'm not alone. That's probably because it's just so easy to install a chart from the source directory, but also the documentation doesn't really sell the concept or even explain it in detail.

The package command is obviously documented, but there are only a couple of very general mentions of packages in the quickstart guide. In fact, the word "package" shows up a lot in the quick start, but it's mostly talking about how to install Helm and the different OS packages. Packaging also isn't mentioned in best practices, and I think capturing what packaging is and why it's helpful should be included there. I've also checked the v3 docs, which have a slightly different structure but seem to also be slim on suggesting users package up their charts.

Normally I'd like to submit a PR and not just sound like I'm complaining about something, but I'm not sure what's going on with the 3.0 documentation changes.

@jonstelly There is definitely a gap in the documentation. Even I was thinking at first that --app-version was good to go, but then thought it could not have been left out without a reason.

Docs definitely need some clarification and an introduction to common problems, then an introduction to the Helm development cycle. But I believe the team is busy on the 3rd version. And I'm too busy right now too :(

"Install" is a process of setting up necessary facilities(folders, configurations, binaries, db fetching, data refreshing/migration) on the target platform.

deb handles all of that. So does the Helm package. What you mean Helm package is not "actually installed"?

I don't mean that a Helm Chart itself isn't installed - all I'm saying is, the Helm Chart doesn't contain the actual application you're deploying. A Docker image is not packaged into a Helm Chart. It's pulled by Kubernetes from some external source.

False. Sometimes you'll find the software itself. Sometimes you'll find some pieces of the software. Sometimes you'll find nothing but a set of scripts to fetch such software. So, the key point here: it does not matter, since Linux and K8s are platforms to host a given application, accepting one universal application format. And image names and configuration parameters are the pieces of the package.

As far as I'm aware, you're actually incorrect here. A .deb file is an AR archive. You can extract it and look at the contents, and ultimately it's some metadata and some files. .deb files could in theory contain patches, but often don't. If a .deb file contains a script to go fetch the software, then it would mean that it's the script that's being installed by the .deb file, not the software itself. That's like installing an installer.

If you have an example of a piece of Linux software packaged in a .deb where the .deb goes and downloads the software to install as part of the process of installing the .deb file, then I would really like to see it - as it's something I have literally never come across before in many years of using Linux.
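For anyone who wants to check this themselves, any .deb to hand will do (hello.deb is a placeholder):

# A .deb is an ar archive: debian-binary, control.tar.*, data.tar.*
# (the tar extension varies: .gz, .xz, .zst)
ar t hello.deb
# List the payload - the actual files that get installed on the target system
dpkg-deb -c hello.deb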

Exactly. Are you trying to convince me Brew is not a package manager?

No. All I'm saying is that like Helm, the scripts provided to install software via Brew are just that - scripts. The applications are built, packaged, and distributed separately and pulled in by those scripts. That doesn't make Brew any less of a package manager, just like what I've been saying doesn't make Helm any less of a Kubernetes package manager. That's not the point of this issue though, we're not debating whether Helm is or is not a package manager, we're trying to decide whether or not an --app-version flag should be added to helm install|upgrade to facilitate a common use-case.

I see the reasons behind making app version a package argument for these packages. And I see the reasons you guys have for not making packages. The problem is, it is an outdated, complex and harder-to-manage approach. Funny thing: the cost is small, but the gain is awesome.

Apologies, I'm not clear what you mean by this. What is outdated / complex / harder to manage?

So the question is, if we package the whole Chart as an artifact on each build, what is the gain? For a Dockerfile it is obvious (though it surely wasn't obvious when containerisation first appeared on the market). For source files too.

Well again, the difference is basically that all of those things you mentioned are intrinsically linked. You need all of the source code to run the app - not just what changed. You don't package the Dockerfile, so not sure what that point is about - but you will often use the same Dockerfile without changing it to build new versions of images (I mean, why bother with automation if you had to change something manually every time, right?). With all of that kind of stuff, you're creating something that encapsulates everything you need, so that it can be deployed in isolation. So you can do things like spin up a new node, and deploy to it the same way as you would to an existing node, etc.

There are a _lot_ of perks, as I'm sure you're already aware, over using the old upload over FTP.

Anyway, this is all by the by. I think ultimately, as I've mentioned already, in my opinion all this comes down to is: do the Helm maintainers want to enable this other use-case, or make it more difficult for people to use it this way? In the end, it's not _really_ a huge difference either way. To me, it'd just be nice if I didn't have to package up our internal chart, which rarely changes, on each build just to set an application version that again is only used to show the current version in a list. For first-party apps, I'll be honest, I've questioned whether Helm is the right approach anyway.

A Docker image is not packaged into a Helm Chart

Actually, I wish it would be that way. But it is not right now. My vision is that K8s will/should be a platform which incorporates Helm (so there will be no separate Helm) as the API to install packages: you will need to pack your stuff into an archive and install it. Back to the simplicity of deb files, but with proper isolation and k8s resources as first-class citizens.

A .deb file is an AR archive
You can extract it and look at the contents
and ultimately it's some metadata
and some files

Like.... a Helm package!

If you have ... a .deb where the .deb goes and downloads the software to install as part of the process of installing the .deb file ...
That's like installing an installer. ...

Yes, it would be like an installer which has installer inside. Funny, right? I would use one artifact if it is possible, if it would be enough to setup app instance. But we have different software, different sources of truth, sometimes it is even handy to have multiple sources.

  • Brew has YAML as a package, but fetches binaries from remote storage
  • Emerge (Gentoo) has the ebuild as its definition, which downloads or even git-clones

Debian tries to package everything inside. And it is the right thing to do, if possible. But to prove my point, metapackages will do. Have you heard of them? A metapackage is a package which installs some other packages. Isn't it a package?

But please, don't miss the main point: it does not matter! Even an empty package, which has only a reference within, is a package. Maybe you would accept another term, installer?

I'm saying is that like Helm, the scripts provided to install software via Brew are just that - scripts
The applications are built, packaged, and distributed separately and pulled in by those scripts

And we all have the same pipeline - to build a docker image.

we're trying to decide whether or not an --app-version flag should be added to helm install|upgrade to facilitate a common use-case.

That's the key. But how can we decide? We should ask two questions:

  • Is it possible? Yes
  • Is it the right thing to do? Yes? No?

If there are ppl doing something, does it mean it is a good thing? In order to make progress, we need to question everything.

Apologies, I'm not clear what you mean by this. What is outdated / complex / harder to manage?

Let's project onto Brew, since Brew is really close to Helm and is already widely and successfully used. You may project onto Gentoo ebuilds or deb instead; the picture won't change.

  • Outdated. When was the last time you had to install MySQL/PSQL manually? Why have we moved away from it? These reasons, bud.
  • Complex. This is one of the "why"s: you need to set up the infrastructure independently, and you need to know which one works best with which software versions. Sometimes you'll need to customise the infrastructure to run a certain software version. Why bother? Why won't you delegate the whole question?
  • Harder to manage. You will need to manage both infrastructure and app versions when you could have only one artifact. Why make your life harder?

Sorry, I'm feeling too lazy right now to describe all the use cases, like clean rollbacks and graceful upgrades; anyway, these are the main bonuses.

Well again, the difference is basically that all of those things you mentioned are intrinsically linked.

Not always. For example, a docker image could have an SSR BE and a reference to a CDN.

You don't package the Dockerfile, so not sure what that point is about - but you will often use the same Dockerfile without changing it to build new versions of images (I mean, why bother with automation if you had to change something manually every time, right?).

That's the point. Even when the Dockerfile is not changed, you build a new image. If your source code is not changed, but the Dockerfile is, you build a new image. So, in a nutshell, a Dockerfile is a package too. Same goes for Helm. Don't you think so?

With all of that kind of stuff, you're creating something that encapsulates everything you need, so that it can be deployed in isolation. So you can do things like spin up a new node, and deploy to it the same way as you would to an existing node, etc.

But it turns out docker image is not enough to run an app. We need a configuration instance, we need service definitions. Why not package it all?

In the end, it's not really a huge difference either way.

I believe it is. Maybe it is not a big deal in the codebase, but it will stagnate the evolution of containerisation.

A Docker image is not packaged into a Helm Chart

The image itself is not packaged, but the reference (read as: pin) is. Sure, we can be pedantic and get caught up on whether or not the literal image of varying size (from MBs to GBs) is included in the artifact of helm package (spoiler: it isn't), but the essence of the statement "A given version of an application's code is included in a helm package" is still fundamentally correct. Whether or not you want to get caught up in the how is irrelevant.

Going back to the land of examples, let's say that you have an application which you've internally versioned as 1.9.9 running on a chart versioned at 1.2.5. So that there's no confusion, the Docker image sha for the application container is fakeShaA.

Your team decides that in version 2.0.0 of your application there is going to be a local filesystem version of a file that you used to have to reference over HTTP. The reason for this is un-important but the consequence to you is pretty severe. Now you need a pv and a pvc for your deployments so that these now-local files are not lost between upgrades. Seeing the need, you go ahead and update your Helm chart to have this pv and pvc so that the move to 2.0.0 isn't super disruptive.

Before you change the chart, you have Artifact A linking application version 1.9.9 to infrastructure version 1.2.5. Now you change the chart... _your Helm Chart is now v. 1.3.0_ and you will produce an artifact linking 1.9.9 to infrastructure version 1.3.0. We'll call this Artifact B

When the code deployment for 2.0.0 goes live with the Docker image sha fakeShaB, you will create another artifact linking 2.0.0 to infra version 1.3.0. This is Artifact C

Now let's say that it turns out there is a problem that you don't understand fully with the 2.0.0 release and you have to roll back. You rollback using Artifact B... but this doesn't solve the problem so you rollback again to Artifact A and the problem is solved.

The only issue that you run into is whether or not the Docker Registry which your artifacts reference still has the image referenced in those artifacts.

No matter what, you still have a linkage between a version of an application and a version of infrastructure. This is the purpose of Helm. To argue otherwise is folly.

@iorlas:

Let's set aside the .deb comparisons. I think it's just us getting sidetracked.

Not always. For example, a docker image could have an SSR BE and a reference to a CDN.

That's very true. More on that later though.

That's the point. Even when the Dockerfile is not changed, you build a new image. If your source code is not changed, but the Dockerfile is, you build a new image. So, in a nutshell, a Dockerfile is a package too. Same goes for Helm. Don't you think so?

You do, but that's because the product at the end of that build process (i.e. a Docker Image) depends on _both_ the Dockerfile, and the thing you're putting in it. An image cannot exist without those two components.

On the other hand, a Helm Chart can exist before an application is even built - literally before a single line of code is written. You could build an imaginary one that would fail to install - but nonetheless, the Helm Chart could exist entirely without anything else. Like I said, this would be useless, but I'm just trying to illustrate my point that they're not at all linked.

My point here, and how it relates to this particular issue, is just that Helm Charts aren't always intrinsically linked to the application(s) being deployed by the Chart. I don't think that's a bold claim, it happens - it's already fact. I'm doing it right now with production applications, and so are others that have commented on this issue. So as I've said before, all this comes down to is; do the Helm maintainers want to enable this use-case, or not - there's nothing else to it.

But it turns out docker image is not enough to run an app. We need a configuration instance, we need service definitions. Why not package it all?

Actually, I wish it would be that way. But it is not right now.

If this was the case, and a Helm Chart did actually package everything it deployed (you mention CDN earlier, but you're not deploying that to Kubernetes then, so it wouldn't go in your Chart even still), then I think that this conversation wouldn't be happening. Your Helm Chart _would be_ intrinsically linked to the application version being deployed - just like building a Docker image. To build a Helm Chart in that scenario you would be required to rebuild it when your application changes, at which point there's no question. You couldn't use Helm the way I'm using it today - it would be far clearer that way.

That is not the reality though. It's not how Helm works, and I don't know that it will ever end up being either really. But never say never, right?


@jrkarnes:

The image itself is not packaged, but the reference (read as: pin) is.

Sure, but a common use-case is using values to override this value. I've used it with both third and first-party charts. It wouldn't be an option if it weren't something people used.

Sure, we can be pedantic and get caught up on whether or not the literal image of varying size (from MBs to GBs) is included in the artifact of helm package (spoiler: it isn't),

I don't think we're being pedantic about anything - like you've already pointed out, it would be factually incorrect to say that the Docker Image is packaged inside "built" Helm Chart.

"A given version of an application's code is included in a helm package" is still fundamentally correct.

But not really, as my first point there would argue against. You can change what is being deployed. Hell, you can change most charts to run the hello-world image if you want. That'd be useless, but it nicely proves my point - the Helm Chart isn't linked to your application. There's an _expectation_ that you'll use it with the right image, and by default it probably will do, but it certainly doesn't _have to_ - and in _no way_ is the application's code included in a Helm Chart, packaged or otherwise.

Going back to the land of examples, [...] and the problem is solved.

You've made it sound like this isn't possible without using Helm in what is apparently currently the intended way. But in reality, you can just have 2 versions of the chart (i.e. your two infrastructure versions), and 3 versions of your application. If you want to rollback, then do it, you can pick and choose quite easily which Chart and image(s) you wish to deploy. Run Helm with your Chart, set your values accordingly for the image, and you're all set.

No matter what, you still have a linkage between a version of an application and a version of infrastructure. This is the purpose of Helm. To argue otherwise is folly.

I think to argue (ideally discuss) how things could change often leads to improving things. I don't think the "purpose" of Helm is to link a Chart and application version. I think its purpose is to make it easier and safer to deploy applications into a Kubernetes cluster, whilst keeping your Kubernetes manifests DRY and reusable. Nowhere in there do you require a Chart and application version to be strictly linked (just like in reality right now, you don't need them to be).

So again, as I've said to @iorlas, the question is, should Helm adapt? What is the reason for not enabling this use-case? If the reason is just "because it doesn't currently" then that's a pretty poor reason if you ask me. None of this discussion so far seems to have answered this question.

... the product at the end of that build process (i.e. a Docker Image) depends on both the Dockerfile, and the thing you're putting in it. ... An image cannot exist without those two components.

So... a Helm package needs a Chart and an app version (= Docker image) and cannot exist w/o it.

a Helm Chart can exist before an application is even built - literally before a single line of code is written. ... the Helm Chart could exist entirely without anything else. Like I said, this would be useless

Funny thing is, on one project we used to use stub docker images to create a prototype architecture. We literally used Charts w/o writing a single line of code. Also, it is always a viable case to have a Chart which consists of subcharts only.

So, my hypothesis is: a Helm package is almost useless w/o a Docker image. A Docker image is almost useless w/o source code. The difference is the level of abstraction. Both things are package-like objects.

but I'm just trying to illustrate my point that they're not at all linked.

Yup, yup! It is really nice we have ppl ready to discuss everything down to the details. W/o you, w/o projections and assertions, we won't make a future worth living 😃

I don't think that's a bold claim, it happens - it's already fact. ... So as I've said before, all this comes down to is; do the Helm maintainers want to enable this use-case, or not - there's nothing else to it.

Fact. In order to make and accept such a change, it should be evaluated: is it the right thing to do?

Let me tell you a story. Have you heard about the martini web framework and GoLang's interface{}-based generics implementation? Martini was a great, widely used web framework. Then it was abandoned. The reason wasn't a lack of hours to invest in it, and it wasn't some shenanigans with licensing. The issue was just one: this framework was built on bad practices which felt good to most developers. So the only way to make the future brighter was to deprecate the whole framework, making some guys angry, leaving some projects orphaned, forcing some guys to re-evaluate the things they do. And right now we have a much better GoLang community and wiser ideas, and we no longer treat GoLang as a Python replacement.

So, I'm not that against this approach. I see how it can live (see my proposal for helm run). But while we have a chance to make an intervention and, possibly, fix the whole industry while it is not too late, I would evaluate each usage and discuss any disadvantages and problems.

To build a Helm Chart in that scenario you would be required to rebuild it when your application changes

Yup. Even right now it can be the case. We have one pipeline where, instead of push, we are doing save/load of the docker image. And it works quite well. I don't feel like it right now, but fundamentally it is a much cleaner way. The problem is, K8s will still need a remote docker registry as a bus to transfer the "starter" - the docker image.

Let's narrow our focus, guys

Your Helm Chart would be intrinsically linked to the application version being deployed - just like building a Docker image.

This is a key difference right here. And @seeruk nailed it. And we can focus on it. Let me paraphrase it into facts:

  1. A Docker Image is not bound to a Helm package. Only a reference to it is.
  2. This gives the opportunity to release the two independently.

Key questions:

  1. What are the risks of the independent approach? (i.e. if some devops use it, what arguments will we make against it?)
  2. How does the packaging approach solve them?
  3. What is the cost?
  4. How do we see the future of containerisation on this particular question?

@seeruk:

So again, as I've said to @iorlas, the question is, should Helm adapt? What is the reason for not enabling this use-case? If the reason is just "because it doesn't currently" then that's a pretty poor reason if you ask me. None of this discussion so far seems to have answered this question.

You make a lot of great clarifications and points. Personally I think Helm should adapt. CI/CD is the future of how software will be built, and honestly, with features like --atomic, helm is already getting more flexible as a reliable deployment tool. Though, this issue is a pretty old one, so I think merging that PR isn't "the next step" in the process.

Would creating a plugin, say helm-ci-cd, be feasible for this particular --app-version feature (@jrkarnes can probably speak to this one as a PR contributor)? I think the community's needs around getting Tiller out were really only acknowledged after that plugin took off. There are various other issues with Helm that are easily bypassed which may be other good candidates for helm-ci-cd, like the install/upgrade duality that we CI/CD ppl have already unified via wrapping.

Even if the --app-version switch isn't a direct value-add for the end-users who are trying to install a k8s app without having to look at the template files (which, btw, has never actually worked out for me, due to needing to add network policies to be compliant with my work's k8s infrastructure), the end user still gets more value, because the person who built that chart had an easier time doing it thanks to the helm CI/CD features that make building stable and reliable software easier.

We just read in the Chart.yaml with Groovy, then set the app version and overwrite the file at deploy time. Then do the helm upgrade. It would be nice if it was part of helm, but I wouldn't count on it.
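A rough shell equivalent of that step, for anyone without Groovy handy (a sketch; assumes the build exports VERSION, and that appVersion is a single top-level key in Chart.yaml):

# Stamp the app version into Chart.yaml in place, then deploy.
sed -i "s/^appVersion:.*/appVersion: ${VERSION}/" Chart.yaml
helm upgrade --install my-release . --set image.tag="${VERSION}"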

Found this via Google'ing. Same boat, actually.

If version: is the Chart version, that implies this version changes when the Chart's YAMLs change. Since a chart is a template with configurable values, it's safe to assume you can use Chart 1.3 to deploy apps of multiple versions - 1.2, 1.3, 1.6, 1.8, etc. - without ever modifying the Chart's YAML files.

Now comes appVersion:, hard-coded within Chart.yaml - forcing you to edit a chart file in order to update (and reflect) the version of the deployed application.

There is definitely a need for an --app-version CLI option we can use within the Chart templates to reference the version of the _application_, to deploy different versions with the same version: 1.3.0 Chart.

@seeruk

I don't think the "purpose" of Helm is to link a Chart and application version. I think its purpose is to make it easier and safer to deploy applications into a Kubernetes cluster, whilst keeping your Kubernetes manifests DRY and reusable.

This is our point of contention and I don't think either of us will convince the other. Sometimes in a professional setting it's _okay_ to have irreconcilable differences when it comes down to a matter of opinion. I certainly don't agree with everything Stallman says and if we were to get into the trenches about everything he and I disagree on we would die before reaching a consensus.

I said it further up in the discussion and I think it bears repeating:

[...] I surmise that helm only ever truly deploys packages. It seems to be the only semantically correct thing that you can say; however, the argument about how these packages are distributed seems to be the root cause of this debate in practice. Specifically, "does upgrading or changing app version constitute a new package?"

A 1st party helm chart (I like to use MySQL so I'm going to keep using it) without configuration should install a resource into a cluster as the chart creator described it and intended it. Looking at the actual chart for mysql there are two properties which are configurable involving the actual mysql engine:

  • image (default mysql)
  • imageTag (default 5.7.14)

Then, in their Chart.yaml file:

apiVersion: v1
name: mysql
version: 1.4.0
appVersion: 5.7.27

Note that the appVersion and the default imageTag don't match. If I run helm list, I'm going to get a report that the "app version" (read: engine version) is in a state _that does not reflect the actual app version that is installed into the cluster_.
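Illustratively (the release name and timestamp are invented), helm list would report the hard-coded appVersion even while the cluster is actually running the default mysql:5.7.14 image:

NAME   REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
mydb   1         Mon Oct 14 10:00:00 2019  DEPLOYED  mysql-1.4.0  5.7.27       default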

Nowhere in there do you require a Chart and application version to be strictly linked (just like in reality right now, you don't need them to be).

This is correct; and in my opinion, a design flaw.

So again, as I've said to @iorlas, the question is, should Helm adapt?

Yes. I will address some suggestions in a moment.


@IRobL

Would creating a plugin, say helm-ci-cd, be feasible for this particular --app-version feature (@jrkarnes can probably speak to this one as a PR contributor)?

You answer your own question with:

Even if the --app-version switch isn't a direct value-add for the end-users who are trying to install a k8s app without having to look at the template files, the end user still gets more value, because the person who built that chart had an easier time doing it thanks to the helm CI/CD features that make building stable and reliable software easier.

If we have to do it as a plugin to get the function correct, then I would advocate for that approach; however, I think that's addressing the wrong problem. As I said earlier to @seeruk, I think that having the appVersion be an intrinsic property in the immutable Chart.yaml file is a design flaw. appVersion is a property of the image that is being installed into the cluster through the chart, and it is derived in some way from the image referenced with its tag.

In thinking about a helm-ci plugin, what other features or add-ins would you expect to be there? I don't think that simply toggling the appVersion out of the immutable Chart.yaml properties is enough of a value-add to warrant being a plugin.


@IRobL and @seeruk together:

I think our differing opinions come from who we see the more common end-user of helm to be. If the end-user is supposed to be someone who is not going to be doing a lot of configuration or digging into templates, then helm ls may not be particularly useful and the point is moot.

However... if you're using helm as an administrative tool and assistant to manage a cluster, OR if you're utilizing Helm in more of a CI/CD context, then the --appVersion switch ends up being much more useful and thus a point of concern (and configuration).

In a perfect world, I would contend that appVersion should be a derived property and come from docker image metadata; this isn't feasible for helm to do, ergo, the lack of contention.

To what I said about

Yes. I will address some suggestions in a moment...
... in thinking about a helm-ci plugin, what other features or add-ins would you expect to be there?

I have a personal list that might be a good starting off point:

  • Running helm in CICD mode would not compare _only_ the state of previously released packages against what is currently being applied. Instead, every deployment of a release would fully apply every manifest templated out when upgrade is run.
  • helm-cicd should wrap base kubernetes commands. I can't count the number of times I've tried to run helm describe or helm logs.
  • helm-cicd should allow me to see what the results of commands are when run by a different user. If we're using RBAC, I would like to see what happens when an unauth'd user attempts to do something.
  • helm-cicd should be able to decompose a namespace into a collection of manifests to edit later.
  • helm-cicd should be able to _transplant_ a release into a namespace.

Those are the big ones... but discussing a full-fledged helm-ci plugin is outside the scope of this PR/Issue (currently).

I do read everything that you guys type and I appreciate the discourse. I look forward to your responses.

Quite busy right now, but I want to correct you on a few points. @jrkarnes

I think that having the appVersion be an intrinsic property in the immutable Chart.yaml file is a design flaw

It is not. When something is made this way and not another, there is always a reason, even if it is a wild one. In this case, it is the ideology I stand by: one package = one build, chart = template for the builds, package = app + infra.

Helm is designed around this ideology: to treat K8s as an OS and a package as the installer/updater for your app. It has some problems, it feels like too much sometimes, but it is certainly the future.

_Why do I think it is designed like that? The same helm list is set to display the versions of the packages currently installed._

Okay, we have many Charts (public ones like mysql, and local ones, like you probably have) which are not built to create a package each time a new release appears. And I appreciate this need, since migration to a high-level solution takes time. Also, someone needs to build the packages, and it would be hard to convince the mysql maintainers to create a Helm package for each build.

With the MySQL chart there is an additional problem. You are right that it would be better to see the installed MySQL version in helm list, but that is just a part of the version, since there is also an image tag, which is probably a part of the version here.

So again, as I've said to @iorlas, the question is, should Helm adapt?
Yes. I will address some suggestions in a moment.

Again, there was a proposal to add helm run, which is what you all are looking for. It is intended to use a Chart as a package instantly, allowing you to provide app versions and all.

In a perfect world, I would contend that appVersion should be a derived property and come from docker image metadata; this isn't feasible for helm to do, ergo, the lack of contention.

You are seeing the Docker image as the end product, as the last package, the last source of truth. It is not. When you have nothing but a Docker image, you can be tricked into believing it is your software. When you are writing a module in your code, you are tricked into believing this module is your software.

Problem is, it is not. It is just a part, an artifact. From small to big products, you will have many artifacts tied together. Sometimes you will pass versions into a Dockerfile within the build process; sometimes you will have a ConfigMap which ties together multiple artifacts. And you won't have one Dockerfile which has it all.

I have a personal list that might be a good starting off point

I believe you have many suggestions, but it feels more like a Helm fork rather than some wild plugin. I'd say it has nothing to do with CI or CD exclusively. I'd argue not to build such a fork/plugin, but to discuss and come up with proper solutions. No one needs 3 Helm forks, considering the current community is not that big right now.

Attention please!

Okay, we have many thoughts here and there. I believe the best-case scenario would be to have:

  • The ability to change the app version, w/o the need to create packages first
  • The ability to see the proper app version in helm ls
  • Allow any API (like the Helm operator) to specify/upgrade the app version w/o intermediate state

We have two approaches provided:

  1. helm install is created to install a package. It supports installing a Chart too, but that is limited. Let's simplify helm install and leave it to serve its purpose - installing packages. Secondly, let's add helm run, whose intent is to provide a simplified flow: combine helm package and helm install, providing both commands' arguments, but filtering out the ones which make no sense in such a case.
  2. Add app-version to the helm install cmd. It will bloat this API and hide the idea of using packages; it will hide (like it does right now) the ideology of using packages as installers, which makes total sense for at least some, if not most, projects.

Can we agree that both of these approaches would resolve all the struggles we have right now?

A plugin is a workaround for a missing feature that the core devs don't want to support or don't have time to, so I wouldn't care if it's a plugin.

Helm is designed around this ideology: to treat K8s as an OS and a package as the installer/updater for your app.

Oooh. I get it. A helm chart [package] is meant to be an "rpm" [per se] for Kubernetes. Completely not the impression I got: helm uses Charts to deploy an App. The chart/package is the app "formatted for k8s."

I'm fine with that. It makes sense - we just now need to update our build server to up the appVersion as we build new containers:

  • Build Container - tag "v1.2"
  • Update Chart.yaml - appVersion: "v1.2" --- I see there's even a helm package --app-version command already.
  • Package chart: helm package --app-version v1.2 => package[chart]-v1.2.tgz (i.e. a "package[chart]-v1.2.rpm")
  • Deploy package using deployment servers, helm install package[chart]-v1.2 (e.g. apt install package@v1.2)

Now, am I mistaken in my understanding of any of this? If package-v1.2 isn't v1.2 appVersion, why not? Wouldn't that be the intent? e.g. the rpm version is the version of the app, not the package (Chart).

Edit:

If package-v1.2 isn't v1.2 appVersion, why not? Wouldn't that be the intent?

Now I see why people are commenting on Chart.version and Chart.appVersion being in unison. Arguments can go both ways here... for an app with a stable "chart" build, you'd expect package-v1.2 to change its version numbers. But you could also argue package-v1.2 would be the Chart version number - when the yaml files change.

How do you manage a stable Chart version (1.2), differentiating from an increasing app version (1.6)? Will package-[version] be 1.2? or 1.6? Say you deploy a Chart version 1.2 but the appVersion changed on packaging: helm package --app-version 1.6?

➜  chart git:(master) ✗ helm package --app-version 1.5 nginx
Successfully packaged chart and saved it to: /Users/Documents/source/docker/nginx/chart/nginx-0.1.0.tgz

:(

.... So confusing.

A helm chart [package] is meant to be an "rpm" [per se] for Kubernetes

Exactly! But sometimes it feels too strict or like too much hassle; this is where a shortcut is needed.

Now, am I mistaken in my understanding of any of this? If package-v1.2 isn't v1.2 appVersion, why not? Wouldn't that be the intent? e.g. the rpm version is the version of the app, not the package (Chart).

This is a problem for another discussion, but currently the package will be named after the chart version, not the app version. I don't know the reason for it; I feel it should be the other way around. I think it is a historical issue, but in my mind, as in yours, it should be package-{app-version}.tgz.

As per my previous messages, there are 4 components to version:

  • Chart
  • App
  • Package
  • Release

It is a headache to version all these things independently, but that is how it works right now. With one exception: the package is versioned after the chart version.

If we pick the ideology of packaging the app, the app version would make total sense, since the same process produces the app, the image and the package. So, when we go into the delivery step, it would be obvious how the files are named and which file to install. Right now we have to hardcode the package name in pipelines >_<

@iorlas

You are seeing the Docker image as the end product, as the last package, the last source of truth. It is not. When you have nothing but a Docker image, you can be tricked into believing it is your software. When you are writing a module in your code, you are tricked into believing this module is your software.

I took a couple of weeks to think about this statement and I'm 100% in agreement with you. Additionally, I now understand where our difference in opinions originates from...

I follow a development and deployment philosophy that you should be building a system out of small components that do a suite of things incredibly well (think UNIX); however, modern systems may treat an "application" as being a grouping of these small tools. How are you supposed to mark an application's "version" when it's dependent on not only the docker artifact, but also the other sub-components (which may also be docker artifacts)? It's not such a straightforward answer when you start to throw around this type of coupling.

Answering this question is far beyond the scope of this issue / request. To get back to the root of the issue, I would want to make a differentiation between install and run. On the grounds of semantics, install should only operate on packages, and run should "run" helm through the process of generating templates and applying them _without maintaining state_.

While a lot of the time we should be using helm template to see how a prospective chart is going to deploy, there is a lot of use in watching it happen vis-à-vis run, which has the dual purpose of being a star in development pipelines (where we don't necessarily want to keep a package, because it doesn't have value if velocity is very high).

With MySQL chart there is an additional problem. You are right, that would be better to see MySQL version installed in helm list, but it is just a part of the version, since there is also an Image tag, which probably is a part of the version here.

Like I said, in a perfect universe, the version of an installed object would be introspected. This does however give me an idea.

If we keep what I said early in mind about how charts are deployed, what if we had the ability to splay out all the sub-components with a flag on helm describe? That doesn't fix the need to specify the app-version, but it does make it more clear exactly what is installed (which is part of the driving force behind wanting to adjust app-version with a flag).

I read all comments and don't have a fully qualified opinion on the matter.

I arrived here because my company maintains a private Helm repository and 90% of our charts are mainly one deployment which has one container spec. In these cases, if we could use appVersion to list the image's tag, we'd avoid duplicating a variable and we'd be able to see this version when running helm list.

After reading this thread, it seems to me this would be a convenience, albeit a really nice one I'd use if it ever gets merged.

As requested, I'm going to include my last reply from the previous thread for others to view


Hmm. This is where things start being at odds with each other when you start tying the appVersion to a Docker tag (a logical association). This is where we're having trouble, i.e. the dev scenario I mentioned above. Since version must be SemVer, we simply cannot use Docker tags as a Chart version.

How do we create a visual version difference to the developers when appVersion isn't apparent on the charts?

Because of how k8s works with its applications: in a dev world, there has to be some way to allow tag versions across the board.

It couldn't do things like SemVer can in terms of the ~ or ^ operators, because the version was purely ordered, without semantics.

Why not? We do this all the time with php composer. We can use SemVer or we can use string versions, which are simply parsed or ignored in the versioning schema, i.e. if a version is not a SemVer number, don't include it in ~ and ^ pattern matching.

Since you're quoting my comment on #7299, I'll clarify "It couldn't" as "It didn't" (and maybe still doesn't).

For .deb and .rpm packages, the version string is split in specific ways (by hyphens), but they do not have a semantic meaning such as "This is API-compatible with that" and so you cannot generate an expression like "Give me the latest API-compatible version" or "Give me the latest version with an unchanged API", as you can with SemVer.

I recall that both Debian and RedHat used package aliases to achieve those use-cases (and ABI-compatibility) generally based on soversion numbers. This allowed reasonably consistent behaviour using only package names and ordering-only comparisons.

On the general topic, the way we're using Helm charts for our product is for packaging our various services. The Docker images are however a mere artifact, and their naming is driven by the service version, for which we've adopted SemVer because they offer APIs.

Our CI pipeline takes git repos of code and related scripts, and produces Helm charts that can be installed, which happen to reference Docker images. The tags on the Docker images aren't interesting to our users. We tag them with the git SHA they came from, and then retag the images used in a release. The primary benefit of retagging is that we know never to untag those, while we can untag the git-SHA versions after a short period.

So I'm quite happy with the way Helm works for us, because version contains the exact version of our software, and appVersion contains the same thing but as a string, and no one ever looks at our Docker repo.

I'm a bit less happy with the way charts are versioned in https://github.com/helm/charts/, as there the chart is versioned, not the software, leading to occasional minor (stable) chart version updates that break backwards compatibility. I think this is a likely and hard-to-avoid consequence when you separate a chart's version from the version of the things it contains.

We have a similar problem with the stable/prometheus-operator chart, in our internal "Externally consumed libraries and artifacts" page. That contains a bunch of different pieces of software, so the question "What version are we on?" and particularly "Is it safe to upgrade?" are much more difficult to answer than for Agones, which versions the same way we do.

@jrkarnes

If we keep what I said early in mind about how charts are deployed, what if we had the ability to splay out all the sub-components with a flag on helm describe? That doesn't fix the need to specify the app-version, but it does make it more clear exactly what is installed (which is part of the driving force behind wanting to adjust app-version with a flag).

I'd absolutely love to see that. There's a related feature request at #6932 for example.

Having just flicked back up the discussion, the idea that appVersion is related to Docker image metadata definitely doesn't fit our use-case, as at least some of our charts (the ones our users primarily deal with) do not contain Docker images, being mostly hosts for shared resources (e.g., JWT public keys, values.yaml) plus a requirements.yaml to pull in other charts.

the idea that appVersion is related to Docker image metadata definitely doesn't fit our use-case, as at least some of our charts

I'm not saying that was _the_ intended use. I merely stated it's a logical association. You are still using appVersion as a "logical container" of your internal yamls.

I still don't know how locking version to SemVer has any benefits. Could helm just parse-test version (and appVersion) and proceed from there?

I guess my point was we're not using appVersion at all, it's usually not present in our Chart.yaml, and when it is present, it's identical to version.

The benefit of locking version to SemVer is that you can use the various SemVer operators on it, and reliably parse it to produce orderings and matchings by install.

The RPM and DEB packaging systems do the same thing: their versioning schemes use a different syntax, but it is still a restricted syntax, for the same semantic-parsing reasons, and they care about different semantics.

Given how the helm/charts repo was run, I feel like a single version field with a DEB- or RPM-style version would have been a better choice than SemVer plus an appVersion string. However, that's a completely different, already-sailed ship. And having been both an upstream vendor and a Debian packager in my youth, I appreciate not having to juggle "Which of the version numbers needs to be bumped here?" in our "version is the one truth" packages.

The problem with "sometimes it's SemVer" is that SemVer is parser-indistinguishable from something you might write by hand, or encounter elsewhere, such as a Debian package version that doesn't have an epoch, with disastrously confusing results.

Hi, is there any news on this feature?

After reading all the comments, I can see that it would be really helpful.

Indeed, in our company, as we have several applications which use the same technologies and are deployed the same way, we have one chart for different applications to avoid duplication.
We package a new chart only when there are infra or structural changes.
It's only when we upgrade or install a release that we apply specific values such as the tag, env variables...

We consider the packaged helm chart to be the abstraction layer representing the kubernetes resources and structure expected for one kind of application, and it's only during deployment that we say "OK, I want this kind of application to be deployed on that env with these specific values".

As helm list should display release information, we should be able to see the version of the app deployed within this release.

I left a comment in the similar issue https://github.com/helm/helm/issues/7517
Can we add the ability to override this in values.yaml?
Then we get the command-line option for free via --set.

If we attempt to use helm for any application, this absolutely sucks. Nobody uses semantic versioning for production applications.

I agree. We're currently blocked from using Chart Museum for immutable _application_-based charts. Chart version != app version, making it impossible for us to release through Chart Museum.

I read through a bunch (not all) of the discussion above, so sorry if I'm repeating some points / views. I'm trying to put forward a more considered response.

I like seeing appVersion when I do a helm ls, and the conceptual move away from .Values.image.tag was a good one, BUT not being able to set it at deployment time is a real show-stopper and has forced me to revert back.

I'm firmly of the view that (chart) version is the version of the chart and appVersion is the docker tag. In our CI process the docker tag is also a git tag.
We also have multiple microservices and a desire to keep things as DRY as possible. We have generic charts in a local chart repo, because the vast bulk of java-springboot apps are the same. The bulk of tomcat apps are the same (but different from the springboot ones). Rinse and repeat for other technologies. We then have environmental values as the deployment makes its way through various environments.
Each of these microservices then makes use of the generic chart through CI/CD, e.g.
helm upgrade release-name private-repo/generic-chart --values <environment>.yaml --set image.tag=<docker tag from build step> --namespace <environment> --install
I would prefer to use .Chart.AppVersion rather than .Values.image.tag, but I MUST be able to deploy in a way that is efficient for our Org.

If I do a helm ls I have both CHART and APP VERSION, so the whole "chart version must match app version" idea falls flat on its face right there. Continuing down that route will just alienate people, and at some point the project will be forked, because that mentality is too strict and not what many people are after. It's also starting to go down the route of "Let's remove image.*, nameOverride & fullnameOverride; image.* can be hard-coded in deployment.yaml etc." for VERY similar reasons.

A final point is that many public charts will not exactly match the docker container version they make use of. Take a look at most well-known docker containers, e.g. alpine or nginx, where the major and minor versions are rolling tags and only patch versions are not rolling. Having a 1:1 mapping for every patch version introduces quite a significant overhead for little to no benefit.
It is not uncommon for production environments to be unable to upgrade to the latest version for a multitude of reasons. Don't even speak to most places about rolling versions in production.

The upshot of all of the above then begs the question "Why use a chart repo at all?".
Not being able to overwrite appVersion at install/upgrade time means you either need to download and unpack the chart and edit appVersion per install/upgrade, or you might as well package the needed docker containers into the chart.
The exception is where a completely standard install is happening, and there's already plenty of debate surrounding auto-generating passwords and the like.
I know the last paragraphs seemed like I was going down a rabbit hole and painting myself into a corner, but that IS where "appVersion is the docker tag AND cannot be set via the command line or values" takes us.

@timothyclarke: What you might want to do, for the helm upgrade use-case you described here, is helm package first, which lets you set --version and --app-version, and then you can helm install the tarball, and keep it around as a CI artifact, which increases your reproducibility for the install, as it won't need any --set parameters added. That's what we've moved to, although without the "generic-chart" aspect, as our charts are not generic.

It's also a good chance to add build metadata to the Version, with something like +g<shortCommitSHA>.
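For concreteness, a minimal sketch of that flow (the chart path, release name and version numbers are illustrative, and it assumes the chart's name in Chart.yaml is mychart):

SHA=$(git rev-parse --short HEAD)
helm package ./mychart --version "1.4.0+g${SHA}" --app-version "${SHA}"
# helm package writes <name>-<version>.tgz; keep it as the CI artifact
helm upgrade --install my-release "./mychart-1.4.0+g${SHA}.tgz"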

Per #7517, that let me remove a bunch of sed calls that were rewriting image.tag before installing onto our CI testing cluster, and then again when later packaging.

This approach might actually resolve the problems most people have hit here, if they're building their own charts, and particularly if they're installing from the chart source in their checkout. It doesn't really help if they need this functionality for a chart coming from a repo, but I think that's a different issue than most people here are hitting?

To me, the risk from overriding an app-version (or version) at install-time is that it's not as clearly visible to someone else trying to recreate the chart, that this was done. Unless it's somehow hacked into the values support, it won't be there when one extracts the current config of the chart using helm get values -o yaml, so it becomes _one more thing_ that makes your live chart deployment different from what you get with helm install <some way to specify a particular package> --values <values from helm get values>, e.g., when trying to reproduce an issue seen in production on a testing setup.
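As a sketch of that reproduction flow (release and chart names are placeholders, Helm 3 syntax):

helm get values my-release -o yaml > live-values.yaml
helm install my-release-repro my-repo/my-chart --version 1.2.3 --values live-values.yaml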

To me, the risk from overriding an app-version (or version) at install-time is that it's not as clearly visible to someone else trying to recreate the chart, that this was done. Unless it's somehow hacked into the values support, it won't be there when one extracts the current config of the chart using helm get values -o yaml 

You hit the nail on the head. This should have been in values.yml from day one.

While I understand philosophical arguments against this feature, the field practice shows it would help people a lot -- including us.

There are many charts in the wild which let you set the version of the app through values.yml, specifically because appVersion can't be set at install time.

I'm not sure if this has been discussed (I did a quick CTRL+F and couldn't find any trace), but what about removing appVersion altogether as an alternative solution? It seems to me that it would avoid the whole confusion.

Right now appVersion is treated as a kind of "special" value. I assume that it's there to provide visibility, e.g. I can have chart version 123.5.6 of Prometheus, but it will have appVersion: 2.17.1, so I know which security patch version it has and which Prometheus features to expect, and I can look it up using helm ls.

I guess that could be provided in some different way. Maybe via release labels? Or maybe a jsonPath query over all releases, similar to what is possible with kubectl, e.g.:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

Then, the support of that would be shifted to best practices, instead of being enforced by helm itself. It could be linted too.
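Something close to that is already possible by post-processing helm's structured output; a sketch assuming Helm 3's -o json flag and jq (app_version is the key name in Helm 3's JSON listing):

helm list -o json | jq -r '.[] | "\(.name)\t\(.app_version)"'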

On the other hand, it might be that many people rely on the existing implementation of appVersion; that also needs to be considered.

Maybe taking a step back and understanding why exactly appVersion has been added would help resolving this issue?

@bokysan It was previously in values.yaml; it was moved to Chart.yaml, I'm guessing, so that helm ls shows both the chart version and the docker tag, rather than having to run a command such as
kubectl get deployment <release name> -o jsonpath='{.spec.template.spec.containers[0].image}'

@TBBle I'd address each of your points, but it would make this post as long as the previous one. I think this whole issue comes down to someone deciding that a generic chart is not a valid use case solely by looking at the charts in the public repo.

The entire premise of appVersion falls flat on its face as soon as you need to start using initContainers and sidecars. To give a real-world example, one of the projects I currently manage has nginx with a php sidecar. The nginx and php tags don't change often. The php container/version is very important to the developers writing the code. The container that changes most frequently is the initContainer, which provides the content.
Do I set appVersion to the initContainer, to the php container, or to nginx, and by choosing only one of these, what information has been lost?

If it's important to your users, then it should be the PHP version, surely? That's what you're advertising. You could also stick all three in your appVersion; it's a free-text field after all.

If you want to force appVersion == container image tag, then you're going to find that challenging with non-trivial charts, i.e. charts with more than one container, or with no containers. That's not really the point of it though, or it would be imageVersion. If you are packaging a single upstream app, use its version. If you are building a package of several upstream apps, pick one, e.g. prometheus-operator's chart, or ignore appVersion, since it's an optional field.

If your chart source is _part of_ the application, e.g. Agones, just leave appVersion blank, or copy it from version if you have tooling that depends on it.

None of these things need to be helm install-time decisions. As I mentioned earlier, helm package is late enough for all the workflows except "switch out a different upstream version at install time for a third-party chart" and that should be in --values or --set like all the other "change X at install time" actions.

Honestly, the actual missing feature is probably "pass appVersion through tpl", so you can have it read from .Values.image.tag or whatever works for you.

If you want to force appVersion == container image tag

Then we're probably in https://github.com/helm/helm/issues/7517. That may be where all this stems from.

I do not understand this discussion at all. Why not giving people the option to use the app version in a way they think is the best fit for them is so much a big issue?

In a current form for me will be best to not have this APP VERSION at all. It is bringing only confusion to people in our project. We have >80 services which are using the same helm chart and because it is not possible to easily change this APP VERSION in the helm upgrade -i ... I see that all of our applications will forever stay with 1.0 here. And I do not plan to repackage the already packaged chart to just change the app version. Why I should complicate my CI to fit your design???

I also see that I just need to say to everyone to not use the helm list as it will be something not useful for them. To check which version of our applications they have they will need to use something else.

I was optimistic at the start of reading this conversation but after going to the end seeing how you discuss this and how you fight to force users to have your way of thinking I lost hope now :(.

Having two different outputs, "CHART(version)" and "APP VERSION", in helm list, helm history and the like is very helpful and avoids the need to dig deeper into command-line options and output parsing to get the most important facts.

If "CHART(version)" and "APP VERSION" are tied at build-time ("helm package"), the whole benefit of having two different values is somewhat lost. Building a chart and updating the APP VERSION without incrementing/updating the CHART VERSION will result in big trouble as the same CHART VERSION will give you very different results. :-<

So for now we are forced to build the package with every release and increment "CHART(version)" and "APP VERSION" in sync to not run into an insane/unclear situation.

As I just learned, we could drop "APP VERSION", as it is optional, and use some custom value instead of {{ .Chart.AppVersion }} for our image... but then helm list would be way less informative. :-<

From Users(Developers) Perspective:

  • ability to set some version property/flag/value at install time
  • ability to see that version property/flag/value in helm list/history output with label "XX version"

Any chance we can get that done?

If "CHART(version)" and "APP VERSION" are tied at build-time ("helm package"), the whole benefit of having two different values is somewhat lost.

I think this is the crux of the misalignment. The benefit of App Version as I use it, and as appears to be intended by the current setup, is that you know the version of a wrapped Application _for that chart version_, because the Chart Version is the version of the whole chart, not the version of the templates in the chart. I'd hate to have every statement like "We require Version ~X.Y of the Helm chart" to require "Oh, and don't mess with the AppVersion" added to the end.

This benefit is lost if App Version (and the actual version of the app) is changed at install time, because now the Chart Version doesn't tell you which App you're using, and you lose the ability to use SemVer to ensure, e.g., that you have the latest-but-API-compatible release.

For the use-case @pniederlag is describing, being able to have appVersion be a template that points at a Values entry would make helm list do what is desired, as long as the Chart supports having its application version (probably a container tag) changed at install time, via --set or --values like every other "changed at install time" configuration option.

Here is where I ran into an issue with AppVersion.

We are using both the Release version and AppVersions.

To set them now, I have to call helm package explicitly before helm upgrade --install, to create a local tar archive with both versions set.

Now I'm adding helm-secrets support, and...
And its wrapper can't work with helm package!

So - what now?
Drop all our versions' support and flow?
Drop using Secrets?
Any ideas?

Actually it's more of an issue for helm-secrets, but it's related to the --set app-version ability discussed here as well, because if I could use it that way, I wouldn't need to call helm package at all.

UPD Oh, wait... I'm still able to use helm secrets upgrade chart.tgz -f secrets.yaml...
Okay.
But still, +1 to add the --set app-version.

So - what now?
Drop all our versions' support and flow?
Drop using Secrets?
Any ideas?

We build two packages: a helm package with just the charts, sans the env values and secrets file. We rename this package to chart-{app-version}.tgz since chart-version means nothing for us, nor does chart-version support our app-version syntax. Our app updates include any potential chart updates (same repo, using git tagging).

Then we have a second tgz that's environment specific, chart-{app-version}-{env}.tgz that includes the chart tgz, values yaml, and encrypted secrets file. This file also includes a "package.yaml" that contains values such as tag, app, and environment name for our automated scripts to deploy using helm-secrets.

We identify our application versions, or most of them, with a semantic version number, and we use this number as the APP VERSION to identify releases easily in the helm history list for rollbacks or other operations.
As the system does not allow injecting it automatically at deploy time, we execute this simple command automatically in our CI/CD pipeline before the deploy command:

sed -i -E "s/^appVersion: (.*)/appVersion: ${deploy.project.version}/" ${chartPath}/Chart.yaml

It's tricky, but it works as expected.

@bvis
The issue with your workaround is that you have to edit the chart in your CI/CD pipeline.
If you use a centralized chart repo, then you're forced into helm pull repo/chart --untar
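Put together, the centralized-repo variant of that workaround looks roughly like this (repo, chart, release and variable names are placeholders):

helm pull private-repo/generic-chart --untar
sed -i -E "s/^appVersion: (.*)/appVersion: ${APP_VERSION}/" generic-chart/Chart.yaml
helm upgrade --install my-release ./generic-chart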

Any further progress with programmatically injecting Chart.yaml/appVersion? Are there workarounds? It would give a tremendous boost to Helm in CI/CD.

@jakovistuk as far as I can tell, charts that use appVersion to show the container version do this directly via Chart.yaml, as seen in nginx-ingress/Chart.yaml for example...

I haven't given much thought to this issue for quite some time, so this may be a really dumb question, but is there a way to use Helm CLI to override appVersion?

It seems like a lot of people here are asking for a way to override the 'appVersion' field. The original intent/request in this issue is to allow --app-version as a replacement for --version, so a user could run 'helm fetch --app-version=v0.15.0' and Helm would work out which chart version last specified v0.15.0 as an appVersion and fetch that.

In our project/chart (cert-manager) we want to make it as clear to end users as possible which version they are installing, so allowing them to install by app version instead of chart version would be a far more natural installation experience.

That said, this issue was opened 2y ago now, and since then we have opted to just keep both of these version numbers in sync/lock-step. After a couple of years doing this, it’s been surprisingly easy and pain free, albeit users sometimes have to wait a couple of weeks for a new official release if there are changes made to our deployment manifests.

Given the age of this issue, its length, the huge variety of slightly different feature requests, and the changes in the Helm project since then (Helm 3, OCI charts etc.), I don't think this issue is in a good state to be driven forward as a feature request in its current form. I'm going to close this issue, but anyone else who has a similar feature request is best to open a new issue and link to any other relevant comments in this issue to provide evidence. Hopefully that'll work better for the Helm team's triage process so your requests can get the visibility they need!

I also think this sort of functionality could be, and probably is best, implemented as an external tool or wrapper around Helm, especially when taking account of the OCI changes, which I think would make this trickier to implement.

Until this is solved (or not), here is how I solved this in my CI/CD (GitLab):

Package the chart with the app-version, then deploy it.
I know that the chart version is not meant to be the same as the appVersion, but in our case it is fine as a workaround.

deploy:
  image: alpine/helm:3.2.4
  stage: deploy
  environment:
    name: ${ENV}
  script:
    - helm package --app-version=${CI_COMMIT_TAG} --version=${CI_COMMIT_TAG} ${NAMESPACE}
    -  | 
       helm upgrade -i --wait ${CI_PROJECT_NAME} ./${NAMESPACE}-${CI_COMMIT_TAG}.tgz \
       --set image.repository="${CI_REGISTRY_IMAGE}" \
       --set image.tag="${CI_COMMIT_TAG}" \
       --set-string ingress.enabled="${INGRESS}" \
       --set service.port="${CONTAINER_PORT}" \
       --set service.targetPort="${CONTAINER_PORT}" \
       --set dc="${CI_ENVIRONMENT_NAME}" \
       --set project="${CI_PROJECT_NAME}" \
       --namespace ${NAMESPACE}
    - helm history ${CI_PROJECT_NAME} -n ${NAMESPACE}
  tags:
    - kubernetes
  only:
    - tags

If you default your image.tag to {{ .Chart.AppVersion }} then you won't need to --set it during the install; it'll already be correct. This works nicely for auto-builds as well, when your Docker images are tagged with a SHA1, so the AppVersion matches the Docker image tag and the Version is an auto-build SemVer.
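A minimal sketch of that defaulting idiom inside a chart's templates/deployment.yaml (the value names are illustrative):

    containers:
      - name: app
        # falls back to the chart's appVersion when image.tag is unset
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"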

There's no problem with Version being the same as AppVersion if your AppVersion happens to be SemVer.

For packages produced by my team, we're moving towards things that look for AppVersion, e.g. image.tag, defaulting to Version if AppVersion is unset. It's not a huge difference, just one less argument to helm package for tagged releases, but only makes sense if your chart is built from the same SCM as the thing you're packaging.

@TBBle that won't work if you are using a sub-chart to set your image tag

Do you mean the image.tag is in a subchart, but you're trying to use the version of a parent chart? If so, yes, that's very awkward, and won't be easy to manage. I just bounced off exactly this layout in https://github.com/googleforgames/open-match/'s Helm charts. I suggest rolling the sub-charts in question back up into the main chart in this case.

Charts should be independently isolated/usable units, not relying on parent-chart behaviours to function. The subchart has its own version, _that_'s the one that its images should be using, otherwise, why is it a subchart?

In Open Match's case, the subcharts appear to be used so that XXX.enable can be used as a shortcut in the values.yaml to disable a bunch of stuff at once, but then it introduces a bunch of structural issues like this. Open Match's subcharts all make heavy use of the parent chart named templates, and also have a local version of 0.0.0-dev, so there's already two code-smells that something is not structured well.

Or perhaps I've misunderstood the observation you're making.

@haimari Unfortunately, it's not working (related to https://github.com/helm/helm/issues/6921?):

> helm package $DIR/deployment/chart --app-version="1111e8" --version="3454e5" --namespace stage
Error: Invalid Semantic Version

But, this works:

> helm package $DIR/deployment/chart --app-version="0.0.0-1111e8" --version="0.0.0-3454e5" --namespace stage
Successfully packaged chart and saved it to: /Users/aws/service-0.0.0-3454e5.tgz

and even this (but seems dirty):

> helm package $DIR/deployment/chart --app-version="0-1111e8" --version="0-3454e5" --namespace stage
Successfully packaged chart and saved it to: /Users/aws/service-0-3454e5.tgz

helm version version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"dirty", GoVersion:"go1.15.3"}

I think @haimari's solution as-is only works because they use semver-compatible tags in their CI pipeline (i.e. this will be a job run for tagged releases, not run on every commit).

@a0s: I generally suggest:

helm package $DIR/deployment/chart --app-version="<container image tag>" --version="<semver version>"

And then have your container image tag value be something like {{ .Values.image.tag | default .Chart.AppVersion | default .Chart.Version }}, so that you don't need to change it on the fly, as @haimari does.

In your examples you have what appear to be two different git versions, is that right? Is one for the container image, and one for the chart?

With SemVer, you can't really put a git commit SHA into the meaningful part of the semver, because semver implies ordering, and git commit SHAs are not sortable.

So you'll want to use a version something like 0.0.1-alpha.<build-id>+g<gitcommitsha>, where <build-id> is something like the pipeline or job ID from your CI system, so it's always going up as you commit to your project. That way you always get the latest build when you ask for it.

In SemVer, using a - means it's a pre-release for that version, so 0.0.1-<anything> falls between the 0.0.0 and 0.0.1 releases. The part after + is the build-info, and it's ignored for sorting, hence a good place to put git SHAs or branch names or other non-sortable tracking information.
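A sketch of constructing such a version in CI, assuming a GitLab-style CI_PIPELINE_ID variable and an illustrative chart path:

SHA=$(git rev-parse --short HEAD)
helm package ./mychart --version "0.0.1-alpha.${CI_PIPELINE_ID}+g${SHA}" --app-version "${SHA}"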

So with what you have used here, 0-3454e5 will appear to be newer than the next commit, if its SHA happens to start with a 2, e.g., 0-2764e1.

In your examples you have what appear to be two different git versions, is that right? Is one for the container image, and one for the chart?

Yes, the app and the chart: they are two independent pieces of software.

That way you always get the latest build when you ask for it.

What if I don't want (and can't even imagine) asking for the latest?

0.0.1-alpha.<build-id>+g<gitcommitsha>

This string (after interpolation) seems too long to fit into one column of standard helm list output :)

I always know which version (SHA hash) of the app I want to install (I pass it with --set args), and I always know which version of the chart I use (as @haimari described, I will always use git checkout chart && helm package && helm upgrade .tar.gz locally in my CI/CD).

What could go wrong?
1) An error during a regular helm upgrade. OK, I will fix the error and try again (with another SHA commit of the app, 99% of the time) (or use --atomic instead).
2) A manual rollback: helm rollback <RELEASE_NAME>, or deploying the previous SHA commit through CI/CD.

What am I missing?

PS: To be honest, I want to use the short SHA part in version and app-version for information purposes only (during helm list).

If it's just for information purposes, then it goes after the + in a SemVer, not -. If you _never_ care about ordering of releases, or distributing Helm charts to anyone, and your chart or app aren't already SemVer'd, then 0+g<commitsha> is a valid SemVer (equivalent to 0.0.0).

This is what Open Match's Helm auto-built charts do, for example; they are all currently 0.0.0-dev, and we've started looking at making that 0.0.0-dev+g<commitsha> so that if you're looking at what you have installed, you can at least tell _which_ master build you have.
