Helm: Scoping releases names to namespaces

Created on 3 Mar 2017  ·  46 Comments  ·  Source: helm/helm

Hi all,

After commenting on other issues related to this topic and talking about it with @technosophos on slack, I wanted to open an issue to have a wider and more persistent medium to discuss how Helm handles release names and Kubernetes namespaces.

Background

When I started out on our Kubernetes journey, I read up on namespaces and liked the idea of being able to create multiple environments as namespaces with scoped resource naming, keeping my environments as identical as possible. In my first attempts at CI/CD with homegrown kubectl wrappers this worked well, but we moved quite quickly to Helm. This is where we started to struggle to achieve this, as I soon ran into the problem that a release name has to be unique across namespaces (cf. https://github.com/kubernetes/helm/issues/1219). I tried to stick to my approach by using name: {{ .Chart.Name }}, but this brings plenty of problems of its own.
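To make the workaround concrete, a template that names resources after the chart (rather than the release) looks roughly like the sketch below; the chart and field values are illustrative, not taken from a real chart:

```yaml
# Sketch: a Service named after the chart instead of the release.
# This keeps the name identical across namespaces, but installing the
# same chart twice into ONE namespace now collides on "redis".
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}          # e.g. "redis" in every namespace
  labels:
    app: {{ .Chart.Name }}
    release: {{ .Release.Name }}   # release is still recorded as a label
spec:
  ports:
    - port: 6379
  selector:
    app: {{ .Chart.Name }}
```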

Problem description

The more I think about it and read @technosophos other comments on issues such as https://github.com/kubernetes/helm/issues/1768 and https://github.com/kubernetes/helm/issues/980, the more I wonder if the inconsistencies compared to the native kubernetes namespace handling are really needed or worth it.

To summarise, I understand from these that a Helm release is not bound to a namespace, but it does define the namespace in which it will (most likely) create its resources. You could theoretically install into multiple namespaces by overriding .Release.Namespace, but doing so is strongly discouraged, as Helm can't reliably operate across multiple namespaces.
Nor is Helm very strict about peculiar namespace usage, such as upgrading a release with a different namespace than it was installed in, or omitting the namespace entirely after installing (things which kubectl does not allow you to do).

Kubernetes, on the other hand, scopes almost all of its resources to a namespace. To quote the docs: "Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces." Kubectl is also very strict in requiring you to pass a namespace to address resources.

Combining these two, I have the impression that Helm's current approach blocks users from using Kubernetes' native namespace scoping while at the same time not supporting cross-namespace charts/releases. Especially the fact that Helm handles native features differently, and essentially blocks you from using them, feels a bit wrong to me.
Regarding the remark that this decision was made to be able to support cross-namespace releases in the future: I don't see how namespace scoping would block that. You would have to be careful about naming (similar to how you need to be careful today) and about passing namespaces, but the current approach of passing a single namespace at install wouldn't work for that either.

keep open proposal


All 46 comments

I'm not sure if I understand you. You want to deploy to multiple namespaces while only having one release name?

@21stio Exactly. From the Kubernetes docs:

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.

and

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.

Personally, I can't think of a good reason why helm wouldn't respect this concept of namespaces.

I agree. All my namespaces are in the form ${site}-${environment}, but my releases are ${site}-${environment}-${description}, where site might be internal or www, environment might be dev, staging, team-a, or team-b, and description could be something like nginx, migrations, cache, etc.

But the ${site}-${environment} is extremely redundant:

NAMESPACE                    NAME
www-dev                     www-dev-redis-1234567890-cj241
www-dev                     www-dev-proxy-1234567890-kfd44
www-staging                 www-staging-redis-1234567890-cj241
www-staging                 www-staging-proxy-9876543210-kfd44
internal-team-b             internal-team-b-redis-1234567890-cj241
internal-team-b             internal-team-b-nginx-1234567890-cj241

Is what I end up with, but I'd prefer that the pods were just redis-1234567890.. or proxy-9876543210..

I use my release name in my chart templates, so all my service and pod names include all this extra stuff. I already pass the namespace to the templates, so I could easily include that in the name if I wanted, but the way it is now, the namespace ends up in all my resource names through the default Helm scaffolding.

K8s namespaces already scope names for us; I really dislike having to prefix all my things with the namespace when namespaces are designed to prevent exactly these clashes.
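For context, the prefixing described above comes from the default chart scaffolding: the fullname helper in _helpers.tpl bakes the release name into every resource name. A rough sketch of that helper (chart name is illustrative):

```yaml
{{/* Roughly the "fullname" helper that `helm create` scaffolds.
     Because the release name is embedded in every resource name, a
     release called www-dev-redis yields pods like www-dev-redis-... */}}
{{- define "mychart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```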

To put it explicitly, it would be really nice if, with respect to namespaces, I could do the same things with Helm charts as I can with services and other native k8s types.

For example, I would like to be able to do the following:

helm install --namespace abc --name redis stable/redis
helm install --namespace def --name redis stable/redis

@Janpot @bcorijn The assumption made above is that Helm charts only work with objects that are encapsulated inside of namespaces. We do not wish to confine Helm to only those resource kinds.

What about Third Party Resources, which are not namespaced? Or RBACs, where "namespace" is a policy attribute, not a location (https://kubernetes.io/docs/admin/authorization/)?

I know I've said it several times elsewhere, but our ultimate goal is to make it possible to deploy resources into multiple namespaces from the same chart. (Use case: An app has a control plane and a data plane, and you want to deploy them each into separate namespaces to create security boundaries)

If we bind a release to a namespace, we lose the ability to:

  1. Directly manage namespaces
  2. Manage RBACs and service accounts
  3. Manage TPRs (and any other non-namespaced objects)
  4. Eventually support multi-namespace charts.

I understand that this makes the naming problem a little harder for you, but it allows Helm to operate on a much wider array of Kubernetes resources.
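Points 2 and 3 above concern cluster-scoped objects. As a concrete illustration (a generic sketch, not from any particular chart), a ClusterRole carries no metadata.namespace at all, so there is nothing for a namespaced release to own it through:

```yaml
# A cluster-scoped resource: note the absence of metadata.namespace.
# RoleBindings in ANY namespace may reference this ClusterRole, which
# is why "faking" a namespace for it would be misleading.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```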

Would it be possible to support both namespaced and non-namespaced releases?

@technosophos so to summarize, there are two main drivers:
1) Managing resources that are not namespaced
2) Future plans to allow charts to install across namespaces

I do see your point, but I'm not sure it's a reason to stick with the current implementation, as I have the impression you also need to force things a bit to address these concerns?

For multi-namespace charts to work nicely/natively, you would most likely need quite an overhaul of the namespacing system, as the current concept of Helm putting a release into a namespace won't work? _EDIT: Just realized that if releases were actually namespaced, a multi-namespace chart could just be an umbrella chart containing two releases with different namespaces?_

For managing non-namespaced resources: I don't have personal experience with them, so it's a bit hard to judge, but again I feel this forces Helm into a less than ideal way of working right now, as a release that manages namespaces, RBAC or TPRs will have a namespace but just ignore it?
I might be missing something due to inexperience with these resources, but wouldn't scoping the names while ignoring the namespace have the same end result? It would just put more responsibility on the user to verify that their release names and selectors are correct/unique when dealing with these resources (which I agree is quite the responsibility).

So maybe just scoping releases is not the way to go, but taking another look at how they are handled in Helm, and how they will be handled in the future, is worth it? Having both options as @Janpot mentions could work: "global" releases and namespaced releases?
My _very personal_ opinion is also that deploying in the way @kylebyerly-hp, @chancez and I described above is a lot more common than the two use cases that prevent this way of working.

First, let me re-iterate the main point: Helm charts operate on a global level, not on a namespace level. So their names are globally unique.

For multi-namespaced charts, what we need to fix is Tiller's ability to query across namespaces. (You can actually _install_ multi-namespaced charts now. You just can't reliably upgrade or delete them, because Tiller can't reliably query them).

For non-namespaced items, things would get very complicated. We'd have namespaced releases managing un-namespaced things that, in turn, could impact other namespaces. Please take a look at how RBACs and TPRs work. These are not things that Helm can simply decide not to support, and "faking" a namespace would cause more problems than it's worth, especially with RBACs.

I still haven't seen a good reason to namespace a release name. Your initial complaints are based on a misunderstanding that all (important) things in Kubernetes are scoped to a namespace. But important things like TPRs and RBACs are not. The bulk of the other complaints seem to be more about the fact that the _ad hoc_ naming schemes they use are "not pretty" with Helm. Working around that by creating a HUGE compatibility-breaking change that mis-represents releases as "in a namespace" seems like the wrong approach to take.

@technosophos

You can actually install multi-namespaced charts now

How? Where in the config should the namespaces be specified?

Do you plan to officially support multi-namespace releases?

We do not plan on fully supporting multi-namespaced releases until Helm 3.0 because doing so will break backwards compatibility and require a major refactor of much of Helm/Tiller's Kubernetes code.

Unfortunately for us not being able to deploy & manage multiple namespaces using helm is a deal-breaker.

Our plan was to create an umbrella chart, which would have all our apps (i.e. smaller charts) as dependencies. All our apps live in their own namespaces by design (in the future we'd like per-namespace RBAC). With an umbrella chart we could install & upgrade an entire cluster of different microservices at once, given only one values.yml, which would be really convenient.

@technosophos, thanks. Noted on the fact that support for the above will not arrive soon, not until Helm 3.0 at least.

Is there a general idea of what exactly needs to be refactored in Helm/Tiller to support multiple namespaces? Or is 3.0 too far away to say?

We've resorted to treating the helm name as more of a UUID, using --name-template and letting it generate a simple but random name. I cannot say I prefer this over respecting the namespace itself but I do see both points and for us this will suffice w/ minimal overhead.

e.g. https://github.com/kubernetes/helm/pull/1015#issuecomment-237309305

> helm install --namespace www-dev --name-template "{{randAlpha 6 | lower}}" stable/redis
> kubectl --namespace www-dev get pods
NAME                                    READY     STATUS    RESTARTS   AGE
uvtiwh-redis-4101942544-qdvtw           1/1       Running   0          14m
> helm list --namespace www-dev
NAME    REVISION        UPDATED    STATUS          CHART                   NAMESPACE
uvtiwh  1               ...        DEPLOYED        redis-0.8.0             www-dev

@icereval how will you find the name for redis (uvtiwh) in your apps to connect to it?

A pattern I'm considering using in our clusters is:

  • One Tiller instance in kube-system, to be used by cluster admins
  • One Tiller instance per namespace, with more limited RBAC permissions, to be used by the developer team that owns that namespace

The "Helm release names are globally-unique" design principle is a headache for soft-multitenant deployments like ours, so I'm interested in hearing more about the recommended approach!
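The per-namespace Tiller pattern above can be sketched with Helm v2's own flags as follows (the namespace and service-account names are illustrative, and the Role/RoleBinding that confines Tiller to its namespace is omitted here):

```shell
# Sketch: one Tiller per namespace (Helm v2).
# A Role/RoleBinding limiting this service account to team-a would be
# created separately by the cluster admin.
kubectl --namespace team-a create serviceaccount tiller
helm init --tiller-namespace team-a --service-account tiller

# The developer team's client then targets that Tiller explicitly:
helm --tiller-namespace team-a install --namespace team-a stable/redis
```

Note that releases tracked by different Tiller instances live in different stores, which is what makes release names effectively per-namespace under this model.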

I was very disappointed when I found out that Helm does not adhere to the concept of identifying releases based on their name and namespace. In my opinion this does not follow the design principles of Kubernetes where resources are unique within their respective namespace (with exception of some global resources).

As other posters in this thread have commented, we have multiple environment-suffixed namespaces for different groups of applications. We have hundreds of different deployments, each in three or four environments. We rely heavily on unique DNS names within namespaces so that we can refer to services by the same name in different namespaces; e.g. our redis service can be accessed at tcp://redis in both namespace a-test and a-prod, where both namespaces have a deployed redis.

Targeting this as a discussion point for helm 3. It appears there is a huge amount of demand for this.

Contrary point:

Pretty much all of our chart trees deploy artifacts across multiple namespaces, split along persistence / API / Level-7 ALB (+static) lines. From that standpoint, we LOVE the fact that Helm release names are global.

We found the --namespace option in Helm semi-useless from the standpoint of assembling multi-layered applications, where base layers can be reused by red/blue-deployed upper layers. Instead of injecting {{ .Release.Name }}-derived strings into the names of the artifacts, we create a new namespace for each deploy. This allows us to propagate deterministically formed service URLs through the chained configs (same_service_name.some_product_release20171102a.svc.cluster.local > same_service_name.some_product_release20171105c.svc.cluster.local).

Since automatically generated release names are gobbledygook anyway (no fidelity into what stands behind that thing in helm list), we hard-override --name with a string derived from the product/stack name and a monotonically increasing release/build version ("appname-v20171103xyz"). We would love to be able to define the value of --name-template somewhere in the chart and have it use the chart name plus a datetime-derived or explicit build ID value.

Example

Base persistence layer

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: {{ .Values.global.product }}-persistence-{{ .Values.global.tier }}
  labels:
    app: redis
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
...

Consumed from another namespace like so:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Values.global.product }}
  namespace: {{ .Release.Name }}
  labels:
    app: {{ .Values.global.product }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
...
          env:
            - name: REDIS_SERVER_HOSTNAME
----->      value: "redis.{{ .Values.global.product }}-persistence-{{ .Values.global.tier }}.svc.cluster.local"

The above 2 templates are parts of 2 separate charts (a persistence chart and an API chart) and can be run separately or jointly through a 3rd, overarching chart. In both cases, because of the use of .global., the values are overridden once at the command line and apply cleanly to all sub-charts.

With this approach, since the destination namespace value is sometimes a partial derivative of the release name and sometimes semi-fixed, we pretty much rely on release names being global so that the system complains if we try to create a stack with the same global release name.

One of the benefits of having and using namespaces is that object names (incl. DNS names) within them are local and don't have to change from namespace to namespace. Specifically, in @dvdotsenko's example above, REDIS_SERVER_HOSTNAME should be the same (e.g. just redis) and should not need to be injected with globalized values. The reason is avoiding repetition.

From simplicity standpoint (and putting aside some naturally complex cases, like multi-namespace deploys and non-namespaced objects), the ideal case is that the namespace "assembles" your stack and contains exactly one instance of your application.

This allows names within the stack to be local, simple and, most importantly, fixed as they are namespace-relative.

A possible approach is for Helm to support the simple case more or less as it does today (while avoiding prefixing objects with the namespace); this would produce a reasonable, safe, best-practice default that works out of the box for most uses. It could also have an advanced namespace mode allowing more freedom (at the expense of complexity), to cover the use cases @dvdotsenko and @bcorijn describe.

My $.02

I have to concur with @pnickolov, this is a major blocker for us. In our use case we have over 150 environments, and multiple clusters, that must run variants of the same application stack. Namespaces solve this problem allowing the separation of environments and simplicity of configuration, particularly relative to service discovery.

Without an easy way for service end-points to be configured in sibling charts... Purely via values...

I find this confusing too. As @technosophos writes:

A release is not bound to a namespace. (That's why a release itself can contain a namespace definition). In fact, it should be possible (though I can't say I've personally tried) to deploy a single chart that creates multiple objects in multiple namespaces.

I am struggling to understand exactly this. I've looked at the documentation and I've looked at several issues here on GH and I am still confused:

  • On one hand, I can use helm install --namespace to specify the namespace I'd like to target
  • On the other hand, my chart can specify whatever namespaces it wants inside its metadata objects.

So, my questions:

  • If the namespace specified by helm install --namespace does not exist, does Helm create it? Does it then set that namespace on all the resources it creates from the chart?
  • If a resource template specifies a namespace in its metadata, does helm overwrite it?

These questions have made me hesitant to even play with --namespace, it's so unclear. If anyone can help me make sense of it, I would really appreciate it. Thank you!

If the namespace specified by helm install --namespace does not exist, does Helm create it?

Yes. If the namespace does not already exist, --namespace creates the specified namespace for the chart.

If a resource template specifies a namespace in its metadata, does helm overwrite it?

No. If you happen to provide the same namespace via --namespace as well as a Namespace resource in the chart, there will be a conflict: the namespace will first be created by Tiller, and the install will then bork when the chart tries to re-create the same namespace.

For further context, the idea for helm is to install all the resources in the namespace provided by helm install --namespace. Users who are "hardcoding" the namespace in the chart usually want to install a chart in multiple namespaces.

This is a little off-topic from what OP is suggesting, but please feel free to open a new ticket or join us on Slack if you have further questions! :)

Not sure I want to wade into this discussion 😄 Be kind please 🙏

The helm --namespace parameter is really --default-namespace

Reading this thread and related issues, there appears to be plenty of confusion around the --namespace option, because people (quite reasonably) assume it is like the kubectl --namespace they are used to, which effectively limits activity to that namespace (by the side effect of a parsing error, not actual security). That is not the case for Helm, since Tiller is a cluster service that operates over the whole cluster. The option would have been better named --default-namespace, i.e. this is the namespace your resources will go to if they don't specify a particular namespace.

Multi-namespace releases are needed too

I already rely on charts that deploy different components of each release into multiple namespaces, and I am looking forward to enhanced support in helm 3.0. 👍 🎉

Multi-tenant with helm and namespace restrictions is already possible

I also see the use case where people want multi-tenant, namespace-restricted installs. IMHO scoping or restricting releases to namespaces is not something helm/tiller should concern itself with enforcing, that is the job of RBAC. There are at least two models for achieving this, one is possible right now:

  1. Deploy a per-namespace tiller with a Service Account and RBAC that only allows operations in that namespace. This works now and I see people using it. Ideal for multi-tenant clusters.
  2. For tiller to support k8s user impersonation, and so deploy each release as the helm user. This is being discussed for future helm versions, and appears to have some implementation challenges. But this would allow a cluster-service tiller to enforce RBAC namespace restrictions, whilst still supporting multi-namespace-spanning releases.
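Option 1 above can be expressed with RBAC roughly like this (a sketch only; the namespace, role, and account names are illustrative, and real deployments would grant narrower verbs):

```yaml
# Sketch of RBAC for a namespace-confined Tiller (option 1).
# The Role grants rights only inside team-a; the binding ties it to the
# tiller service account deployed into that same namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: team-a
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: team-a
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```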

Same-named resources for different releases in different namespaces is already possible

For people wanting to install the same chart into different namespaces with the same resource names (e.g. redis): that is entirely possible; it comes down to how you write your chart templates. You don't need to prefix resource names with the release name; that is just a default/convention a lot of charts follow. Recent charts already have a fullnameOverride value that lets you nix the release-name prefix. You can deploy your redis as redis with every single helm install if you like.
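For example, assuming the chart honours the fullnameOverride value mentioned above, the two installs from earlier in the thread could be approximated like this (release names themselves must still be globally unique under Helm v2, hence the suffixed --name values):

```shell
# Sketch (Helm v2): identical in-cluster resource names per namespace.
# Assumes the chart supports fullnameOverride; names are illustrative.
helm install stable/redis --name redis-abc --namespace abc --set fullnameOverride=redis
helm install stable/redis --name redis-def --namespace def --set fullnameOverride=redis
```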

We are in a similar situation as @gmile and wanted to know the best practice for doing this. Our core application, ingestion-service, has a dependency on kafka, which in turn depends on zookeeper. We want all three in their own namespaces but managed through a single helm install. We were planning to add kafka to the requirements.yaml of ingestion-service, but getting kafka into its own namespace doesn't look straightforward with Helm, so we ended up removing it from requirements.yaml and running a separate helm install for each deployment.
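For reference, a Helm v2 requirements.yaml only declares the dependency itself; there is no per-dependency namespace field, which is why a separate install per namespace was the workaround (the version and repository URL below are illustrative):

```yaml
# Sketch of ingestion-service/requirements.yaml (Helm v2).
# The dependency schema has name/version/repository (plus optional
# condition/tags), but no "namespace" field, so kafka cannot be
# directed into its own namespace from here.
dependencies:
  - name: kafka
    version: "~0.8"
    repository: "https://kubernetes-charts-incubator.storage.googleapis.com/"
```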

Just an FYI that this is noted and part of the Helm 3 proposal listed under Section 3: State Management. Feedback welcome!

That is fantastic news @bacongobbler 😄🎉

@bacongobbler Is Helm 3 looking to support specifying separate namespaces for dependent charts in requirements.yaml as @prat0318 described?

From the proposal doc (give it a read! :smile:):

$ helm install -n foo bar --namespace=dynamite
# installs release, releaseVersion, and un-namespaced charts into dynamite namespace.

As with Helm 2, if a resource explicitly declares its own namespace (e.g. with metadata.namespace=something), then Helm will install it into that namespace. But since the owner references do not hold over namespaces, any such resource will basically become unmanaged.

@bacongobbler I read it, but I still don't see this being supported. I don't mean hardcoding metadata.namespace in charts that I control; that's always been supported. What I mean is specifying the namespace for a third-party chart that I don't have the ability to edit. For example, in my requirements.yaml I depend on stable/kubernetes-dashboard and want it installed into kube-system, but my chart to go into the development namespace.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

It appears that this very feature request can be fulfilled by helmfile. From what is in the readme, it should be possible to specify different releases, each scoped to a namespace of its own.

@gmile I'm 99% sure the helmfile maintainers haven't fixed this particular issue in helmfile. If you declare two releases named vault with different namespaces in your helmfile.yaml and run helmfile sync, it'll fail because the release name vault was claimed by the first release.

disclaimer: I haven't tested this using helmfile, so I would love to be told I'm wrong. 😄

Just in case the last comment was missed, we are addressing this in Helm 3 with the changes to how Helm handles releases. :)

@steven-sheehy that particular issue could probably be fixed via the sandboxing model, by extending the subchart to deploy to a different namespace than what's defined.

/remove-lifecycle stale

Implemented in Helm 3. Changing the namespace context refers to a different instance altogether.

><> ./bin/helm version
version.BuildInfo{Version:"v3.0+unreleased", GitCommit:"5eb48f4471ac3aa9a3c87a07ee6f9e5bbc60a0e1", GitTreeState:"clean"}
><> ./bin/helm list --all-namespaces
NAME            REVISION    UPDATED                                 STATUS      CHART               NAMESPACE
chartmuseum     1           2019-02-08 08:56:29.566393188 -0800 PST deployed    chartmuseum-1.9.0   default  
chartmuseum     1           2019-02-08 09:14:01.978866327 -0800 PST deployed    chartmuseum-1.9.0   foo

Great news @bacongobbler

Given this change, would it make sense for the namespace column to be moved to the first column in list output, so that the first two columns uniquely identify a release?

The default sort could be by namespace and release, so that releases in the same namespace are grouped together, e.g. all the kube-system releases would be together.

Sure.

for now, I just use

helm install --name <namespace>-<name> ...

Yes, the current way things work stinks, but all you need is globally unique names to manage, and there's no reason you can't just create a compound key for the name of the release.

Ok so it sounds like there are 3 fundamental scenarios (with potential for various permutations mixing each):

  1. single namespace'd chart.
  2. resource which is not namespaceable.
  3. multi-namespace'd chart.

Would this be a reasonable approach to address the different scenarios:

  1. inject/override the namespace when supplied with the --namespace flag.
  2. same as 1 but ignore the namespace for those resources that lack a namespace.
  3. exit citing "can't override a multi-namespace" resource or similar.

Aside: I don't use tiller, preferring helm template so not sure how much that changes the challenges.

@technosophos

I'm trying to understand how Helm v2 interacts with namespaces and how v3 will be different, and one of your old comments in this thread confuses me:

First, let me re-iterate the main point: Helm charts operate on a global level, not on a namespace level. So their names are globally unique.

....

For non-namespaced items, things would get very complicated. We'd have namespaced releases managing un-namespaced things that, in turn, could impact other namespaces. Please take a look at how RBACs and TPRs work. These are not things that Helm can simply decide not to support, and "faking" a namespace would cause more problems than it's worth, especially with RBACs.

It sounds like releases deployed from Helm v3 will in fact be namespaced; is that correct? Do you know how the RBAC issue has been resolved? The only resolution I can think of that would avoid the issue you pointed out is for Helm v3 charts not to support modifying RBAC objects, but I haven't found anything in the various blog posts and such about v3 indicating whether v3 charts will support managing RBAC objects or not.

All we really need is to be able to use the namespace parameter and the name parameter as a compound key identifying a release, rather than affixing a namespace onto a name.

I haven't read the proposal for Helm v3, but the sensible thing to do is to adopt the selector pattern that k8s already uses; then there's no need to support any specific fields.


@BatmanAoD @gyndick In Helm v3, charts are installed in the user's context. This means a chart is installed into that user's namespace and uses the RBAC of the user. Release names are also scoped per namespace.

You can try it out with the Alpha.1 release: https://github.com/helm/helm/releases/tag/v3.0.0-alpha.1 or build from the dev-v3 branch.

I will not be using Helm v3. Every operations team has different constraints and ways of doing things. Operational tools should be simple, single-purpose utilities, i.e. Unix-philosophy compatible.

My scripting, logic, etc. live outside of my package manager.

TL;DR:

The most important aspect of being Unix-philosophy compatible is the ability to provide escape hatches between steps.

Having a long, automated workflow that takes care of the logistics from cradle to grave is awesome, until it breaks. If users aren't provided the ability to manually perform every step of the flow if needed, automation becomes Pandora's box.

The complexity proposed for v3 will invite many, many mistakes and bad design from people who don't have the benefit of 25 years of experience.

The added complexity will only make things harder, because invariably, operational tools that become development environments of their own only slow down triage.

The perfect example is when someone codifies everything into one massive, horribly written script. An outage happens and parts of the script need to be run while other parts need to be strictly avoided, yet those parts are integral to the main logic. What the hell do you do then? Sit there frantically trying to refactor code that you don't really have a good way of debugging.

Think about all of the tools that go into an ecosystem to support developing and operating software in any specific language. You're not going to be able to provide that for Helm for quite some time.

So, keep the responsibility of managing migration between versions of software with the people developing the software being deployed.

A package manager should be simple and light, with just a few responsibilities:

  1. Deliver artifacts
  2. Remove artifacts
  3. Run the scripts provided in the artifacts
  4. Keep track of the artifacts it thinks it delivered
  5. Most importantly, KISS

Anything else is asking for pain. Frankly, Helm v2 would be nearly perfect if it just fixed how it kept track of releases.


@hickeyma Thanks for the reply! I'm actually not wondering so much about how Helm's operations will be access-controlled (though that's a related issue) as whether Helm itself will still be able to perform global operations, such as creating ClusterRoles, in v3.

@BatmanAoD That should work as they are cluster-scoped resources. It might be worth trying it out if you get a chance.
