Helm: Unable to perform helm upgrade due to resource conflict

Created on 31 Oct 2019  ·  61 Comments  ·  Source: helm/helm

Output of helm version: v3.0.0-rc.1

Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS

We seem to be experiencing a weird bug when doing helm upgrade. The error states "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management".

We've tested on the following helm versions:

Helm Version:"v3.0.0-beta.2", "v3.0.0-beta.3"

We get the following error: "Error: UPGRADE FAILED: no ServiceMonitor with the name "bid-management" found", though I can confirm it exists.

Helm Version:"v3.0.0-rc.1", "3.0.0-beta.4", "3.0.0-beta.5"

We get the error above "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management"
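
Before digging further, a quick diagnostic sketch: look at the live object Helm is conflicting with and at the release Secrets Helm 3 keeps in the cluster (resource name and namespace are taken from the error above; this assumes the release is stored in the same namespace):

# Inspect the live object Helm claims already exists
kubectl get servicemonitor bid-management -n dcd -o yaml

# List the release Secrets Helm 3 stores in that namespace (one per revision)
kubectl get secret -n dcd -l owner=helm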

Labels: question/support

All 61 comments

Can you provide a set of steps to reproduce the issue?

@bacongobbler Apologies for the delay. Realised it's harder to reproduce locally with minikube since we've got everything set up for/with AWS EKS. At the moment I can confirm the apiVersion of the serviceMonitor doesn't change, so it doesn't seem to be related to #6583.

When I run helm template the first time:

# Source: k8s-service/templates/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: bid-management
  namespace: dcd
  labels:
    chart: k8s-service-v0.0.11
    app: bid-management
    heritage: "Helm"
    release: prometheus
spec:
  endpoints:
      - honorLabels: true
        interval: 10s
        path: /prometheus
        port: metrics
        scheme: http
        scrapeTimeout: 10s
  selector:
    matchLabels:
      app.kubernetes.io/name: bid-management

After upgrading and once the resource gets created successfully, I run helm template again and get back the following below:

# Source: k8s-service/templates/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: bid-management
  namespace: dcd
  labels:
    chart: k8s-service-v0.0.11
    app: bid-management
    heritage: "Helm"
    release: prometheus
spec:
  endpoints:
      - honorLabels: true
        interval: 10s
        path: /prometheus
        port: metrics
        scheme: http
        scrapeTimeout: 10s
  selector:
    matchLabels:
      app.kubernetes.io/name: bid-management

After running helm upgrade a second time, I get back the error mentioned above

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management

@bacongobbler Going to still try and reproduce the steps locally with minikube, but it might take longer than expected.

Facing the same issue here. @bacongobbler, @efernandes-ANDigital
kubectl version: 1.14.7
kubernetes version 1.15 (PKS)
It first happened to me using Helm v3.0.0-rc.1; after updating to Helm v3.0.0-rc.2 this is still happening.
I made a successful rollback to the previous state with rc.2 and upgraded again, but it didn't solve it: after upgrading successfully, trying a new upgrade gave the same error message.
helm diff shows no issues (it detects the resources correctly), and even if I look inside the secret related to the revision, it shows the resources there, so it shouldn't try to re-deploy them.
Not a complex chart, just using range to iterate over a list (~100 elements) to generate some resources (namespaces, configmaps, etc.)
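
For reference, the revision Secret mentioned above can be decoded to see exactly which manifests Helm thinks it deployed; the payload in .data.release is base64-encoded twice and then gzip-compressed. A sketch, with my-release, my-namespace and the revision number standing in for real values:

# One Secret per revision, named sh.helm.release.v1.<release>.v<revision>
kubectl get secret -n my-namespace -l owner=helm,name=my-release

# Decode one revision and inspect the stored manifests
kubectl get secret sh.helm.release.v1.my-release.v3 -n my-namespace \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c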

Cannot repro it (tried on GKE)

@aespejel What kinds of resources did it fail on?

Namespaces, which makes sense keeping in mind the order in which helm tries to apply manifests, right @thomastaylor312 ?

Yep, Namespaces go first, but I was just checking if this was happening with specific kinds of resources or with an assortment

Just to add, we noticed something else upon disabling the service monitor. When running helm upgrade, it returns a success message: "Release "bid-management" has been upgraded. Happy Helming!" etc. However, upon checking the api-resources for servicemonitors, we still see the servicemonitor that was created.

What we've just noticed is that for the same charts with other services it works just fine and we don't have the issue. The services use the exact same charts with just a few configuration changes per service... very weird

The problem also happens while trying to install a chart (e.g. prometheus-operator): if the install fails and you try to install it again, helm complains about a resource conflict, and if you try to remove the chart, it complains that it has never been deployed.

@vakaobr I doubt it's the same issue. When the first install fails (and only with the first install), as you noticed, helm doesn't create a release. Hence helm won't have any information about a release to compare with already deployed resources and will try to install them, showing that message because some of the resources actually were installed. You can probably solve this by using --atomic with the installation, or by using helm upgrade --install --force, being careful with --force since it will delete and re-create resources (both options are sketched below).
Here we are facing an issue that happens with charts that were already installed successfully.
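
A minimal sketch of the two recovery options mentioned above for a failed first install (my-release, ./my-chart and my-namespace are placeholders; --force deletes and re-creates conflicting resources, so treat it as destructive):

# Roll the whole install back automatically if it fails, leaving nothing half-deployed
helm install my-release ./my-chart -n my-namespace --atomic

# Or re-run as an upgrade-or-install; --force replaces resources on conflict (destructive)
helm upgrade --install my-release ./my-chart -n my-namespace --force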

Update: still happening after updating to helm v3.0.0 (stable). @efernandes-ANDigital , @bacongobbler , @thomastaylor312
If I use helm diff, it shows NO differences, if I use helm upgrade --install --dry-run, it fails with the following error: "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists."
Using helm get manifest (of the last release), shows these resources.
Using helm template shows these resources too.
Maybe this is related to how helm compares resources in the manifest vs the templates?
I'm creating resources by iterating over a list with range, could it be related to this?
This piece of code maybe?

Workaround:
Running the upgrade --install with the --dry-run --debug and -v 10 options showed us that helm was somehow using some really old revisions. We deleted all revisions but the last 2 and it started to work again.
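
The workaround above boils down to pruning old release Secrets. A hedged sketch of what that looks like (my-release and my-namespace are placeholders; deleting release history is irreversible, so back the Secrets up first):

# Back up every stored revision of the release before touching anything
kubectl get secret -n my-namespace -l owner=helm,name=my-release -o yaml > release-history-backup.yaml

# Delete an old revision Secret (repeat for everything except the most recent revisions)
kubectl delete secret sh.helm.release.v1.my-release.v1 -n my-namespace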

Did you manage to save the release ledger before deleting them? Would've been helpful to reproduce the issue if we had our hands on a solid reproducible case.

I get this error when trying to change the api version of a deployment to apps/v1 from the deprecated extensions/v1beta1. Helm refuses to deploy without me manually removing the old deployment.

@sheerun did you see my answer in regards to apiVersion changes in this comment: https://github.com/helm/helm/issues/6646#issuecomment-547650430?

The tl;dr is that you have to manually remove the old object in order to "upgrade". The two schemas are incompatible with each other and therefore cannot be upgraded from one to the next in a clean fashion. Are you aware of any tooling that handles this case?
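
In practice the manual path looks roughly like the sketch below (my-app, my-release, ./my-chart and my-namespace are placeholders). Be aware that deleting the old object takes the running workload down until the upgrade re-creates it:

# Remove the object that still exists under the deprecated apiVersion
kubectl delete deployment my-app -n my-namespace

# Re-run the upgrade so Helm re-creates it under apps/v1
helm upgrade my-release ./my-chart -n my-namespace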

This doesn't really help because I still need to manually remove old resources. I'd expect a flag for the helm upgrade command, like --force, to automatically remove and re-add resources that have an api incompatibility, but that's not the case. This makes upgrades of apps in kubernetes clusters very cumbersome. If --force is not responsible for this, then another flag would be useful.

It's a very important issue right now because kubernetes 1.16 just dropped support for the old apis, so we need to upgrade.

I see your point... We could potentially support a new flag for that.

If you have suggestions, we'd love a PR with tests, docs, etc. It's certainly cropping up more and more especially with the 1.16 release, so we'd be happy to look at proposals to handle that case.

any updates on this?

#7082 should handle this case if someone wants to start working on that feature.

If you are having to use these workarounds: https://github.com/helm/helm/issues/6646#issuecomment-546603596, you can use the following script I created to automate that process: https://gist.github.com/techmexdev/5183be77abb26679e3f5d7ff99171731

a similar error

  • reproduce step
    I have installed 10 revisions; when I installed new sub charts in revision 11, it deployed OK.
    Then, upgrading again with the same charts, helm complains "rendered manifests contain a new resource that already exists".
  • reason
    helm compares the current revision with the first deployed revision, which has none of the resources we just installed in the last revision
  • fix
    use the last deployed revision as currentRelease instead of the first revision
  • workaround
    delete the old secrets owned by helm: kubectl get secret -l owner=helm

@jason-liew This issue is about a different thing that is not related to the number of releases. You're fixing a different bug with a similar error. This bug is related to a change of the resource's api version.

@sheerun sorry, I have deleted the reference in the commit message and edited the comment above

I have the same issue. The production env is blocked and it's not possible to fix.
$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Service, namespace: mynamespace, name: my-service

Also Amazon EKS

Please add "Warning! There is a risk of ending up with a brick instead of a chart after the update" to https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/

Amazing work guys. I'm proud!

Also bumped into this problem.

After migrating to v3 with the helm-v2-to-helm-v3 plugin, I'm unable to update charts:

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Deployment, namespace: default, name: grafana-main

This would be totally understandable if it had happened in a beta version, but I'm running v3.0.2 after following the instructions on the official blog, and I'm hitting an issue/bug reported during the beta. It doesn't feel nice.

Is there any non-destructive workaround for the time being? What this comment proposes feels quite destructive: https://github.com/helm/helm/issues/6646#issuecomment-546603596

Also having the same problem when upgrading a resource to a new api version using helm3.

For the time being, we're in need of a clean workaround as well. Deleting the old resource isn't really an option for our production workloads.

Helm 3.0.2. Can't deploy or even roll back when the previous deploy changed the number of deployments (removed or added a Deployment). Fails with the error:

Error: no Deployment with the name "server2" found

Extremely frustrating.

I agree. It's frustrating that the Kubernetes API treats updating the apiVersion as a breaking change.

If anyone is aware of a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it. We are currently unaware of an API endpoint available for third party tools like Helm to "convert" an object from one apiVersion to the next.

The only option we're aware of from Kubernetes' API is to delete and create the object. It is certainly not ideal, but it's the only option we are aware of at this time.

It was mentioned in https://github.com/helm/helm/issues/7219 that upgrading from Kubernetes 1.15 to 1.16 migrated the objects from a deprecated apiVersion (like extensions/v1beta1) to apps/v1 on the backend. If someone could confirm that behaviour as well as gain a better understanding about how Kubernetes achieves this, that could give us a possible solution to this problem.

I tried to perform the redeploy from another machine (not the one that had previously deployed the modified number of deployments), and it went smoothly. Could possibly be a local caching issue?

@youurayy Do you use Stateful sets? Or Deployments? Only Stateful sets support seamless resource upgrades in K8S. Deployments still have the issue with "Upgrade error, Not possible to upgrade" for, e.g., a Service.

I faced the same issue here. For your information, I migrated from v2 to v3. At first it didn't work since it was complaining that some of the APIs were not valid. I upgraded the APIs (Deployments and Statefulsets), and then I had this issue.

Helm version used is 3.0.2

Same issue here. Doesn't look like a PR has been opened yet. Anyone working on this?

nope, feel free to investigate further. Have a look at my previous comment for more information on where to get started.

@bacongobbler This is kind of a big deal, no? If I'm not mistaken, with helm3 you have no choice but to change your api versions from v1beta1 to v1 due to the OpenAPI validation, so doesn't that mean everyone that uses helm pretty much has to delete deployments prior to upgrading a chart?

Actually, this may not be as big of a problem as I thought; dropping --force gets rid of part of the problem, since --force bypasses the 3-way merge. I believe the only outstanding issue is if you have a chart installed with helm 2 that is using an older k8s api version, i.e. v1beta1, you can't do a helm 3 upgrade to, say, v1.

I've been running into a similar issue with one of my deployments.
The error I get is

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceAccount, namespace: default, name: my-program
helm.go:76: [debug] existing resource conflict: kind: ServiceAccount, namespace: default, name: my-program
rendered manifests contain a new resource that already exists. Unable to continue with update

I got this error even after deleting the service account it was complaining about.

I got a similar error when trying to change the api version of a DaemonSet to apps/v1 from the deprecated extensions/v1beta1. Helm3 refuses to upgrade the DaemonSet from the version extensions/v1beta1 to apps/v1.

Here is the exact error message with Helm3 when I try to upgrade the chart which was installed with Helm3

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: DaemonSet, namespace: kube-system, name: omsagent

Helm2 does the upgrade without any issues.

I tried with all the released versions of Helm3, but no luck.

Appreciate addressing this in Helm3.

@ganga1980 it's not a great solution but the way we've been handling this issue is to delete any resources that it complains about (the omsagent daemonset in your case) until it works. again, not ideal, but it eventually will work once the conflicting resources using deprecated API versions no longer exist and it will re-create the resources using the updated API version. we're on Kubernetes v1.15 and Helm 3.0.2.

this has been a better upgrade path for us because many of our charts have persistent volumes, so deleting the chart from our cluster and re-deploying was not an easy option. fortunately, persistent volumes are not on the list of deprecated APIs.

Same problem here upgrading a stable/nginx-ingress, running:

# helm upgrade nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true --set controller.tcp.configMapNamespace=tcp-services

output:
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ClusterRole, namespace: , name: main-nginx-ingress

# helm version

output:
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

I agree. It's frustrating that the Kubernetes API treats updating the apiVersion as a breaking change.

If anyone is aware of a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it. We are currently unaware of an API endpoint available for third party tools like Helm to "convert" an object from one apiVersion to the next.

The only option we're aware of from Kubernetes' API is to delete and create the object. It is certainly not ideal, but it's the only option we are aware of at this time.

It was mentioned in #7219 that upgrading from Kubernetes 1.15 to 1.16 migrated the objects from a deprecated apiVersion (like extensions/v1beta1) to apps/v1 on the backend. If someone could confirm that behaviour as well as gain a better understanding about how Kubernetes achieves this, that could give us a possible solution to this problem.

What is the real problem here? It's possible to update an object with kubectl even with api changes, without any issues. The object does not have to be deleted (it can simply be kubectl apply/replace), so why can't Helm do the same?

@bacongobbler I agree that from the k8s point of view, it's a breaking change between API versions. However, k8s is designed to handle such a case and migrate an object from one version to another.
For example, in a 1.14 cluster, if a deployment is created in version 'apps/v1', it's also available in the versions 'apps/v1beta1', 'apps/v1beta2', and 'extensions/v1beta1'. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#deployment-v1-apps.
So I think the gvk design of helm3 is ok, but the implementation should be more complicated. The old release object should not only be retrieved from the helm release storage, but also from the currently running environment.

Thanks

I agree. It's frustrating that the Kubernetes API treats updating the apiVersion as a breaking change.

If anyone is aware of a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it. We are currently unaware of an API endpoint available for third party tools like Helm to "convert" an object from one apiVersion to the next.

The only option we're aware of from Kubernetes' API is to delete and create the object. It is certainly not ideal, but it's the only option we are aware of at this time.

It was mentioned in #7219 that upgrading from Kubernetes 1.15 to 1.16 migrated the objects from a deprecated apiVersion (like extensions/v1beta1) to apps/v1 on the backend. If someone could confirm that behaviour as well as gain a better understanding about how Kubernetes achieves this, that could give us a possible solution to this problem.

A single k8s object may be converted from one version to another if they are compatible. See https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1/conversion.go as an example.

I've run into an issue that's related to this.

In my case, I've enabled the NamespaceAutoProvision admission controller to avoid failures due to non-existent namespaces. That mostly helped, but here's where it created a new problem: in our charts, we created a few namespaces explicitly in order to set some labels on them, which are used for network policies. This was done as a separate step before installing the charts that use those namespaces. With helm 3 and the NamespaceAutoProvision controller, chart installation now fails with

Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Namespace, namespace: , name: monitoring

which makes sense, but blocks provisioning. I've tried this with and without --force to the same result.

I don't know if this has been floated before, but maybe there could be a way to tell Helm to "adopt" resources if they already exist – i.e., in case of an existing namespace it would be patched with user-supplied manifest and understood to be managed by Helm from that point.

A single k8s object may be converted from one version to another if they are compatible. See https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1/conversion.go as an example.

That is a conversion from apps/v1 to another internal representation. You cannot use this to convert from v1beta1 to v1. Look at the code more closely.

For example, in a 1.14 cluster, if a deployment is created in version 'apps/v1', it's also available in the versions 'apps/v1beta1', 'apps/v1beta2', and 'extensions/v1beta1'

Kubernetes clusters support multiple API versions, but they are treated as separate discrete objects. The internal schemas are completely different. There is no "convert from v1beta1 to v1" API we're aware of at this time.

I don't know if this has been floated before, but maybe there could be a way to tell Helm to "adopt" resources if they already exist – i.e., in case of an existing namespace it would be patched with user-supplied manifest and understood to be managed by Helm from that point.

See #2730

@bacongobbler thanks for your answers and help here. I have the same issue with the api version, but in the cluster itself our deployment has apiVersion: apps/v1
The new version of the chart also has apiVersion: apps/v1
But in the Helm 3 metadata/release we have this:
apiVersion: extensions/v1beta1
kind: Deployment

It's really not convenient that you need to reinstall a production workload just to fix Helm metadata, since the real deployment has the correct API version. Any suggestions here? I am thinking of tweaking the metadata manually.
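
As an aside (an assumption on my part, not something confirmed in this thread): the mapkubeapis Helm plugin targets exactly this situation and rewrites deprecated apiVersions inside the stored release metadata instead of touching the live objects. The repository URL and flags below are from memory and may differ between plugin versions, so check the plugin's README first:

# Hypothetical usage; verify the plugin repo and flags before running
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace my-namespace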

@bacongobbler
I will show how kubernetes handles multiple API versions from a design, code, and runtime perspective.

  1. Design doc
    https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md#operational-overview.
    There is an example in the doc of how to create a resource in v7beta1 and retrieve it in v5. So from the kubernetes design perspective, a single object can be converted among multiple versions.

    The conversion process is logically a "star" with the internal form at the center. Every versioned API can be converted to the internal form (and vice-versa)

  2. Kubernetes source code
    As I mentioned above, there are such conversions
    https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1/conversion.go Convert_v1_DeploymentSpec_To_apps_DeploymentSpec and Convert_apps_DeploymentSpec_To_v1_DeploymentSpec
    Also, https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1beta2/conversion.go Convert_v1beta2_DeploymentSpec_To_apps_DeploymentSpec and Convert_apps_DeploymentSpec_To_v1beta2_DeploymentSpec.
    The code uses the internal data structure as a hub between the different versions.

  3. Runtime behavior
    I use a 1.14 cluster and kubectl.
    kubectl get -n kube-system deployment coredns -o yaml -v 8
    Force it to use another api version:
    kubectl get -n kube-system deployment.v1.apps coredns -o yaml -v 8
    You can see that a single object (identified by its uid) can be retrieved via multiple API versions.
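
A quick way to confirm it is the same underlying object (assuming a cluster, such as 1.14, that still serves both API groups) is to fetch the uid through each group and compare:

# The uid is identical, so both API versions address the same stored object
kubectl get -n kube-system deployment.v1.apps coredns -o jsonpath='{.metadata.uid}{"\n"}'
kubectl get -n kube-system deployment.v1beta1.extensions coredns -o jsonpath='{.metadata.uid}{"\n"}'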

Hey,

I am having the same issue. My issue is regarding a pre-existing stateful set. Any advice would be much appreciated.

Thanks,
Richard

Let's open up discussions:
https://github.com/helm/helm/pull/7575

Hello,

I am facing the same issue. The only thing I have done is upgrade from helm 2.14.1 to the latest, and we are getting the error mentioned above: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Service, namespace: *, name: *. All the stuff mentioned above about deleting won't work for us, as this is production and the API is critical with 0 downtime required. Kindly assist...

Thanks
Hany

Here's a dirty hack that we use whenever a resource, such as a PV or PVC, already exists and we don't want to delete it, but do want to upgrade containers. This typically happens whenever we do a helm upgrade whatever and the old deployment and new deployment get stuck in a race.

kubectl get deployments -n monitoring
kubectl get -n monitoring deployment/prometheus-operator-grafana -o jsonpath="{.spec.replicas}"

# Set spec.replicas to 0
kubectl edit -n monitoring deployments/prometheus-operator-grafana
watch -n1 -d "kubectl get pods -n monitoring"

# Once all pods have terminated, set the spec.replicas back to 1 or whatever value you want
kubectl edit -n monitoring deployments/prometheus-operator-grafana
watch -n1 -d "kubectl get pods -n monitoring"

# At this point you'll have an updated pod/deployment/whatever
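
The same effect can be had without opening an editor by using kubectl scale; this is just a convenience on top of the hack above, not a different fix:

# Scale to zero, wait for the pods to terminate, then scale back up
kubectl scale -n monitoring deployment/prometheus-operator-grafana --replicas=0
watch -n1 -d "kubectl get pods -n monitoring"
kubectl scale -n monitoring deployment/prometheus-operator-grafana --replicas=1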

I got this error just following the basic tutorial

Hitting this for a ClusterRoleBinding when installing kubernetes dashboard chart

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace: , name: kubernetes-dashboard, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding

Same issue when installing a chart in two namespaces. My chart depends on the prometheus-operator chart, which will create a ClusterRole.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: ClusterRole, namespace: , name: p8s-operator

Same here. I migrated a helm 2 deployment to helm 3 and afterwards it's no longer upgradeable because of the same error:

./helm upgrade public-nginx-ingress stable/nginx-ingress --install --namespace ingress --wait -f public-nginx-ingress.yaml
coalesce.go:199: warning: destination for extraContainers is a table. Ignoring non-table value []
coalesce.go:199: warning: destination for extraVolumes is a table. Ignoring non-table value []
coalesce.go:199: warning: destination for extraVolumeMounts is a table. Ignoring non-table value []
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace: , name: public-nginx-ingress, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole

I tried to skip the rbac part, as it already seems to exist and doesn't need to be established again.
Then I get the next issue:

./helm upgrade public-nginx-ingress stable/nginx-ingress --install --namespace ingress --wait -f public-nginx-ingress.yaml --set rbac.create=false
coalesce.go:199: warning: destination for extraContainers is a table. Ignoring non-table value []
coalesce.go:199: warning: destination for extraVolumeMounts is a table. Ignoring non-table value []
coalesce.go:199: warning: destination for extraVolumes is a table. Ignoring non-table value []
Error: UPGRADE FAILED: cannot patch "public-nginx-ingress-controller-metrics" with kind Service: Service "public-nginx-ingress-controller-metrics" is invalid: spec.clusterIP: Invalid value: "": field is immutable && cannot patch "public-nginx-ingress-controller" with kind Service: Service "public-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && cannot patch "public-nginx-ingress-default-backend" with kind Service: Service "public-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable

Could someone clarify what the solution is here? I see that this got reopened then closed again 39 minutes later but I didn't see an obvious solution in this thread.

Could someone clarify what the solution is here? I see that this got reopened then closed again 39 minutes later but I didn't see an obvious solution in this thread.

There is no solution yet but this one is promising and almost ready to implement:

https://github.com/helm/helm/pull/7649

#7649 was merged this morning.

#7649 was merged this morning.

Ohh, missed that ;) Well, then the answer to @micseydel's question is in the first post of #7649, in the Release Notes section.
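
For readers landing here later: the Release Notes of #7649 describe letting Helm take ownership of an existing resource by labelling it as Helm-managed and pointing it at the release. Roughly, and assuming a Helm build that includes #7649 (my-release and ./my-chart are placeholders; the ServiceMonitor is the one from the original report):

# Mark the existing object as managed by Helm and owned by the release,
# then re-run the upgrade instead of deleting the object
kubectl label servicemonitor bid-management -n dcd app.kubernetes.io/managed-by=Helm
kubectl annotate servicemonitor bid-management -n dcd \
  meta.helm.sh/release-name=my-release \
  meta.helm.sh/release-namespace=dcd
helm upgrade my-release ./my-chart -n dcd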
