Ansible: meta: flush_handlers doesn't honor when clause

Created on 8 Jun 2018  ·  54 Comments  ·  Source: ansible/ansible

SUMMARY

meta: flush_handlers gets triggered even within a when clause that evaluates to false.

ISSUE TYPE
  • Feature Idea
  • Documentation Report
COMPONENT NAME

ansible-playbook

ANSIBLE VERSION
ansible 2.4.1.0
  config file = /home/XXX/REPOS/physics-development/ansible/ansible.cfg
  configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/XXX/.virtualenvs/ansible-2.4/local/lib/python2.7/site-packages/ansible
  executable location = /home/XXX/.virtualenvs/ansible-2.4/bin/ansible
  python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
CONFIGURATION
ANSIBLE_PIPELINING(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = True
ANSIBLE_SSH_RETRIES(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = 2
DEFAULT_LOOKUP_PLUGIN_PATH(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = [u'/home/XXX/REPOS/physics-development/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = [u'/home/XXX/REPOS/physics-development/ansible/roles']
OS / ENVIRONMENT

Ubuntu Bionic, running Ansible in a virtualenv (though I don't think that's relevant in this case)

STEPS TO REPRODUCE

The playbook below calls flush_handlers within a block whose when clause evaluates to false. To probe this, the block contains an additional task which, as expected, never runs.

---
- name: Triggers a flush_handlers bug
  hosts: localhost

  tasks:
  - debug:
      msg: "Playbook running Ansible version {{ ansible_version.string }}"

  - name: First task
    set_fact:
      my_task: 'first task'
    changed_when: true
    notify: my_handler

  - block:
    - name: This task will never be run
      set_fact:
        my_task: 'task within block that evaluates to false'
      changed_when: true
      notify: my_handler

    - meta: flush_handlers
    when: 1 == 2

  - name: Last task
    debug:
      msg: 'This is the last task'

  handlers:
  - name: my_handler
    debug:
      msg: "Handler triggered from {{ my_task }}"

EXPECTED RESULTS

The expected result is for the handler to be triggered after all tasks. Something like...

PLAY [Triggers a flush_handlers bug] **************************************************

TASK [Gathering Facts] *******************************************************************
ok: [localhost]

TASK [debug] *******************************************************************************
ok: [localhost] => {
    "msg": "Playbook running Ansible version 2.4.1.0"
}

TASK [First task] ****************************************************************************
changed: [localhost]

TASK [This task will never be run] *******************************************************
skipping: [localhost]

TASK [Last task] *****************************************************************************
ok: [localhost] => {
    "msg": "This is the last task"
}

RUNNING HANDLER [my_handler] *****************************************************
ok: [localhost] => {
    "msg": "Handler triggered from first task"
}

PLAY RECAP *********************************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   
ACTUAL RESULTS

What happened is that the handler was triggered immediately when the Ansible interpreter reached the flush_handlers spot, despite it being within a block that shouldn't run because of the wrapping when clause:

PLAY [Triggers a flush_handlers bug] ****************************************************

TASK [Gathering Facts] *********************************************************************
ok: [localhost]

TASK [debug] **********************************************************************************
ok: [localhost] => {
    "msg": "Playbook running Ansible version 2.4.1.0"
}

TASK [First task] *******************************************************************************
changed: [localhost]

TASK [This task will never be run] **********************************************************
skipping: [localhost]

RUNNING HANDLER [my_handler] ********************************************************
ok: [localhost] => {
    "msg": "Handler triggered from first task"
}

TASK [Last task] *******************************************************************************
ok: [localhost] => {
    "msg": "This is the last task"
}

PLAY RECAP ***********************************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   

NOTE: I also tried applying the when clause just to the meta task, with the same result. This problem can also be reproduced on Ansible 2.3.0.

affects_2.4 docs feature has_pr core waiting_on_contributor


All 54 comments


The behavior you're describing is expected. I recently merged a PR that adds a warning when using a conditional on meta tasks that don't support one - https://github.com/ansible/ansible/pull/41126. Also see https://github.com/ansible/ansible/issues/27565.

Stop by on IRC or the mailing list if you have any questions.

Thanks!

mkrizek wrote:

The behavior you're describing is expected.

No, it isn't.

It obviously wasn't expected by me; that's why I opened this bug. But, more interestingly, even you know this is unexpected; otherwise, why invest the time to warn about it?

All in all, warning at run time about this behaviour makes for a nice addition, so thank you very much for that. But please consider reopening this so it's treated as what it in fact is: a bug or, at the very least, a feature request.

Oh, sorry for the incorrect wording. Let me rephrase: it works as intended and as such it's not a bug. We should at least clarify the behavior in the documentation.

Thanks a lot for reopening this, even if only as a feature request!

mkrizek wrote:

Let me rephrase: it works as intended and as such it's not a bug.

Hmm... I'd rather say "the current behaviour is known to developers" (but then, known bugs are also known to developers, and that doesn't make them any less of a bug).

To claim that it works as intended, there should be some kind of rationale supporting this behaviour as the desired one; I mean not a technical explanation of why it happens to behave the way it does, but a functional requirement for the behaviour to be this one instead of any other, which I doubt exists. I think you and I could agree that the proposed behaviour would be better than the current one.

Well, enough chatting: thanks again for your effort.

One of the comments in the issue I linked for you (https://github.com/ansible/ansible/issues/27565) states that some meta tasks are not host specific, so when is ignored there. In your case, flush_handlers runs handlers on those hosts which have been notified so far at the point you call flush_handlers. This means that flush_handlers is not a host-specific meta task; it can affect multiple hosts. So imagine this:

- meta: flush_handlers
  when: condition

The condition might be true on only one of your hosts and false on the rest, which means that handlers would be flushed even on hosts where the condition is false.

Like I said, we didn't do a good job documenting this, and we should fix that.
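
For illustration, a minimal two-host sketch of that ambiguity (hostnames and the condition are made up):

# Hypothetical two-host play: the handler is pending on both hosts.
- hosts: web1:web2
  gather_facts: false
  tasks:
    - name: Change something on every host
      command: /bin/true
      notify: restart service
      changed_when: true

    # Suppose this is true on web1 and false on web2. Because
    # flush_handlers acts play-wide, honoring the conditional would
    # still flush web2's pending handler along with web1's, even
    # though web2's condition is false.
    - meta: flush_handlers
      when: inventory_hostname == 'web1'

  handlers:
    - name: restart service
      debug:
        msg: "would restart the service here"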

The more I look at it, the more convinced I am that this is in fact a bug.

mkrizek said:

One of the comments in the issue I linked for you (#27565) states that some meta tasks are not host specific, so when is ignored there. In your case, flush_handlers runs handlers on those hosts which have been notified so far at the point you call flush_handlers.

I read #27565 and, while I understand the part about meta being "...a special kind of task which can influence Ansible internal execution or state" (which is the basis for the offered rationale), meta tasks don't even work the way you state: flush_handlers is not triggered here because the when clause resolved to true on one host and then ran on every host as a side-effect of meta's nature (which could, for instance, be the case for the refresh_inventory or end_play meta tasks); it's simply that meta ignores the when clause in full. See my example: the when clause (when: 1 == 2) can't resolve to true under any circumstance and, still, flush_handlers is triggered.

Now, some other details:

  • #41126's subject is: "Warn when cond is used on meta tasks that don't support them". This is not a bug-resolving statement (by itself), but a remediating one. I.e.: it's good to get a warning if I put, say, meta: end_play within a when block, because it's easy to overlook the real implications of what I'm doing. But then, flush_handlers is a host-oriented meta task, as "...Handlers are lists of tasks, not really any different from regular tasks", so it should fully support being called within conditional blocks and work as expected (that is, honoring them).
  • #27565: "meta tasks do not appear to respect conditionals". It is (somehow) the general case of this bug. While I considered only the meta: flush_handlers case, #27565 is about all meta tasks. As I said, I understand the rationale given there (which basically boils down to "because of its very nature, meta may not work the way you think it works" and thus calls for #41126), but I don't think it offers the proper solution: as the case here illustrates, it is not that the when clause is processed properly but with hard-to-anticipate side-effects; it's that meta tasks are simply not evaluated in their proper context. I'm almost 100% sure, even without looking at the underlying code, that meta tasks are not being processed the way they should be (and I'm also almost sure it'll be a bug that's hard to track down and solve).
  • #41313: this issue. Despite #27565 being the general case of this one, there is still merit in splitting it on a case-by-case basis, if only to produce valid test cases that can be hunted down one by one to ensure they are solved. So, once again, I suggest you reconsider this issue's status (not a feature request but a bug).

I know this comment is on the verge of being more appropriate for the general mailing list than an issue, as it's almost as much about "what a bug is and is not" as about the very nature of the problem at hand, but I hope you won't see it as criticism (that's certainly not my intention) but as an attempt to clarify the full situation that's going on here (with my user's hat on, not a developer's one).

I was hoping that:

- when: condition
  block:
    - meta: flush_handlers

would be a suitable workaround, but the handlers are flushed in that case too, even when condition doesn't hold.

I'm ok with the flush not supporting conditionals. But it sure would be nice to be able to suppress the (new) warning, since there is nothing that can be done about it.

FWIW, let me describe a valid scenario (below). flush_handlers is needed after the tasks in abc.yml complete and before the tasks in def.yml start. But the file with the tasks, xyz.yml, is imported only when the OS is RedHat. Here Ansible complains:
[WARNING]: flush_handlers task does not support when conditional

It would be nice to be able to suppress the warning.

# cat xyz.yml
- include_tasks: abc.yml
- meta: flush_handlers
- include_tasks: def.yml

# cat playbook.yml
  ...
  tasks:
    - import_tasks: xyz.yml
      when: (ansible_os_family == "RedHat" )

I completely agree with @vbotka; having conditioned imports higher up makes the warning quite out of context, although I understand how task tags and conditions are processed at a low level.
There should be a way to get rid of the warning or, at least, mute it in some circumstances (such as conditionals on outer inclusions).

I completely agree with @vbotka; having conditioned imports higher up makes the warning quite out of context, although I understand how task tags and conditions are processed at a low level.

I agree that all _"helpful warnings"_, like this one, should be programmatically silenceable, so meta: [whatever] should allow for an ignore_warnings: yes option.

After all, _"Meta tasks are a special kind of task"_. So, if they are tasks, even if of a special kind, why don't they support ignore_warnings: yes? Why don't they support being evaluated by conditionals? (And, again, I'm not asking here for the underlying code or technicality that makes this happen or not happen, but for the rationale that supports this as _the_ desired behaviour.)

But please re-read this issue's subject: _"meta: flush_handlers doesn't honor when clause"_. You see, this one is about a different beast. I'd suggest you open a feature request about it (although properly solving this one would make your feature request moot).

I support @jmnavarrol in this request. IMHO this behaviour is unexpected (if you are not an Ansible developer), and the first (minor) bug is that there wasn't any warning about it until recently. The current implementation explains some occasional problems we had with our roles which we couldn't really explain.

If the current behaviour is in fact intended, then it should be (prominently) mentioned in the documentation.

Our context for the warning is:

- hosts: all
  name:  "====== BLAH ======"
  roles:
    - { role: blah, when: has_blah|default(False) }
  tags:
    - blah-only

we have several such role constructs. Silencing the warning wouldn't help us. It would be nice if it worked ;-)

+1 on this

A big pain when integrating roles which rely on when: conditions.

I.e., it will die if I integrate my colleague's role (which uses handlers):

- { import_role:  { name : docker },    tags: debian-docker,   when: debian_docker is defined and debian_docker }

A must have!

Just ended up here because of the exact condition that @vbotka describes.

We have an NFS role and two plays that are included based on server or client configuration. In the server play, which runs first, there is a flush_handlers meta task to ensure changes to the NFS server config are applied before the clients are configured and attempt to mount, on the presumption those changes are in place. This warning is not helpful here, because the meta task _has no when clause_ - the entire set of tasks is conditional.
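
Schematically, the setup looks something like this (file, group, and variable names are invented); note the meta task itself carries no when:

# Server play runs first; nfs_server.yml ends with `- meta: flush_handlers`
- hosts: nfs_servers
  tasks:
    - import_tasks: nfs_server.yml
      when: nfs_server_enabled | bool

# Client play mounts the shares exported above.
- hosts: nfs_clients
  tasks:
    - import_tasks: nfs_client.yml
      when: nfs_client_enabled | bool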

I'm facing the same situation as @vbotka, @marcinkubica and @Routhinator.
Looks like I'll have to deploy a patched ansible that at least partially reverts #41126.

I also have the same issue as @vbotka, @marcinkubica and @Routhinator.

For me, this issue makes handlers useless unless you know you will never need to flush them.

Over the maintenance life of our Ansible playbooks, there is a high chance that parts may be refactored in the future to be included conditionally. So any handler that you may at some point want to flush is just a booby trap waiting to happen, as far as I am concerned.

Is it possible to get a status update on this issue? (cc @bcoca or anyone)

@jbarotin this is not something we are planning to add; it really does not make sense, since handlers are flushed for all hosts, and making that conditional on 'the first host' seems incorrect.

But we are open to considering an implementation, which is why this feature request is kept open.

Thanks @bcoca for your answer, so I have to wait for a PR about this. I won't propose it myself; I have no more time to spend on this annoying warning message that appears in Ansible 2.7 (but I understand why it appears).

@bcoca:

@jbarotin this is not something we are planning to add; it really does not make sense, since handlers are flushed for all hosts, and making that conditional on 'the first host' seems incorrect.

I'm not sure this point of view is valid. See our use case above:

- hosts: all
  name:  "====== BLAH ======"
  roles:
    - { role: blah, when: has_blah|default(False) }
  tags:
    - blah-only

We are importing complete roles based on specific features. Basically, IMHO this "feature" breaks the functionality of the imported role, even though we (would like to) flush the handlers for _all_ hosts (which have been assigned the respective role). So far we haven't found a workaround, and we _really_ hoped that this behaviour would be changed. Or did I miss the point?

@fthommen I would argue what you have there is a use case for include_role instead.

I ran into the same issue today; I have a scenario where I can only check the status of a service if the program is installed (a manually installed binary). The status of the program depends on a configuration file; if that has changed I would like to reload the service, but if the service hasn't been installed yet, that will fail.

So after reading everything here I was able to come up with a working workaround. Instead of making the meta: flush_handlers conditional, I made the "notify" conditional. This may not work for cross-role scenarios, but it works for mine, so maybe it's useful to someone:

# at the config file definition
# do not set a 'notify' here, instead,
# register the output of the task
- name: 'set config file'
  template:
    src: 'configfile.j2'
    dest: '/path/to/configfile'
  register: config_changed

# set a fact "program_installed" somewhere which determines if the program is installed

# notify only when both conditions are true
- name: 'notify handler when program is installed and config file changed'
  debug:
    msg: "conditional notify / program installed [{{ program_installed }}] / config changed [{{ config_changed.changed }}]"
  notify: myservice_restart
  changed_when: program_installed and config_changed.changed

I also hope this helps show that there are plenty of scenarios where having a condition on the flush_handlers meta task before the end of the play can be useful. I understand that under the hood this may not be easy, but you are smart people; I'm sure someone will figure it out! :-)

I ran into the same issue today; I have a scenario where I can only check the status of a service if the program is installed (a manually installed binary). The status of the program depends on a configuration file; if that has changed I would like to reload the service, but if the service hasn't been installed yet, that will fail. […]

Your workaround works because you are lucky enough not to need the service running for the "stage 2" setup. I use a somewhat similar approach for Jenkins (though I need two different "restart-like" handlers, since Jenkins behaves differently upon first installation than once configured, but whatever).

BUT think of this quite standard use case I found with, for instance, postgres: a role installs postgres and configures some database within it:

  1. Install postgres, configure auth.
  2. If auth config changed, restart the service (and, maybe, run other pending handlers) right now.
  3. Other postgres configuration: create and configure a database within it; maybe some of the steps here also require a service restart.
  4. Process pending handlers as usual.

While I might come up with a custom solution based on a collection of registered variables (not without their own quirks, e.g., what happens to a registered variable on a skipped loop?), the "first stage, do something; second stage, do something else once the first stage is stable" pattern is common enough that all configuration management tools, including Ansible, support it... unluckily not within roles (at the playbook level one can play with the pre_tasks, tasks, and post_tasks stages, but AFAIK not within a role).

It is a natural case for flush_handlers, and a generic approach that can't be used as soon as there is a "when" wrapping the flush_handlers call at basically any level, against common expectation as well as against the documentation, which states that "meta is basically just a slightly different kind of task".
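
For illustration, a minimal sketch of that staged pattern inside a role's tasks/main.yml (file and handler names are made up):

# roles/postgres/tasks/main.yml - hypothetical file and handler names.
- name: Install postgres and configure authentication (stage 1)
  import_tasks: install_auth.yml      # notifies 'restart postgres'

# Stage 1 changes must be live before stage 2 can connect:
- meta: flush_handlers

- name: Create and configure databases (stage 2)
  import_tasks: configure_db.yml      # may notify 'restart postgres' again

# Any remaining notified handlers run at the end of the play, as usual.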

@fthommen I would argue what you have there is a use case for include_role instead.

As far as I can see (and understand), this would require restructuring major parts of the roles/playbooks/tasks that we have built up over the last two years. We don't have the resources to do that. include_role is not a simple drop-in replacement for what we are doing, so that doesn't seem the way to go.

Hi, I need flush_handlers to support a condition.
When will this problem be fixed?

Ex:

- name: Force execute handlers immediately or not
  meta: flush_handlers
  when: condition

As @bcoca said in a previous post in this thread, @cesarjorgemartinez, you should try putting this in a YAML file, say flush_handlers.yml:

- meta: flush_handlers

And then run

- name: Force execute handlers immediately or not 
  include_tasks: flush_handlers.yml
  when: condition

I tested that, and it seems to work; it's a workaround for my use case.

As @bcoca said in a previous post in this thread, @cesarjorgemartinez, you should try putting this in a YAML file, say flush_handlers.yml:

- meta: flush_handlers

And then run

- name: Force execute handlers immediately or not 
  include_tasks: flush_handlers.yml
  when: condition

I tested that, and it seems to work; it's a workaround for my use case.

Well, kind of...

At the very least, the _when_ clause gets honored this way, and this is good: a working workaround!!!

But then (and only then), when the _when_ clause gets triggered and the file included, you still get the _"[WARNING]: flush_handlers task does not support when conditional"_.

Tested on ansible 2.8.5.

I'm having much the same situation as described in https://github.com/ansible/ansible/issues/41313#issuecomment-533099471.

With Ansible 2.9.4, however, it only warns if import_tasks is used with a when; include_tasks now works as expected, without the warning.

As @bcoca said in a previous post in this thread, @cesarjorgemartinez, you should try putting this in a YAML file, say flush_handlers.yml:

- meta: flush_handlers

And then run

- name: Force execute handlers immediately or not 
  include_tasks: flush_handlers.yml
  when: condition

I tested that, and it seems to work; it's a workaround for my use case.

@DEvil0000 This doesn't seem to work if I conditionally import a whole role into another role: it shows the warning and also doesn't execute flush_handlers. I didn't try include_role, because I need the tags added to the task which imports this role to be applied.

@bcoca I'm used to executing roles according to configured host_vars; is that an anti-pattern? What would be the correct way to conditionally apply a role to a host?

@elcomtik normally you do that with a play, since the definition of a play is 'mapping hosts to tasks'... I would expect a play to target the hosts to which you want to apply tasks (including those in roles).

You can work around that with conditionals... but it just seems simpler IMHO to use the facility that does this naturally.
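
For illustration, a minimal sketch of that suggestion against the example above (the blah_hosts group is made up and stands in for the conditional):

# Group membership replaces the `when: has_blah` conditional on the role.
- hosts: blah_hosts
  name: "====== BLAH ======"
  roles:
    - blah
  tags:
    - blah-only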

@elcomtik normally you do that with a play, since the definition of a play is 'mapping hosts to tasks'... I would expect a play to target the hosts to which you want to apply tasks (including those in roles).

You can work around that with conditionals... but it just seems simpler IMHO to use the facility that does this naturally.

Quite off-topic here, but anyway...

  1. I'm with @bcoca: I use _roles_ to _"abstract away"_ common functionality; as such, there should never be a mention of hosts or hostgroups at that level. The playbook level then serves the purpose of defining an architecture (how those _"abstracted services"_ are tied together and how different groups of hosts relate to each other). Finally, inventory-level vars configure a specific instance of the architecture defined at playbook level (for this to work, ansible.cfg's precedence option should be tweaked away from its default value; I consider _"my"_ sorting should be the default, since the current one doesn't really make sense, but that's another story...). That's for using Ansible for configuration-as-code; operations are a bit different, but you should avoid 'operations' as much as possible anyway...
  2. BUT the point above makes no difference to @elcomtik's problem, since there are still cases where it can make sense to conditionally include full roles from within roles, i.e.: _"include_role subfeature_role when activate_subfeature == true"_.

I created a role, elcomtik.common (not yet publicly available), which has tasks itself. However, its main task file composes its own tasks with other roles. The sub-roles are independent (they may be part of a play), but I want to be able to control whether some sub-feature is configured. I can tear apart the master role and move the sub-roles directly into the play, but the conditional remains and, as I understand it, the problem (handlers not executed) will persist.

I must point out that import_role can't be replaced by include_role because of tagging (if include is used, tags are not applied to the included tasks).

I tried the workaround proposed by @bcoca, including the flush_handlers meta task from a task file. It didn't work in a sub-role, and without flush_handlers this sub-role's workflow breaks. I could hack around it and replace the handler with a conditional task, but that seems an even worse Ansible usage anti-pattern.

Below you can see part of tasks/main.yml:

- name: Install & configure firewalld
  import_role:
    name: elcomtik.firewalld
  when: firewall_enabled
  tags: firewall

- name: Install & configure fail2ban
  import_role:
    name: robertdebock.fail2ban
  when:
    - firewall_enabled
    - fail2ban_enabled
  tags: fail2ban

- name: Install private CA to trust store
  import_role:
    name: bdellegrazie.ca-certificates
  when: install_cacerts | bool
  tags: cacerts

I'm personally not a fan of including roles from within other roles; instead I believe a role should be about a single function/piece of software/etc. If you use sub-roles to specify a sub-function of another role, you could simply include task files instead. If it is about inter-role dependencies, why not use inventory vars to configure both roles?

Example: I have a set of variables which I use to define one website on a virtual host. These vars are then used by a multitude of roles: nginx, wordpress, firewall, icinga2 monitoring, fail2ban, etc. Since all of these depend on each other, this seems like the cleanest approach, as there is no mixing of functions within roles.

In addition, I have created a single playbook which serves all hosts (so I can easily run it periodically for all hosts) and, using those same inventory vars at various levels, I decide which roles apply to which hosts. So the main playbook has constructs similar to those mentioned by @elcomtik.

This means playbook changes are not necessary for individual hosts; everything is controlled by group/inventory vars.

That being said, this discussion was about using the when clause when flushing handlers, which didn't work as I originally expected. The workaround above (https://github.com/ansible/ansible/issues/41313#issuecomment-517417809) works just fine in conjunction with the role/playbook structure I just described.

I'm personally not a fan of including roles from within other roles; instead I believe a role should be about a single function/piece of software/etc. If you use sub-roles to specify a sub-function of another role, you could simply include task files instead. If it is about inter-role dependencies, why not use inventory vars to configure both roles?

Inclusion of task files doesn't include handlers, vars, or defaults.

That being said, this discussion was about using the when clause when flushing handlers, which didn't work as I originally expected. The workaround above (#41313 (comment)) works just fine in conjunction with the role/playbook structure I just described.

What about ansible-lint? Doesn't it complain about E503 - "Tasks that run when changed should likely be handlers" (https://docs.ansible.com/ansible-lint/rules/default_rules.html)?

Inclusion of task files doesn't include handlers, vars, or defaults.

Very true, but the principle is the same, and using my workaround earlier in this thread accomplishes the same thing: only flushing handlers when a condition is met. As far as I am aware, the when condition works just fine when including vars (not defaults, as they are a last resort and as such always loaded).

What about ansible-lint? Doesn't it complain about E503 - "Tasks that run when changed should likely be handlers" (https://docs.ansible.com/ansible-lint/rules/default_rules.html)?

It does, but it also complains about 503 in many other scenarios where using handlers is not the correct solution, so in light of the many false positives you get for E503, I take this one for granted. Even when conditions which do not have a "changed" condition are sometimes highlighted.

I'm personally not a fan of including roles from within other roles; instead I believe a role should be about a single function/piece of software/etc. If you use sub-roles to specify a sub-function of another role, you could simply include task files instead. If it is about inter-role dependencies, why not use inventory vars to configure both roles?

Example: I have a set of variables which I use to define one website on a virtual host. These vars are then used by a multitude of roles: nginx, wordpress, firewall, icinga2 monitoring, fail2ban, etc. Since all of these depend on each other, this seems like the cleanest approach, as there is no mixing of functions within roles.

Interesting but, IMHO, orthogonal. Yes, there are cases where role relationships make more sense at the playbook level (when they relate to a full architecture-level decision), but there are still cases where they make more sense at the role level (when the presence of a subsystem defined in role B modulates the behaviour of role A). And both these cases bear no relationship to where the variables that "trigger" one behaviour or the other are managed, be it at playbook or inventory level.

An example (I'll reuse "wordpress" since you mention it, but imagine wordpress were a Java-based application on Tomcat, since that way my example makes more sense):
You could think to yourself "hey, I'll deploy WP on top of Apache" or "hey, I'll deploy WP on top of nginx" (for whatever reasons). You have roles for WP, Apache, and nginx configuration.

Then, you could control the required configurations at the playbook level (i.e., somehow invoking the apache or nginx roles first, then the WP role with a lot of fine-grained params), or you could invoke the WP role "with_apache" or "with_nginx". The WP role would then configure itself with an HTTP listener at 127.0.0.1:8080 for nginx and, maybe, accept a custom set of HTTP headers (since nginx uses the reverse-proxy approach for this kind of configuration), OR it would configure itself to use AJP workers, no HTTP/HTTPS listeners, and another set of headers to be used with mod_jk. Bear in mind it offers no real value for the WP role user to know all this internal configuration as long as, once run, the WP role ends up offering a working WP instance.

As said, you, the WP role user, could pass a lot of fine-grained params so WP works on top of one HTTP server or the other, which in turn would require a lot of knowledge from all the parties involved, but it makes a lot more sense if the role is the one that _"encapsulates"_ all that required knowledge, abstracting it out of the reach of role users.

Again, depending on your architectural decisions and your needs, it might make sense to choose between WP on top of nginx or on top of Apache at the playbook level (i.e., your designed architecture works on top of nginx, full stop) or at the inventory level (the same architecture is designed to work both ways, to be decided at instantiation time).

Finally, I think it doesn't matter that there are (maybe a lot of, maybe even a majority of) cases where it makes sense not to include/import (full or partial) roles from within a role, as long as there are cases (and I'm convinced there are) where it makes more sense to include/import them from a role (i.e., when the usage of role A modulates the internal behaviour of role B in ways that are of concern just to role B itself). And in those cases, it seems this flush_handlers bug (excuse me... feature) still triggers, as per @elcomtik's comment above.

Finally, I think it doesn't matter that there are (maybe a lot of, maybe even a majority of) cases where it makes sense not to include/import (full or partial) roles from within a role, as long as there are cases (and I'm convinced there are) where it makes more sense to include/import them from a role (i.e., when the usage of role A modulates the internal behaviour of role B in ways that are of concern just to role B itself). And in those cases, it seems this _flush_handlers_ bug (excuse me... _feature_) still triggers, as per @elcomtik's comment above.

Indeed, regardless of whether or not including roles from within other roles is valid, have you tried my workaround? I don't see why it wouldn't work in your scenario as well, as the problem is still the same: flushing handlers does not respect the when condition.

Indeed, regardless of whether or not including roles from within other roles is valid, have you tried my workaround? I don't see why it wouldn't work in your scenario as well, as the problem is still the same: flushing handlers does not respect the when condition.

Do you mean refactoring the code so that instead of flush_handlers you use notify and then have a when clause on the handler itself?

That would be IMHO going back to square one, see https://github.com/ansible/ansible/issues/41313#issuecomment-517572516:

  1. I need a service restarted upon configuration updates (first setup being just one case of "configuration update") "right here and now (if some condition is met); I can't wait for the time handlers usually get triggered". That's exactly why flush_handlers exists.
  2. OK then, use flush_handlers where needed but move the effective triggering evaluation to the handler task itself (see the sketch below): that may work... sometimes, but it would be quite an ugly workaround, since triggering handlers is a decision taken in a very local context (thus the notify) and now you are moving it to a more global context. With such an approach you might as well get rid of handlers completely: just add a set of tasks at the end of the given context, with enough logic based on registered variables etc. wrapping them. But then, why offer the handlers feature to start with? Add to that, for instance, the quirks around registered variables and skipped tasks.
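
For illustration, a sketch of what point 2 above would look like (service and variable names are hypothetical):

- hosts: all
  tasks:
    - name: Update configuration
      template:
        src: myservice.conf.j2
        dest: /etc/myservice.conf
      notify: restart myservice

    - meta: flush_handlers   # always flushes; the decision now lives below

  handlers:
    - name: restart myservice
      service:
        name: myservice
        state: restarted
      # The triggering decision has moved from the local notify context
      # to this global one, which is what makes the workaround ugly:
      when: myservice_installed | default(false)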


Exactly, my comment was pointing to another issue: not a conditional used on the flush_handlers task itself, but a conditional inherited by importing (or including) a role. It looks like a similar issue for import_tasks was already solved.

I think these issues have something in common; however, I haven't looked into the source code yet. Maybe this should be discussed in a separate issue. I would describe it as "import_role doesn't honor flush_handlers task".

I think these issues have something in common; however, I haven't looked into the source code yet. Maybe this should be discussed in a separate issue. I would describe it as "import_role doesn't honor flush_handlers task".

No, I don't think so.

I don't think you really mean "import_role doesn't honor flush_handlers task" but rather "flush_handlers doesn't honor the when clause when within an imported role" (sorry if this sounds presumptuous; that's not my intention, please correct me if I'm wrong)... but that's just a particular case of this bug, since the problem is that flush_handlers doesn't honor when clauses at all.

Please note that the given workarounds are all based on the same thing: don't associate the when clause directly with flush_handlers (since that won't work, despite the obvious expectation) but instead with another object conceptually "near" enough in the code to emulate the desired behaviour (i.e., use the when clause on an include_* task instead, which in turn only runs the desired flush_handlers task).

Please note that the given workarounds are all based on the same thing: don't associate the when clause directly with flush_handlers (since that won't work, despite the obvious expectation) but instead with another object conceptually _"near"_ enough in the code to emulate the desired behaviour (i.e., use the when clause on an include_* task instead, which in turn only runs the desired flush_handlers task).

I could not have said it any clearer myself; thank you for explaining!

I think I initially misunderstood the topic of this issue; I thought that flush_handlers tasks are not executed when associated with a when clause. I tested it more thoroughly and found that the sub-role I mentioned earlier uses include (https://github.com/elcomtik/ansible-firewalld/blob/master/tasks/main.yml#L9-L10):

- include: custom_zones.yml
  when: firewalld_custom_zones is defined

This behaves like import_tasks, which in Ansible 2.9.5 works as described in https://github.com/ansible/ansible/issues/41313#issuecomment-600056730. I can refactor this code to use include_tasks and get rid of the annoying warning.

However, this works only if not used with import_role, the use case I mentioned earlier in https://github.com/ansible/ansible/issues/41313#issuecomment-618290173. In that case it actually honors the when clause, but throws the warning.

I don't think you really mean "import_role doesn't honor flush_handlers task" but rather "flush_handlers doesn't honor the when clause when within an imported role" (sorry if this sounds presumptuous; that's not my intention, please correct me if I'm wrong)... but that's just a particular case of this bug, since the problem is that flush_handlers doesn't honor when clauses at all.

@next-jesusmanuelnavarro Now I would describe my issue as "Ansible throws the warning [WARNING]: flush_handlers task does not support when conditional when flush_handlers tasks are inside import_tasks/import_role files". This relates to the topic of this issue because the proposed workarounds cause the same warnings as my use case, although my use case isn't the same as the one @jmnavarrol originally described.

The questions:

  1. I didn't see any reference here to the code changes which disabled the warning for include_tasks. Does that actually relate to the topic of this issue?
  2. Do we want to disable the warning for import_tasks as well? If yes, I will create a new issue with a clear description.

@next-jesusmanuelnavarro Now I would describe my issue as "Ansible throws the warning [WARNING]: flush_handlers task does not support when conditional when flush_handlers tasks are inside import_tasks/import_role files". This relates to the topic of this issue because the proposed workarounds cause the same warnings as my use case, although my use case isn't the same as the one @jmnavarrol originally described.

The questions:

1. I didn't see any reference here to the code changes which disabled the warning for include_tasks. Does that actually relate to the topic of this issue?

2. Do we want to disable the warning for import_tasks as well? If yes, I will create a new issue with a clear description.

See https://github.com/ansible/ansible/issues/41313#issuecomment-442955228 for a general rationale.

For the specific case of import_tasks, I'm not aware it can throw a warning by itself (see above: the warning is not thrown by import_tasks but by the imported/included flush_handlers task within). I suppose the intended effect of "disable warning" at the include/import level would be to silence all warnings within: a shortcut for "please silence warnings on all these included/imported tasks".

Given the nature of these warnings, I find it valuable that they are triggered at least once within the development cycle, so context can be considered on a case-by-case basis before silencing them, if that's the preferred outcome. Applying "disable_warnings" in bulk to an include subverts that notion: you keep developing the "internal" code and now warnings offered for your (developer's) consideration will never have the chance to show.

Probably a more useful approach (maybe it already exists, I don't know) would be to allow it at the ansible.cfg level (and related): consider third-party code, where you see those warnings but are not in a position to modify the code that triggers them. It might be nice to be able to turn them off globally. It doesn't itch me enough to open a feature request for that, though.

For the specific case of _import_tasks_, I'm not aware it can throw a warning by itself (see above: the warning is not thrown by _import_tasks_ but by the imported/included _flush_handlers_ task within). I suppose the intended effect of "disable warning" at the include/import level would be to silence all warnings within: a shortcut for _"please silence warnings on all these included/imported tasks"_.

I wasn't accurate; I wanted to stress that Ansible 2.9.5 (and possibly earlier versions) doesn't show the warning if include_tasks includes a flush_handlers task. You @next-jesusmanuelnavarro wrote in https://github.com/ansible/ansible/issues/41313#issuecomment-533099471 that it does; later @DEvil0000 wrote in https://github.com/ansible/ansible/issues/41313#issuecomment-600056730 that it doesn't. My point was to find out if we can have the same behavior for import_tasks as well.

Given the nature of these warnings, I find it valuable that they are triggered at least once within the development cycle, so context can be considered on a case-by-case basis before silencing them, if that's the preferred outcome. Applying "disable_warnings" in bulk to an include subverts that notion: you keep developing the "internal" code and now warnings offered for your (developer's) consideration will never have the chance to show.

I agree that disabling warnings for includes would be a bad idea.

Probably a more useful approach (maybe it already exists, I don't know) would be to allow it at the ansible.cfg level (and related): consider third-party code, where you see those warnings but are not in a position to modify the code that triggers them. It might be nice to be able to turn them off globally. It doesn't itch me enough to open a feature request for that, though.

I'm not aware of its existence either. I think it would be sufficient for many people.

Just created a repo to showcase the desired behaviour in its simplest form.

Just hit this issue with a third-party playbook.
IMO, if flush_handlers does not work with conditionals, it should either be fixed (a bug) or error out (not intended).
Having a warning makes no sense to me, given that whatever depends on this handler running will actually fail, unnecessarily making it harder to pinpoint the issue.

Hi, good afternoon,

The motivation for this issue is not to find a workaround; it is for Ansible to support flush_handlers with a when clause, because that is the obvious behaviour. If I don't find a workaround, then I'm stuck...

The fix is to make a when clause on a flush task actually work.

Regards,
Cesar Jorge

On Sun, 16 Aug 2020, 18:46, Eduardo notifications@github.com wrote:

Just hit this issue with a third-party playbook.
IMO, if flush_handlers does not work with conditionals, it should either be fixed (a bug) or error out (not intended).
Having a warning makes no sense to me, given that whatever depends on this handler running will actually fail, unnecessarily making it harder to pinpoint the issue.



flush_handlers needs to support a when clause because of the way that conditional dependencies have been implemented.

flush_handlers may appear in a role, and roles may be a conditional dependency... so all tasks that may appear in a role need to support a when clause.

Personally, I have the following:

- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True    # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers    # Execute the notified handlers now, without delay.

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.

Hi,

This does not work conditionally, which is what this issue is about.

Regards,
Cesar Jorge

On Tue, 8 Dec 2020, 15:05, Serge Y. Stroobandt notifications@github.com wrote:

Personally, I have the following:

- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True    # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers    # Execute the notified handlers now, without delay.

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.



Personally, I have the following:

- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True    # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers    # Execute the notified handlers now, without delay.

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.

IMHO this would completely defeat the purpose of a handler. You might as well put the handler's content in a regular task right after.
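
That inline alternative would look something like this sketch (service and file names are made up):

- name: Update config
  template:
    src: myservice.conf.j2
    dest: /etc/myservice.conf
  register: myservice_conf

# The "handler" is now just the next task, guarded by the registered result:
- name: Restart myservice immediately when the config changed
  service:
    name: myservice
    state: restarted
  when: myservice_conf.changed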

Would it be possible to at least change the warning message to state the file and line number where the problematic flush_handlers, as well as the offending when around it, occurs?

We have thousands of lines of Ansible code, and every time we run it we see "[WARNING]: flush_handlers task does not support when conditional" at the start, but how on earth do we go about finding the cause?

I still maintain that this issue makes flush_handlers and/or conditional inclusion of roles problematic features in Ansible. Either fix the warning or remove those features.
