At time of writing, the v2 branch has a `Group` class that should be capable of serving as the units formerly known as "roles", i.e. "a bunch of hosts to do stuff with/on". However, there's no specific way of organizing or labeling `Group` objects yet; it's "done" enough for the pure API use case of advanced users who want to roll their own specific way of creating them, but lacks anything for CLI-oriented users or intermediate folks who want something frameworky to build around.

Put another way: unless you're working purely with the API, having `Group` objects lying around somewhere is useless if the CLI or task-calling machinery has no way of finding them!
In v1, roles were effectively a single flat namespace mapping simple string labels to what would be `Group`s in v2. They could be selected on the CLI at runtime (`fab --roles=web,db`) and/or registered as default targets for tasks (`@roles('db')` on `def migrate():`), much like hosts. Users defined them in `env.roledefs`, a simple dict; any intermediate-to-advanced functionality revolved around modifying it, usually at runtime (via a pre-task or subroutine), sometimes at module load time.
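A minimal sketch of that v1 pattern (names are illustrative, not Fabric's actual API): a flat dict maps role labels to host lists, and role names given on the CLI or via decorator resolve through it.

```python
# Illustrative sketch of v1-style roledefs resolution; not Fabric's code.
roledefs = {
    "web": ["web1.example.com", "web2.example.com"],
    "db": ["db1.example.com"],
}

def hosts_for_roles(role_names, roledefs):
    """Resolve role labels (e.g. from --roles=web,db) to a combined host list."""
    hosts = []
    for name in role_names:
        for host in roledefs[name]:
            if host not in hosts:  # keep order, drop duplicates across roles
                hosts.append(host)
    return hosts
```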
Some loose ideas for what a v2 equivalent might look like:

- A config-level mapping of names to `Group`s and/or `Connection`s, perhaps using a `Lexicon` instead of a `dict`.
- Nested or composed groups: e.g. `db`, `web`, `lb`, but then a 2nd-tier name called `prod` that is always the union of the other three. I forget if I added that to `Lexicon` yet. Possible there's other map subclasses out there that already do it too.
- Membership tests such that `if cxn in group` would work even if `cxn` is a distinct object from the equal member inside `group`.
- Tasks declaring e.g. "run me on the `db` role".
- A `@group` decorator like `@task`, where the decorated functions aren't executable units of work, but instead yield `Group` objects.

From the mailing list:
We implemented our own internal REST API which populates `env.roledefs` dynamically depending on the project being deployed, and we rely heavily on not embedding host strings in a project's fabfile or specifying them on the CLI.
Our use cases are:
```python
EnvironmentDatabaseAPIClient(
    'https://rest.api.url/schema/',
    env.service_name,
).apply_env()
```
A number of server environments: multiple testing environments (some private, some public) and multiple production environments (for different clients). Each environment consists of one or more hosts and is mapped to a Fabric role. Each service (`env.service_name` in the example above) has a different set of environments.
Also we have meta-roles (groups of roles). They are prefixed with `group-`: `group-production`, `group-test`, `group-external`, `group-internal`, `group-all`. This allows us to deploy to multiple server roles without specifying them one by one; for example, `group-all` deploys to all roles, both production and test.
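The meta-role idea above could be sketched roughly like this (a toy illustration, not the commenter's actual implementation; all names are made up): names prefixed with `group-` map to lists of concrete roles, and resolving one yields the union of those roles' hosts.

```python
# Hypothetical sketch of "group-" meta-roles resolving to unions of roles.
roledefs = {
    "web-prod": ["web1.prod", "web2.prod"],
    "db-prod": ["db1.prod"],
    "web-test": ["web1.test"],
}

groupdefs = {
    "group-production": ["web-prod", "db-prod"],
    "group-test": ["web-test"],
    "group-all": ["web-prod", "db-prod", "web-test"],
}

def resolve(name):
    """Return the host list for a plain role or a group- prefixed meta-role."""
    if name.startswith("group-"):
        hosts = []
        for role in groupdefs[name]:
            for host in roledefs[role]:
                if host not in hosts:  # union: keep order, drop duplicates
                    hosts.append(host)
        return hosts
    return list(roledefs[name])
```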
We have special Fabric tasks to print information about role groups, roles, and hosts.
We also rely heavily on reverse-mapping host strings back to role names (host strings are unique per `service_name`). This is used for deployment logging and notifications: basically, we log service deployments to each host and send a Slack notification when a service has been deployed to all hosts in a role. The EnvironmentDatabaseAPI server is responsible for this (it keeps logs and deployment state). This is done by decorating Fabric tasks with a decorator which submits `env.host`, `env.port`, and `env.service_name` (plus commit info) back to the API server.
We plan to add deployment authentication in the future, and are also very likely to pull more `env` variables from the server to make them available within the task context.
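The reverse mapping and logging decorator described above might look roughly like this (a toy stand-in: the real version reports to the commenter's API server, and all names here are illustrative). It assumes each host string belongs to exactly one role.

```python
import functools

roledefs = {
    "web": ["web1.example.com", "web2.example.com"],
    "db": ["db1.example.com"],
}

# Invert roledefs; assumes each host string appears in exactly one role.
role_for_host = {host: role for role, hosts in roledefs.items() for host in hosts}

deploy_log = []

def report_deployment(func):
    """Toy stand-in for a decorator reporting env.host back to an API server."""
    @functools.wraps(func)
    def wrapper(host, *args, **kwargs):
        result = func(host, *args, **kwargs)
        # Real version would POST host/port/service_name to the API here.
        deploy_log.append((host, role_for_host[host]))
        return result
    return wrapper

@report_deployment
def deploy(host):
    return "deployed to " + host
```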
Thanks @max-arnold! I recognize many of those from my own use cases in the past as well. The reverse mapping bit in particular I remember coming up in v1 a few times, so I added it to the list.
For Fabric v2 to become useful to me, I would need a way to tell `fab` which set of hosts to execute a task on. Previously I defined roles and then ran `fab -R ...`. (Actually the roles were defined programmatically using an IP address range, but that is not a requirement; a static list inside a YAML file would be fine.)
I fail to find an equivalent in Fabric v2, and I also failed to emulate this feature using:

- a `fabric.yaml` configuration file containing

  ```yaml
  active_hostset: null
  hostsets:
    myhostset:
      - ...
  ```

- `active_hostset = config["hostsets"][config["active_hostset"]]` in `fabfile.py`
- `env INVOKE_ACTIVE_HOSTSET=myhostset fab ...`

Instead of the expected list of hosts I get `KeyError: 'active_hostset'`.
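The lookup that comment is attempting could be sketched like this, with a plain dict standing in for Fabric's config object and an env-var override (the `INVOKE_ACTIVE_HOSTSET` name is taken from the comment; the rest is illustrative, not Fabric's actual behavior).

```python
import os

config = {
    "active_hostset": None,
    "hostsets": {"myhostset": ["host1.example.com", "host2.example.com"]},
}

def active_hosts(config, environ=None):
    """Pick a hostset by env var first, then by the config default."""
    environ = os.environ if environ is None else environ
    name = environ.get("INVOKE_ACTIVE_HOSTSET", config.get("active_hostset"))
    if name is None:
        return []  # no hostset selected anywhere
    return config["hostsets"][name]
```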
In Fabric v1 we map different sets of hosts to each role for each of our environments, and the environment is set by running a `role.environment:staging` task to specify it. So this task influences the hosts used by the following tasks.
In v2 we tried using a custom `Task`, but the problem is that `Executor.expand_calls` runs before our `role.environment` task does, so none of the following tasks know the environment in order to dynamically build their host lists.
Making `Executor.expand_calls` a generator allows task execution to influence the expansion of later tasks. So my example above works, where we have a custom `Task` that needs to know its environment to properly expand roles to hosts. E.g. with `fab role.environment dev deploy.app`, the `role.environment` task is now run before `deploy.app` is expanded, so `deploy.app` knows the environment, can configure its hosts, and is then expanded into the correct set of tasks.
I prototyped this in my forks:
https://github.com/pyinvoke/invoke/compare/master...rectalogic:expand-generator
https://github.com/fabric/fabric/compare/master...rectalogic:expand-generator
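A toy model of the generator idea above (this is not Invoke's actual `Executor`; all names are illustrative): because expansion is lazy, an earlier call can set state that later calls consult when they are expanded into per-host work.

```python
# Toy model of lazy, generator-based call expansion.
hosts_by_env = {
    "dev": ["dev1.example.com"],
    "prod": ["prod1.example.com", "prod2.example.com"],
}

state = {"environment": None}

def set_environment(env):
    state["environment"] = env

def expand_calls(calls):
    """Yield calls lazily so earlier tasks can influence later expansion."""
    for task, arg in calls:
        if task is set_environment:
            yield task, arg
        else:
            # Only evaluated when the generator resumes, i.e. after any
            # earlier set_environment call has already executed.
            for host in hosts_by_env[state["environment"]]:
                yield task, host

def run(calls):
    executed = []
    for task, arg in expand_calls(calls):
        if task is set_environment:
            task(arg)  # runs before the next task is expanded
        else:
            executed.append((task.__name__, arg))
    return executed

def deploy_app(host):
    pass
```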
Hi, I don't know what happened to this software after many years, but I really missed the "roles" concept in [email protected], especially when running `$ fab -R dev`.
We also use roles to represent the same set of operations across different environments. Perhaps separating the concepts of a named role and a named environment would be useful? As in, the `web` role in the `dev` environment.
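One hedged way to model that role/environment separation is a nested mapping keyed by environment, then by role (all names here are made up for illustration):

```python
# Illustrative nested mapping: environment -> role -> hosts.
hostmap = {
    "dev": {"web": ["web1.dev"], "db": ["db1.dev"]},
    "staging": {"web": ["web1.stg", "web2.stg"], "db": ["db1.stg"]},
}

def hosts(environment, role):
    """Look up the hosts for a role within a named environment."""
    return hostmap[environment][role]
```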