Fabric: Optionally avoid using ssh if going to localhost

Created on 19 Aug 2011  ·  59 Comments  ·  Source: fabric/fabric

Description

run()/sudo() would intelligently see that you're going to localhost and just run local() instead. This would probably be an optional thing.

Comments from Jeff on IRC:

And yeah, I mean there's always going to be overhead with ssh vs. straight pipes. Offhand I don't think it would be terrifically difficult to update run/sudo (especially in master, now that they've been refactored) to call/return local() intelligently. I'm not positive that I'd want that semi-magical behavior in core (even with it off by default and an opt-in to enable it, though that would help), but even so, it'd be an interesting experiment. And if it is as simple as I'm thinking, I honestly can't come up with a good reason not to (again, provided it is not the default behavior).

Originally submitted by Nick Welch (mackstann) on 2009-11-11 at 01:39pm EST

Relations

  • Duplicated by #364: Allow for local operation to bypass SSH layer
  • Related to #26: Implement "dry run" feature

All 59 comments

James Pearson (xiong.chiamiov) posted:


As also mentioned on IRC, I don't normally run an ssh server on a desktop machine, so I can't actually ssh to localhost.


on 2009-11-11 at 03:13pm EST

Travis Swicegood (tswicegood) posted:


I've just implemented something similar this evening, in the form of a new fabric.operations function called do. It looks at env.run_as to see if it equals "local", and if so switches out to the local method instead of run (or sudo, if sudo=True is passed as a kwarg). It also handles prefixing local commands with sudo in the event they're run locally.

This is sort of a different way around this problem which works without changing the behavior of run or sudo. These changes are available in my repository.
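
A minimal sketch of how such a do function might look against the Fabric 1 API (a hypothetical reconstruction from the description above, not Travis's actual code; the kwarg is spelled use_sudo here to avoid shadowing the imported sudo):

from fabric.api import env, local, run, sudo

def do(command, use_sudo=False):
    # Dispatch on env.run_as: "local" means bypass ssh entirely.
    if getattr(env, 'run_as', None) == 'local':
        if use_sudo:
            # Locally, sudo becomes a plain command prefix.
            command = 'sudo ' + command
        # capture=True makes local() return output, like run() does.
        return local(command, capture=True)
    return sudo(command) if use_sudo else run(command)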


on 2010-01-11 at 12:22am EST

Morgan Goose (goosemo) posted:


I really don't see this being plausible. What's the point in doing run as local? One of the requirements of Fabric is sshd running on the machine, remote or loopback. The other problem is that changing only run doesn't take into account put, get, rsync_project, and the other operations that would all still need ssh. Trying to implement those would just cause more issues, since it's then in the realm of making fabfiles translate to bash.


on 2011-03-13 at 11:14pm EDT

Jeff Forcier (bitprophet) posted:


While I'm also not 100% convinced this is a great idea, it's clearly something a number of users feel the need for -- another request has been lodged as #364 with another explanation of the use case.

I've also added the dry-run ticket as related to this one, because (I assume -- if any of the requesting users can verify this that'd be great) the main use case for this feature is for testing/dry-running.


on 2011-06-23 at 11:26am EDT

As noted in #538, if we're ever able to fully normalize the three runners so they can be used interchangeably, we'll need to make sure that shell escaping works consistently across them. Right now we don't shell escape local, though that's at least partly because it's not using a shell wrapper.
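
For illustration, escaping a command consistently for a shell wrapper might look like this (a sketch only, not Fabric's actual wrapper; shlex.quote is the Python 3 spelling of Python 2's pipes.quote):

import shlex
import subprocess

cmd = "echo 'spaces & $pecials'"
# Like the remote runners: hand the user's command to an explicit shell,
# quoted so the outer shell passes it through verbatim.
subprocess.call('/bin/bash -l -c {0}'.format(shlex.quote(cmd)), shell=True)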

If anyone is wondering "why would anyone do this?", the answer is that if you have a deployment pipeline, it can be helpful to run the same exact deployment script, no matter which environment, rather than having a special setup script for localhost vs. everything else.

+1 for the feature

+1

+10

+1

+1

To hold you over, you can just make sure you have the OpenSSH server running. First do sudo apt-get install ssh to make sure you have it installed (even if you think you do). Then do sudo service ssh start|stop|restart as needed. Learned from this thread.

+1

My use case is simple: I want to use the same django-deploy script to configure ec2 instances both with cloud-init through CloudWatch (the case for running local commands) and using the regular fab deploy_django -H foo@bar.

+1

This would be really useful. One use case I have is using the Vagrant shell provisioner to configure a particular VM with Fabric, without needing to ssh to localhost.

+1

I was surprised not to see this in Fabric already.

FYI: Implementation of this feature gets more complex when you think about fabric functions like reboot().

+1

Should be part of core already!

+1

It would make perfect sense: from an abstract point of view, local is just a special case of run where no SSH machinery is involved.

One more thing to point out (maybe obvious): Fabric should be smart enough to decide if a run should be converted to local AFTER reading /etc/hosts.

I mean: if we have

env.hosts = ['mywebserver']

and in /etc/hosts we have:

127.0.0.1 mywebserver

then, any run calls should actually be local calls.

Taking this concept a step further, we should also treat run as a local call when the remote host resolves to an IP which is assigned to a network interface of the local machine (a detection sketch follows the example below).
E.g.:
fabfile:

env.hosts = ['mywebserver']

/etc/hosts:

192.168.1.1 mywebserver

ip addr:

[...]
eth0:
  inet 192.168.1.1
[...]
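
A rough sketch of such a check (a hypothetical helper: it resolves through the normal resolver, so /etc/hosts applies, and compares against the addresses the local hostname maps to; enumerating every interface portably would need something like psutil):

import socket

def is_local_host(host):
    # Resolve the target the same way ssh would (honors /etc/hosts).
    try:
        target = socket.gethostbyname(host)
    except socket.gaierror:
        return False
    # Anything in 127.0.0.0/8 is the local machine.
    if target.startswith('127.'):
        return True
    # Compare against addresses assigned to this machine's hostname.
    try:
        local_addrs = {info[4][0]
                       for info in socket.getaddrinfo(socket.gethostname(), None)}
    except socket.gaierror:
        local_addrs = set()
    return target in local_addrs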

+1

+1 :+1:

:+1:

+1

+1

Fabric 2 will use pyinvoke/invoke so this should be pretty easy to do there. I would wait for Fabric 2 for a non-hacky way to do this.

:+1:

+1

:+1:

:+1: Please implement this, especially as Macs don't have the SSH server (Remote Login) enabled by default, so you can't ssh to localhost out of the box.

+1

+1 :)

+1 please

:+1:

:+1:

:+1:

We're using Fab to build Debian packages and this adds extra complexity.

Hello all,
I've created a clone of Fabric with a few differences:

  • run() works the same way, via subprocess.Popen, on localhost as over an ssh connection to a remote host
  • Factory uses OpenSSH or any other ssh client (you have to modify the config for this), so you can use the full power of ssh sockets
  • Factory uses the gevent library for asynchronous execution

You can take a look if you need this feature:
https://github.com/Friz-zy/factory

I may be missing something in this discussion, but here is what I did to use the same code with the fab run command on both localhost and remote machines.

  1. I set env.use_ssh_config = True in my fabfile.py
  2. ssh-copy-id localhost

This doesn't solve your issue if you are not running an ssh server on your local machine.

:+1:

+1

+1 Please implement this feature :)

+1

Could be very useful to bootstrap Docker images using existing Fabric scripts. This feature would avoid installing an SSH server in the container, which goes against Docker best practices.

+1

+1

+1

Further to the answer provided by @AntoniosHadji, here are the complete instructions to make this work:

# Generate a new SSH key for local usage
ssh-keygen -f ~/.ssh/id_rsa -N ''

# Add the server's keys to the user's known hosts (eliminates 'are you sure' prompts);
# note the append (>>) so existing entries aren't clobbered
ssh-keyscan -H localhost >> ~/.ssh/known_hosts

# Allow the user to ssh to itself
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Actually, this can be done using cuisine. You need to change all run calls to reference the cuisine.run function, which can be done easily with an import, and switch the mode to local:

from cuisine import run, mode_local

mode_local()
print(run("echo Hello"))

Great @cgarciaarano

For simple use cases, this works for me:

from fabric.api import env, local, run

# ... inside a task:
if env.host is None or env.host == 'localhost':
    run = local

:+1:

I want my fabfile to run remotely or locally when ssh isn't an option. This includes local wrappers for get/put/exists etc.

:+1: I have fabfiles that run both locally and remotely, and I've ended up hacking my own wrapper functions for run/local/get to deal with all of the subtle differences such as output capture and error handling.

What if you have an ssh connection doing dynamic port forwarding and binding on 127.0.0.2 (still technically localhost) on port 2223? I can see how this could cause issues; to that end, matching on localhost and resolving it to 127.0.0.1, rather than also treating the entire 127.0.0.0/8 range as local, might be a good way to handle it.

@blade2005 Yep, the whole 127.*.*.* range points to your localhost (except 127.0.0.0 and 127.255.255.255), but when you are actually pointing at your localhost you won't specify a port, right?
So I believe that we can safely assume that 127.*.*.* == localhost and ssh can be avoided, while 127.*.*.*:* points to a forwarded port and ssh is needed.
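
That heuristic is easy to express with the standard library (a sketch; whether a non-default port should really disqualify loopback is exactly the open question above):

import ipaddress

def can_skip_ssh(addr, port=22):
    # The whole 127.0.0.0/8 loopback range counts as local, but assume a
    # non-default port is a forward to somewhere else and keep ssh.
    return ipaddress.ip_address(addr).is_loopback and port == 22

can_skip_ssh('127.0.0.2')        # True
can_skip_ssh('127.0.0.2', 2223)  # False: likely a forwarded port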

Honestly, this feature would probably make more sense as a 3rd-party plugin built on Fabric, similar to the cuisine library. Then we would just import wrapped functions for run/get/put/etc., which would know whether to run locally or remotely based on an env variable. At least this way, somebody could get this started for everybody to use.

I implemented something like this locally, and it's a lot more work than just switching between local/run. You have to consider prefixes, changed directories, sudo users, etc.

Was briefly thinking about this in the context of another 2.0 related ticket, and realized that there's more that comes up besides just "run becomes a rebinding of local":

  • Any sort of truly-mixed-mode task using both local and run, or either of put/get, becomes inherently problematic: operations with clearly defined 'local' and 'remote' "ends" now both point locally.

    • I'd assume that to be a minority use case (if it's one at all) but it still needs to be figured out, even if it's "calling any operation but run or sudo raises DoesntMakeAnySenseError" or whatever.

    • put/get could presumably just turn into shutil.copy or similar (sketched after this list)

    • local would presumably not be changed (though when printing what's happening, probably still want it differentiated from what run-except-locally is prefixed with...?)

    • Touched on above, the various context-manipulating methods/contextmanagers like prefix, cd etc all need similar questions answered.

  • That aside, locally running sudo commands at all, is a potentially enormous footgun and probably wants additional safety checks.

    • Unless it, too, becomes just another binding to local, which is another possibility. Though not a large one, any sudo commands that even work locally (i.e. one is deploying to, and deploying from, Linux) would presumably need to remain privileged locally (e.g. apt/yum and friends, firewall tinkering, etc).

  • sudo also (as noted above by Jon) needs to grow possibility of configuring distinct local-vs-remote config vectors since the sudo user, password etc is likely to differ between the two sides.

    • Though since I'm thinking of all this in the context of Fab 2, the expected per-host config overrides would probably solve that part of things at least - the localhost context would simply be handed the appropriate values. (Plus, as a dedicated "for running remote things locally" Context subclass it could do other things too, if needed).
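
For the put/get point above, a minimal sketch of what that dedicated Context subclass might look like (hypothetical class and method signatures, building on invoke.Context):

import shutil

from invoke import Context

class LocalhostContext(Context):
    # Hypothetical: stands in for a Connection when the target is local.
    def put(self, local_path, remote_path):
        # With both "ends" local, a transfer collapses to a file copy.
        shutil.copy(local_path, remote_path)

    def get(self, remote_path, local_path):
        shutil.copy(remote_path, local_path)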

@max-arnold was trying this out in the v2 alpha and ran into confusing issues, which is to be expected at this point, since I hadn't gotten to this particular ticket's use case yet, other than ensuring run and local have as similar APIs as possible.

At the moment, the big issue is simply the nature and API of the object bound to a task's context (c or ctx or whatever one names it) posarg. At present, and again, this is not intended to be final, it's just how it's ended up so far:

  • by default, when executed by Invoke's Executor, or by Fab 2's FabExecutor when no hosts are present, it's invoke.Context, which has a run that runs locally, and lacks a local;
  • when Fab 2 has host(s) to run on, it creates a fabric.Connection, whose run runs remotely, and whose local is a rebinding of Invoke's run (and so runs locally, natively, not over SSH to localhost or anything.)

More specific thought is needed, including looking at use cases here and in linked tickets. Offhand brainstorm:

  • A useful solution (or at _least_ documentation) for this should almost definitely exist in core (per earlier chat about it living outside core) because:

    • it's a common enough use case

    • it's easy to mess up

    • it's required to usefully implement v2 compatible versions of patchwork (née contrib) and/or invocations (Invoke's version of same), especially since it informs how much code sharing they can do. Many tasks and/or subroutines in those kinds of codebases may want to run locally-or-remotely.

  • At heart it's about what API to expect from Context objects where the task may not know for sure "how" it is being invoked
  • Could hinge it upon how the task is generated, i.e. different versions of @task and/or kwargs to same, where the user may declare their expectations (i.e. "I really want to be given a remote-capable context", "please don't ever give me a remote-capable context", etc)

    • We may want to _require_ this to avoid ambiguity (ZoP #12)

    • The more I think on it the more it seems clear that we do want Fabric to grow its own lightweight wrapper around @task/Task; pure-Invoke codebases would just use its @task which would always trigger being given a vanilla Context, while tasks created via Fabric's version would at least have the option of being given a Connection, if not require one.

    • A downside is the "I can be useful locally XOR remotely" type of task mentioned above; a task that only wants a single "run commands please" option and is not mixing both modes simultaneously. This _doesn't_ work well with "the decorator declares the type of context" solutions because it _needs_ to 'toggle' context type depending on who's calling it and how.

    • Though that is actually one entire point of the current API; those tasks _don't care_ about context subclass, _as long as ctx.run() exists_.

    • So those presumably want to be decorated with the "I only need a base, vanilla context" version of @task, with the understanding that somebody from a Fabric (or Fabric-like) invocation standpoint has the option of giving those tasks a Connection instead of a Context.



      • Which brings us back around to wondering exactly how to execute tasks, aka pyinvoke/invoke#170



  • Regardless of implementation, we have to ensure we minimize footgun potential re: users doing things like:

    • Expecting local when it does not exist (Fabric/Connection-expecting code run via Invoke)

    • Expecting run to run locally when one was instead given a context with a remote run (Invoke/Context-expecting code run via Fabric)

    • Anything else from Max's additional comment here

  • As seen in much older comments, a sub-use-case here is users expecting the v2 equivalent of Connection('localhost').run('foo') to _not use SSH_ but instead to act exactly like Connection('localhost').local('foo').

    • I'm _guessing_ we don't actually want to do this as it feels like a nasty footgun for anybody attempting to do localhost sanity checks. It just feels too magical to me offhand. But I'm open to arguments, probably on an opt-in basis (e.g. set a config option like ssh.localhost_becomes_subprocess = True or whatever.)
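
To make that opt-in concrete, a sketch against the v2 API (the config key is the hypothetical one floated above; Connection and its host attribute are real v2 names):

from fabric import Connection
from invoke import run as local_run

def smart_run(conn, command, **kwargs):
    # Hypothetical opt-in: collapse localhost connections to a subprocess.
    wants_local = getattr(conn.config, 'localhost_becomes_subprocess', False)
    if wants_local and conn.host in ('localhost', '127.0.0.1'):
        return local_run(command, **kwargs)
    return conn.run(command, **kwargs)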

My only use case here at the moment would be upload_template() being able to render a template locally.

Of course one could do it like this:

#http://matthiaseisen.com/pp/patterns/p0198/
import os
import jinja2


def render(tpl_path, context):
    path, filename = os.path.split(tpl_path)
    return jinja2.Environment(
        loader=jinja2.FileSystemLoader(path or './')
    ).get_template(filename).render(context)

But why not have an option to render locally?

The main use of this feature, in my case, would be to deploy application configuration to my local machine for local testing.

Consider you have a settings.py.j2 that gets rendered to the destination server upon deployment; there it's named settings.py and contains only Python code, no Jinja.
Now you want to test locally, but locally there is no settings.py yet, because it needs to be rendered from settings.py.j2.
So your app can't start, and you have to create a separate settings.py manually for your local testing.

This is very tiring, and it should be easier.

For example, in Ansible I'd simply tell the task to use a "local connection", and it would render on the local host without trying to ssh into it.

Until this feature is available in Fabric, I'll use the solution pasted above, of course, as it's just a few lines of code. It should be easier though, imho; I feel like that's really the kind of stuff Fabric should be making easy for me.
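
Using the render helper from the snippet above, the workaround boils down to a local-only task (a sketch against the v1 API; the task name and template params are illustrative):

from fabric.api import task

@task
def render_local_settings():
    # Render straight to the local filesystem; no ssh involved.
    with open('settings.py', 'w') as fh:
        fh.write(render('settings.py.j2', {'debug': True}))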

@fninja I haven't ported upload_template itself yet but I definitely agree that it falls under this problem space. Arguably one could handle this by just splitting up the Jinja-wrapping render step and the upload-some-string upload step, esp since the latter already exists in the form of "hand a FLO to put". E.g.:

from StringIO import StringIO # too lazy to remember the newer path offhand
from somewhere.jinja_wrapper import render
from invoke import task

@task
def render_settings(c):
    rendered = render('settings.py.j2', {'template': 'params'})
    c.put(StringIO(rendered), 'remote/path/to/settings.py')

But there's probably still room for an even shorter 1-stop analogue to upload_template that would either be a Connection method or a subroutine taking a Connection argument.

Either way, it raises more questions re: exactly how to treat this sort of thing - for example, Invoke-only Context objects have no put/get. Is it worth adding them? It makes plenty of sense for Fabric users in the context of this ticket (then upload_template or w/e can simply call put in either case), but for pure-Invoke users, it's a bizarre and useless part of the API.

+1 to make this a core feature

Crosspost from #1637. Just an idea:

from fabric import task, local

@task
@local
def build(ctx):
    with ctx.cd('/project/dir'):
        ctx.run('build > artifact.zip')

@task
def deploy(conn):
    build(local(conn))

    with conn.cd('/remote/path'), local(conn).cd('/project/dir'):
        conn.put(remote_path='build.zip', local_path='artifact.zip')

Basically, local() can act as a decorator, context manager, or plain function, and transform a Connection into a Context.
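
A minimal sketch of just the function form of that idea (a hypothetical helper; it leans on the fact that Fabric 2's Connection subclasses Invoke's Context, which is what makes the downgrade cheap):

from fabric import Connection
from invoke import Context

def local(obj):
    # Hypothetical: collapse a Connection into a plain Context whose
    # run() executes locally; pass plain Contexts through untouched.
    if isinstance(obj, Connection):
        return Context(config=obj.config)
    return obj

The decorator and context-manager forms would wrap this same transformation.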

Another use case that I don't think I saw mentioned: building a library of reusable functions. In my case, it's mostly git commands. I wrote an overly-simplistic dorun that hides the differences between the run and local function parameters (on v1); which function to use is passed as a parameter. Here's a git checkout, for example:

def git_checkout(branch, remote='origin', run=run_remote):
    """Checkout a branch if necessary."""

    if branch == git_current_branch(run=run):
        return
    elif branch in git_local_branches(run=run):
        dorun('git checkout ' + branch, run=run)
    else:
        dorun('git checkout -t -b {0} {1}/{0}'.format(branch, remote), run=run)


def git_current_branch(run=run_remote):
    """Get the current branch (aka HEAD)"""

    output = dorun('git name-rev --name-only HEAD', run=run)
    return output.strip()


def git_local_branches(run=run_remote):
    """Get a list of local branches; assumes in repo directory."""

    output = dorun('git branch --no-color', run=run)
    branches = {l.strip().split(' ')[-1]
                for l in output.strip().split('\n')}
    return branches

It looks like this:

from fabric.api import run as run_remote, local as run_local

def dorun(*args, **kwargs):
    """Work around the fact that "local" and "run" are very different."""
    kwargs.setdefault('run', run_remote)
    run = kwargs.pop('run')

    if run == run_local:
        kwargs.setdefault('capture', True)
    elif 'capture' in kwargs:
        del kwargs['capture']

    return run(*args, **kwargs)

I have no idea what happens with sudo and there are issues that I cannot easily deal with, like expanding ~remoteuser to produce a path.
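
For the ~remoteuser case specifically, one workaround is to let whichever shell run targets do the expansion (a hypothetical helper in the same style as dorun above; it only works where the shell understands ~user):

def expand_user_home(user, run=run_remote):
    # Ask the target shell to expand ~user; os.path.expanduser
    # only knows about local accounts.
    return dorun('echo ~{0}'.format(user), run=run).strip()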
