Machine: Attach to existing machine from another client

Created on 8 Jun 2015  ·  85 Comments  ·  Source: docker/machine

Let's say I created a machine on DigitalOcean running some containers.
After creating the machine, I can run eval "$(docker-machine env test-machine)".
Now I'm moving to another local computer, which does not know about that particular machine, and I want to attach to it.
How do I do that?

kind/enhancement


All 85 comments

:+1:

How about adding it to the 2nd system using the 'generic' driver, and then using the same eval command there?

@clnperez is this a proposal, or something you're confident will work (i.e. that it will reuse the existing remote machine even if it's currently running)?

Well, in hindsight, I don't think you can do this b/c you'd have to set up the ssh keys again, or import them from your other system.

I see your case. One can't add a docker-machine entry on the second system using the generic driver without invalidating the original docker-machine setup (since e.g. new creds are generated). One may instead run docker-machine create -d none --url [...] on the second system, mirroring the important options (like the swarm flags) from the original create on the first system, then manually copy the selected .pem files and the id_rsa file from the first machine to the second machine, manually add the sections for SSH access, and manually change the driver from none to generic. It is a PITA. A proper export/import function would be nice to allow sharing. One can also share just the cred files needed to manually configure docker.
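Hedging heavily, that manual mirror might look something like the following sketch (the IP, machine name, and host name are all hypothetical):

    # on the second system: register the machine without provisioning it
    docker-machine create -d none --url tcp://203.0.113.7:2376 test-machine

    # copy the certs and the SSH key over from the first system
    scp first-host:'~/.docker/machine/machines/test-machine/*.pem' \
        ~/.docker/machine/machines/test-machine/
    scp first-host:'~/.docker/machine/machines/test-machine/id_rsa' \
        ~/.docker/machine/machines/test-machine/

    # then hand-edit config.json: add the SSH host/user/port/key settings
    # and change "DriverName" from "none" to "generic"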

Correct. The only current way would be to take the entire directory, but this won't work with some drivers (e.g. VirtualBox, because it registers VMs and networks with UUIDs that would not match). There has been discussion around an import/export feature in the past (https://github.com/docker/machine/issues/23)

I have a PR/hacky solution to this in the wings... Generally I think I want to move config over to being portable / templated instead of hardcoded like things are now.

+1

I would love to be able to quickly reconnect to cloud instances (I'm using GCE) that already exist.

Certainly having importable/exportable configs would be very useful, but I wonder if (additionally) addressing the issue as a driver concern might not yield a simpler user experience.

That way, using the google driver, one could connect to an existing instance on an alternate computer simply by providing a valid access-token (which the driver may prompt the user to generate automatically).

Similarly, when using the aws driver (which I have yet to do myself, but I presume), one could connect to an existing instance by providing a valid key/secret pair (perhaps through environment variables corresponding to the relevant driver-specific flags), assuming that the process will occur through some docker-machine subcommand other than "create", since the expectations are a bit different.

Just want to chime in that this would be a really great feature to have. I'd really like to be able to share a machine with my teammates and was disappointed to find out there was basically no way of doing this right now. It'd be awesome if, e.g. the generic driver could automatically detect whether a particular box had already been provisioned with docker-machine and re-use tls certs, etc. when someone ran docker-machine create on that box again.

:+1:. I would love to see this working. Currently we are co-managing the same machines (on Google Compute Engine) with another person, and the only way I found that works is to copy the whole directory (+ change the absolute links in the config.json file). That's lame. I think the generic driver cannot easily be used this way: there is the issue of authentication, of course (TLS certs etc. cannot simply be re-used when you run create with the generic driver; somehow you need to authenticate and prove that you have access to the machine, which is different for every driver - in GCE you'd have to check whether your gcloud authentication allows you to access the machine). There is also the small issue that unless you have already created a machine with the given driver, your authentication piece is missing (the only way to authenticate is to... create a machine).

What I think is the best solution is to have an "import" command (with a different implementation for each driver). For example, in GCE you could store all the necessary details (keys etc.) somewhere in the machine's metadata: https://cloud.google.com/compute/docs/metadata?hl=en#project_and_instance_metadata and then, by specifying the project/machine name (and authenticating), you could get all the necessary keys and set up the machine.
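To illustrate the idea (this is not an existing docker-machine feature; the metadata key and file name are hypothetical, and gcloud/jq are assumed to be installed):

    # store the machine's connection details in instance metadata
    gcloud compute instances add-metadata test-machine \
        --metadata-from-file docker-machine-config=config.json

    # on another computer, after authenticating with gcloud, read it back
    gcloud compute instances describe test-machine --format=json \
        | jq -r '.metadata.items[] | select(.key == "docker-machine-config") | .value'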

I would really appreciate this feature!

@potiuk Which directory do you copy?

@AlexZeitler ~/.docker/machine/machines/<machinename>

+1!

+1 I'd love to see a solution for this, too :-)

I ran into exactly this problem today, trying to give access to a colleague.

+1 !!!!!

Seems to be a duplicate of #23, right?
It's been almost a year since we started talking about this feature; some have tried to make PRs for it, but they were closed...
Hope this feature will be in the next (major) release :)

This is absolutely required in continuous delivery scenarios, where you want to deploy using those keys from Travis or Circle CI. Any clue regarding ETA?

gotta give this a +1 as well

+1

+1

Is there anything you have to do besides copying the ~/.docker/machine/machines/<name> folder and changing the absolute paths? I get an error message related to my certs, and attempting to regenerate them fails as well.

@jbasrai Did the IP of what you're trying to access change?

I've filed https://github.com/docker/machine/issues/2516 to start considering steps in the right direction to make this easier.

This is a vital feature, and I'd love to see it in a near-future release. In my opinion, machine configuration should remain unique to a client, not be imported/exported. Instead (as others have mentioned), running docker-machine create with the same arguments should be able to create a configuration for the machine even if it already exists remotely, instead of failing like it does now. When re-running my create command for an existing amazonec2 machine, I get this error telling me that the host already exists:

Error creating machine: Error with pre-create check: There is already a keypair with the name testing-recreate.  Please either remove that keypair or use a different machine name.

Instead, it might warn me that the host already exists and continue to add the machine as it would on initial creation (perhaps requiring that an override flag be passed). That way I can keep my dev/CI environment set-up scripts simple and not worry about having to store this configuration somewhere that my teammates (or other parties) can access it.
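For illustration only, the proposal might look like this hypothetical invocation (no such flag exists today):

    # warn-and-adopt instead of failing when the remote resources already exist
    docker-machine create -d amazonec2 --override-existing testing-recreate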

It is indeed astonishing that for multiple people to work on the same VM, we have to export/import certificates from one machine to another. If someone has found a practical, production-ready solution, that would be good to know.

+1

+1

Correct. The only current way would be to take the entire directory but this won't work with some drivers (i.e. VirtualBox because it registers VMs and networks with UUIDs that would not match). There has been discussion around an import/export feature in the past (#23)

@ehazlett so I'm using the aws driver. Can I:

  1. compress the cloud machine's ~/.docker/machine/machines/staging
  2. share it with team members, who'll decompress it at ~/.docker/machine/machines/
  3. and then they'll have the staging machine just as I have? Will it show up in docker-machine ls, or do they need to run another command?

@leandromoreira one barrier to that approach is that the docker-machine config files have hard-coded paths specific to the host machine:

cat ~/.docker/machine/machines/local/config.json

outputs:

...
        "AuthOptions": {
            "CertDir": "/Users/pretzel/.docker/machine/certs",
            "CaCertPath": "/Users/pretzel/.docker/machine/certs/ca.pem",
            "CaPrivateKeyPath": "/Users/pretzel/.docker/machine/certs/ca-key.pem",
            "CaCertRemotePath": "",
            "ServerCertPath": "/Users/pretzel/.docker/machine/machines/local/server.pem",
            "ServerKeyPath": "/Users/pretzel/.docker/machine/machines/local/server-key.pem",
            "ClientKeyPath": "/Users/pretzel/.docker/machine/certs/key.pem",
            "ServerCertRemotePath": "",
            "ServerKeyRemotePath": "",
            "ClientCertPath": "/Users/pretzel/.docker/machine/certs/cert.pem",
            "ServerCertSANs": [],
            "StorePath": "/Users/pretzel/.docker/machine/machines/local"
        }

so simply copying the entire dir isn't a complete solution
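When the directory layout is otherwise the same, one rough fix is to rewrite those hard-coded paths for the importing user; a sketch, assuming the exporter's home was /Users/pretzel as in the output above (GNU sed syntax):

    # point every absolute path in the copied config at the local home directory
    sed -i "s|/Users/pretzel|$HOME|g" \
        ~/.docker/machine/machines/local/config.json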

@bhurlow thanks a lot :smile:, is there any tool to help with this? Or should I edit the config.json manually on my own? Is that the only barrier?

@leandromoreira I have scripted around it like this; more recent versions of docker-machine no longer base64-encode keys in the config file. At the end of the day, anyone who wants to use a remote docker-machine _must_ have the TLS certs, so some exchange between parties is required, I think.

Thanks @bhurlow

@bhurlow made a great tool we can use until we get something official.

npm install -g machine-share

# export
machine-share export amazon

# import
machine-share import amazon.tar

# fix locations :D (it seems this is not using base64 anymore)
machine-share driverfix amazon

@leandromoreira looks great dude, I was able to export and import the configs successfully.

@muhammadghazali it was @bhurlow's idea and effort :stuck_out_tongue:

+1 Any updates regarding an official solution for this?

With docker version 1.10.1, I noticed that the config.json file references the following from the ~/.docker/machine/certs directory:

        "CertDir": "/home/abc/.docker/machine/certs",
        "CaCertPath": "/home/abc/.docker/machine/certs/ca.pem",
        "CaPrivateKeyPath": "/home/abc/.docker/machine/certs/ca-key.pem",
        "ClientKeyPath": "/home/abc/.docker/machine/certs/key.pem",
        "ClientCertPath": "/home/abc/.docker/machine/certs/cert.pem",

You need to copy the ~/.docker/machine/certs folder as well from the original machine for this scenario to work.
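For example (hedged; "otherhost" is a placeholder, and this will overwrite any certs already present there):

    scp -r ~/.docker/machine/certs otherhost:~/.docker/machine/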

The current solution for this seems to be (e.g. if you want to create a Docker Machine on AWS on one computer and view the logs or SSH into the machine from another):

  1. Create a new directory my-dir and my-dir/machine for the Docker Machines you want to share so it does not use your default certs
  2. Create your Docker Machine using the --storage-path my-dir/machine option (make sure you specify the absolute path)
  3. To share the Machine, edit the config.json in my-dir/machine/machines/machine-name and replace the absolute path to my-dir/machine with $MACHINE_STORAGE_PATH
  4. Upload my-dir somewhere, e.g. to Github

When someone wants to import this Machine:

  1. Clone or download my-dir
  2. Edit the config.json for the Machine in my-dir/machine/machines/machine-name and replace $MACHINE_STORAGE_PATH with the absolute path to my-dir/machine on your local computer
  3. chmod 0600 the id_rsa in my-dir/machine/machines/machine-name

You can now use Docker Machine commands using the --storage-path my-dir/machine option (make sure you specify the absolute path).
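Step 3 of the sharing side can be scripted instead of hand-edited; a minimal sketch, assuming the store lives at $HOME/my-dir/machine and the machine is named machine-name (GNU sed syntax):

    export MACHINE_STORAGE_PATH="$HOME/my-dir/machine"
    # replace the absolute store path with a literal $MACHINE_STORAGE_PATH token
    sed -i "s|$MACHINE_STORAGE_PATH|\$MACHINE_STORAGE_PATH|g" \
        "$MACHINE_STORAGE_PATH/machines/machine-name/config.json"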

This could possibly be improved by:

  • Docker Machine storing relative paths in config.json, so this didn't have to be edited
  • Docker Machine SSH (and related commands) chmodding the id_rsa to 0600 automatically (if they have permission to)

One quick point: if you use envsubst, you can programmatically replace the $MACHINE_STORAGE_PATH and do not have to edit manually (see the sketch below). Still, the whole thing is kind of inconvenient for teams trying to use a farm of docker-machine systems.
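A sketch of that import step (machine name and paths are hypothetical; envsubst ships with GNU gettext):

    # on the importing computer, after cloning/downloading my-dir
    export MACHINE_STORAGE_PATH="$HOME/my-dir/machine"
    cfg="$MACHINE_STORAGE_PATH/machines/machine-name/config.json"
    # substitute only $MACHINE_STORAGE_PATH, leaving everything else untouched
    envsubst '$MACHINE_STORAGE_PATH' < "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
    chmod 0600 "$MACHINE_STORAGE_PATH/machines/machine-name/id_rsa"
    docker-machine --storage-path "$MACHINE_STORAGE_PATH" ls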

However, if people are looking for a workaround, the easiest I've found is the following (sketched after this list):

  1. Copy the .docker/machine/certs directory to a private spot. Note: do _not_ put this in a repo, as it contains secrets that grant access to the other machines. We use a private store for this purpose.
  2. On the new host machine, copy the certs into the new .docker/machine/certs.
  3. Now re-run your docker-machine creates and you will be able to use the machines without changing all the configurations. It takes longer, but it is more portable and you do not have to edit all those config files.
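A minimal sketch of that flow (the tarball name and how you move it are up to you; keep it somewhere private):

    # on the original host: stash the shared CA/client certs
    tar czf machine-certs.tgz -C ~/.docker/machine certs

    # on the new host: restore them before running any creates
    mkdir -p ~/.docker/machine
    tar xzf machine-certs.tgz -C ~/.docker/machine
    # then re-run the same docker-machine create commands as before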

I have two different computers that I work from, and this is a real problem for me.
I describe here the dream behaviour I expect when using docker-machine:

1) Create a droplet on DigitalOcean with docker-machine and the DigitalOcean driver (with a token that you get from DigitalOcean's control panel).

docker-machine create --driver digitalocean --digitalocean-access-token \
    [token_goes_here] --digitalocean-image ubuntu-16-04-x64 --digitalocean-size \
    1gb [host_name_goes_here]

2) Go to a different computer, get another token from DigitalOcean, and attach to the existing machine with a magical attach command, like this:

docker-machine **attach** --driver digitalocean --digitalocean-access-token \
    [token_goes_here]  [host_name_goes_here]

What are the obstacles to making this work? I think the DigitalOcean access token gives one enough privileges to attach to an existing host and set up all the secure communication.

For now I'm going to try machine-share by @bhurlow: https://github.com/bhurlow/machine-share

+1 Bump on this - anyone have an update on this?

@brandontamm: I wrote some scripts to handle this problem for myself. I don't know if they will solve yours, but I can at least try. Check out the gist here

A summary of the gist: there are two functions, store_machine and load_machine. store_machine stores all the information about a machine inside a secure stash (an encrypted on-disk datastore); you will have to provide a password. load_machine loads a machine back from the on-disk datastore.

Note that this Python code assumes you have sstash (Python Secure Stash) installed. You can install it by running

pip install sstash

+1

Seriously - what good is Docker-Machine unless you can access it from another pre-defined device....

USE CASE - Built the docker-machine at work but then need the laptop for vacations just in case a server blows up...

Come on - where is Docker Admin to chime in on this?! That's a use-case that EVERYONE can appreciate..

@realcr Did you try machine-share?

I refuse to use any more dependencies :) Copying the .docker folder to both OSX machines worked perfectly for me. My paths and usernames were the same on both machines, though, so that is the key to avoiding manual path edits.


+1

+1

+1

+1

I wrote docker_machinator to try to solve this problem.
It's a Python tool that lets you save all your docker-machine credentials and configuration into an encrypted stash, which you can store at your cloud provider, for example. You can then download the stash from another host and load your machines back from it. Being a Python tool, you should be one pip install away from using it.

I don't feel that this is the perfect solution, but it could get you going until we come up with a better one.

Guys, you should know that machine-share exports the private SSH key you used when creating the docker host via docker-machine with the generic driver. So everyone you send the exported archive to will be able to gain access to the server running docker.

@mxl docker-machine provides an ssh subcommand which will grant you access to the server, so the situation you are describing is unavoidable if you have a tool which creates an entire configuration as an importable file.

➜ docker-machine
Usage: docker-machine [OPTIONS] COMMAND [arg...]
...
Commands:
...
  ssh                   Log into or run a command on a machine with SSH.

I guess the way that you would avoid this would be to create a command which was able to download the current configuration from the remote machine. Such a download would require that you were able to ssh to the machine, rather than packaging access in the importable file.

Being only able to control docker-machine from one host is an uncomfortable limitation.
I'd also love to see something like docker-machine config-from <otherhost>.

So +1 from me as well.

/Edit: I'm currently solving the problem by syncing the .docker directory from a "master server" to all other servers which need the same configs, via cron and rsync. This is needed e.g. for multiple build slaves. Not a very nice solution.
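For the record, a minimal sketch of that cron+rsync setup (hostnames and schedule are made up, and it only works if the store path is identical on every host):

    # master's crontab: push the machine store to each build slave every 10 minutes
    */10 * * * * rsync -a --delete $HOME/.docker/machine/ slave1:.docker/machine/
    */10 * * * * rsync -a --delete $HOME/.docker/machine/ slave2:.docker/machine/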

+1

Here's a different scenario which brings me here.

I created a droplet to build a bunch of docker images, only to realise later that I need to move host regions...

The question is: how do I attach to the docker-machine instance restored from the snapshot running on the new host?


If the certs haven't changed, you should be able to just change your local docker-machine config to point it to the new IP address. You'll find the file at ~/.docker/machine/machines/your-machine-name/config.json.

Alternatively, if you never persist data in your Docker containers, instead of moving the host, just kill it, make a new one, and start up the same containers on the new host.
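A hedged sketch of that edit (jq assumed available; 203.0.113.7 is a placeholder for the new IP):

    cfg=~/.docker/machine/machines/your-machine-name/config.json
    # point the stored driver config at the new address
    jq '.Driver.IPAddress = "203.0.113.7"' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
    # if the server certs no longer match the new IP, regenerate them
    docker-machine regenerate-certs your-machine-name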

+1

+1

+1

+1

+1

+1

+1

+1

docker-machine attach, please.

It's pretty remarkable that such obvious functionality still does not exist out of the box. We're going to jointly administer docker hosts, and this is such a nuisance.

In my case, I'm very happy to attach to an existing host ${HOST} with

docker-machine --tls-ca-cert=ca.pem --tls-client-cert=cert.pem --tls-client-key=key.pem \
    create --driver none --url tcp://${HOST}:2376 ${HOST}

But I need to copy the certificates (ca.pem, cert.pem, key.pem) into DOCKER_CERT_PATH manually.
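That manual copy might look like this (a sketch; the machine's store directory is what docker-machine env reports as DOCKER_CERT_PATH):

    dest=~/.docker/machine/machines/${HOST}
    cp ca.pem cert.pem key.pem "$dest/"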

Any plans for this? Having full paths recorded in config.json is frustrating.

My use case: I have a git repo with machine configs in it (I use -s to point docker-machine at it). Secrets are stored with git-crypt, and the idea is for CI jobs to be able to make use of these configs to manipulate the machines they need to access.
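In case the shape of that setup helps anyone (repo URL and machine name are hypothetical; -s is the short form of --storage-path):

    git clone git@example.com:ops/machine-configs.git
    cd machine-configs
    # a CI job drives an existing machine straight out of the repo's store
    docker-machine -s "$PWD" ls
    docker-machine -s "$PWD" ssh some-machine docker ps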

FYI: #3212

@lyda We're using such an approach with https://github.com/dmstr/docker-roj - but without encryption, which would be a very nice feature actually!

While roj always works with the same paths, since it runs in a container, there are other solutions which basically just change a few paths in the config.json.
It's no big magic, unless I am totally missing something here.

Is docker-machine being actively developed by docker? I ask because it's been over a month since a commit made it to master: https://github.com/docker/machine/commits/master

+1

+1

+1

My god, the horror! This thread is still alive after almost three years?!? This is a use-case that everyone bumps into, or would seem to. What am I missing?

Well, I assume docker-machine is dead (at least for me :D). I switched to Kubernetes. Even self-hosted kubeadm, in alpha, actually works better than this. I can recommend it :)

please support this :(

add "~/.docker" to a folder that is rsynced or maybe symbolic linked to cloud folder on both machines. there are a couple pre-built solutions. not too hard guys, just do some research - never had an issue after setting up one time for 30 seconds.

+1

+1

How this feature, as well as specifying a static IP (the two most requested features in the history of the docker-machine project), goes unimplemented is beyond me.

Almost 4 years have passed 😮 Is there any update on this?

At the moment, many articles/tutorials about Docker still suggest docker-machine as the de-facto tool for managing hosts. However, this issue's presence is a strong limiting factor!

I currently keep using docker-machine with the "copy the certs dirs" approach to share machines between our local computers. I would like to upgrade to Kubernetes, but it looks like just too much for my project.

How about running docker-machine create from within a docker container? That container could then be exported, imported on another computer, and run there.
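A sketch of that idea (the image name is hypothetical; the point is that inside the container the store path is always /root/.docker/machine, so the config stays portable wherever the volume goes):

    docker run --rm -it \
        -v "$PWD/machine-store:/root/.docker/machine" \
        some-image-with-docker-machine \
        docker-machine create -d digitalocean \
            --digitalocean-access-token "$DO_TOKEN" my-host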

Still doesn't have an attach, oh my god
