Machine: [Resolved] How can docker-machine add an existing Docker host?

Created on 21 Mar 2016  ·  63 Comments  ·  Source: docker/machine

It's an old problem, but I can't find a useful answer.

I have the following environment:
Local Host (My Laptop) Name: Chris-Laptop
docker-machine is already installed, version 0.6.0, build e27fb87, Mac OS X 10.11
Remote Host (My VPS) Name: li845-130 (139.162.3.130)
docker-engine is already installed, CentOS 7.0

1. Docker daemon process on the remote host
[root@li845-130 ~]# ps -ef|grep docker | grep -v grep
root 12093 1 0 02:09 ? 00:00:00 /usr/bin/docker daemon -H tcp://0.0.0.0:2376

2. Configure the SSH connection without a password
[tdy218@Chris-Laptop .ssh]$ ssh [email protected]
Last failed login: Mon Mar 21 02:54:06 UTC 2016 from 125.88.177.95 on ssh:notty
There were 54 failed login attempts since the last successful login.
Last login: Mon Mar 21 02:25:25 2016 from 114.248.235.223
[root@li845-130 ~]#

3. Add the remote Docker host to the local Docker machines
[tdy218@Chris-Laptop .ssh]$ docker-machine create --driver none -url=tcp://139.162.3.130:2376 linodevps
Running pre-create checks...
Creating machine...
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env linodevps
[tdy218@Chris-Laptop .ssh]$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
linodevps - none Running tcp://139.162.3.130:2376 Unknown Unable to query docker version: Unable to read TLS config: open /Users/tdy218/.docker/machine/machines/linodevps/server.pem: no such file or directory
[tdy218@Chris-Laptop .ssh]$
[tdy218@Chris-Laptop .ssh]$ docker-machine -D regenerate-certs linodevps
Docker Machine Version: 0.6.0, build e27fb87
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Found binary path at /usr/local/bin/docker-machine
Launching plugin server for driver none
Plugin server listening at address 127.0.0.1:54648
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
command=configureAuth machine=linodevps
Waiting for SSH to be available...
Getting to WaitForSSH function...
(linodevps) Calling .GetSSHHostname
(linodevps) Calling .GetSSHPort
(linodevps) Calling .GetSSHKeyPath
(linodevps) Calling .GetSSHUsername
Using SSH client type: external
{[-o BatchMode=yes -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none @ -p 0] /usr/bin/ssh}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255: usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b
..................

Error getting ssh command 'exit 0' : Something went wrong running an SSH command!
command : exit 0
err : exit status 255
output : usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
.....................

Finally it reports "Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded...".

Why? I have configured the SSH connection between the local host and the remote Docker host without a password.

How can I add the remote Docker host to the local docker-machine command line?

Most helpful comment

@dweomer Isn't adding existing docker machine from another computer a very common and basic usage case?

All 63 comments

Does docker-machine support adding an existing Docker host?

As I was searching the docs for the last 5 hours, in fact, there is "--driver=none" option (undocumented). See https://github.com/docker/machine/issues/2270 .

@tdy218 @atemerev As I remember it, the --driver none option was intentionally buried (and I thought removed from the released executable) because it is only used for testing purposes.

Related issue: I have a droplet on DigitalOcean created with docker-machine. How can I manage the instance from another laptop?

A colleague recreated the certs from his laptop using docker-machine regenerate-certs [name], and now I cannot access my instance anymore. Should I copy the new certs somewhere manually? The documentation about this is really confusing.

$ docker-machine ls
NAME           ACTIVE   DRIVER         STATE     URL                        SWARM   DOCKER    ERRORS
default        -        virtualbox     Stopped                                      Unknown   
gapp-sandbox   *        digitalocean   Running   tcp://xx.xxx.xxx.xx:2376           Unknown   Unable to query docker version: Get https://xx.xxx.xxx.xx:2376/v1.15/version: x509: certificate signed by unknown authority

I've found a workaround: https://github.com/docker/machine/issues/2270

Edit the config.json of the machine to point to the CA and client keys for that specific machine instead of the global machine ones.

So I copied all the files from my colleague's folder and replaced all the paths with those of my machine. In my case the final path is /Users/mturatti/.docker/machine/machines/gapp-sandbox/.
Nasty solution, but it seems to work.

ca.pem
cert.pem
config.json
id_rsa
id_rsa.pub
key.pem
server-key.pem
server.pem

@dweomer Isn't adding existing docker machine from another computer a very common and basic usage case?

@dweomer Isn't adding existing docker machine from another computer a very common and basic usage case?

@atemerev: No, I do not think that it is. As I understand it, Docker Machine exists to create/provision hosts that are Docker-enabled.

That being said, there exists the somewhat less-than-obvious generic driver I think @tdy218 should be using. The generic driver will take over a host and re-provision it. All that it requires is a running host with an SSH daemon and a user on that host with password-less sudo access (or just root). This re-provisioning is non-destructive in that an existing Docker installation will at most be upgraded.

@dweomer
I tried using the generic driver to add an existing Docker host in the following test.
Local Host (My Laptop) Name : Chris-Laptop
docker-machine is already installed , version 0.6.0, build e27fb87, Mac OS X 10.11
Remote Host (My VPS) Name : li845-130 (139.162.3.130)
docker-engine is already installed, CentOS 7.0

1. Docker daemon process on the remote host
[root@li845-130 ~]# ps -ef|grep docker | grep -v grep
root 12093 1 0 02:09 ? 00:00:00 /usr/bin/docker daemon -H tcp://0.0.0.0:2376

2. Configure the SSH connection without a password
[tdy218@Chris-Laptop ~]$ ssh [email protected]
Last login: Mon Mar 28 03:06:07 2016 from 111.193.199.188

3. Add the remote Docker host to the local Docker machines
[tdy218@Chris-Laptop ~]$ docker-machine -D create --driver generic --generic-ip-address 139.162.3.130 --generic-ssh-user root linodevps
Docker Machine Version: 0.6.0, build e27fb87
Found binary path at /usr/local/bin/docker-machine
Launching plugin server for driver generic
Plugin server listening at address 127.0.0.1:50319
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(flag-lookup) Calling .GetMachineName
(flag-lookup) Calling .DriverName
(flag-lookup) Calling .GetCreateFlags
Found binary path at /usr/local/bin/docker-machine
Launching plugin server for driver generic
Plugin server listening at address 127.0.0.1:50323
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(linodevps) Calling .GetMachineName
(linodevps) Calling .DriverName
(linodevps) Calling .GetCreateFlags
(linodevps) Calling .SetConfigFromFlags
Running pre-create checks...
(linodevps) Calling .PreCreateCheck
(linodevps) Calling .GetConfigRaw
Creating machine...
(linodevps) Calling .Create
(linodevps) Calling .GetConfigRaw
(linodevps) No SSH key specified. Connecting to this machine now and in the future will require the ssh agent to contain the appropriate key.
(linodevps) DBG | IP: 139.162.3.130
(linodevps) Calling .DriverName
(linodevps) Calling .DriverName
Waiting for machine to be running, this may take a few minutes...
(linodevps) Calling .GetState
Detecting operating system of created instance...
Waiting for SSH to be available...
Getting to WaitForSSH function...
(linodevps) Calling .GetSSHHostname
(linodevps) Calling .GetSSHPort
(linodevps) Calling .GetSSHKeyPath
(linodevps) Calling .GetSSHUsername
Using SSH client type: external
{[-o BatchMode=yes -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none [email protected] -p 22] /usr/bin/ssh}
About to run SSH command:
exit 0
SSH cmd err, output: :
Detecting the provisioner...
(linodevps) Calling .GetSSHHostname
(linodevps) Calling .GetSSHPort
(linodevps) Calling .GetSSHKeyPath
(linodevps) Calling .GetSSHUsername
Using SSH client type: external
{[-o BatchMode=yes -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none [email protected] -p 22] /usr/bin/ssh}
About to run SSH command:
cat /etc/os-release
SSH cmd err, output: : NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Couldn't set key CPE_NAME, no corresponding struct field found
Couldn't set key , no corresponding struct field found
Couldn't set key CENTOS_MANTISBT_PROJECT, no corresponding struct field found
Couldn't set key CENTOS_MANTISBT_PROJECT_VERSION, no corresponding struct field found
Couldn't set key REDHAT_SUPPORT_PRODUCT, no corresponding struct field found
Couldn't set key REDHAT_SUPPORT_PRODUCT_VERSION, no corresponding struct field found
Couldn't set key , no corresponding struct field found
found compatible host: centos
Provisioning with centos...
No storagedriver specified, using devicemapper

(linodevps) Calling .GetMachineName
(linodevps) Calling .GetSSHHostname
(linodevps) Calling .GetSSHPort
(linodevps) Calling .GetSSHKeyPath
(linodevps) Calling .GetSSHUsername
Using SSH client type: external
{[-o BatchMode=yes -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none [email protected] -p 22] /usr/bin/ssh}

4. The remote Docker host command line
[root@linodevps ~]# ps -ef|grep docker | grep -v grep
root 17079 1 0 03:06 ? 00:00:00 /usr/bin/docker daemon -H tcp://0.0.0.0:2376
root 17185 1 0 03:06 ? 00:00:00 sudo docker version
root 17190 17185 0 03:06 ? 00:00:00 docker version

The docker daemon process was restarted, but there were two extra docker version processes; when I tried executing the docker version command manually, it hung.

It's so difficult to add an existing Docker host to the docker-machine command line...

If the remote host has no Docker Engine on it, it's easy to create a Docker host (install Docker Engine) from the docker-machine command line, but as the problem above shows, docker-machine is limited when the host already runs Docker.

@dweomer This is true, but if I provisioned docker-machine or docker-swarm, how do I, say, enable another developer to deploy containers there? How do I transfer configuration / env variables between machines? So far, I can only use provisioned docker-machine hosts myself, and only from a single machine (what if it breaks? How do I restore the configuration on another one?).

For me, docker-machine/docker-swarm are very far from being production ready, and should be marked beta at least. Or I'd like to hear from anyone actually using it in production...

FWIW, if you just drop ~/.docker from that workstation host onto the new one (presuming the home directory is the same path), it will work. If the home directory differs, you will need to edit the .json files in several places (keys and certificates, etc.) to change e.g. /Users/macuser/.docker to /home/linuxuser/.docker.
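That home-directory rewrite can be sketched with sed. Here is a minimal, self-contained demo; the dot-docker directory and the /Users/macuser → /home/linuxuser prefixes are illustrative assumptions standing in for a real copied ~/.docker tree:

```shell
# Demo tree standing in for the copied ~/.docker (illustrative only)
mkdir -p dot-docker/machine/machines/demo
echo '{"CaCertPath": "/Users/macuser/.docker/machine/certs/ca.pem"}' \
  > dot-docker/machine/machines/demo/config.json

# Rewrite the old home prefix to the new one in every .json file
find dot-docker -name '*.json' \
  -exec sed -i 's|/Users/macuser/.docker|/home/linuxuser/.docker|g' {} +

cat dot-docker/machine/machines/demo/config.json
```

On a real copy you would run the find/sed pair against the transplanted ~/.docker directory instead of the demo tree.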

Seems this question has been thoroughly addressed and/or covers ground already established in other issues. Thanks all.

@nathanleclaire Just to be clear, is the official approach for adding existing docker hosts (whether created by docker-machine or otherwise) to copy the ~/.docker folder between the client machines? If so, is there any particular subset of the files/a file we need to copy?

If the official approach is instead to use the generic driver, it would be useful to address the issue @tdy218's outlined in the comment here.

I don't think that this question has been addressed at all. I'm trying to push code to a new machine and the best answer that anyone can come up with is to find the guy who made the machine and copy his files to mine.

I'm not sure if anyone has tried this out here in the real world, but it sucks.

@TheSeanBrady Let's keep discussion on issues civil. If you have proposals for solutions you would like to see, please share those. Let's stay focused on solutions over problems.

I'm just being honest. You try sending emails asking for files, because so far it's not working for me. This was closed without addressing the issue.

I'm just being honest. You try sending emails asking for files, because so far it's not working for me. This was closed without addressing the issue.

You don't feel that saying something "sucks" and implying that the rest of us don't live "in the real world" is unnecessarily harsh and non-constructive?

We try to foster a community where collaboration and positivity are encouraged. I ask that if you want to participate, you follow these principles as well.

Due to the use of stored API credentials, SSH keys, and certificates, sharing docker-machines across computers is a full-blown secrets management / ACL problem, the likes of which entire other classes of technology have been invented to address. It's quite a large problem in scope. Steps can potentially be taken to mitigate it that help your use case, so why not suggest some proactive solutions to implement?

If you'd like to submit a proposal for dealing with this, feel free. If you'd like to also make a proposal backed up by code in a pull request, I also encourage you to do that. But at any rate, please focus the discussion on solutions and stay positive.

You don't feel that saying something "sucks" and implying that the rest of us don't live "in the real world" is unnecessarily harsh and non-constructive?

Not really when it's following up...

Seems this question has been thoroughly addressed and/or covers ground already established in other issues. Thanks all.

Thats pretty much what I call a summary dismissal. And by "real world", I mean the environment where we use this stuff, not where you can pass one of the developers in the hall.

This would have been a more appropriate answer...

Due to the use of stored API credentials, SSH keys, and certificates, sharing docker-machines across computers is a full-blown secrets management / ACL problem, the likes of which entire other classes of technology have been invented to address.

...even though those technologies have been invented and most of them are open source.

However, since you asked, what about a docker-machine add <hostname>, using SSH key authentication to pass the required certs for the TLS connection?

what about a docker-machine add <hostname>, using SSH key authentication to pass the required certs for the TLS connection?

That could solve the problem ... for dev environments at least

Another option is to use a docker socket on a host available via SSH, which is vastly preferable to me as it uses the existing SSH authentication mechanism (which in my case is backed up with HSMs) and doesn't require adjusting ports/firewalls to support TLS.

I have hacked together my own solution for this using socat, but it would be very nice if docker-machine could support it. It's not hard to ssh to a host, install socat if it's not already installed, and use socat to turn the ssh session into a local socket for communicating with the remote /var/run/docker.sock. It also means docker-machine wouldn't have to know anything about authentication or certificates or TLS in this driver mode, though it does depend on socat locally to create the local socket...

It would be nice to see a driver mode that does this sort of local-socket-setup the way the existing ssh driver sets up all the TLS stuff. Many orgs already have SSH key distribution/portforwarding solved, and expecting them to now distribute/manage a PKI for the TLS (and open another port) is burdensome.

tyrell:~▻ cat Library/Local/bin/ber1docker 
#!/bin/bash

DOCKER_REMOTE_HOST="ber1.local"
DOCKER_SOCK="$TMPDIR/docker.sock"
export DOCKER_HOST="unix://$DOCKER_SOCK"
rm -f "$DOCKER_SOCK"

socat UNIX-LISTEN:$DOCKER_SOCK,reuseaddr,fork \
   EXEC:"ssh root@$DOCKER_REMOTE_HOST 'socat STDIO UNIX-CONNECT:/var/run/docker.sock'" &

So, no "docker-machine add ..."?

+1 for docker-machine add ...

We want to have the beauty of eval $(docker-machine env mymachine) for all of our team members (for shared dev environments, of course).

I spun up a droplet with the digitalocean driver and got a site up and running using docker-compose from my workstation.

Then I had to do work on the site but from a completely different location, far away from my workstation where I did the original docker-machine commands.

Isn't this a common enough situation?

It is a very common situation. Especially if you work in a thing called a "team". But they have avoided this topic for about 18 months now. All they do is close issues like this and then say something like "it's super complicated to implement such a feature" and that you can do it yourself and propose a pull request at any time.
It appears to me that they have not even started working on a solution over the last 18 months.

Thanks for your reply, although there is no easy way to do it yet.

Keep the credentials and whatnot in the cloud.

Give me instructions how to use my dropbox or google drive to store it.

Let me query the info from digitalocean, since that's the driver I was using.

This really can't be that hard.

Yeah that sounds secure.
On Wed, Nov 9, 2016 at 09:28 Michael Schwartz [email protected]
wrote:

Keep the credentials and whatnot in the cloud.

Give me instructions how to use my dropbox or google drive to store it.

This really can't be that hard.


You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/docker/machine/issues/3212#issuecomment-259457472,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ABxbKbZQLUfK9ENnWpNKiF207ZGpXFs2ks5q8fSagaJpZM4H07I2
.

No less secure than sending the commands and all my code over the internet to DO in the first place.

Or uploading all my proprietary stuff to docker hub or github, even a private repo.

For those who are trying to add a docker machine to the dev environment, you can do it like this:

docker-machine create -d "none" --url http://192.168.10.100:4243 bla

this is if you have HTTP and not TLS authentication enabled on the host.

I've spent a lot of time finding this undocumented stuff

I think it seems a little short-sighted to allow only one machine to manage a docker cluster as well. More so than just sharing the responsibility of managing the cluster with your team, what about having more than one failure point, or allowing multiple automated agents to be able to interact with the docker cluster?

+1 for the export command.

It could export all the settings and certs into a secure tar or zip file that can be docker-machine imported with a password.

Seems like a reasonable request.

@bmmathe That command is already tar -c ~/.docker/machine | bzip2 > docker-machine-config.tbz2.

Here's how lxd deals with this. Perhaps a password or key could be configured on the machine that would allow it to copy or regenerate certs.

That's so sad 😖. My remote machines now show a Timeout status locally when running docker-machine ls; something messed up the machine configurations. I cannot seem to figure out how to reconfigure/fix them in the local Docker machines folder. No more local provisioning? Should I delete the VMs & recreate them?

Voting for an option to add remote machines to the local Docker machines:

docker-machine add --driver

⏳🤒

This is a really important feature. I don't understand why it is so difficult. Even if I need to regenerate the certs, it won't be a problem for me, but I would like to have this documented and a command to work with.

If I remember correctly, my issue was caused by having wrong permissions and/or the local username not matching the remote one (the default user is docker-user). It's probably a problem on your end; you need to dig deeper.
The generic driver for the docker-machine command works just fine for different providers, e.g. Google.

Check these:
https://github.com/docker/machine/issues/3522#issuecomment-280275707
https://docs.docker.com/machine/drivers/generic/
• Cannot find the other conversations I had on this topic which led me to the solution.

Today, I found the cause of the failure to add the remote Docker host in my case.
[root@linodevps ~]# ps -ef|grep docker | grep -v grep
root 17079 1 0 03:06 ? 00:00:00 /usr/bin/docker daemon -H tcp://0.0.0.0:2376 // This configuration does not allow local fd or Unix socket communication between the Docker client and server, so provisioning hangs at the sudo docker version command.

To correct it, just edit the Docker service config file (the default is /usr/lib/systemd/system/docker.service) and change the value of the ExecStart parameter from /usr/bin/docker daemon -H tcp://0.0.0.0:2376 to /usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock (the stock default is /usr/bin/docker daemon -H fd://), or to /usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H fd://, or export DOCKER_HOST=tcp://0.0.0.0:2376.
Then run sudo systemctl daemon-reload && sudo systemctl restart docker and re-execute the docker-machine create command. Wait a moment and the host will be added successfully. During the add, docker-machine generates SSL certificates for the target Docker host and writes a new Docker service config file, 10-machine.conf, under a new directory named /etc/systemd/system/docker.service.d.
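The ExecStart edit described above can be sketched like this. The demo runs against a local stand-in copy of the unit file for illustration; on a real host you would edit /usr/lib/systemd/system/docker.service itself and then reload/restart the service:

```shell
# Local stand-in for /usr/lib/systemd/system/docker.service (illustration only)
cat > docker.service <<'EOF'
[Service]
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376
EOF

# Append the local Unix socket so both remote TCP clients and
# local docker commands (e.g. `sudo docker version`) can connect
sed -i 's|^ExecStart=.*|& -H unix:///var/run/docker.sock|' docker.service

grep '^ExecStart=' docker.service
# On the real host, follow with:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```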

tdy218@Chris-Laptop$ dm ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
docker-vm119 - generic Running tcp://192.168.135.119:2376 v17.03.1-ce
docker-vm120 - generic Stopped Unknown

docker-vm119 is the Docker host that existed before the add.

From this case, we know docker-machine supports adding an existing Docker host. Thanks @Bean Young.

docker-machine create --driver none -url=tcp://123.123.123.123:2376 dockerhost1 is the best thing I've learned about docker in a long time!

It really should be easier to share a development machine between users.

//cc @nathanleclaire

update: to be clear: specifically, I don't mind pre-sharing the certificates to other team members. There is room for improvement there as well..

@dhrp what process do you follow to preshare the certificates? Where can they be found?

Oh my dear lord, I cannot believe that in more than one and a half years no one has wanted to tackle this problem 😞 😞 😞

@thaJeztah @AkihiroSuda @albers @tianon do you really think this is not a very common and critical requirement?

I tried to re-provision an existing docker host (docker-machine create --driver=generic --generic-ip-address <ip> <name>) and it seemed to work without invalidating existing certs. This was done from the same machine which originally provisioned the docker host... is this an expected result?

Can you reopen this issue? I don't think this is resolved yet.

Stumbled upon this by accident.

Well, @nathanleclaire made the point clear: this is NOT about adding a feature to docker-machine, but rather about sharing keys that are NOT supposed to be stored anywhere other than the original client machine. This is exactly how this is supposed to be done, and some people here are getting angry without spending sufficient thought on the subject.

If you really want to share the keys with a team just go on with any source control private repo and accept the risks you are taking. To export the needed keys there is a script someone already made: https://gist.github.com/schickling/2c48da462a7def0a577e

"There is no reason we can't be civil" - Leonidas

I still think docker-machine should behave more like scp. What prevents adding support for multiple keys?

One workaround to this, which I have just found.
If you have an existing docker-machine on the same platform (mine is Google) and you can ssh/scp to that machine without requiring docker-machine, you can copy the other machine's config and get it working.
I did:

mkdir ~/.docker/machine/machines/new-machine
cp -r ~/.docker/machine/machines/old-machine/* ~/.docker/machine/machines/new-machine/

#then replace all instances of "old-machine" with "new-machine" in ~/.docker/machine/machines/new-machine/config.json

#then add the public key from your new-machine folder into the authorized_keys file for the docker-user on the new machine, e.g.
scp ~/.docker/machine/machines/new-machine/id_rsa.pub docker-user@new_machine:/home/docker-user
ssh docker-user@new_machine "cat ~/id_rsa.pub >> ~/.ssh/authorized_keys"

(Don't copy the ssh commands verbatim, lookup how to do it properly - https://encrypted.google.com/search?hl=en&q=ssh%20copy%20public%20key)
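The "replace all instances" step above can also be sketched with sed on a throwaway copy. The old-machine/new-machine names and the JSON content here are illustrative assumptions, not real machine data:

```shell
# Stand-in for the copied machine folder (illustrative content)
mkdir -p machines/new-machine
echo '{"Name": "old-machine", "SSHKeyPath": "/home/me/.docker/machine/machines/old-machine/id_rsa"}' \
  > machines/new-machine/config.json

# Rename every reference from the old machine to the new one
sed -i 's/old-machine/new-machine/g' machines/new-machine/config.json

cat machines/new-machine/config.json
```

In the real workaround you would run the sed against ~/.docker/machine/machines/new-machine/config.json after the cp -r step.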

+1 for the export/import command.
+1 for multi key support.

Not being able to move management workstations is simply unacceptable and creates a massive single point of failure.

e.g. Lost computer, theft, fire, hardware failure, software failure, team access issues, admin issues etc...

Docker-machine was not conceived with that in mind - this is a fast-food tool for single-developer provisioning. This is it, plain and simple, and it works - and it is free!

Some features mentioned here are implemented in the Docker EE offering (like teams and RBAC).

If people can't be thankful, try at least to be reasonable.

I don't believe machine-share has been mentioned in this thread yet:

machine-export <machine-name>
>> exported to <machine-name>.zip
machine-import <machine-name>.zip
>> imported

Hey @andrevtg

Sorry if I seemed ungrateful. I really love docker-machine and thought this would make it even better.
I think the ability for single developers to easily move between computers would be really helpful.

If my schedule frees up I might take a stab at learning Go and try to add this feature.

Hey @dmitrym0

I think this ticket was more focused on the lack of ability to connect to existing remote docker hosts.
That being said, machine-share seems useful when moving local docker hosts.
Thanks for the suggestion.

@Jared-Harrington-Gibbs machine-share specifically solves the problem of "_adding existing docker-machine hosts_". We deploy various apps via docker-compose, and it works well for us. We export the set of certificates with machine-share and distribute it to all the members of the team that need it. It's not ideal but it works ok.

I tried using the generic driver to add an existing Docker host; it added successfully after several retries.

@sneak This solution doesn't seem to work for me, as the paths in config.json will not be pointing to the correct files. I would assume docker-machine relies on the config.json file, but I could be wrong; I haven't tested it yet.

This presents a problem for me, because I may be executing from an environment where I might not know the exact path the files are stored in. Maybe there could be an option for a relative path?

If the problem is manpower, I could spare a few days to try to provide a solution for the community. I would prefer to have some direction from the maintainers though. Does [Resolved] in this case indicate a decision not to support this feature natively, or is it already supported somewhere that I'm not seeing?

This is a feature that is NOT INTENDED to be implemented at all. It makes no sense to expect the remote machine to store private keys that are not supposed to exist anywhere except on the original developer's machine.

Even Docker EE makes sure that a different "client bundle" is generated each time it is requested, and does not store them on the host.

People here are asking for a feature that is simple to implement, but that makes no sense at all to be implemented due to a reasonable security constraint.

@andrevtg The request is not for the VM itself to store the keys (which would make the keys entirely useless), but for the docker-machine client (which is an actual application, made of code) to provide a way for users to voluntarily transmit keys from one client to another.

One simple way to do this might be to provide a docker-machine export command that bundles up the keys in to an archive, and a corresponding docker-machine import command for a recipient of this archive to import the machines described in the archive. Encryption/decryption at the boundaries of transmission of the keys is then the users' business (they can use PGP email, SFTP, whatever they want).
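Such an export/import pair could be little more than tar over the machine's folder plus the shared certs. A minimal sketch follows, using a local demo directory in place of the real ~/.docker/machine store; all paths and names here are illustrative assumptions:

```shell
# Demo store standing in for ~/.docker/machine (illustrative only)
mkdir -p machine-store/machines/mymachine machine-store/certs
echo '{"Name": "mymachine"}' > machine-store/machines/mymachine/config.json

# "export": bundle one machine's config plus the certs directory
tar -C machine-store -czf mymachine.tar.gz machines/mymachine certs

# ...transmit mymachine.tar.gz securely (PGP mail, SFTP, etc.)...

# "import" on the other workstation: unpack into its machine store
mkdir -p restored
tar -C restored -xzf mymachine.tar.gz
cat restored/machines/mymachine/config.json
```

Note that a real config.json embeds absolute paths, which would still need rewriting on the destination, as earlier comments in this thread describe.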

Exactly. docker-machine create makes a huge configuration file and keys that novices don't understand or know how to replicate. Maybe creating an SSH key is as simple as adding a key to the hosts file, but that is not documented. Which container do we add it to? And how do we set it up on a different machine and notify docker-machine about it? Can we still use docker-machine use, or do we have to manually set all the environment variables?

Ideally, there should be a docker-machine add [name] to add a key, docker-machine export to export the configuration files, and docker-machine import to import those configuration files on another machine. It would also be nice to have docker-machine rm [name] to revoke access to keys.

@dhrp I got a Docker server running on an RPi. I would like to access it from my localhost with docker-machine. So all I would have to do is:
docker-machine create --driver none -url=tcp://raspberry.local:22 rpihost
Does this work?

So @360disrupt, did it work? I'm struggling with the same use case.

So @360disrupt, did it work? I'm struggling with the same use case.

No, not yet.

Hi everyone. Has this not been solved yet? We use Docker for every app in the company I'm working for. Our biggest problem is related to sharing the machines. I tried to use the generic driver multiple times, but the problem with that approach is that when you create a machine with the generic driver and connect it to an existing server, the creation drops the containers that are currently running on the server.

The only way that we've found was to build a simple script in Python that imports and exports the machine that the developer wants to access. This script copies all the configuration files and the certs from the machine owner to the new client. It works well and we're not having any problems with it; it's just that I cannot believe there isn't an official way to share machines without having to share our private certs.

I'm working on a project and I will publish it on GitHub to solve this problem in a different way. Basically, I'm building a PaaS that will hold all the certs of all the machines that we have here in our company. When someone wants to deploy or do something like that, they just need to connect to the PaaS, and not to the server. It's like a tunnel. Soon I will release the first version of this PaaS.

+1 for docker-machine add or docker-machine create --existing

this solved my problem: machine-share :computer: :rabbit2:

@dweomer Isn't adding existing docker machine from another computer a very common and basic usage case?

@atemerev: No, I do not think that it is. As I understand it, Docker Machine exists to create/provision hosts that are Docker-enabled.

That being said, there exists the somewhat less-than-obvious generic driver I think @tdy218 should be using. The generic driver will take over a host and re-provision it. All that it requires is a running host with an SSH daemon and a user on that host with password-less sudo access (or just root). This re-provisioning is non-destructive in that an existing Docker installation will at most be upgraded.

I know this looks to be a settled issue, but I just wanted to state I have to agree with @atemerev: as a development team we have the need to connect to other provisioned machines on an almost weekly basis.

this solved my problem: machine-share 💻 🐇

I can confirm that this npm package does work.
