Moby: Document how to connect to Docker host from container

Created on 5 Jul 2013  ·  263 Comments  ·  Source: moby/moby

I had some trouble figuring out how to connect to the docker host from the container. Couldn't find documentation, but did find IRC logs saying something about using 172.16.42.1, which works.

It'd be nice if this behavior and how it's related to docker0 was documented.

All 263 comments

When you look inside network.go you find that docker probes for internal networks that are not routed.

First, 172.16.42.1 is guessed as a bridge address, then others.

So documenting this won't help much. It's a dynamic scheme you cannot rely on.

I think what you require is rather a way to define the addresses used for the bridge and clients.
Could that be?

I think the requirement is clear from the issue title. There needs to be an easy and well-documented way to talk to the host from the container, however it's implemented.

+1 It would be really nice to have a good way to connect to the host system

+1, the 1.0 branch will define an introspection API so that each container can interact with the host in a scoped and controlled way.

This is currently planned for 0.8. 


@solomonstre
@getdocker

On Thu, Aug 8, 2013 at 1:10 PM, E.J. Bensing [email protected]
wrote:

+1 It would be really nice to have a good way to connect to the host system

Reply to this email directly or view it on GitHub:
https://github.com/dotcloud/docker/issues/1143#issuecomment-22351792

So how can I connect to the docker host from within a container? I am trying to connect to a docker container via the host port rather than the container's private IP.

@gerhard: an introspection API is planned for 0.8. Meanwhile, if you want to access the Docker API from the containers, you can set up Docker to listen on the IP address of the Docker bridge.

To do that, you would:

  • create a bridge

ip link add docker0 type bridge

  • assign an IP address to it

ip link set docker0 up
ip addr add 172.17.0.1/16 dev docker0

  • start docker and bind the API to the bridge

docker -d -H 172.17.0.1:4242

Now you can access the Docker API from your containers.

Currently (version 0.7) docker does not reliably support granting unlimited access to its own control socket to one of its containers. The workarounds explained in this thread are hacks which are not guaranteed to work, and if they do, they might break at any time - please don't use them in production or expect us to support them. Since there is no official feature to document, this doc issue can't be fixed.

To discuss hacks and workarounds for missing features, I recommend either the _docker-user_ mailing list, or the _#docker_ irc channel on Freenode.

Happy hacking

@shykes Is there another issue that tracks the creation of such a feature, in that case?

By the way, to give motivation for such a feature: this is useful when locally testing a server (where I would've used vagrant in the past) and I want to connect a server in the container to a database or other server running on my dev machine (the docker host).

I am already sold on the value of this feature :)

On Mon, Dec 2, 2013 at 9:09 AM, Caleb Spare [email protected]
wrote:

By the way, to give motivation for such a feature: this is useful when locally testing a server (where I would've used vagrant in the past) and I want to connect a server in the container to a database or other server running on my dev machine (the docker host).

Reply to this email directly or view it on GitHub:
https://github.com/dotcloud/docker/issues/1143#issuecomment-29636528

I am on Fedora 20 with docker 0.7.2, setting up Docker UI. I had to open the port on which docker daemon listens so the firewall does not block it:

  • firewall-cmd --permanent --zone=trusted --add-interface=docker0
  • firewall-cmd --permanent --zone=trusted --add-port=4243/tcp

After that _docker-ui_ was able to connect to the docker daemon control socket.

HTH
There is a clear and legitimate need for such a feature.

I'm sorry, if I'm keeping a die hard thread alive.

The title of this issue says: "How to connect to host from docker container".
I don't see how this relates to the docker inspect feature. The inspect feature is used on the host-side to find the IP of the container, if I'm not mistaken.

I think the issue raised by bkad is finding the host IP from within the container. Granted, I'm no networking wizard, but isn't it fairly safe to assume that the IP gateway (from inside the container) maps to the host?
(Assuming one doesn't configure a bridge setup or something.)

Using the gateway for 0.0.0.0 from netstat -nr I certainly had no problems reaching a server running on my host machine. I suspect that the gateway IP is static (once docker is started); can anybody confirm that?

An alternative would be to pass my host's public IP to the container using environment variables, but the public IP may not be static. And whilst hostnames might work better in production, they are hard to use locally.

I still would prefer a way to call from the docker container to the host through the loopback and appear as 127.0.0.1 on the host. Or, if security is a concern, another loopback device that always has the same IP.
Or maybe the thing I found doesn't expose my communication to the public? Like I said, I'm no network wizard :)
Note: if using the IP gateway to call the docker host is the "right way", can't we document it?

For those looking to find the gateway ip from container /proc/net/route is probably the right place to read it.
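To make that concrete, here is a hedged sketch of parsing the default gateway out of /proc/net/route; the sample table below is illustrative stand-in data (on a real system you would read the file itself), and the gateway column is a little-endian hex IPv4 address:

```shell
# Sample /proc/net/route content; the default route has Destination 00000000
# and the Gateway column is the address in little-endian hex.
route_table='Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
eth0 00000000 0111A8C0 0003 0 0 0 00000000 0 0 0'

# Grab the hex gateway of the default route.
hex=$(printf '%s\n' "$route_table" | awk '$2 == "00000000" {print $3}')

# Convert each hex byte to decimal, reversing the byte order
# (little-endian on x86: 0111A8C0 -> C0.A8.11.01 -> 192.168.17.1).
gateway=$(printf '%d.%d.%d.%d' \
  "0x$(echo "$hex" | cut -c7-8)" \
  "0x$(echo "$hex" | cut -c5-6)" \
  "0x$(echo "$hex" | cut -c3-4)" \
  "0x$(echo "$hex" | cut -c1-2)")
echo "$gateway"   # → 192.168.17.1
```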

The motivation for this feature includes various metadata services. Exposing things from the EC2 metadata service would be nice, as would distribution of credentials and more complex structured data that doesn't fit into environment variables.

@jonasfj: there is actually an easier way, now that Docker supports bind-mounting files from host to container. You can bind-mount the Docker control socket, i.e. : docker run -v /var/run/docker.sock:/var/run/docker.sock …; this is easier than fiddling with networking rules.

I am not sure how the docker socket helps. I still think the issue should be reopened, there is no documentation for the following scenario:

1) On the 'host' a service runs on port 8080 (say 'etcd')
2) From that host a docker container is started
3) How can the service on port 8080 on the host be reached from the docker container? What would be the URL/IP?

@jpetazzo How does setting the docker.sock come into play to solve the above problem?

@vrvolle, exposing docker.sock doesn't solve the original issue described by @bkad.
But one would expose a different unix domain socket for communication between host and docker-container.

For example, if you wanted to expose mysql from the host you would expose the mysql socket: /tmp/mysql.sock.

Or if you, like me, have a metadata API through which containers should be able to query the host for various useful things, you create your own unix domain socket and expose it to the container. HTTP over unix domain sockets should work very well. Also, you don't have all the network configuration and security issues.

I just had to read up on 'unix domain sockets'.
But:

http://stackoverflow.com/questions/14771172/http-over-af-unix-http-connection-to-unix-socket

claims there is no URL and therefore no usual client can use that mechanism out of the box.

On the other hand:

http://stackoverflow.com/questions/14771172/http-over-af-unix-http-connection-to-unix-socket

shows that it is somehow possible to use such a socket.

But still, I would simply like to have an IP address and port which a program inside a docker container could use. I will -- for now -- use

 netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'

Should I create a new issue or can someone reopen this one.

@vrvolle
I'm no networking guy, but I imagine there are some tricks one can do to proxy localhost into the container over a unix socket, and then inside the container proxy the unix socket to the container's localhost (loopback device)...

It would be nice to document how to do that. But it seems that it's not necessarily a feature docker needs to actively support.

There are multiple ways to address that, depending on what exactly you want to achieve.

If you want to connect to a service running on the host (any kind of service, e.g. some API or database that would be running straight on the host), you have to figure out the IP address of the host.

One way is to rely on the fact that the Docker host is reachable through the address of the Docker bridge, which happens to be the default gateway for the container. In other words, a clever parsing of ip route ls | grep ^default might be all you need in that case. Of course, it relies on an implementation detail (the default gateway happens to be an IP address of the Docker host) which might change in the future.

Another way is to use the Docker API to retrieve that information. Then, the problem becomes "how do I connect to the Docker API from a container?", and a potential solution (used by many, many containers out there!) is to bind-mount /var/run/docker.sock from the host to the container. This has a big downside: the API becomes available in full to the container, which might do bad things with it.

In the long term, Docker will expose a better introspection API, allowing access to that information without giving away too much privilege to the containers.

TL;DR: short term, check the default route of the container. Long term, there will be an awesome introspection API.
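As a hedged sketch of the short-term approach: the parsing runs here against sample `ip route ls` output captured from a container (real addresses vary per host, and the gateway-is-the-host assumption is the implementation detail mentioned above):

```shell
# Sample `ip route ls` output as seen inside a container:
routes='default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.4'

# The gateway of the default route is (currently) the Docker host's bridge IP.
host_ip=$(printf '%s\n' "$routes" | awk '/^default/ {print $3; exit}')
echo "$host_ip"   # → 172.17.42.1
```

On a live system you would pipe `ip route ls` directly instead of the sample variable.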

I also have a problem, due to upstart. I can't find anything in /var/run/docker.sock, and I used the command "-v /etc/run/docker.sock:/etc/run/docker.sock" but nothing happened.
I think the issue is due to some new updates to kernel capabilities. Please give me a brief note on this issue, and the full command to solve it. Thanks.

Use -v /var/run/docker.sock - not /etc (which is normally reserved for conf files).

any update about this in the new 1.0.0 release?

There is nsenter - but I think the encouraged way is to run sshd at this
stage.

On Friday, June 13, 2014, Camilo Aguilar [email protected] wrote:

any news about this in the new 1.0.0 release?


Reply to this email directly or view it on GitHub
https://github.com/dotcloud/docker/issues/1143#issuecomment-45958769.

Michael D Neale
home: www.michaelneale.net
blog: michaelneale.blogspot.com

vrvolle, thanks for that. A lot of people like us are looking for a little tidbit like this

netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'

Docker auto updating /etc/hosts on every container with the host IP, e.g. 172.17.42.1 and calling it for example dockerhost would be a convenient fix.
I guess for now we are stuck with netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'

+1 for dockerhost in /etc/hosts

+1 for dockerhost in /etc/hosts, sounds like a good idea

Editing a file in an image should never be done UNLESS an argument or flag to do so is specifically given. Also, it isn't mandatory that 100% of images will follow the LSB, so there might not even be an /etc directory. The filename to use for containing the host IP should also be specified by a command argument.

docker --host /etc/hosts

+1 for dockerhost in /etc/hosts

+1 for an /etc/hosts entry

@Sepero docker is already populating /etc/hosts with the linked containers, so your arguments don't really hold.

@matleh That's true, but I think the correct way isn't to populate /etc/hosts directly; or at least, let us specify the location in case we don't have it.

Anyway, dockerhost entry in hosts.

I'd also like to have the docker host IP in /etc/hosts.

+1 on docker host IP in /etc/hosts

Perhaps a better way to facilitate access to network services on the host would be to make it easy to set up iptables entries which forward ports in reverse, i.e. from the container to the host. I think the -p or --publish option would be nicely complemented by a -s or --subscribe option which has the reverse effect. I guess I would have called them --forward and --reverse, though. Whatever you call it, this seems to me a much more consistent approach than the other suggestions here.

This might be stating the obvious, but a simple way to do this which works currently and is perhaps less implementation dependent would be to determine the ip address on the host before starting the container and set an environment variable in the container appropriately. Along these lines:

#!/bin/bash
HOSTNAME=$(hostname)
HOST_IP=$(ip route | awk '/docker/ { print $NF }')
docker run -e HOST=$HOSTNAME -e HOST_PORT=tcp://$HOST_IP:8000 mycontainer

This would be essentially parallel to the way --link works. It does still depend on the bridge having a name which matches /docker/ and there being only one such bridge, but the bridge is set when the docker daemon is started. It would be nice if docker info would give us the name of the bridge in use and/or the host ip address. Another thought would be to add a --link-host PORT option to docker run which would do essentially the above.

IMO this is the best option. With --link-host, the container wouldn't need to know whether the service it is accessing is on the host or in another container.

I'm not sure how others are invoking their containers, but when I run them with --net=host I'm able to see the same networking setup as my docker host. Without this switch I get a standalone network stack, as described in the docker-run man pages.

$ docker run -i --net=host fedora ip route ls
default via 10.0.2.2 dev eth0  metric 1
10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15
127.0.0.1 dev lo  scope link
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1
192.168.59.0/24 dev eth1  proto kernel  scope link  src 192.168.59.103

running without the switch yields this:

$ docker run -i fedora ip route ls
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.4

What we're looking for is a method to get the IP address of either 10.0.2.15 (the IP of my docker host's eth0 i/f) or 172.17.42.1 (the IP of my docker's docker0 bridge i/f).

I don't particularly care for the approach of adding some host to my container's /etc/hosts file; that seems a bit hacky to me. Rather, having this info exposed internally so that I can grab it when needed would seem to be the more prudent way to go here.

+1 for --link-host or some other intuitive way

+1 for an /etc/hosts entry
Seems pretty convenient and follows current communication conventions.

Just to clarify my earlier comments: what's great about --link <container>:<alias> is that it exposes a _service_ through an _alias_. Similarly, a mechanism for exposing the host to the container should provide access to a specific service, not just an ip address. The container would access that service via an alias; it shouldn't need to know where the service really is. The option should have the reverse semantics of -p and behave like --link. In other words --link-host <ip address>:<port>:<alias> would set up env variables and a /etc/hosts entry, and would set up iptables entries as necessary, i.e. if the service on the host is listening on an ip address which is otherwise inaccessible to the container.

@altaurog How about something like this?

docker run --name someservice -d host-exposing-image --expose 8000

someservice would just be any name that you can then --link to, and host-exposing-image would be a special image that forwards host ports on exposed ports. Probably the idea would be implementable by sharing the host's /var/run/docker.sock with the image.

Maybe something like this already exists, dunno.

Edit:

docker run --name someservice -d --expose 8000 host-exposing-image

Forget the above; I hadn't read all of the comments here (still haven't). This is what works for me on docker-osx (and sorry for not posting to the mailing list instead):

docker run --rm -ti -e HOST_IP="$(docker-osx ssh -c 'route -n' 2> /dev/null |
  awk '/^0.0/ { print $2 }')" debian:jessie

+1. --link-host is a good idea as well as just having a dockerhost entry in /etc/hosts

I created a new issue since this issue is closed (but not resolved): #8395

+1 for dockerhost or any other convenient way.

+1 for any convenient way

As the dockerhost approach got some votes here, the easiest way I found (from the comments to the 3 related issues #8395 #10023) to have such a hosts entry is to add the argument --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') when running the image, e.g.:

docker run --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') ubuntu ping -c2 dockerhost

While this adds the required /etc/hosts entry it is still a shame that dockerhost isn't there by default. When distributing an image I have the choice to either

  • instruct the users to add the above parameter
  • make some scripts running in the container that adapt the config files based on the routing table

I personally don't understand why dockerhost isn't there by default; it would make creation and distribution of images that access services typically on the host (XServer, CUPS, Pulseaudio) so much more convenient.

+1 for dockerhost. I would actually expose this as an env var, an /etc/hosts entry, and a CLI flag. It's bound to be used in many ways.

:+1: for dockerhost

inside your container:

cat << 'EOF' > /etc/profile.d/dockerhost.sh
grep dockerhost /etc/hosts || echo $(ip r ls | grep ^default | cut -d" " -f3) dockerhost >> /etc/hosts
EOF

works for me whenever I log in (with the root account)

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost :yum:

+1 for dockerhost

+1 for dockerhost :+1:

Ended up writing this script, someone might find it useful:

#!/bin/bash

SED="$(which sed)"
NETSTAT="$(which netstat)"
GREP="$(which grep)"
AWK="$(which awk)"
CAT="$(which cat)"


$SED '/dockerhost$/d' /etc/hosts > /etc/hosts.tmp
DOCKERHOST="$($NETSTAT -nr | $GREP '^0\.0\.0\.0' | $AWK '{print $2}')"

echo "$DOCKERHOST dockerhost" >> /etc/hosts.tmp

$CAT /etc/hosts.tmp > /etc/hosts

rm -rf /etc/hosts.tmp

+1 dockerhost

As there is quite some interest in this topic, I think it would make sense to open a new feature request instead of discussing a closed ticket.

I opened https://github.com/docker/docker/issues/8395 some time ago -- closed as well. Still no documented solution

OK, while there are workarounds, I think the main reason is probably that accessing the host is a somewhat isolated use case.

As there is a docker links feature, I would reason it makes sense to be able to provide a link to the host.

Hello!

FYI this is now documented:

Note: Sometimes you need to connect to the Docker host, which means getting the IP address of the host. You can use the following shell commands to simplify this process:

$ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
$ docker run --add-host=docker:$(hostip) --rm -it debian

Thanks, @icecrime.

Can a docker person please lock this issue so folks stop updating it?

+1 for dockerhost

+1 for dockerhost - having to run a local command like the hostip alias is not compatible with e.g. docker-compose, docker-swarm, etc.

+1 for dockerhost

@icecrime, that gives the host gateway and not the host IP on my ubuntu. Shouldn't it just be the command below?

$ ip address show

+1, something usable without having to run commands would be ideal

How is this still unresolved? Running commands in a container to make container to host networking functional is completely unacceptable.

+1 for anything simple
BTW the workaround with ip works on my Ubuntu, but doesn't work on OS X, right?

+1 This is a 2 year old issue and a useful feature. I want an easy way to connect to the docker remote api from inside a container.

@ianchildress if you have your daemon configured to accept socket connections, simply bind-mount the socket into your container, e.g. docker run -d -v /var/run/docker.sock:/var/run/docker.sock myimage

+1 for dockerhost
Found this issue when browsing solutions to use case: xdebug.remote_host=dockerhost

+1 for dockerhost

@thaJeztah and how do I get to hostname or IP address from there?

@mbonaci I was responding to @ianchildress, who wanted to connect to the Docker API inside a container (for which using a socket connection is the general approach)

I'm confused. @icecrime said above that this is now documented, but the link he gave is dead. I can't quickly find the quoted portion of the documentation. I note that the apt-cacher-ng example uses dockerhost, but doesn't define what it is (#11556). The only reference I could easily find was this thread. I've searched all of docker's source and documentation and it doesn't seem to be mentioned in this context.

@pwaller Since March, we've split that huge long document into separate references. The new location for this material is:

http://docs.docker.com/reference/commandline/run/#adding-entries-to-a-container-hosts-file

+1 for dockerhost

@moxiegirl I think the --add-host param is satisfactory. Thanks

And how does one connect to the host from a docker container? For example, for doing a git pull? Without using volumes?

@a93ushakov it's all explained in the link that @moxiegirl provided.
When you're doing docker run, add the following parameter: --add-host=dockerhost:replace_with_docker_host_ip, which creates an entry in the container's /etc/hosts file.
Which, of course, means that you can refer to your docker host from within that container using its name, dockerhost.

@mbonaci > Which, of course, means that you can refer to your docker host from within that container using its name, dockerhost.

Through ssh?

@thaJeztah > if you have your daemon configured to accept socket connections, ...
How to do this?

@a93ushakov SSH: If you have it installed and running on the host (and its port is not blocked), yes.

@a93ushakov @thaJeztah refers to Unix socket connection. I think that's the default - see if you have a file /var/run/docker.sock on your host.

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost
It would be nice to get this hostname without any extra steps.

1 for dockerhost

+1 for dockerhost

Can't seem to connect to the docker host from inside a container. Any ideas what I'm doing wrong?

$ hostip=$(ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print $2 }')
$ nc -l -p 1234 &
[1] 17361
$ docker run --add-host=docker:$(hostip) --rm -it hiromasaono/curl curl docker:1234
curl: (7) Failed to connect to docker port 1234: Connection refused

+1 for dockerhost

When I add the IP of the Docker bridge it works.

First listen on port 1234 with netcat

$ nc -l -p 1234

Get the bridge's IP

$ ifconfig docker0 | grep 'inet addr'
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0

Then connect

$ docker run --add-host=docker:172.17.42.1 --rm -it hiromasaono/curl curl 172.17.42.1:1234

Then I see the response

GET / HTTP/1.1
User-Agent: curl/7.35.0
Host: 172.17.42.1:1234
Accept: */*

+1 for dockerhost
Hell, when?

+1 for dockerhost that works on Mac as well, i.e., handling the VM transparently

Could someone please tell me how to call the Docker API from inside the container? I am not a linux guy. I have already started the container with -v /var/run/docker.sock:/var/run/docker.sock .
Everyone is talking about how to do the mount, but no one has mentioned how to call the API from inside.

I tried calling it using curl, but it didn't work. I used the host IP, e.g.

curl -XGET http://hostip:2375/images/json

This is how I started my daemon: docker -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

Any help will be greatly appreciated.

@jprogn the github issue tracker is not meant for general support questions; it's better to ask these questions in the #docker IRC channel, the docker-users group on Google, or forums.docker.com

This question is related to the original topic, which hasn't been answered completely. Please see the title above.

Can someone please reply how to call the docker API from inside a container?

@jprogn please follow my comment above; this is an issue tracker, used for tracking bugs and feature requests; not a support forum for using docker; use the other methods I mentioned above https://github.com/docker/docker/issues/1143#issuecomment-146924892

+1 for dockerhost

On my CentOS 7 docker host I can't get a route from the container to the host:

[root@docker-host-fkb slehmann]# ifconfig docker0 | grep 'inet'

inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
inet6 fe80::42:a7ff:fe4d:4cb2  prefixlen 64  scopeid 0x20<link>


[root@docker-host-fkb slehmann]# docker run --add-host=docker:172.17.42.1 --rm -it hiromasaono/curl curl 172.17.42.1:1234

curl: (7) Failed to connect to 172.17.42.1 port 1234: No route to host

Any ideas?

+1. Maybe there should be an environment variable for this.

I myself would like it the other way around; a link from my host machine to the docker containers by name

+1 for dockerhost environment variable

It's really tricky to combine a bespoke DNS entry accessing the host without hard-coding 172.17.42.1, e.g.

extra_hosts:
     - "docker:172.17.42.1"

+1 for dockerhost wherever (/etc/hosts or ENV)

+1 this is still needed!

My +1 too, because adding ip route list dev eth0 | grep -Eo 'via \S+' | awk '{ print \$2 }' to every project (because in dev, all projects are on the same host and need to be able to call each other) is starting to seem like a bad hack.

How do we solve this problem in a cluster, e.g. Docker Swarm, where we don't know which machine a container will be assigned to when we run it? The docker0 gateway IP could be different from one host to another within the swarm cluster, so we can't just run the ip route command on one host and then assume the IP is the same for all hosts.

I would also like the ability to map a container port to a port on the docker0 bridge without having to know what the IP of the docker0 bridge is. Something like this:

eval $(docker-machine env --swarm swarm-master-node)
docker run -d -p "$HOST_GATEWAY_IP:80:80" my-image

$HOST_GATEWAY_IP gets replaced with the same IP you would get from running the ip route command on the host the container ultimately gets deployed to in the cluster.

This would need to be supported for any other commands that involve IP's, e.g. the --dns option on docker run.

I found this command to be easier:

ip ro | grep docker | sed 's|.* \(\([0-9]\+\(.[0-9]\+\)\{3\}\)\)\s*|\1|'
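A hedged, somewhat more readable equivalent of that one-liner using awk, run here against sample `ip route` output (real output varies per host; on a live system you would pipe `ip route` directly):

```shell
# Sample `ip route` output from a Docker host:
sample='default via 10.0.2.2 dev eth0
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1'

# Print the source address of the docker0 route, i.e. the bridge IP,
# by locating the "src" keyword instead of counting on a fixed column.
printf '%s\n' "$sample" | awk '/docker0/ {for (i=1;i<=NF;i++) if ($i=="src") print $(i+1)}'
# → 172.17.42.1
```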

+1 for dockerhost in /etc/hosts

+1 dockerhost

+1 for having dockerhost in /etc/hosts

+1000 for dockerhost

+1 dockerhost

+1

+1, or more.

I'm running a web server in a container and need to connect it to my mysql running on the host.
The ip of the host changes with dhcp so something dynamic is a must.

@wederbrand an alternative approach (and probably better performing) could be to connect to the mysql socket, for example;

docker run -v /var/lib/mysql/mysql.sock:/mysql.sock mywebapp

That would make the MySQL socket available as /mysql.sock inside the container

@thaJeztah That could be an option. However, the docker image I'm using is also used for other environments where the mysql server is on a remote server, and the image has a configuration for host:port for the database.

I'd like my container to behave as close to the one in production as possible, so no tinkering with connection properties except to set the host and port.

+1

+1

+1

+1

@peterbollen @radek1st @BradRuderman @aldarund @wederbrand @pataiadam @jllado @geniousphp @coreylenertz @dgtlmoon @xlight (all +1 in December 2015 alone)
_this_ issue is closed as well as #8395
I do not think adding +1 to an old issue will help. Create a new one (I did with #8395) or try some other route to address this.

Thx!

+1

+1 dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost, c'mon...

+1 for dockerhost

Using docker-compose and scaling n-ary containers in a swarm doesn't naturally provide a means to perform node IP address lookups on the fly across all machines.

This, as stated above and modified to the gateway, is not an option:

extra_hosts:
     - "docker:172.18.0.1"

👍 for dockerhost as well...

+1 for dockerhost

+1

+1

+1

+1 dockerhost

+1 dockerhost

+1 dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost.
Since this issue is closed, I'm opening a new ticket.

+1

+1

To get the host in a docker-machine setup, you may use this command:

docker-machine ssh "${DOCKER_MACHINE_NAME}" 'echo ${SSH_CONNECTION%% *}'

It reports the machine to which you connect via ssh, so it should work reasonably well both for local machines and for remote ones, as long as there is no NAT along the way. It would be nice if docker-machine inspect reported this value somewhere as well, if that's technically feasible.
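For reference, the ${SSH_CONNECTION%% *} expansion strips everything after the first space. SSH_CONNECTION holds "client_ip client_port server_ip server_port", so the first field is the address you connected from (your Docker host, in this use case). A quick illustration with a made-up value:

```shell
# SSH_CONNECTION format: "client_ip client_port server_ip server_port".
# The value below is made up for illustration.
SSH_CONNECTION='192.168.99.1 53712 192.168.99.100 22'

# %% * removes the longest suffix starting at a space,
# leaving only the client IP (the machine you SSH'd in from).
echo "${SSH_CONNECTION%% *}"   # → 192.168.99.1
```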

+1

+1 for dockerhost . Sounds like a valuable feature with practically no effort to implement.

For now if you need to access dockerhost within containers created from images you control, you can add this gist to the entrypoint.sh script for your image: https://gist.github.com/dimitrovs/493678fd86c7cdf0c88312d9ddb4906b

Or I guess you can add it to /etc/rc.local if your image doesn't use a shell script as its entrypoint. What this gist does is check for dockerhost in /etc/hosts and add it if it's not there. It should happen every time the container starts.

Get the gateway ip out of /proc/net/route:

export ipaddr=$(printf "%d." $(
  echo $(awk '$2 == "00000000" {print $3}' /proc/net/route) | sed 's/../0x& /g' | tr ' ' '\n' | tac
  ) | sed 's/\.$/\n/')

@dimitrovs Can you show an example config? I tinkered around with that and couldn't get the correct results.

What results were you getting @amcdnl ? You can either put the gist I linked in the entrypoint.sh file so it runs every time the container starts or in /etc/rc.local inside the container. You will have to do this when you are building the image, but the script itself will execute when a container starts.

+1 for docker host

@dimitrovs I ended up writing a script that would generate a compose on runtime.

StartDocker.sh

#!/bin/bash
built_docker_file="docker-compose.dev.built.yml"

rm -rf docker-compose.dev.built.yml
localhost_ip="$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }')"
sed -e "s/\${localhost}/$localhost_ip/" docker-compose.dev.template.yml > $built_docker_file

docker-compose -f $built_docker_file build
docker-compose -f $built_docker_file up

docker-compose.dev.template.yml

version: '2'

services:
  nginx:
    container_name: sw_nginx
    image: nginx:latest
    ports:
      - 80:80
    links:
     - search
    volumes:
      - ./Web/wwwroot:/var/www/public
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    extra_hosts:
      - "dockerhost:${localhost}"

@amcdnl Good workaround. Note that docker-compose supports (environment) variable substitution.

e.g. (bash):

$ export localhost=$(...)
$ docker-compose (...)

@sebastiannm my suggestion should be independent of the host because it executes inside the docker image, but if you are running it on Mac in VirtualBox then the IP you will get is the IP of the VirtualBox VM, not your Mac IP. I think this is the requirement we are discussing. If you want to know the Mac IP from inside a docker instance then a different approach is needed.

+1 for dockerhost.

But for now, I'm doing a hack similar to the one posted above, which allows you to dynamically get the host IP address with a simple script that works both on Linux and OSX: http://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach#38753971
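The linked answer boils down to picking the right routing command per OS and extracting the gateway field. A hedged sketch of just the parsing, run here against captured sample output (on a live system you would pipe `ip route` on Linux, or `route -n get default` on OS X/BSD, into these functions; the sample addresses are illustrative):

```shell
# Extract the gateway from Linux `ip route` output.
parse_linux_gateway() { awk '/^default/ {print $3; exit}'; }

# Extract the gateway from BSD/OS X `route -n get default` output.
parse_darwin_gateway() { awk '/gateway:/ {print $2; exit}'; }

printf 'default via 172.17.0.1 dev eth0\n' | parse_linux_gateway
# → 172.17.0.1
printf '   route to: default\n    gateway: 192.168.65.1\n' | parse_darwin_gateway
# → 192.168.65.1
```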

+1 for dockerhost.

+1 for dockerhost

+1+1 dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

I'm unable to connect to a basic host netcat server from within a docker container when --net=bridge (the default) via the various techniques discussed in this issue.

@frankscholten I'm unable to reproduce your successful netcat test

I've made a stackoverflow issue describing the exact issue: http://stackoverflow.com/questions/38936738/communicate-to-docker-host-from-docker-container
and wanted to post here in case it helps someone in the future

While all of this talks about communicating from container to host using the docker0 IP in bridge mode, I want to know whether a container can also talk to the host's public IP (say, its original eth0 IP). I have not been successful in this. Appreciate your reply.

@sburnwal I don't see why you wouldn't be able to communicate with the host's eth0 ip. The container will send a request to its default gateway (docker0) and the host should reply without further forwarding because it knows it has the ip. Are your pings to the eth0 ip from the container timing out or you get no route to host or what's happening? Is the container able to connect out to the internet?

Unfortunately this is not working for me. While I can ping the container IP from the host, I am not able to ping the host (LAN/eth0 of the host) ip from the container. I am using docker 1.9.1. I am stuck here as I need my container to communicate to web server listening only at the eth0 ip of the host.

I customized docker0 using the option:
/bin/docker daemon --bip=169.254.0.254/24 --fixed-cidr=169.254.0.0/24

So I have these interfaces on my host (not listing veth* interfaces here):

[root@pxgrid-106 irf]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.0.254  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:84ff:fe87:d510  prefixlen 64  scopeid 0x20<link>
        ether 02:42:84:87:d5:10  txqueuelen 0  (Ethernet)
        RX packets 512  bytes 150727 (147.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 653  bytes 281686 (275.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.23.166.176  netmask 255.255.255.128  broadcast 172.23.166.255
        inet6 fe80::20c:29ff:fecc:7d0f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:cc:7d:0f  txqueuelen 1000  (Ethernet)
        RX packets 58462  bytes 12056152 (11.4 MiB)
        RX errors 0  dropped 69  overruns 0  frame 0
        TX packets 30877  bytes 18335042 (17.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

My container has ip 169.254.0.2 and I can ping this container ip from the host. Of course, I can ping from container its gateway 169.254.0.254 (docker0 ip) but I am not able to ping host eth0 ip 172.23.166.176.

I tried by completely stopping firewall and I added these explicit rules as well to firewall to the INPUT chain but no luck yet. Who can help me on this ? Or is this a bug ?

ACCEPT     all  --  172.23.166.176       169.254.0.0/24      
ACCEPT     all  --  169.254.0.0/24       172.23.166.176   

In addition, let me also give 'ip route' output on my host:

[root@pxgrid-106 bin]# ip route
default via 172.23.166.129 dev eth0 
169.254.0.0/24 dev docker0  proto kernel  scope link  src 169.254.0.254 
172.23.166.128/25 dev eth0  proto kernel  scope link  src 172.23.166.176 

@tn-osimis thanks for the suggestion, I updated it and it works nicely; here's my code for others:

docker-compose.yml

version: '2'

services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    extra_hosts:
      - "dockerhost:${localhost_ip}"

StartDocker.sh

#!/bin/bash
dev_docker_file="docker-compose.yml"

export localhost_ip="$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }')"

docker-compose -f $dev_docker_file build
docker-compose -f $dev_docker_file up

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

@magicalbob's solution is indeed magical 🎉

+1 for dockerhost

+1, by the way, why is this closed? The issue/feature request has not yet been resolved

Any updates on this ? What is the right way ?

+1

Does anyone have update on how to connect to host (host's public IP like eth0 ip and not docker0 interface ip) from the container ?

@sburnwal docker run --add-host=publicip:$(hostname --ip) ubuntu ping -c2 publicip

See @retog's answer

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

leaving it here as maybe it will help at least some of you:

I was told that the default network interface does not always have to be docker0. Also, getting the docker0 IP won't work on systems other than Linux; this was a problem for us because some of our developers were using Macs

so instead of using magic scripts to get the host IP from the network connection, I was advised to force the container's network settings rather than trying to discover them programmatically

I configured the docker network settings in my docker-compose file like this:

```
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.0.0/24
          gateway: 10.10.0.1
```

if you are only running one container, you could first create a network (`docker network create`) and connect to it with `docker run --network=<network-name>|<network-id> <image name>`

BUT if you don't want to do this (i.e. force the docker network) and really want to get the default gateway IP, you could get it more cleanly than using the `docker0` network interface, for example by parsing the output of `docker network inspect <network name>`, which includes the gateway IP (among other things):

```
...
"IPAM": {
    "Driver": "default",
    "Config": [
        {
            "Subnet": "172.17.0.1/16",
            "Gateway": "172.17.0.1"
        }
    ]
}
...
```

you could also use the `--format` option of `docker network inspect` to get only the fields that are of interest, like this:

```
$ docker network inspect bridge --format='{{range .IPAM.Config}}{{.Gateway}}{{end}}'
172.17.0.1
```

NOTE: if there were more .IPAM.Config entries you'd get all of them in the output, so additional logic of picking the right one would be needed
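If there are several IPAM entries, a little parsing handles it; here's a sketch (the use of the default `bridge` network and the picking logic are my assumptions):

```shell
# Pick the first non-empty gateway from `docker network inspect` output,
# e.g. produced by:
#   docker network inspect bridge \
#     --format '{{range .IPAM.Config}}{{.Gateway}}{{"\n"}}{{end}}'
first_gateway() {
  # reads newline-separated gateways on stdin, prints the first non-blank one
  awk 'NF { print $1; exit }'
}

printf '172.17.0.1\nfd00::1\n' | first_gateway   # prints 172.17.0.1
```

If you need the IPv6 gateway instead, you'd filter on the address family rather than just taking the first entry.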

+1 for dockerhost

Note, this would be very useful for when you simply want to use xdebug inside a php-fpm container to connect to your IDE for debugging since you obviously can't run an IDE in a container.

+1 for dockerhost

My summary of the above hackarounds:

in docker-compose.yml:

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  extra_hosts:
    # requires `export DOCKERHOST="$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }')"` in ~/.bash_profile
    - "dockerhost:$DOCKERHOST"

in ~/.bash_profile:

# see https://github.com/docker/docker/issues/1143
export DOCKERHOST="$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }')"

in nginx-conf:

location / {
        proxy_pass http://dockerhost:3000;
        proxy_set_header host $host;
        proxy_set_header x-real-ip $remote_addr;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
}
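The `ifconfig en0` line above is macOS-specific. A sketch of a more portable detection (the interface names `en0` and `docker0` are assumptions, not guaranteed defaults):

```shell
# parse the first IPv4 address out of `ip addr` output (CIDR suffix stripped)
parse_inet() {
  awk '/inet /{ sub(/\/.*/, "", $2); print $2; exit }'
}

detect_dockerhost() {
  case "$(uname -s)" in
    Darwin) ipconfig getifaddr en0 ;;               # macOS: host IP on en0
    *)      ip -4 addr show docker0 | parse_inet ;; # Linux: docker0 bridge IP
  esac
}

# export DOCKERHOST="$(detect_dockerhost)"  # then reference $DOCKERHOST in compose files
```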

I tend to use the svendowideit/ambassador docker image to work around this issue.

For example, if I want to bind to an elasticsearch node running on the Docker host:

docker run --restart always -d --name elasticsearch --expose 9200 -e ELASTICSEARCH_PORT_9200_TCP=tcp://<host>:9200 svendowideit/ambassador

Now, other docker containers can just find elasticsearch by using --link elasticsearch, while remaining agnostic as to whether elasticsearch is running in a docker container, on the host, or wherever.

+1 for dockerhost

... for now though you could do this:

$ docker build --build-arg HTTP_PROXY=172.16.42.1 .

... or in my similar case with docker-compose - I do this:

client:
        build: ./client
        extra_hosts:
            - "foobar:${HTTP_PROXY}"

set your host at build time:

export HTTP_PROXY=172.16.42.1 && docker-compose build

@rattrayalex on mac, I found this prints out the IP directly: ipconfig getifaddr en0

+1 for dockerhost

Can those of you just doing +1 for dockerhost, just add a thumbs up (reaction) to an existing comment of the same nature? +1 comments just extend/fill up this issue thread unnecessarily. There's already so many, no need to add more, but we could always use a thumbs up! 👍

I think the commenters are writing an extensive +1 to highlight that this is a common use case, with known workarounds, that should take the Docker team a matter of hours to code
Still, it's been open for 3 years 👍

So, it looks to me like it's more of an "are we there yet?" than just an emoji

@magicalbob solution in https://github.com/docker/docker/issues/1143#issuecomment-233152700 works flawlessly in every container setup and bridge I've tried so far!

solution from https://github.com/docker/docker/issues/1143#issuecomment-70052272 doesn't work when using docker compose extra_hosts

+1 for dockerhost

Still no dockerhost?

+1 for dockerhost

+1 for dockerhost

it won't happen, it was closed 3 years ago, and they don't plan to ever implement this. the same reason docker.sock is a footgun for security, dockerhost is as well. having a resolvable domain name from inside your application is a major security problem IMHO. if you must, just use the workarounds, selectively only where accessing host services by IP won't be an increased attack surface.

Don't disagree about it not happening, but I don't see how dockerhost is a security risk unless you shut down the many easy work arounds as well ....⁣


+1 for dockerhost

+1 for dockerhost

Ip of host machine is 192.168.0.208.

docker-compose file is as follows:

version: '2'
services:
  zl-tigervnc:
    image: zl/dl-tigervnc:1.5
    container_name: zl_dl_tigervnc
    restart: always
    tty: true
    ports:
      - "8001:8888"
      - "6001:6006"
      - "8901:5900"
      - "10001:22"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/data:/root/data
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "dockerhost:192.168.0.208"

A container was launched by this script. The container wants to access port 8080 on the host machine (e.g. 192.168.0.208:8080), but it doesn't work.

However, if I use port forwarding to map 8080 on the host machine to 8080 on the router (the router's IP is 63.25.20.83), the container can access the host machine's 8080 through the router (e.g. 63.25.20.83:8080).

I have tried many solutions from this page, but it still does not work.

Note, this would be very useful for when you simply want to use xdebug inside a php-fpm container to
connect to your IDE for debugging since you obviously can't run an IDE in a container.

Exactly @colinmollenhour ! Except there's an additional problem. You can ping the host, but you can't actually connect to the host's ports (e.g. a remote debugger running on 9000).

Lots of old and/or not quite right stuff on the net. Lots of people seem to think setting up an alias IP and attaching it to the lo interface should work, but doesn't.

(testing with netcat on the host and telnet in the docker container, so I'm well stripped down).

@bitwombat check your host's firewall for a port 9000 rule

@gregmartyn That was exactly it. Thanks! Would have been the first thing I checked in a simpler setup! All the layers of deception had me checking weirder stuff.

July, 2013 to 2017. Almost 4 years, how is this not a feature yet? This is clearly not an isolated use case.

It's not a feature because it doesn't align with Docker's strategy to be a multi-host deployment management solution.

I still believe there are many valid use-cases and adding the feature would not cause any harm to Docker's strategy. Anyway, here is a somewhat simple solution to resolve the docker host address from within a container which I think should work pretty universally:

ip route | awk '/^default via /{print $3}'

+1 for dockerhost

Very much needed. +1 for dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

+1 for any solution which is not a hacky workaround. Nothing works here.

Agreed. +1 for a solution. Need it for doing development and testing work on projects that use docker.

+1 for dockerhost

+1 for dockerhost

+1 for dockerhost

wow.. 4 years and going strong. +1 for dockerhost?

+1 dockerhost!!

+1 dockerhost!!!

This is probably muted. We should maybe make a new issue referencing this one.

:+1: for dockerhost... Right now I'm setting it up by env:

export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)

It makes things cumbersome.

Even something as simple as using docker-compose in a container with ssh-tunneled private registry doesn't work because Docker is so dead set on not wanting dockerhost.

NOTE: this issue is closed, so there is no point adding more comments to it :-)

If you need this feature, you can easily add a dockerhost DNS alias using --add-host and equivalent options.

Why is this not available by default? Because it only works in trivial cases. I.e.:

  • what about containers that _don't_ have network access?
  • what about containers that have _multiple_ networks?
  • what about containers running on a Swarm cluster?

If you have a good use case (i.e. "I would like to do XXX, and to do so, I'd like to have dockerhost...") as well as solid answers to the questions above (and other corner cases that might arise), feel free to open a new issue referencing this one!

Thank you.
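As a concrete sketch of the `--add-host` route mentioned above (the alias name `dockerhost` and the use of the default bridge network are my assumptions):

```shell
# Build an --add-host flag from an alias and a host IP. The IP would come
# from something like:
#   docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
add_host_flag() {
  printf '%s' "--add-host=$1:$2"
}

flag=$(add_host_flag dockerhost 172.17.0.1)
echo "docker run --rm $flag alpine ping -c1 dockerhost"
```

Inside the container, `dockerhost` then resolves to the bridge gateway via `/etc/hosts`.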

The problem is that your host IP can be dynamic, depending on which network you are on. If you are developing, connecting to the host for debugging is a must-have. The reason everybody wants dockerhost is that the existing options are neither convenient nor useful.
Docker has tons of options that are not relevant to everyone; what makes this one any different? Why not have an option in Docker to enable dockerhost if needed? That would make 99% happy, I guess.

@jpetazzo

what about containers that don't have network access?

Then dockerhost won't work.

what about containers that have multiple networks?

Then dockerhost won't work.

what about containers running on a Swarm cluster?

Who cares?

If you're developing some software involving components outside of the docker ecosystem (like some software running inside a Windows VM on the host), then it's a pain to connect it to docker. Why does it have to be a pain?

All of the cases you have listed are not what the 226 comments in this thread are talking about, they are talking about the basic case.

Edit: Sorry, I've deleted part of my comment. I didn't mean to rant. It's just a bit frustrating, this is a much needed feature for some users of docker and if you're developing software that requires it it is even more frustrating to have to hack around it for seemingly no reason.

@jpetazzo sometimes the "trivial" cases are 80% of use cases. Dockerhost would be very valuable for people using Docker in development or staging environments, or really any environments that don't rely on swarm / multi-host networking. Here are some examples where I would like to use Dockerhost:

1) Docker-Compose on Windows with private registry. We have a self-hosted private Docker registry. Developers can access the registry through SSH tunnel with port forwarding (i.e. on localhost:5000). But when Docker-Compose is running in its own Docker container there is no way to specify the private registry host. If we had dockerhost, we could use dockerhost:5000 in our docker-compose.yml file giving it access to the forwarded port on the host.

2) Any application needing to communicate with services running on the host. For example, a Web-based SSH client running in a docker container could be used to establish a SSH connection with the host if there was dockerhost. There are countless examples of services running on the host of a Docker container that can be used if there is dockerhost.

3) Reverse port-forwarding. We can have our SSH or OpenVPN server running in a Docker container and it could give clients access to services running on the host if there was dockerhost. You could setup port forwarding from the container to dockerhost.

I would love to hear any technical justification why Moby developers refuse to hear the community when it comes to dockerhost. So far I am hearing only political/business reasons.

Came across this thread while trying to find a way to attach a macvlan network interface to a swarm service... Swarm seems like a half-finished solution, the inability to expose services directly to outside world is a continuous frustration for me. It feels like Docker lost traction with real world use cases, fast-forward 4 years from when this thread started and there is still no native implementation to have containers manage themselves.

I'm relatively new to Docker. Everything I've read from the official docs makes sense. _Docker is actually a fairly easy tool to pick up._

In fact, the only thing that I've spent several hours trying to figure out is _how to connect to the host from within a container._ Everything else was easy to determine. I still don't know how to do it. A lot of the "hacks" in this forum and on SO just aren't working. We need to access the host in order to consume a legacy app that is not set to be "Dockerized" for many months.

Won't ever happen I'm afraid, Josh. The developers have made that clear. Which makes Docker worse than useless for a large class of applications; unfortunately Docker (or "moby") couldn't give a toss about those developers or those applications.


It is really frustrating that you can handle every situation with a simple declarative docker-compose file, but you have to use a thousand tricks to connect to a legacy application from your container in your development environment.
@thaJeztah It does not make a good first impression of Docker at all.

FYI, we ended up abandoning Docker in production for precisely this reason. I know the Docker devs are dead-set against it (and I get that it's not as trivial a feature as some would claim) but I just wanted to chime in: you're losing real-world customers because of stubborn refusal to even consider this issue. As a reluctant Docker "expert" at my company, I now have to caution people who ask me about it by saying "Docker is great... except if you need to communicate with anything running on the localhost."

So, again, I'm sure the issue is muted, but if you ever look back at this thread, Docker devs, this is causing real pain and is causing real Docker customers to stop using Docker. I recommend reconsidering your stubborn refusal to address the issue; just because it doesn't fit your vision of how Docker is "supposed" to be used doesn't mean it is a useless or unnecessary feature.

Not only could they be (and are) losing existing clients. We could have migrated everything to Docker if it lived up to its promise in development. This is real anti-promotion for using Docker at large scale for all processes.

The way I make it work for now is by creating a network. Here is what my docker-compose file looks like:

version: '2'
services:
  <container_name>:
    image: <image_name>
    networks:
      - dockernet

networks:
  dockernet:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1

After doing this, you can access the host with 192.168.0.1.
Might be useful to some of you since this feature is not coming anytime soon.

@deltabweb Will requests coming into apps on the host machine appear as "localhost" traffic, or will I have to modify the apps to respond to 192.168.0.1?

Thanks for your help.

@halfnibble
I haven't looked at the documentation of networks in detail, but my understanding is that a new network on your host machine is created where :

  • IP of the "dockerhost" is 192.168.0.1
  • IP of the first container is 192.168.0.2
  • IP of the second container is 192.168.0.3
  • and so on ...

Starting from there, everything should work as if you had physical machines connected together on a local network :

  • machines can connect to each other
  • and you should even be able to use ping 192.168.0.2 from the host - this will only work if your container responds to pings though.

So to answer your question, I think that your apps on the host machine will need to respond to 192.168.0.X (depending on the container trying to connect).

@deltabweb how could I access a port on the server host? I get >>> Connection refused

@nooperpudd Just to be sure : you are trying to access an application running on the host from a container right ?
I would first check if the application allows incoming connections from outside (0.0.0.0 and not just localhost). And maybe also make sure that you don't have a firewall blocking the connection ?

@nooperpudd if you are using Docker for Mac you cannot use host mode. @deltabweb's solution is also not working for me (my servers are all listening on 0.0.0.0 and my host machine firewalls were turned off, but I get Connection refused every time). After about 2 days of trial and error, the only way I found to fix this issue is the script below:

#!/bin/bash
export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)

# You should use DOCKERHOST env variable in your `docker-compose.yml` 
# and put it everywhere you want to connect to `localhost` of the host machine
docker-compose $@

The only problem with this approach is that if your IP changes after your containers are running, you have to run them again; otherwise they cannot find your new IP.

A remarkable observation for me is that the command executed when starting a docker container cannot connect to the container via its host IP address. However, when that command does not perform such connection attempts and finishes executing successfully, the container is then able to reach itself via its host IP address.

I made this observation when trying to expose the endpoints of NoSQL database cluster instances to clients outside the swarm cluster. After all, these endpoints need to be configured with private or public IP addresses of the VM in order for external clients to reach them. Cassandra, however, is designed in such a way that when it starts it immediately tries to connect to the host IP address (set via the CASSANDRA_BROADCAST_ADDRESS environment variable -- see below) and therefore fails. MongoDB replica set nodes, on the other hand, are first all started in a clean state, and then a separate initiating command is executed so that the primary and secondary nodes can form a replica set.

Below you see a detailed account of this observation for Cassandra (I create these with docker swarm, but the same problem appears with docker run -d (in NAT mode, thus without the --net=host option))

1) On the one hand, a container created by

docker service create --name cassandra-service \
  --publish mode=host,target=7000,published=7000,protocol=tcp \
  -e CASSANDRA_SEEDS=<host IP address> -e CASSANDRA_BROADCAST_ADDRESS=<host IP address>

fails with the message that it cannot connect to the listen address: <host IP address>:7000

2) On the other hand, a container attached to an overlay network, created by

docker service create --network cassandra-net --name cassandra-service \
  -e CASSANDRA_SEEDS=cassandra-service -e CASSANDRA_BROADCAST_ADDRESS=cassandra-service

starts correctly and at the same time I can connect to the host ip address on any port that is exposed in the Dockerfile of the cassandra:2.0 image:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                                         NAMES
07603a75a379        cassandra:2.0       "/docker-entrypoin..."   About a minute ago   Up About a minute   7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp   cassandra-service-1.1.m243u97zku15w08m6puytdngs

$ docker exec -it 1e61ec16f8d0 bash
root@1e61ec16f8d0:/# cqlsh 172.17.13.151
Connected to Test Cluster at 172.17.13.151:9160.
[cqlsh 4.1.1 | Cassandra 2.0.17 | CQL spec 3.1.1 | Thrift protocol 19.39.0]

Similarly, the same can be observed during the creation of a second cassandra node

1) If I create a second cassandra container on another node by

docker service create --network cassandra-net --name cassandra-service-2 \
  -e CASSANDRA_SEEDS=172.17.13.151 -e CASSANDRA_BROADCAST_ADDRESS=cassandra-service-2

the container fails with the runtime exception that it cannot gossip with the seed:

java.lang.RuntimeException: Unable to gossip with any seeds
        at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1322)
        at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:457)

2) On the other hand, if I create a cassandra container via docker run -d, I can reach the seed node via its host IP address:

$ docker run -d cassandra:2.0
d87a79cc3de8cd7e4cf40284d1eca91ceb660581cc71082fe64a6b84a09fbd77
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                         NAMES
d87a79cc3de8        cassandra:2.0       "/docker-entrypoin..."   3 seconds ago       Up 2 seconds        7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp   trusting_ardinghelli
$ docker exec -it d87a79cc3de8 bash
root@d87a79cc3de8:/# cqlsh 172.17.13.151
Connected to Test Cluster at 172.17.13.151:9160.
[cqlsh 4.1.1 | Cassandra 2.0.17 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh>

Specifically for Cassandra, you solve this problem by turning auto bootstrapping of cassandra nodes off. You do that by setting auto_bootstrap to false in /etc/cassandra/cassandra.yaml using an entrypoint command in Compose V3:

version: '3'
services:
  cassandra-1:
    image: cassandra:2.0
    entrypoint:
    - "sh"
    - "-c"
    - "echo auto_bootstrap: false >> /etc/cassandra/cassandra.yaml; /docker-entrypoint.sh cassandra -f"
    environment:
      CASSANDRA_BROADCAST_ADDRESS: 172.17.13.151
    volumes:
    - volume1:/var/lib/cassandra
    ports:
    - "7000:7000"
    - "7001"
    - "7199"
    - "9042:9042"
    - "9160:9160"

and then manually start cassandra nodes by executing docker exec -it <container id> nodetool rebuild.

I could use this feature in development, well ...

@jpetazzo We develop PHP solutions in teams on a mix of platforms. Our debugger (xdebug) needs to connect back to the IDE on the host. On Windows and Linux this works 'out of the box', but on Mac our developers have to change the xdebug.ini file to specifically mention their local IP. But the Dockerfile is under source control... cue constant conflicts and swearing as developers clash over editing this file. Yes, there are scriptable workarounds, but why do Docker for Windows and Mac have docker.for.win.localhost and docker.for.mac.localhost? It's partially helpful, but we still need scripts to detect which platform we're on to set this up right. It just seems so much more complicated than it should be. Please reconsider this feature. Docker can be a steep learning curve, but issues like this leave your users searching in disbelief on Google for hours on end.

Checking the https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds page helped, using docker.for.mac.localhost worked for us :)

For better or worse, docker-compose is still by far the easiest way I know of to spin up a Selenium Hub and headless Firefox/Chrome grid nodes for testing a web server running on a local dev machine or in CI (launching the web server in a docker container is just too slow to be convenient for development). It's not at all what Docker was intended for, but Docker's the best tool for that job, and it's Docker's own fault 😆 Except the only problem is easily figuring out the host ip in a way that works on any OS.

@rskuipers can you explain what exactly did you do with docker.for.mac.localhost? I'm trying to make requests from inside of my containers to resolve to the host machine. My containers are running on traefik which means I can access them through domain.docker.localhost, but if I try to access an URL that starts with that from inside my container it's not resolving.

Currently what I did, is I added this to my docker-compose.yml, which adds a line to /etc/hosts so that the domain resolves nicely:

extra_hosts: - "domain.docker.localhost:172.18.0.1"

The IP is the host IP from within my container, which I can get by using ip route | awk '/^default via /{print $3}'. But I wouldn't like to hardcode that if possible ...
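One way to avoid hardcoding it is to resolve the gateway in an entrypoint script at container start; a sketch (the alias `domain.docker.localhost` and the file name `entrypoint.sh` are assumptions):

```shell
#!/bin/sh
# entrypoint.sh sketch: alias the default gateway in /etc/hosts, then exec
# the real command. parse_gateway reads `ip route` output on stdin.
parse_gateway() {
  awk '/^default via /{ print $3; exit }'
}

main() {
  gw=$(ip route | parse_gateway)
  [ -n "$gw" ] && echo "$gw domain.docker.localhost" >> /etc/hosts
  exec "$@"
}

# In the image the Dockerfile would end with something like:
#   COPY entrypoint.sh /
#   ENTRYPOINT ["/entrypoint.sh"]
```

This only works on the default bridge-style setup where the gateway actually is the host.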

@jzavrl All I did was use docker.for.mac.localhost to have my HTTP requests go through a proxy running on the host. I don't use any other layers aside from docker-compose.

That's exactly what I'm interested in. What sort of changes specifically you had to make?

@jzavrl None :P it just worked.

I don't get it, what did you do with docker.for.mac.localhost then?

@jzavrl I used that instead of the IP to connect to. So docker.for.mac.localhost:8888

Ahhhhhh, now that's starting to make sense now. Will try this then. Cheers @rskuipers.

just use the "en0" IP on your computer.

for example

ssh [email protected]

'192.168.1.100' probably comes from your router's DHCP service.

@acuthbert, thanks for your the suggestion.
docker.for.win.localhost works for me in Docker for Windows. There is hope for Docker and Windows yet. 😣

there is little technical reason why this cannot be done in a way that satisfies 90% of the people on this thread; for the corner cases and situations where it doesn't really work, the people developing in those situations could be satisfied with a simple set of "use cases" explaining which scenarios are likely not to work.

This is mostly just political trash and not actual technical reasoning. I'm hoping one of the other container engines picks up steam so I can swap Kubernetes to using that instead. Then I won't have to deal with this rubbish anymore.

@NdubisiOnuora, what type of your application? web-application?

I have 2 console apps (a tcp-server on the host and a tcp-client in a container).
Because they use TCP, I need an actual IP (docker.for.win.localhost does not fit, because it's a domain).

For example, which ip:port must I set in the tcp-client if I set 127.0.0.1:9595 in the tcp-server?

Just resolve the domain to an IP address?

@orf,
I want to use this code in C#:
IPAddress hostAddr = Dns.Resolve("docker.for.win.localhost").AddressList[0];
But before that I tried pinging docker.for.win.localhost and it isn't found; error: Ping request could not find host docker.for.win.localhost. Please check the name and try again.
My Dockerfile:
FROM microsoft/windowsservercore
ADD . /
ENTRYPOINT powershell ping docker.for.win.localhost

In case anyone missed it I believe the solution as of 18.03 is host.docker.internal although for some reason this only works on Docker for Windows!? Why not others?

EDIT: Didn't see that comments were collapsed by Github... 🤦‍♂️

Works for me:
docker run --rm -it --add-host "docker.for.localhost:$(ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+')" alpine:latest ping docker.for.localhost

@lukasmrtvy That works for a shell, but how about for docker-compose.yml?

I've created a container to solve this problem in a generic way, working on all platforms: https://github.com/qoomon/docker-host

