Moby: Forward ssh key agent into container

Created on 13 Jun 2014  ·  190 Comments  ·  Source: moby/moby

It would be nice to be able to forward an ssh key agent into a container during a run or build.
Frequently we need to build source code which exists in a private repository where access is controlled by ssh key.

Adding the key file into the container is a bad idea as:

  1. You've just lost control of your ssh key
  2. Your key might need to be unlocked via passphrase
  3. Your key might not be in a file at all, and only accessible through the key agent.

You could do something like:

# docker run -t -i -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" fedora ssh-add -l
2048 82:58:b6:82:c8:89:da:45:ea:9a:1a:13:9c:c3:f9:52 phemmer@whistler (RSA)

But:

  1. This only works for docker run, not build.
  2. This only works if the docker daemon is running on the same host as the client.

 

The ideal solution is to have the client forward the key agent socket just like ssh can.
However the difficulty in this is that it would require the remote API build and attach calls to support proxying an arbitrary number of socket streams. Just doing a single 2-way stream wouldn't be sufficient as the ssh key agent is a unix domain socket, and it can have multiple simultaneous connections.

area/security exp/intermediate kind/feature

Most helpful comment

@kienpham2000, why would this solution not keep the key in an image layer? The copy and removal of the key are done in separate commands, so there is a layer that should still contain the key.
Our team was using your solution until yesterday, but we found an improved one:

  • We generate a pre-signed URL to access the key with the aws s3 cli, limited to about 5 minutes of access; we save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.
  • In the Dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the ssh key, run npm install, and remove the ssh key.
    By doing this in a single command the ssh key is not stored in any layer, but the pre-signed URL is, and that is not a problem because the URL is no longer valid after 5 minutes.

The build script looks like:

# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .

Dockerfile looks like this:

FROM node

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    ssh -o StrictHostKeyChecking=no [email protected] || true && \
    npm install --production && \
    rm ./my_key && \
    rm -rf ~/.ssh/*

ENTRYPOINT ["npm", "run"]

CMD ["start"]

All 190 comments

I wonder if #6075 will give you what you need

A secret container might make it a little bit safer, but all the points mentioned still stand.

+1 I would find this capability useful as well. In particular when building containers that require software from private git repos, for example. I'd rather not have to share a repo key into the container, and instead would like to be able to have the "docker build ..." use some other method for gaining access to the unlocked SSH keys, perhaps through a running ssh-agent.

+1. I'm just starting to get my feet wet with Docker and this was the first barrier that I hit. I spent a while trying to use VOLUME to mount the auth sock before I realized that docker can't/won't mount a host volume during a build.

I don't want copies of a password-less SSH key lying around and the mechanics of copying one into a container then deleting it during the build feels wrong. I do work within EC2 and don't even feel good about copying my private keys up there (password-less or not.)

My use case is building an erlang project with rebar. Sure enough, I _could_ clone the first repo and ADD it to the image with a Dockerfile, but that doesn't work with private dependencies that the project has. I guess I could just build the project on the host machine and ADD the result to the new Docker image, but I'd like to build it in the sandbox that is Docker.

Here are some other folks that have the same use-case: https://twitter.com/damncabbage/status/453347012184784896

Please, embrace SSH_AUTH_SOCK, it is very useful.

Thanks

Edit: Now that I know more about how Docker works (FS layers), it's impossible to do what I described in regards to ADDing an SSH key during a build and deleting it later. The key will still exist in some of the FS layers.

+1, being able to use SSH_AUTH_SOCK will be super useful!

I use SSH keys to authenticate with Github, whether it's a private repository or a public one.

This means my git clone command looks like: git clone [email protected]:razic/my-repo.git.

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good. I cannot however mount my ~/.ssh during a docker build.

:+1: for ssh forwarding during builds.

As I understand it, this is the wrong way. The right way is to create the docker image on a dev machine, and then copy it to the docker server.

@SevaUA - no that's not correct. This request is due to a limitation when doing docker build.... You cannot export a variable into this stage like you can when doing a docker run .... The run command allows variables to be exported into the docker container while running, whereas the build does not allow this. This limitation is partially intentional based on how dockerd works when building containers. But there are ways around this, and the use case that is described is a valid one. So this request is attempting to get this capability implemented in build, in some fashion.

I like the idea of #6697 (secret store/vault), and that might work for this once it's merged in. But if that doesn't work out, an alternative is to do man-in-the-middle transparent proxying ssh stuff outside of the docker daemon, intercepting docker daemon traffic (not internally). Alternatively, all git+ssh requests could be to some locally-defined host that transparently proxies to github or whatever you ultimately need to end up at.

That idea has already been raised (see comment 2). It does not solve the issue.

+1 for ssh forwarding during builds.

+1 on SSH agent forwarding on docker build

+1 for ssh forwarding during build for the likes of npm install or similar.

Has anyone got ssh forwarding working during run on OSX? I've put a question up here: http://stackoverflow.com/questions/27036936/using-ssh-agent-with-docker/27044586?noredirect=1#comment42633776_27044586 it looks like it's not possible with OSX...

+1 =(

Just hit this roadblock as well. Trying to run npm install pointed at a private repo. The setup looks like: host -> vagrant -> docker. ssh-agent forwarding works for host -> vagrant, but not for vagrant -> docker.

+1
Just hit this while trying to figure out how to get ssh agent working during 'docker build'.

+1 same as the previous guys. Seems the best solution to this issue when needing to access one or more private git repositories (think bundle install and npm install for instance) when building the Docker image.

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good.

@razic Can you share how you get that working? Because when I tried that before it did complain about "Bad owner or permissions"

Unless you make sure that all containers run with a specific user or permissions which allows you to do that?

+1 to SSH_AUTH_SOCK

@tonivdv have a look at the docker run command in the initial comment on this issue. It bind mounts the path referred to by SSH_AUTH_SOCK to /tmp/ssh_auth_sock inside the container, then sets the SSH_AUTH_SOCK in the container to that path.

@md5 I assume @razic and @tonivdv are talking about mounting like this: -v ~/.ssh:/root/.ssh:ro, but when you do this the .ssh files aren't owned by root and therefore fail the security checks.

@KyleJamesWalker yup that's what I understand from @razic and which was one of my attempts some time ago, so when I read @razic was able to make it work, I was wondering how :)

@tonivdv I'd also love to know if it's possible, I couldn't find anything when I last tried though.

+1 I'm interested in building disposable dev environments using Docker but I can't quite get it working. This would help a lot in that regard.

To anyone looking for a temporary solution, I've got a fix that I use which brute forces things in:

https://github.com/atrauzzi/docker-laravel/blob/master/images/php-cli/entrypoint.sh

It's by no means a desirable solution as it requires a whole entrypoint script, but does work.
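
For readers who don't want to click through, the general shape of such an entrypoint is roughly the following (a minimal sketch rather than the linked script; the /host-ssh mount path and file names are assumptions):

#!/bin/sh
# entrypoint.sh (sketch): copy keys from a read-only host mount into the
# container user's ~/.ssh so that ownership and permissions satisfy ssh's checks
if [ -d /host-ssh ]; then
    mkdir -p /root/.ssh
    cp /host-ssh/id_rsa /root/.ssh/id_rsa
    cp /host-ssh/known_hosts /root/.ssh/known_hosts 2>/dev/null || true
    chown -R root:root /root/.ssh
    chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_rsa
fi
exec "$@"

Run with something like: docker run -v ~/.ssh:/host-ssh:ro your-image your-command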

@atrauzzi interesting approach. For our dev env we build a base image and copy the ssh key directly into it. It has the advantage of not needing to provide it on each run, and every image inheriting from that image has the key in it by default. However, with our way you obviously cannot share it publicly ;p

+1 this would be great

@tonivdv The container that script is for is made and destroyed frequently as it's just a host for CLI tools. You're of course free to only do the operation once. But if someone changes their settings and re-runs a command through the container, it has to be a fresh copy every time.

@atrauzzi I understand. Your approach should be adopted by docker images which could require a private ssh key. For example, a composer image should include your entrypoint script in case of private repos. At least until docker comes with a native solution.

:+1: for ssh forwarding via build

Must-have here as well!

@atrauzzi I'm using another approach currently which I really like: making a data volume container with the ssh stuff in it. When you want to use your ssh keys in another container, you can simply do so with the following command:

docker run -ti --volumes-from ssh-data ...

This way you don't need to put an entrypoint on each image, and it can work with all images.

To create that container I do the following

docker run \
  --name ssh-data \
  -v /root/.ssh \
  -v ${USER_PRIVATE_KEY}:/root/.ssh/id_rsa \
  busybox \
  sh -c 'chown -R root:root ~/.ssh && chmod -R 400 ~/.ssh'

Hope this can help others :)

Cheers

@tonivdv - I took my approach because if someone has to add or update SSH settings, they have to be re-imported. The specific container I'm using is one that gets built to run single commands, so every time it runs, it takes the copy to ensure it's up to date.

@atrauzzi Yup, I understand. That being said, it's up to the user to maintain their ssh volume container correctly. They can even use different ones if necessary. And optionally it can be generated on the fly with a script. But I don't think there is one single good solution; it all depends on the needs. Just wanted to share so others could choose a solution based on their needs. Hope to blog about this soon, and I'll link to your solution too! Cheers

I wouldn't make it a requirement that people running your containers maintain a data-only container full of ssh keys. Seems involved.

@atrauzzi It's true that the volume container must be there, but in your way the user must share their ssh key upon running too, right? So besides needing an ssh volume container, the only difference between both solutions from a running point of view is:

docker run ... --volumes-from ssh-data ... php-cli ...

and

docker run ... -v ~/.ssh:/path/.host-ssh ... php-cli ..

right? Or am I missing something else :)

But I completely get why you are doing it your way. However, should you want to use e.g. a composer image from someone else, the volumes-from way will work out of the box. At least it avoids creating your own image with the "entrypoint hack".

As I said, both are a work around and both have pros and cons.

Cheers

Would be really great to get an update from the Docker team about the status of this feature. Specifically, SSH authentication from docker build.

This is approaching 1 year already. Kinda surprising, given the practicality of real life use cases for this. Currently, we are dynamically generating images by committing running containers. We can't have a Dockerfile in our application's repository. This breaks the flow for practically everything. I can't really use my application with any Docker services like Compose or Swarm until this is solved.

An update would be super appreciated. Please and thank you.

/cc @phemmer

It's not that we don't want this feature; I really see a use case for something like this, or for secrets in build. We would just need a proposal from someone willing to implement it, and then, if approved, the implementation of that proposal.
Also, I speak on behalf of myself, not all the maintainers.

@jfrazelle

I know you guys aren't ignoring us :)

So the status is:

It's something we'd consider implementing if there is an accepted proposal
and engineering bandwidth.

Does this sound accurate to you?

Also, are there currently any open proposals that address this issue?

It's something we'd consider implementing if there is an accepted proposal
and engineering bandwidth.

Yes

And I do not think there are any open proposals for this.

I don't know if I'm oversimplifying things, but here is my proposal:

SSHAGENT: forward # defaults to ignore

If set, during build, the socket & associated environment variables are connected to the container, where they can be used. The mechanical pieces of this already exist and are working, it's just a matter of connecting them in docker build.

I do not have any experience working inside the docker codebase, but this is important enough to me that I would consider taking it on.

Great. Where can I find out how to submit a proposal? Is there a specific guideline or should I just open an issue?

I mean like a design proposal
https://docs.docker.com/project/advanced-contributing/#design-proposal

This is a really high level idea, but what if instead of attaching through the docker remote api, docker ran an init daemon, with a bundled ssh daemon, inside the container?

This could be used to solve a number of issues.

  • This daemon would be PID 1, and the main container process would be PID 2. This would solve all the issues with PID 1 ignoring signals and containers not shutting down properly. (#3793)
  • This would allow cleanly forwarding SSH key agent. (#6396)
  • This daemon could hold namespaces open (#12035)
  • A TTY would be created by the daemon (#11462)
  • ...and probably numerous other issues I'm forgetting.

you might wanna see https://github.com/docker/docker/issues/11529 about the first bullet point

11529 is completely unrelated to the PID 1 issue.

shoot effing copy paste, now i have to find the other again

no it is that one, it fixes the PID 1 zombie things, which is what I thought you were referring to, but regardless I was just posting it as it's interesting is all

@phemmer It sounds like you have the expertise to guide us in making an intelligent proposal for implementation.

It also looks like @dts and I are willing to spend time working on this.

@phemmer and @dts is there any possible way we could bring this discussion into a slightly more real-time chat client for easier communication? I'm available through Slack, Google Chat/Hangout, IRC and I'll download anything else if need be.

@phemmer It sounds like you have the expertise to guide us in making an intelligent proposal for implementation

Unfortunately not really :-)
I can throw out design ideas, but I only know small parts of the docker code base. This type of change is likely to be large scale.

There's been a few proposals in here already:

@phemmer suggested

what if instead of attaching through the docker remote api, docker ran an init daemon, with a bundled ssh daemon, inside the container?

@dts suggested

SSHAGENT: forward # defaults to ignore
If set, during build, the socket & associated environment variables are connected to the container, where they can be used. The mechanical pieces of this already exist and are working, it's just a matter of connecting them in docker build.

@razic suggested

Enable volume binding for docker build.

What we really need at this point is someone to accept one of them so we can start working on it.

@jfrazelle Any idea on how we can get to the next step? Really I'm just trying to get this done. It's clear that there's a bunch of interest in this. I'm willing to champion the feature, seeing it through to completion.

I can be available for a slack/irc/Gchat/etc meeting, I think this will make things a bit easier, at least to gather requirements and decide on a reasonable course of action.

@dts suggested

SSHAGENT: forward # defaults to ignore

This is just an idea on how it would be consumed, not implemented. The "init/ssh daemon" is an idea how it would be implemented. The two could both exist.

@razic suggested

Enable volume binding for docker run.

Unfortunately this would not work. Assuming this meant docker build, and not docker run (which already supports volume mounts), the client can be remote (boot2docker is one prominent example). Volume binds only work when the client is on the same host as the docker daemon.

@razic please see this link about the design proposal... those are not proposals https://docs.docker.com/project/advanced-contributing/#design-proposal

@phemmer

I'm failing to understand exactly why this can't work. docker-compose works with volume mounts against a swarm cluster. If the file/folder isn't on the host system, it exhibits the same behavior as if you ran -v with a path that doesn't exist.

@jfrazelle Got it.

If the file/folder isn't on the host system, it exhibits the same behavior as if you ran -v with a path that doesn't exist on a local docker.

I'm not sure I follow your point. How does that behavior help this issue?
If I have an ssh key agent listening at /tmp/ssh-UPg6h0 on my local machine, and I have docker running on a remote machine, and call docker build, that local ssh key agent isn't accessible to the docker daemon. The volume mount won't get it, and the docker build containers won't have access to the ssh key.

From a high level, I see only 2 ways to solve this:

1. Proxy the ssh key agent socket:

The docker daemon creates a unix domain socket inside the container and whenever something connects to it, it proxies that connection back to the client that is actually running the docker build command.

This might be difficult to implement as there can be an arbitrary number of connections to that unix domain socket inside the container. This would mean that the docker daemon & client have to proxy an arbitrary number of connections, or the daemon has to be able to speak the ssh agent protocol, and multiplex the requests.

However now that the docker remote API supports websockets (it didn't at the time this issue was created), this might not be too hard.

2. Start an actual SSH daemon

Instead of hacking around the ssh agent, use an actual ssh connection from the client into the container. The docker client would either have an ssh client bundled in, or would invoke ssh into the remote container.
This would be a much larger scale change as it would replace the way attaching to containers is implemented. But it would also alleviate docker from having to handle that, and migrate to standard protocols.
This also has the potential to solve other issues (as mentioned here).

So ultimately a lot larger scale change, but might be a more proper solution.
Though realistically, because of the scale, I doubt this will happen.
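
As a rough illustration of what option 1 amounts to, the agent socket can already be tunnelled by hand with socat when the daemon host is reachable over the network (a sketch only; host names, the port and the socket path are assumptions, and the TCP leg would need to be protected, e.g. run through an SSH tunnel):

# on the machine where the ssh agent lives
socat TCP-LISTEN:2222,reuseaddr,fork UNIX-CONNECT:$SSH_AUTH_SOCK

# on the docker daemon host, recreate a local agent socket
socat UNIX-LISTEN:/tmp/forwarded-agent.sock,fork TCP:client-host:2222

# then bind-mount /tmp/forwarded-agent.sock into the container and point
# SSH_AUTH_SOCK at it, as in the docker run example at the top of this issue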

@phemmer

I'm not sure I follow your point. How does that behavior help this issue?

Because the most common use case for this is people building images with dependencies that are hosted in private repositories that require SSH authentication.

You build the image on a machine that has a SSH key. That simple.

If I have an ssh key agent listening at /tmp/ssh-UPg6h0 on my local machine, and I have docker running on a remote machine, and call docker build, that local ssh key agent isn't accessible to the docker daemon.

I know. Who cares? I'll be running docker build on a machine that has access to the auth socket.

What I'm trying to say is.... docker-compose allows you to use the volume command against a swarm cluster, regardless of whether the file is actually on the host or not!

We should do the same thing for volume mounts on docker builds.

| File is on system | Action |
| :-- | :-- |
| Yes | Mount |
| No | None (actually it kind of tries to mount but creates an empty folder if the file/folder does not exist; you can verify this by running docker run -v /DOES_NOT_EXIST:/DOES_NOT_EXIST ubuntu ls -la /DOES_NOT_EXIST) |

One of the concepts behind swarm is to make the multi-host model transparent.

It's good we're thinking about remote docker, but it shouldn't really matter.

We should just copy the behavior for volume mounting for docker build in the same exact way we do for docker run.

From https://github.com/docker/compose/blob/master/SWARM.md:

The primary thing stopping multi-container apps from working seamlessly on Swarm is getting them to talk to one another: enabling private communication between containers on different hosts hasn’t been solved in a non-hacky way.

Long-term, networking is getting overhauled in such a way that it’ll fit the multi-host model much better. For now, linked containers are automatically scheduled on the same host.

@phemmer I think people are probably thinking about a solution for the problem you described. The problem you are describing sounds like https://github.com/docker/docker/issues/7249 which is separate.

If we take my approach (just allowing volume mounting in docker build, regardless of whether the file you're trying to mount is actually on the system), then we can close this issue and start working on https://github.com/docker/docker/issues/7249, which would extend the behavior of this feature to work with remote docker daemons that don't have the local file.

@cpuguy83 Before I create a proposal, I was looking at #7133 and noticed it looks directly related.

Could you just add a few words here? Is #7133 actually related to my suggestion to fix this issue, which is to allow docker build to support volumes.

@razic It's in relation to the fact that VOLUME /foo actually creates a volume and mounts it into the container during build, which is generally undesirable.

I would also say a proposal based on using bind-mounts to get files into build containers is probably not going to fly.
See #6697

Running -v with docker build could have a different code execution path. Instead of creating a volume and mounting it during build, we can retain the current behavior that volumes in Dockerfiles don't get referenced, and instead only act on -v when it is passed as an argument to the CLI.

@cpuguy83 Thanks for clarification.

#6697 also isn't going to fly since it's closed already, and #10310 is practically a dupe of #6697.

+1, I just hit this today while trying to build an image for a Rails app that uses Bower to install the clientside dependencies. Happens that one of the dependencies points to [email protected]:angular/bower-angular-i18n.git and since git fails there, bower fails, and the image building fails, too.

I really like what vagrant does btw. With a single forward_agent config in the Vagrantfile, this is solved for vagrant guests. Could Docker implement something like this?

Also, as an additional note, this is happening while building the image. Does anyone know of any existing workarounds?

My workaround was to generate a new RSA keypair, set up the pub key on github (add the fingerprint), and add the private key to the Docker image:

ADD keys/docker_rsa /srv/.ssh/id_rsa

I'd love to avoid this, but I guess this is acceptable for now. Any other suggestions appreciated!

I'm not sure who has killed more puppies. You for doing that, or Docker for not providing you with a better way as of yet.

In any case I'm going to submit a proposal this weekend probably. @cpuguy83 is right that people are at least thinking about this and discussing possible solutions. So at this point it's just a matter of us agreeing on something and getting someone to work on it. I'm totally down to work on it since it's actually one of my biggest gripes with Docker currently.

@razic It's a fairly common use-case, so thanks for looking into this, too. As for the workaround, it works. Possibly the key could be removed from the image after being used, after all, it's only used to get the application's code from github.

@fullofcaffeine I'm not 100% sure how Docker works internally, but I think that unless it's done in a single RUN command (which is impossible with your workaround), the image's history retains the SSH key.
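
If you want to check whether a "deleted" key survives in an earlier layer, you can unpack the saved image and search each layer tarball (a sketch; the image name is an example):

docker save my-image -o my-image.tar
mkdir unpacked && tar -xf my-image.tar -C unpacked
# each layer is its own tarball inside the saved image
for layer in unpacked/*/layer.tar; do
    tar -tf "$layer" | grep -q id_rsa && echo "key present in $layer"
done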

@razic good point.

To work around this limitation, we've been playing around with the idea of downloading the private keys (from a local HTTP server), running a command that requires the keys, and then deleting the keys afterwards.

Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:

RUN ONVAULT npm install --unsafe-perm

Our first implementation around this concept is available at https://github.com/dockito/vault

The only drawback is requiring the HTTP server running, so no Docker hub builds.

Let me know what you think :)

+1
would love to see this implemented, it would help to set up containers for development environment

+1, just need forwarded ssh-agent with boot2dock

We've ended up doing a 3 step process to get around this limitation:

  1. build docker container without SSH-required dependencies, add'ing the source in the final step
  2. mount source via shared volume, plus SSH_AUTH_SOCK via shared volume and run the build step, writing the ssh-requiring output (say, github hosted ruby gems) back into the shared volume
  3. re-run docker build, which will re-trigger the source add, since the gems are now sitting in the source directory

The result is a docker image with dependencies pulled via SSH-auth that never had an SSH key in it.
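
A minimal sketch of that 3-step flow (image names, paths and the bundler invocation are placeholders for whatever the project uses):

# step 1: image without the SSH-requiring dependencies; the Dockerfile ADDs the source last
docker build -t app-base .

# step 2: run the dependency fetch with the source and the agent socket shared in,
# writing the fetched gems back into the source directory on the host
docker run --rm \
    -v "$PWD:/src" \
    -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e SSH_AUTH_SOCK=/tmp/ssh_auth_sock \
    app-base sh -c 'cd /src && bundle install --path vendor/bundle'

# step 3: rebuild; the ADD re-triggers because the vendored gems now sit in the source directory
docker build -t app .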

I created a script to enable ssh agent forwarding for docker run in a boot2docker environment on OSX with minimal hassle. I know it doesn't solve the build issue, but might be useful for some:

https://gist.github.com/rcoup/53e8dee9f5ea27a51855

Does forwarding the ssh key agent work with services like Amazon EC2 Container Service? It seems to me that this would require specific software which may not be available on all platforms or PaaS offerings that you are using to deploy your containers.

A more generic, work-for-all, solution is required.

Currently, I'm using environment variables. A bash script reads the private key (and known hosts) variables and writes them to the id_rsa and known_hosts files. It works, but I have yet to evaluate the security implications of such a solution.
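
For reference, such a script can be as small as the following (a sketch, assuming the key material is passed in via SSH_PRIVATE_KEY and SSH_KNOWN_HOSTS environment variables; the security question raised above still applies):

#!/bin/sh
# write key material passed via environment variables to the files ssh expects
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
printf '%s\n' "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
chmod 600 ~/.ssh/id_rsa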

FWIW, I've found that a containerized ssh-agent and volume sharing works well with minimal goofery:

https://github.com/whilp/ssh-agent

Would be great to have first-class support for this, though.

It is important to distinguish what works in _run_ vs _build_. @whilp's solution works wonderfully in _run_ but does not work in _build_, because you cannot access another container's volumes during _build_. Hence why this ticket is still an aching, open sore.

@rvowles yep, agreed. I put something together to generate containers via a sequence of run/commit calls (ie, without Dockerfiles); that made sense for my particular use case, but generalized support (including build-time) for something like agent forwarding would be super helpful.

Are IPs for running containers included in the /etc/hosts during build? If so, one solution might be to start a container that served the keys, then curl to it during build.

You may all be interested to know that I've blogged about a way to use your SSH agent during docker build - http://aidanhs.com/blog/post/2015-10-07-dockerfiles-reproducibility-trickery/#_streamlining_your_experience_using_an_ssh_agent

You just need to start a single container. Once started, SSH agent access should work flawlessly with only 3 additional lines in your Dockerfile - no more need to expose your keys to the container.

Some caveats: you need Docker >= 1.8, and it won't work on a Docker Hub automated build (obviously). Please also read the note on security! Feel free to raise issues in the sshagent github repository I link to in the post if you have any problems.

I have also solved this problem in a similar way to @aidanhs - by pulling the required secret over the local docker subnet, and then removing it before the filesystem snapshot occurs. A running container serves the secret, which is discovered by the client using broadcast UDP.
https://github.com/mdsol/docker-ssh-exec

Has there been any progress on making this possible? I'm unable to bind-mount the host's ~/.ssh directory because permissions and ownership get messed up.

Wouldn't this be solvable by allowing bind mounts to force specific uid/gid and permissions?

@atrauzzi bind-mounts can't force uid/gid/permissions.
Can do this via FUSE (e.g. bindfs), but not with just normal bind mounts.

@cpuguy83 That really starts to take me down roads I don't want to have to deal with. Especially when I'm using a Windows-based host.

Is there no user friendly option here? I get the feeling like there's a problem here that's just being deferred.

@atrauzzi Indeed, it's not an easy problem to solve in the immediate term (not seamlessly anyway).

+1 this is a big blocker for an otherwise simple Node.js app Dockerfile. I've worked on many Node apps, and I've rarely seen one that doesn't have a private Github repo as an NPM dependency.

As a workaround, @apeace, you could try to add them as git submodule(s) to your git repo. That way they are in the context and you can just add them during the build; if you want to be really clean, delete or ignore the .git file in each one. In the docker build, they can just be installed from the local directory. If they need to be full-fledged git repos for some reason, make sure the .git file is not present in the docker build and add .git/modules/<repo> as <path>/<repo>/.git. That will make sure they are normal repos, as if they were cloned.
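
A sketch of that submodule approach (repository and path names are placeholders):

# make the private dependency part of the build context
git submodule add git@github.com:yourorg/private-dep.git vendor/private-dep
git submodule update --init

# optionally, if the build needs a regular repo instead of a gitlink, swap the
# .git file for the real module directory before building
# (the copied config's core.worktree setting may need adjusting afterwards)
rm vendor/private-dep/.git
cp -r .git/modules/vendor/private-dep vendor/private-dep/.git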

Thanks for that suggestion @jakirkham, but we've been using private repos as an NPM dependency for so long, I don't want to break the normal npm install workflow.

For now, we have a solution that works but is just icky. We have:

  • Created a Github user & team that has read-only access to the repos we use as NPM dependencies
  • Committed that user's private key to our repo where we have our Dockerfile
  • In the Dockerfile, instead of RUN npm install we do RUN GIT_SSH='/code/.docker/git_ssh.sh' npm install

Where git_ssh.sh is a script like this:

#!/bin/sh
ssh -o StrictHostKeyChecking=no -i /code/.docker/deploy_rsa "$@"

It works, but forwarding the ssh key agent would be so much nicer, and a lot less setup work!

:+1:
Cannot believe that this feature request is still not implemented, since there are a lot of use cases where people require access to private repos during build time.

I'm trying to build containers for various embedded system development environments, which require access to private repositories. Adding support for host ssh keys would be a great feature. The most popular methods floating around on SO and other pages are insecure, and as long as there is no support for this feature, layers with private keys will keep spreading around.

:+1:

:+1: Been needing this forever.

Hi @apeace, I don't know if you have seen it, but I've commented earlier about our workaround to this problem.

It is a combination of a script and a web server. What do you think of https://github.com/dockito/vault?

@pirelenito wouldn't that make the key still be available within a layer of the build? If that is the case, it is not worth it to us to add Dockito Vault to our build process--it seems just as janky to me as what we're doing now. I appreciate the suggestion!

@apeace the ONVAULT script downloads the keys, runs your command and then immediately deletes the keys. Since this all happens in the same command, the final layer will not contain the key.

@apeace At Medidata, we're using a tiny tool we built called docker-ssh-exec. It leaves only the docker-ssh-exec binary in the resulting build image -- no secrets. And it requires only a one-word change to the Dockerfile, so it's very "low-footprint."

But if you _really_ need to use a docker-native-only solution, there's now a built-in way to do this, as noted in the company blog post. Docker 1.9 allows you to use the --build-arg parameter to pass ephemeral values to the build process. You should be able to pass a private SSH key in as an ARG, write it to the filesystem, perform a git checkout, and then _delete_ the key, all within the scope of one RUN directive. (this is what the docker-ssh-exec client does). This will make for an ugly Dockerfile, but should require no external tooling.

Hope this helps.

@benton We have come up with a similar solution. :)

Thanks @pirelenito and @benton, I will check out all your suggestions!

EDIT: the following is _NOT_ secure, in fact:

For the record, here's how you check out a private repo from Github without leaving your SSH key in the resulting image.

First, replace user/repo-name in the following Dockerfile with the path to your private repo (make sure you keep the [email protected] prefix so that ssh is used for checkout):

FROM ubuntu:latest

ARG SSH_KEY
ENV MY_REPO [email protected]:user/repo-name.git

RUN apt-get update && apt-get -y install openssh-client git-core &&\
    mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
    ssh-keyscan github.com >/root/.ssh/known_hosts

RUN echo "$SSH_KEY" >/root/.ssh/id_rsa &&\
    chmod 0600 /root/.ssh/id_rsa &&\
    git clone "${MY_REPO}" &&\
    rm -f /root/.ssh/id_rsa

Then build with the command

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

passing the correct path to your private SSH key.

^ with Docker 1.9

@benton You might want to look closely at the output of docker inspect sshtest and docker history sshtest. I think that you will find that metadata in the final image has your secret even if it is not available inside the container context itself...

@ljrittle Good spotting. The key is indeed there if you use a VAR. I guess an external workaround is still required here.

Perhaps one reason that a native solution has not yet been developed is because several workarounds _are_ in place. But I agree with most others here that a built-in solution would serve the users better, and fit Docker's "batteries-included" philosophy.

From the docs...

Note: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.

( https://docs.docker.com/engine/reference/builder/#arg )

I don't think a path to a file applies to this; the note is about leaving a plainly visible password/token in your console log.

I don't follow @jcrombez. The example was to pass the ssh key as a variable via ARG. So, it does apply.

In terms of security risk, this is very different:

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

than this :

docker build --tag=sshtest --build-arg SSH_KEY="mykeyisthis" .

if someone finds your terminal log, the consequences are not the same.
But I'm not a security expert; this might still be dangerous for some other reasons I'm not aware of.

On the command line, I suppose.

However, as @ljrittle pointed out and @benton conceded, any way that you use --build-arg/ARG will be committed in the build. So inspecting it will reveal information about the key. Both leave state in the final docker container and both suffer the same vulnerability on that end. Hence, why docker recommends against doing this.

_USER POLL_

_The best way to get notified of updates is to use the _Subscribe_ button on this page._

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@fletcher91
@benlemasurier
@dmuso
@probepark
@saada
@ianAndrewClark
@jakirkham
@galindro
@luisguilherme
@akurkin
@allardhoeve
@SevaUA
@sankethkatta
@kouk
@cliffxuan
@kotlas92
@taion

_USER POLL_

_The best way to get notified of updates is to use the _Subscribe_ button on this page._

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@parknicker
@dursk
@adambiggs

In terms of security risk, this is very different:

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

apart from your bash history, they're exactly the same; there are many places where that information can end up.

For example, consider that API requests can be logged on the server;

Here's a daemon log for docker build --tag=sshtest --build-arg SSH_KEY="fooobar" .

DEBU[0090] Calling POST /v1.22/build
DEBU[0090] POST /v1.22/build?buildargs=%7B%22SSH_KEY%22%3A%22fooobar%22%7D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&shmsize=0&t=sshtest&ulimits=null
DEBU[0090] [BUILDER] Cache miss: &{[/bin/sh -c #(nop) ARG SSH_KEY]}
DEBU[0090] container mounted via layerStore: /var/lib/docker/aufs/mnt/de3530a82a1a141d77c445959e4780a7e1f36ee65de3bf9e2994611513790b8c
DEBU[0090] container mounted via layerStore: /var/lib/docker/aufs/mnt/de3530a82a1a141d77c445959e4780a7e1f36ee65de3bf9e2994611513790b8c
DEBU[0090] Skipping excluded path: .wh..wh.aufs
DEBU[0090] Skipping excluded path: .wh..wh.orph
DEBU[0090] Applied tar sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef to 91f79150f57d6945351b21c9d5519809e2d1584fd6e29a75349b5f1fe257777e, size: 0
INFO[0090] Layer sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef cleaned up

_USER POLL_

_The best way to get notified of updates is to use the _Subscribe_ button on this page._

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@cj2

I am trying to containerize a simple ruby/rack application. The Gemfile references several private gems. The moment bundle install starts and tries to access the private repos, I start getting this error

Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I was able to work around it, but not without exposing my private key. That won't do. Please enable ssh authentication forwarding.

+1 for ssh forwarding during builds. Can't use go get with private repos because of it ;(

+1 for enabling this use case in a secure manner

_USER POLL_

_The best way to get notified of updates is to use the _Subscribe_ button on this page._

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@lukad

Just reading through this very interesting discussion, I'm wondering if a simple solution might solve these issues. Off the top of my head I'm thinking of an option in the Dockerfile to just be able to exclude/ignore specific internal directories/files when taking snapshots. How hard could that be?

i.e.

EXCLUDE .ssh

I'm thinking it would apply across all steps that follow, so if you placed it after FROM then you could add your keys as much as you like and build as normal and never need to worry about keys accidentally ending up in your image (granted you might need to add them at every step that requires them but you wouldn't have to worry about them ending up in an image)

@benton's suggestion works fine, and the docker daemon will only log the id_rsa key if it is in debug mode.

An even cuter way to expose your key during build is:

# Dockerfile
ARG SSH_KEY
RUN eval `ssh-agent -s` > /dev/null \
    && echo "$SSH_KEY" | ssh-add - \
    && git clone [email protected]:private/repository.git

docker build -t my_tag --build-arg SSH_KEY="$(< ~/.ssh/id_rsa)" .

Ha, though it is indeed just sitting there if you look at docker inspect my_tag... so I'm not sure what the real value of build-arg is, other than being slightly tidier than ENV.

And, if you have a password on the id_rsa key, I guess you could be a bad human and do:

# Dockerfile
ARG SSH_KEY
ARG SSH_PASS
RUN eval `ssh-agent -s` > /dev/null \
    && echo "echo $SSH_PASS" > /tmp/echo_ps && chmod 700 /tmp/echo_ps \
    && echo "$SSH_KEY" | SSH_ASKPASS=/tmp/echo_ps DISPLAY= ssh-add - \
    && git clone [email protected]:private/repository.git \
    && rm /tmp/echo_ps

docker build -t my_tag --build-arg SSH_KEY="$(< ~/.ssh/id_rsa)" --build-arg SSH_PASS=<bad_idea> .

It, of course, is hard to rationalize that being even remotely a good idea.. but we're all human, I suppose.

Granted, all of the biggest reasons for doing this would seem to be for people doing "bundle install" or "go get" against private repositories during a build..

I'd say just vendor your dependencies and ADD the entire project.. but, sometimes things need to get done now.

@SvenDowideit @thaJeztah Is there any solution for this problem? I tried to follow the thread, but between closed and opened threads and a lot of opinions, I have no idea what the Docker team will do or when.

The best, but needs implementation?

Docker build uses ssh-agent within the build to proxy to your host's ssh and then use your keys without having to know them!

For anyone just learning about ssh-agent proxying: github to the rescue

@phemmer's original idea.

@yordis I don't think there's a "great" solution in the thread that's freely available yet.

This comment from docker/docker-py#980 seems to indicate that if you copy your ssh keys into your root user's key directory on your host system the daemon will use those keys. I am however mad novice in this regard so someone else may be able to clarify.


Ok, but not the best

Passing the key in with docker 1.9's build args.
Caveats.

Definitely a Bad Idea

A lot of people have also recommended in here adding the key temporarily to the build context and then quickly removing it. Sounds really dangerous because if the key creeps into one of the commits anyone who uses the container can access that key by checking out a particular commit.


Why hasn't this gone anywhere yet?

It needs a design proposal, this issue is _cah-luttered_ and ideas are only vague at the moment. Actual implementation details are being lost in a haze of "what if we did x" and +1s. To get organized and get moving on this much needed feature, those having possible solutions should create a . . .

design proposal

and then reference this issue.

I have some news on this issue.

At DockerCon this past week, we were encouraged to bring our hardest questions to Docker's "Ask the Experts" pavilion, so I went over and had a short chat with a smart and friendly engineer with the encouraging title Solutions Architect. I gave him a short summary of this issue, which I hope I conveyed accurately, because he assured me that this can be done with _only_ docker-compose! The details of what he was proposing involved a multi-stage build -- maybe to accumulate the dependencies in a different context than the final app build -- and seemed to involve using data volumes at build time.

Unfortunately, I'm not experienced with docker-compose, so I could not follow all the details, but he assured me that if I wrote to him with the exact problem, he would respond with a solution. So I wrote what I hope is a clear enough email, which includes a reference to this open GitHub issue. And I heard back from him this morning, with his reassurance that he will reply when he's come up with something.

I'm sure he's plenty busy, so I would not expect anything immediate, but I find this encouraging, to the extent that he's understood the problem, and is ready to attack it with only the docker-native toolset.

@benton I use the following docker-compose.yaml config to do the things described in this topic:

version: '2'
services:
  serviceName:
    volumes:
      - "${SSH_AUTH_SOCK}:/tmp/ssh-agent"
    environment:
      SSH_AUTH_SOCK: /tmp/ssh-agent

Make sure that ssh-agent is started on the host machine and knows about the key (you can check it with the ssh-add -L command).
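
For completeness, starting the agent and loading a key on the host looks like:

eval "$(ssh-agent -s)"    # starts the agent and exports SSH_AUTH_SOCK
ssh-add ~/.ssh/id_rsa     # add your key (path may differ)
ssh-add -L                # should list the public key(s) the agent holds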

Please note that you may need to add

Host *
  StrictHostKeyChecking no

to container's .ssh/config.

Hi @WoZ! Thanks for your answer, looks simple enough so I'll give it a try :)

I have a question though, how can you use this with automated builds on docker hub? As far as I know there is no way to use a compose file there :(

@garcianavalon works well, but it's only for run, not build. Not yet working with Docker for Mac either, though it's on the todo list apparently.

Edit: https://github.com/docker/for-mac/issues/410

We came up with 2 more workarounds for our specific needs:

1) Set up our own package mirror for npm, pypi, etc. behind our VPN; this way we don't need SSH.

2) On the host machines we already have access to the private repos, so we clone/download the private package locally to the host machine, run its package installation to download it, then use -v to map the volume into docker, then build the docker image.

We are currently using option 2).

As far as docker run goes, docker-ssh-agent-forward seems to provide an elegant solution and works across Docker for Mac/Linux.

It might still be a good idea to COPY the known_hosts file from the host instead of creating it in the container (less secure), seeing as ssh-agent does not seem to forward known hosts.

But the fundamental problem with pulling private dependencies during a docker run step is bypassing the docker build cache, which can be very significant in terms of build time.

One approach to get around this limitation is to hash (md5/date) your build dependency declarations (e.g. package.json), push the result to an image, and reuse the same image if the file has not changed. Using the hash in the image name allows caching multiple states. It would have to be combined with the pre-install image digest as well.

This should be more robust than @aidanhs's solution for build servers, although I still have to test it at scale.
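
A sketch of that caching idea for npm (image names and the tagging flow are assumptions, not the docker-node implementation referenced later in the thread):

# tag the "dependencies installed" image by the hash of package.json
HASH=$(md5sum package.json | awk '{print $1}')
DEPS_IMAGE="myapp-deps:$HASH"

# only rebuild (and re-run the ssh-requiring npm install) when package.json changed
if ! docker inspect "$DEPS_IMAGE" >/dev/null 2>&1; then
    docker build -f Dockerfile.deps -t "$DEPS_IMAGE" .
fi

# give the cached image a stable name so the application Dockerfile can use FROM myapp-deps:current
docker tag "$DEPS_IMAGE" myapp-deps:current
docker build -t myapp .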

This should be more robust than @aidanhs's solution for build servers, although I still have to test it at scale.

My specific solution hasn't worked since 1.9.0 - it turned out that the feature introduced in 1.8.0 that I was relying on wasn't intentional and so it was removed.

Although the principle of my solution remains fine (it just requires you have a DNS server off your machine that a) your machine uses and b) you are able to add entries to appropriate locations), I can't really say I'd enthusiastically recommend it any more.

Thank you for the extra info @aidanhs!

Some updates regarding my proposed solution: hashes don't actually need to be combined, as the hash of the base image just after adding the dependencies declaration file can simply be used. Moreover, it is better to simply mount the known_hosts file as a volume, since ssh-agent can only be used at runtime anyway -- and it is more secure, as it contains a list of all the hosts you connect to.

I implemented the complete solution for node/npm and it can be found here with detailed documentation and examples: https://github.com/iheartradio/docker-node

Of course, the principles can be extended for other frameworks.

Same problem here: how does one build something, where that something requires SSH credentials in order to check out and build a number of projects at build time, inside a docker container, without writing credentials to the image or a base image?

We work around this by having a 2-step build process. A "build" image containing the source/keys/build dependencies is created. Once that's built it's run in order to extract the build results into a tarfile which is later added to a "deploy" image. The build image is then removed and all that's published is the "deploy" image. This has a nice side effect of keeping container/layer sizes down.
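
A sketch of that two-step flow (file and image names are placeholders):

# 1. build image: contains source, keys and build tooling; never published
docker build -f Dockerfile.build -t myapp-build .

# 2. run it once to extract the build results as a tarball
docker run --rm myapp-build tar -C /build -cf - . > build-output.tar

# 3. deploy image: its Dockerfile only ADDs build-output.tar, so no keys end up in it
docker build -f Dockerfile.deploy -t myapp .

# 4. remove the build image so only the deploy image is kept around
docker rmi myapp-build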

@binarytemple-bet365 see https://github.com/iheartradio/docker-node for an end-to-end example doing exactly that. I use more than two steps as I use an ssh service container, pre-install (base image until before installing private dependencies), install (container state after runtime installation of private dependencies) and post-install (adds commands that you had after installation of private dependencies) to optimize speed and separation of concern.

Check out Rocker, it's a clean solution.

@Sodki I took your advice. Yes, rocker is a clean and well thought out solution. More's the shame the docker team wouldn't just take that project under their wing and deprecate docker build. Thank you.

Still no better way? :(

Has anyone tried this new squash thing? https://github.com/docker/docker/pull/22641 Might be the docker-native solution we are looking for. Going to try it now and report back to see how it goes.

After 2+ years this is not fixed yet 😞 Please, Docker team, do something about it.

Looks like the new --squash option in 1.13 works for me:
http://g.recordit.co/oSuMulfelK.gif

I build it with: docker build -t report-server --squash --build-arg SSH_KEY="$(cat ~/.ssh/github_private_key)" .

So when I do docker history or docker inspect, the key doesn't show.

My Dockerfile looks like this:

FROM node:6.9.2-alpine

ARG SSH_KEY

RUN apk add --update git openssh-client && rm -rf /tmp/* /var/cache/apk/* &&\
  mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
  ssh-keyscan github.com > /root/.ssh/known_hosts

RUN echo "$SSH_KEY" > /root/.ssh/id_rsa &&\
  chmod 0600 /root/.ssh/id_rsa

COPY package.json .

RUN npm install
RUN rm -f /root/.ssh/id_rsa

# Bundle app source
COPY . .

EXPOSE 3000

CMD ["npm","start"]

@kienpham2000, your screenshot looks like it still contains the keys - could you please check the output of docker history with the --no-trunc flag and report back here on whether or not the private keys are displayed in docker history?

@ryanschwartz you are right, the --no-trunc shows the whole damn thing, this doesn't fly.

@kienpham2000
Another thing they introduced in 1.13 release is:

Build secrets
• enables build time secrets using --build-secret flag
• creates tmpfs during build, and exposes secrets to the
build containers, to be used during build.
https://github.com/docker/docker/pull/28079

Maybe this could work?

Build secrets didn't make it into 1.13, but hopefully will do in 1.14.

So a year later: No, this is a bad idea. You SHOULD NOT do that. There are various other solutions. For example, Github can provide access tokens. You can use them in configuration files/environment variables with less risk as you can specify which actions are allowed for each token.

The solution is to implement SSH forwarding. Like Vagrant does it for example.

Can somebody explain to me why it is so complicated to implement?

@omarabid - are you replying to your original proposal of using environment variables to pass private keys to be used within the Dockerfile? There is no question, that is a bad security practice.

As to your suggestion to use access tokens, they would end up stored in a layer and can be just as dangerous to leave lying around as an SSH key. Even if a token only has read-only access, most people wouldn't want others to have read-only access to their repos. Also, frequent revocation/rotation/distribution would need to occur; this is a little easier to handle per developer than with "master" access tokens.

The build secrets solution mentioned a few comments back looks like it's a step in the right direction, but the ability to use an SSH agent is best. Maybe one could use an SSH agent in combination with build secrets, I'm not sure.

It's natural for developers/CI systems to use an SSH agent during git/build operations. This is much more secure than having a plaintext, password-less private key that must be revoked/replaced en masse across a variety of systems. Also, with SSH agents there's no possibility of the private key data getting committed to an image. At worst an environment variable/SSH_AUTH_SOCK remnant will be left behind in the image.

I got this latest workaround without showing secret key content or using extra 3rd party docker tool (hopefully the secret vault during built PR will get merged in soon).

I'm using the aws cli to download the shared private key from S3 into the host's current repo. This key is encrypted at rest using KMS. Once the key is downloaded, the Dockerfile will just COPY that key during the build process and remove it afterward; the content doesn't show up in docker inspect or docker history --no-trunc.

Download the github private key from S3 first to the host machine:

# build.sh
s3_key="s3://my-company/shared-github-private-key"
aws configure set s3.signature_version s3v4
aws s3 cp $s3_key id_rsa --region us-west-2 && chmod 0600 id_rsa

docker build -t app_name .

Dockerfile looks like this:

FROM node:6.9.2-alpine

ENV id_rsa /root/.ssh/id_rsa
ENV app_dir /usr/src/app

RUN mkdir -p $app_dir
RUN apk add --update git openssh-client && rm -rf /tmp/* /var/cache/apk/* && mkdir -p /root/.ssh && ssh-keyscan github.com > /root/.ssh/known_hosts

WORKDIR $app_dir

COPY package.json .
COPY id_rsa $id_rsa
RUN npm install && npm install -g gulp && rm -rf $id_rsa

COPY . $app_dir
RUN rm -rf $app_dir/id_rsa

CMD ["start"]

ENTRYPOINT ["npm"]

@kienpham2000, why would this solution not keep the key in an image layer? The copy and remove actions are done in separate commands, so there is a layer that still has the key.
Our team was using your solution until yesterday, but we found an improved solution:

  • We generate a pre-signed URL to access the key with the aws s3 cli, limited to about 5 minutes; we save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.
  • In the Dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the ssh key, run npm install, and remove the ssh key.
    By doing this in one single command, the ssh key is not stored in any layer; the pre-signed URL will be stored, but that is not a problem because the URL is no longer valid after 5 minutes.

The build script looks like:

# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .

Dockerfile looks like this:

FROM node

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    ssh -o StrictHostKeyChecking=no git@github.com || true && \
    npm install --production && \
    rm ./my_key && \
    rm -rf ~/.ssh/*

ENTRYPOINT ["npm", "run"]

CMD ["start"]

@diegocsandrim thank you for pointing that out, I really like your solution, going to update our stuffs here. Thanks for sharing!

I am a bit new to the thread, but fundamentally it seems like people are trying to solve a problem better solved by PKI. Not everyone is necessarily trying to solve the same problem where PKI would be the better solution, but enough references seem to indicate it might be something that should be considered.

It seems annoying, but fundamentally possible to

  • create a local certificate authority
  • have a process to generate a cert
  • have a process to issue the cert
  • have a process to revoke said cert
  • have ssh daemons use PKI

And if people feel this is feasible, then please by all means create it and open source it; after all, the work only needs to be done well once. I have no idea if the Roumen Petrov build is secure; I didn't spot any source code (haven't checked the tar), so I can't judge how secure it is.

https://security.stackexchange.com/questions/30396/how-to-set-up-openssh-to-use-x509-pki-for-authentication

https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html

@mehmetcodes: Having a PKI does not really solve the problem. For PKI-based SSH authentication to work, you'll still need to load the private key into the image.

Unless you have your local certificate authority issue very short lived certificates (e.g. less than an hour), and you revoke the certificate immediately after a successful build, this is insecure.

If you manage to create short lived certificate process, that's not much different than just using a new SSH key that you revoke immediately after a build finishes.
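
For completeness, plain OpenSSH certificates (rather than full X.509) make such short-lived credentials fairly cheap; a sketch, assuming ca_key is a CA key the git server is already configured to trust and build_key is a throwaway key used for a single build:

# generate a throwaway key pair for this build only
ssh-keygen -q -f build_key -N ''
# sign it with the CA; the certificate is valid for 30 minutes for principal "git"
ssh-keygen -s ca_key -I "ci-build-$(date +%s)" -n git -V +30m build_key.pub
# build_key-cert.pub expires on its own, so nothing needs to be revoked afterwards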

Oh it's even more annoying than that, but I must be on to something or why would it exist in the wild?

https://blog.cloudflare.com/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/
https://blog.cloudflare.com/how-to-build-your-own-public-key-infrastructure/

I don't know, an SSH temp key is probably much better for most use cases, but there is something unsettling about every recourse, including the one I suggested, particularly in this context.

You would normally just mount a volume with the key instead, but that doesn't address the Docker for Mac / moby case.

who the f.. is moby?

@whitecolor
Image of Moby

I've got as far as this on MacOS:

bash-3.2$ docker run -t -i -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" python:3.6 ssh-add -l
docker: Error response from daemon: Mounts denied:
The path /var/folders/yb/880w03m501z89p0bx7nsxt580000gn/T//ssh-DcwJrLqQ0Vu1/agent.10466
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
.

/var/ is an alias, which Docker seems to struggle with. But if I prefix the $SSH_AUTH_SOCK path with /private (i.e. the resolved alias path), then Docker can read the file, but I get:

bash-3.2$ docker run -t -i -v "/private$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" python:3.6 ssh-add -l
Could not open a connection to your authentication agent.

At this point I'm wondering how bad it is to just…

docker run -v ~/.ssh:/root/.ssh python:3.6 bash

?
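
(Side note for anyone landing here later: recent Docker Desktop for Mac releases expose the host's agent inside the VM at a fixed path, which sidesteps the osxfs socket problem for docker run; this assumes a Docker Desktop version that ships that magic path.)

docker run --rm -it \
  -v /run/host-services/ssh-auth.sock:/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  python:3.6 ssh-add -l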

docker build  --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa_no_pass)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .

And then inside the Docker file:

ARG ssh_prv_key
ARG ssh_pub_key

# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts

# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub

And don't forget to include

RUN rm -f /root/.ssh/id_rsa /root/.ssh/id_rsa.pub

as the final step.

The catch here is that your private key shouldn't be password protected.

The issue with the previous comment is that the keys end up in the layers... rm won't delete them from a previous layer, since each line in a Dockerfile is one layer.
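
A quick way to convince yourself of this (the image name is a placeholder):

# the full build commands, including any --build-arg values, are visible here
docker history --no-trunc my-image
# a saved image contains one tar per layer; a key COPY'd in one layer and
# rm'd in a later one is still present inside the earlier layer's tar
docker save my-image -o my-image.tar && tar -tvf my-image.tar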

Doesn't docker secret resolve this problem?
WDYT @thaJeztah

docker secret is not (yet) available during build, and only available in services (so not yet for docker run)

When using multi stage builds something like this could work though (typing on my phone, so let me link a gist I created a while back); https://gist.github.com/thaJeztah/836c4220ec024cf6dd48ffa850f07770

I am not that involved with Docker anymore, but how is it possible that this issue has existed for such a long time? I am not trying to call anyone out, but rather to understand what effort is needed to fix this, because back when I was dealing with it, it seemed like a really common issue for any company that pulls private packages like Ruby gems from a private repo.

Does the Moby team care about this issue? Why does it have to be so hard for something that doesn't seem like that big a deal?

Has been almost 3 years 😢

@yordis the docker builder was frozen for a year or two. The Docker team stated that the builder was good enough and that they were focusing their efforts elsewhere. But that freeze is over, and there have been two changes to the builder since: squash and multi-stage builds. So build-time secrets may be on their way.

For runtime forwarding of ssh-agent, I would recommend https://github.com/uber-common/docker-ssh-agent-forward

Why does it have to be so hard for something that doesn't seem like that big a deal?

@yordis reading the top description of this issue, implementing this is far from trivial; having said that, if someone has a technical design proposal for this, feel free to open an issue or PR for discussion. Also note that for the _build_ part, a buildkit project was started for future enhancements to the builder; https://github.com/moby/buildkit

@thaJeztah I wish I could have the skills required but I don't.

@villlem do you know any roadmap from the Docker team?

Weekly reports for the builder can be found here; https://github.com/moby/moby/tree/master/reports/builder build time secrets is still listed in the latest report, but could use help

We're using @diegocsandrim's solution but with an intermediate encryption step to avoid leaving an unencrypted SSH key in S3.

This extra step means that the key can't be recovered from the Docker image (the URL to download it expires after five minutes) and can't be recovered from AWS (as it's encrypted with a rotating password known only to the docker image).

In build.sh:

BUCKET_NAME=my_bucket
KEY_FILE=my_unencrypted_key
openssl rand -base64 -out passfile 64
openssl enc -aes-256-cbc -salt -in $KEY_FILE -kfile passfile | aws s3 cp - s3://$BUCKET_NAME/$(hostname).enc_key
aws s3 presign s3://$BUCKET_NAME/$(hostname).enc_key --expires-in 300 > ./pre_sign_url
docker build -t my_service .

And in the Dockerfile:

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - | openssl enc -aes-256-cbc -d -kfile passfile > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    mkdir /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts && \
    [commands that require SSH access to Github] && \
    rm ./my_key && \
    rm ./passfile && \
    rm -rf /root/.ssh/

If you are using docker run, you can mount your .ssh with --mount type=bind,source="${HOME}/.ssh/",target="/root/.ssh/",readonly. The readonly flag is the magic: it masks the normal permissions and ssh basically sees 0600 permissions, which it is happy with. You can also play with -u root:$(id -u $USER) to have the root user in the container write any files it creates with the same group as your user, so hopefully you can at least read them, if not fully write them, without having to chmod/chown.
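
Spelled out, that looks roughly like this (the image name and command are placeholders):

docker run --rm -it \
  --mount type=bind,source="${HOME}/.ssh/",target=/root/.ssh/,readonly \
  my-image ssh -T git@github.com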

Finally.

I believe this problem can now be solved using just docker build, by using multi-stage builds.
Just COPY or ADD the SSH key or other secret wherever you need it, and use it in RUN statements however you like.

Then, use a second FROM statement to start a new filesystem, and COPY --from=builder to import some subset of directories that don't include the secret.

(I have not actually tried this yet, but if the feature works as described...)

@benton multi-stage builds work as described, we use it. It's by far the best option for many different problems, including this one.

I have verified the following technique:

  1. Pass the _location of a private key_ as a Build Argument, such as GITHUB_SSH_KEY, to the first stage of a multi-stage build
  2. Use ADD or COPY to write the key to wherever it's needed for authentication. Note that if the key location is a local filesystem path (and not a URL), it must _not_ be in the .dockerignore file, or the COPY directive will not work. This has implications for the final image, as you'll see in step 4...
  3. Use the key as needed. In the example below, the key is used to authenticate to GitHub. This also works for Ruby's bundler and private Gem repositories. Depending on how much of the codebase you need to include at this point, you may end up adding the key again as a side-effect of using COPY . or ADD ..
  4. REMOVE THE KEY IF NECESSARY. If the key location is a local filesystem path (and not a URL), then it is likely that it was added alongside the codebase when you did ADD . or COPY . This is probably _precisely the directory_ that's going to be copied into the final runtime image, so you probably also want to include a RUN rm -vf ${GITHUB_SSH_KEY} statement once you're done using the key.
  5. Once your app is completely built into its WORKDIR, start the second build stage with a new FROM statement, indicating your desired runtime image. Install any necessary runtime dependencies, and then COPY --from=builder against the WORKDIR from the first stage.

Here's an example Dockerfile that demonstrates the above technique. Providing a GITHUB_SSH_KEY Build Argument will test GitHub authentication when building, but the key data will _not_ be included in the final runtime image. The GITHUB_SSH_KEY can be a filesystem path (within the Docker build dir) or a URL that serves the key data, but the key itself must not be encrypted in this example.

########################################################################
# BUILD STAGE 1 - Start with the same image that will be used at runtime
FROM ubuntu:latest as builder

# ssh is used to test GitHub access
RUN apt-get update && apt-get -y install ssh

# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG GITHUB_SSH_KEY=/path/to/.ssh/key

  # Set up root user SSH access for GitHub
ADD ${GITHUB_SSH_KEY} /root/.ssh/id_rsa

# Add the full application codebase dir, minus the .dockerignore contents...
# WARNING! - if the GITHUB_SSH_KEY is a file and not a URL, it will be added!
COPY . /app
WORKDIR /app

# Build app dependencies that require SSH access here (bundle install, etc.)
# Test SSH access (this returns false even when successful, but prints results)
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth

# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*

########################################################################
# BUILD STAGE 2 - copy the compiled app dir into a fresh runtime image
FROM ubuntu:latest as runtime
COPY --from=builder /app /app

It _might_ be safer to pass the key data itself in the GITHUB_SSH_KEY Build Argument, rather than the _location_ of the key data. This would prevent accidental inclusion of the key data if it's stored in a local file and then added with COPY .. However, this would require using echo and shell redirection to write the data to the filesystem, which might not work in all base images. Use whichever technique is safest and most feasible for your set of base images.
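
A sketch of that variant, passing the key data itself (printf is used rather than echo for portability; the same rm caveats apply, and the value still lands in the local build history):

# first build stage only
ARG GITHUB_SSH_KEY
RUN mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
    printf '%s\n' "$GITHUB_SSH_KEY" > /root/.ssh/id_rsa && \
    chmod 0600 /root/.ssh/id_rsa
# ...run whatever needs SSH access, then remove the key before the final stage copies anything
RUN rm -f /root/.ssh/id_rsa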

@jbiel Another year, and the solution I found is to use something like Vault.

Here's a link with 2 methods (squash and intermediate container described earlier by @benton)

I'm just adding a note to say that neither of the current approaches will work if you have a passphrase on the ssh key you're using, since the agent will prompt you for the passphrase whenever you perform an action that requires access. I don't think there's a way around this without passing around the passphrase (which is undesirable for a number of reasons).

Here's how I solved it.
Create a bash script (~/bin/docker-compose or similar):

#!/bin/bash

trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &

/usr/bin/docker-compose $@

And in Dockerfile using socat:

...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
  && apk add --no-cache socat openssh \
  && /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
  && bundle install \
...
or any other ssh commands will work

Then run docker-compose build

@benton why do you use RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*? Shouldn't it just be RUN rm -vf /root/.ssh/id*? Or maybe I misunderstood the intent here.

@benton And also it's not safe to do:

RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1

You have to check the fingerprint.

I solved this problem this way:

ARG USERNAME
ARG PASSWORD
RUN git config --global url."https://${USERNAME}:${PASSWORD}@github.com".insteadOf "ssh://git@github.com"

then build with

docker build --build-arg USERNAME=user --build-arg PASSWORD=pwd -t service .

But your private git server must first support cloning repos with username:password.

@zeayes The RUN command is stored in the image history, so your password is visible to others.

Correct; when using --build-arg / ARG, those values will show up in the build history. It _is_ possible to use this technique if you use multi-stage builds _and_ trust the host on which images are built (i.e., no untrusted user has access to the local build history), _and_ intermediate build-stages are not pushed to a registry.

For example, in the following example, USERNAME and PASSWORD will only occur in the history for the first stage ("builder"), but won't be in the history for the final stage;

FROM something AS builder
ARG USERNAME
ARG PASSWORD
RUN something that uses $USERNAME and $PASSWORD

FROM something AS finalstage
COPY --from=builder /the/build-artefacts /usr/bin/something

If only the final image (produced by "finalstage") is pushed to a registry, then USERNAME and PASSWORD won't be in that image.

_However_, in the local build cache history, those variables will still be there (and stored on disk in plain text).

The next generation builder (using BuildKit) will have more features, also related to passing build-time secrets; it's available in Docker 18.06 as an experimental feature, but will come out of experimental in a future release, and more features will be added (I'd have to check if secrets/credentials are already possible in the current version)
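
For reference, the BuildKit secret support that eventually shipped (18.09+) looks roughly like this; the secret id and file name are placeholders:

# syntax=docker/dockerfile:experimental
FROM alpine
# the secret is mounted as a tmpfs file for this RUN only and is never written to a layer
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

Built with:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=./mysecret.txt .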

@kinnalru @thaJeztah Thanks, I use multi-stage builds, but the password can still be seen in the cached container's history. Thanks!

@zeayes Oh! I see I did a copy/paste error; last stage must not use FROM builder ... Here's a full example; https://gist.github.com/thaJeztah/af1c1e3da76d7ad6ce2abab891506e50

This comment by @kinnalru is the right way to do this https://github.com/moby/moby/issues/6396#issuecomment-348103398

With this method, docker never handles your private keys. And it also works today, without any new features being added.

It took me a while to figure it out, so here is a clearer and improved explanation. I changed @kinnalru's code to use --network=host and localhost, so you don't need to know your IP address. (gist here)

This is docker_with_host_ssh.sh, it wraps docker and forwards SSH_AUTH_SOCK to a port on localhost:

#!/usr/bin/env bash

# ensure the processes get killed when we're done
trap 'kill $(jobs -p)' EXIT

# create a connection from port 56789 to the unix socket SSH_AUTH_SOCK (which is used by ssh-agent)
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
# Run docker
# Pass it all the command line args ($@)
# set the network to "host" so docker can talk to localhost
docker $@ --network='host'

In the Dockerfile we connect over localhost to the host's ssh-agent:

FROM python:3-stretch

COPY . /app
WORKDIR /app

RUN mkdir -p /tmp

# install socat and ssh to talk to the host ssh-agent
RUN apt-get update && apt-get install -y git socat openssh-client \
  # create variable called SSH_AUTH_SOCK, ssh will use this automatically
  && export SSH_AUTH_SOCK=/tmp/auth.sock \
  # make SSH_AUTH_SOCK useful by connecting it to hosts ssh-agent over localhost:56789
  && /bin/sh -c "socat UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:localhost:56789 &" \
  # stuff I needed my ssh keys for
  && mkdir -p ~/.ssh \
  && ssh-keyscan gitlab.com > ~/.ssh/known_hosts \
  && pip install -r requirements.txt

Then you can build your image by invoking the script:

$ docker_with_host_ssh.sh build -f ../docker/Dockerfile .

@cowlicks you may be interested in this pull request, which adds support for docker build --ssh to forward the SSH agent during build; https://github.com/docker/cli/pull/1419. The Dockerfile syntax is still not in the official specs, but you can use a syntax=.. directive in your Dockerfile to use a frontend that supports it (see the example/instructions in the pull request).

That pull request will be part of the upcoming 18.09 release.

It looks like this is now available in the 18.09 release. Since this thread comes up before the release notes and medium post, I'll cross-post here.

Release Notes:
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds

Medium Post:
https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
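
For quick reference, a minimal Dockerfile and invocation using the new --ssh support (the cloned repository is a placeholder):

# syntax=docker/dockerfile:experimental
FROM alpine
RUN apk add --no-cache git openssh-client
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# the host's ssh-agent socket is forwarded only for this RUN; no key material is stored in a layer
RUN --mount=type=ssh git clone git@github.com:myorg/private-repo.git

Built with:

DOCKER_BUILDKIT=1 docker build --ssh default .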

Very exciting.

I think we can close this because we have docker build --ssh now

Related compose issue here: docker/compose#6865. Functionality to use Compose and expose SSH agent socket to containers noted to be landing in next release candidate, 1.25.0-rc3 (releases).
