Moby: Allow specifying of a dockerfile as a path, not piping in.

Created on 8 Oct 2013  ·  160 Comments  ·  Source: moby/moby

It would be nice to be able to specify docker build -t my/thing -f my-dockerfile . so I could ADD files and also have multiple Dockerfiles.

Most helpful comment

So it's docker build -f Dockerfile.dev .? edit: yep

All 160 comments

I was just looking into this

Usage: docker build [OPTIONS] PATH | URL | -

so if you run

docker build -t my/thing my-dockerfile

it complains about the tar file being too short.

It seems to me that the 'PATH' option isn't documented, so it might have some legacy meaning?

So I wonder about detecting whether PATH is a file, and not a tarfile.

Personally, I have a set of Dockerfiles that I use for testing, and would much rather have them all in one directory and also have a full context.

That PATH refers to a directory, not a specific Dockerfile.

Oh, and then it tars that directory up to send to the server - cool!

So it's possible to detect if it's a file, tar up the directory it's in, and then replace the Dockerfile in that tarball with the specified file.

Or to use -f in the same way, allowing your Dockerfile definitions to live separately from the payload.

Now to work out how the tests work, try it out, and see if it works for me.

It doesn't tar anything up - the PATH is a directory in which it assumes there is a Dockerfile

That's not all it does with that PATH - reading the code, the 'context' is sent to the server by tarring up the directory.

(OK, so I still don't know Go, and I've only been reading the code for the last few minutes, so take it with a grain of salt.)

Correct, so any non-remote files referenced via ADD must also be in that same directory, or the daemon won't be able to access them.

Ah, I see what you're saying - yes, that's exactly what I want: a way to specify a Dockerfile with -f, and then the directory PATH that might be separate.

so I could have:

docker build -t my/thing fooproj
docker build -t my/thing -f debug-dockerfile fooproj

#2108 (adding an include directive to Dockerfiles) adds an interesting wrinkle.

Should the include be relative to the specified Dockerfile, or to the PATH? Not important yet, though.

fun:

docker build -t my/thing fooproj
docker build -t my/thing -f ../../debug-dockerfile fooproj
docker build -t my/thing -f /opt/someproject/dockerfiles/debug-dockerfile fooproj

As an extra bonus, there are no CmdBuild tests yet, so guess what I get to learn on first :)

@SvenDowideit are you working on this? I was thinking of maybe hacking on it today

I'm slowly getting myself familiar with the code and Go, so go for it - I'm having too much fun just writing the unit tests (perhaps you can use the testing commits to help :)

I will do :)

I would like to see this functionality too. We have a system which can run in 3 different modes and we'd like to deploy 3 different containers - 1 for each mode. That means 3 virtually identical Dockerfiles with just the CMD being different. But because the paths can only be directories, and the directories are the context for ADD commands, I cannot get this to work right now.

So: +1 from me!

Seems like this may have the same end goal as #1618 (though with different approaches). The idea there is to use a single Dockerfile with multiple TAG instructions that result in multiple images, versus multiple Dockerfiles and an include system as outlined here. Thoughts?

It seems as though if you can pipe a Dockerfile in, you should be able to specify a path as well. Interested to see what comes of #1618 but I think this offers many more possibilities.

I was thrown by the fact that the documentation doesn't state clearly that the directory containing the Dockerfile is the build context. I made the wrong assumption that the build context was the current working directory, so when I passed a path to the Dockerfile instead of having it in the current directory, the files I tried to ADD from the current working directory bombed out with "no such file or directory" errors.

I'm getting the same error. Any ideas?

docker build Dockerfile


Uploading context

2013/12/11 21:52:32 Error: Error build: Tarball too short

@bscott try docker build . instead. Build takes a directory, not a file, and that's the "build context". :)

Worked, thx! I just would like to choose between different Dockerfiles.

+1 from me.

I need to create multiple images from my source. Each image is a separate concern that needs the same context to be built. Polluting a single Dockerfile (as suggested in #1618) is wonky. It'd be much cleaner for me to keep 3 separate <image-name>.docker files in my source.

I'd love to see something like this implemented.

This is more difficult to implement than it would at first seem. It appears that ./Dockerfile is pretty baked in. After initial investigation at least these files are involved:

api/client.go
archive/archive.go
buildfile.go
server.go

The client uses archive to tar the build context and send it to the server, which then uses archive to untar the build context and hand the bytes off to buildfile.

The easiest implementation seems like it'd involve changing client and archive to overwrite the tar's ./Dockerfile with the file specified via this option. I'll investigate further.

@thedeeno I'll take a look really quick and show you where the change should be made. I think it is only in one place.

+1 from me!

I've been following both #1618 and #2112 and this is the most elegant solution.

There's one particular use case in my development where this feature would be incredibly handy: working on applications that have both "web" and "worker" roles. I would like to create two Dockerfiles for this situation, "Dockerfile-web" and "Dockerfile-worker". I could then build them both, tag them, and push them to my image repository. I would then run multiple web front-end instances behind a load-balancer and multiple workers to handle the tasks being pushed into the queues.

+1 as an alternative to #2745.

+1

I was astounded to find that Dockerfile is hardcoded in, as well as that the build context is forced to be the Dockerfile's directory and can't be overridden even with command-line flags. This severely limits the usefulness and flexibility of Docker as a development, test, and deployment tool.

+1
I'd appreciate that change

+1
Really need this.

#5033 should allow this feature. cc @crosbymichael

@shykes what do you think about this change? I don't think you agreed, or maybe you know of a better solution for solving the same problem.

I'm hesitant.

On the one hand, I don't want to limit people's ability to customize their build.

On the other hand, I worry that the same thing will happen as with _run -v /host:/container_ and _expose 80:80_. In other words, it will allow the 1% who know what they're doing to add a cool customization - and the other 99% then shoot themselves in the foot quite badly.

For example, we have a lot of new Docker users who start out with host-mounted volumes instead of regular volumes. And we had to deprecate the _expose 80:80_ syntax altogether because too many people published images which couldn't be run more than once per host, for no good reason.

So my question is: don't we risk having lots of source repositories which cannot be built repeatably with _docker build_, because now you have to read a README which tells you to run a shell script which then runs 'docker build -f ./path/to/my/dockerfile', simply because you didn't feel like putting a Dockerfile at the root of the repository? Or perhaps because you're a beginner user and just copy-pasted that technique from an unofficial tutorial?

Being able to drop a source repo, and have it be built automatically without ambiguity or human discovery, is one of the reasons Dockerfiles are useful. Doesn't this pull request introduce the risk of breaking that in a lot of cases, for basically no good reason?

@shykes I'm running into the problem you describe _because_ of this Dockerfile limitation. Here are a couple of use cases:

  1. I have a Docker-based build environment that produces an artifact (a JAR file in this case). The build environment is different from the run environment (different dependencies, larger image, etc.), so I don't want to inherit the build env into the runtime. It makes the most sense to me to have the Dockerfile build and run the runtime env around the JAR. So I have a separate Dockerfile.build file that builds and runs the build env and creates the JAR. But, since I can't specify the Dockerfile, I had to create a scripts/build file that does a docker build < Dockerfile.build and then mounts the host volume w/ docker run -v ... to run the build (since I can't use ADD w/ piped-in Dockerfiles); a sketch follows the list. What I'd like to do instead is just be able to run docker build -t foobar/builder -f Dockerfile.build, docker run foobar/builder, docker build -t foobar/runtime, docker run foobar/runtime and just use ADD commands in both Dockerfiles.
  2. With ONBUILD instructions, I'd like to be able to put Dockerfiles into subdirectories (or have Dockerfile.env, etc. files in the root) that have the root Dockerfile in their FROM instruction, but can still use the root of the project as their build context. This is useful for, for example, bringing in configuration parameters for different environments. The root Dockerfile would still produce a useful container, but the others would create different variants that we need.
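
A condensed sketch of the flow described in item 1 - the /build mount point is hypothetical, the rest follows the comment:

docker build -t foobar/builder - < Dockerfile.build   # piped-in Dockerfile, so no ADD available
docker run -v "$PWD":/build foobar/builder            # build the JAR via a host-mounted volume
docker build -t foobar/runtime .                      # runtime image from ./Dockerfile
docker run foobar/runtime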

So I guess the thing I'd really like is to be able to separate the concepts of "build context" from "Dockerfile location/name." There are lots of potential ways of doing that, of course, but this seems like a relatively straightforward one to me.

@cap10morgan any chance you could point me to your example 1, or something which approximates it? So I can play around and make sure we're talking about the same thing.

@shykes isn't this dream of portability exactly what the image index is for? Why do build environments need to be portable as well?

Some build processes require more than what's possible with a vanilla Dockerfile. Further, enforcing one Dockerfile per context (which is what this request aims to fix) is very limiting.

This really affects our team. In our setup we'd like to produce multiple images from the same "context". By adding the --file option we can do what we really want to do - which is to add foo.docker and bar.docker to our repository root. Instead, we end up writing bash scripts to copy tons of files into temp locations to establish a context, and to rename our named docker files to the magic Dockerfile just so docker build can work.

To me, these hoops exist for no good reason. If someone on our team wants to build our images they need to run custom scripts to get the job done - the exact thing you think you can avoid by sticking with this limitation.

@shykes since this seems to be getting 'wontfixed' can you point to docs or an explanation of the Docker certified way to deal with this problem?

My use case is I have an app I would like to build with docker where the tests have different requirements than a production image would. I think this is pretty common.

This and the .dockerignore issue (https://github.com/dotcloud/docker/pull/3452) (copying my entire .git directory to the build context) are the first somewhat critical pain points I hit when trying out docker for one of our projects over the past few days. They seem like no-brainers to me, but there appears to be pushback on these issues.

+1 on needing this as well - we have a single code base with multiple micro-services being developed in there. We would like to be able to generate multiple images: for example, one image containing services/base/* and services/fooservice/*, and another image containing services/base/*, services/barservices/*, and also staticfiles/*.
We cannot place a Dockerfile inside each service folder, because Docker then treats that folder as the context and we cannot ADD anything that is higher up in the folder structure. We cannot place two Dockerfiles in the root because they would have the same name.
The only solution we have is to place the Dockerfile in the service folder and then write a custom script to parse the selected Dockerfile, determine what paths it will want to add (all specified relative to the project root), copy them to a new temporary folder, copy the selected Dockerfile to the root of that new folder, and finally run "docker build tempfolder" (sketched below). Is that REALLY necessary?
It would be so much simpler if you could simply tell "docker build" separately where the Dockerfile is and where the context is.
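
A rough sketch of the kind of staging script being described, assuming the layout above (service and image names hypothetical):

tmp=$(mktemp -d)
mkdir -p "$tmp/services"
cp -a services/base services/fooservice "$tmp/services/"  # paths the Dockerfile ADDs, relative to project root
cp -a staticfiles "$tmp/staticfiles"
cp services/fooservice/Dockerfile "$tmp/Dockerfile"       # promote the selected Dockerfile to the context root
docker build -t mycorp/fooservice "$tmp"
rm -rf "$tmp"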

Running into this problem as well... would love to see some sort of fix.

+1, coding around this limitation is a huge pain.

+1 for -f flag that specifies Dockerfile path separate from build context

Docker is such a great tool, but this tight coupling of context and a specific Dockerfile definitely reflects a bit poorly on the Docker team. The directory structure does not always mirror the 'container/subsystem' structure exactly.

How hard can it be to fix? There have been a few PRs for this, but they have gone ignored.

A related simple addition is to have an ignore file for stuff we don't want to add to the context -- yes, I am looking at you, .git ...

This might actually lead me to skip Docker for now, and go back to Vagrant + Ansible. Nice.

@davber .dockerignore is in -> #6579

But yes, contribution management is a little bit painful right now, and one of the rare disheartening things about Docker so far. I am hoping they are just a bit overwhelmed.

Docker team, how can people get things like this reviewed and in? If there is a clear thought about why this is really actually bad, and how projects should avoid this problem, we would love to hear it, but this seems like a simple problem to describe and a simple problem to solve.

/cc @shykes @creack @crosbymichael @vieux

+1 for -f flag. Also a DOCKERFILE environment variable to override build settings.

@jakehow yes, there is definitely an element of being overwhelmed: for every maintainer there are at least 1,000 people actively requesting something, and demanding a detailed explanation as to why it hasn't been done yet. We are really doing our best, and if you visit the #docker and #docker-dev irc channels you will witness the firehose first-hand.

But in this case there has been a clear answer as to why it's better to keep the 1-1 mapping of Dockerfile and source directory. I will repeat it once more: we want to preserve the self-describing property of docker builds. The fact that you can point at a source code directory, _with no other information out-of-band_, and build an image out of it exactly to the specification of the upstream developer, is very powerful and a key differentiator of Docker.

Sure, being able to dynamically combine directory X with Dockerfile Y on-the-fly is mildly convenient (although I've never felt a burning need for it, and I use Docker a lot). But it makes it way too easy to do something stupid - namely, to break the self-describing nature of your build. And that seems like a massively poor tradeoff. Hence the decision to not implement this feature.

As for the "1 source repo, multiple images" examples, yes, I definitely agree that is needed. What we need is a Docker-standard way for a single source directory to define multiple build targets. That could be done with a multipart Dockerfile, or with multiple Dockerfiles _as long as there is a well-defined filename convention which allows these Dockerfiles to be listed and selected in a non-ambiguous way_.

If somebody wants to contribute this, we would love to merge it. And as usual we will be happy to help first-time contributors, come say hi on IRC and we'll get you started.

@davber I was under the impression that Vagrant has the same restriction of 1 description file per project (presumably for similar reasons). How will switching back to Vagrant solve your problem exactly?

@gabrtv do you want to try and bring multi-image back? Here's a tentative proposal:

$ ls | grep Dockerfile
Dockerfile
db.Dockerfile
frontend-prod.Dockerfile
frontend-dev.Dockerfile
$ docker build -t shykes/myapp .
Successfully built shykes/myapp/db
Successfully built shykes/myapp/frontend-dev
Successfully built shykes/myapp/frontend-prod
Successfully built shykes/myapp
$ docker build -t shykes/myapp2 --only frontend-dev .
Successfully built shykes/myapp2/frontend-dev

That proposal would work fine for us and I can see it solving other problems for other people too.

@shykes I don't think that I buy that increasing the complexity of the Dockerfile format is a worthwhile tradeoff for keeping to one Dockerfile.

Take makefiles, for example - it's more common than not for a project to have a 'Makefile', but make also allows you to specify a different makefile with '-f'.

I also think that the 1:1 mapping between source folder and image is less useful than you make out - I've found that it's common to generate a build folder and build from within that, to keep the amount of data copied into the context to a minimum, and thus get smaller images.

you say "So my question is: don't we risk having lots of source repositories which cannot be built repeatably with docker build ,"

But I think this is already the case: some projects' requirements mean that a single way to build isn't practical, and the Dockerfile syntax doesn't allow for a whole lot of configurability (by design, and I wouldn't want that to change...).

I also don't particularly like the --only suggestion. I think the default that people /want/ when they build is for a certain context to be built, not everything - when I run docker build, I want /only/ the Dockerfile to be run. But I also want the ability to have another image that /can/ be built.

As per my understanding, the issue here resides in the fact that when piping a Dockerfile (cat Dockerfile | docker build -) we lose the working directory for ADD and COPY.

Would adding a flag to docker build that lets us specify where the source folder contents reside solve the issue?

cat Dockerfile | docker build --cwd=~/my_docker_data_files -

In this proposal, the --cwd flag specifies the base folder for ADD and COPY commands.

@llonchj actually https://github.com/dotcloud/docker/pull/5715 solves that (in a way) and that is already merged to master.

@shykes definitely open to another multi-image PR, though after looking at #5715 I'm wondering if this particular PR is relevant anymore.

Per @peterbraden's original comment...

Would be nice to be able to specify [command redacted] so I could ADD files, and also have multiple dockerfiles.

Per @cap10morgan...

So I guess the thing I'd really like is to be able to separate the concepts of "build context" from "Dockerfile location/name."

Isn't that exactly what #5715 does? As I understand it:

$ cd myapp
$ ls Dockerfile
Dockerfile
$ docker build -t gabrtv/myimage .
Successfully built gabrtv/myimage
$ ls subfolder/Dockerfile
Dockerfile
$ tar -C subfolder -c . | docker build -t gabrtv/myotherimage -
Successfully built gabrtv/myotherimage

I know this doesn't solve multi-image builds from the repo root (which I agree would be nice), but wondering if this particular PR/implementation can be put to rest now with #5715.

@gabrtv It seems to me that #5715 doesn't fit @shykes' criterion of having a single command that will always work. tar -C subfolder -c . | docker build -t gabrtv/myotherimage - is, to me, extremely non-obvious.

In addition, it imposes annoying (if not onerous) constraints on the developer regarding repo layout. Not unfixable, but "Oh, and I'll have to reorganize the entire repo" isn't going to help anyone to convince their boss to switch to Docker.

I personally quite like the most recent suggestion by @shykes, with potentially multiple .dockerfile (why not just .docker?) files, all being built by default. This makes it easy to get up and running with one command no matter what, and optimize later.

Depending on the implementation, one could upload the build context only one time for this across all images, which would reduce the cost of building multiple images. Because I don't have a .dockerignore, my build context is several hundred megabytes and I have to wait quite a while for my context to upload, so this would be nice.

@rattrayalex believe me, I agree with the goal of "having a single command that will always work". There's a good chance I'll end up writing the code and submitting the PR. :wink:

My point is from a project organization standpoint we should have that discussion elsewhere rather than on a drawn-out issue titled Allow specifying of a dockerfile as a path, not piping in:

  1. Most of the discussion here is unrelated to @shykes' new proposal in https://github.com/dotcloud/docker/issues/2112#issuecomment-47448314
  2. The original underlying issue (separating build context from Dockerfile) is addressed by #5715, though maybe not to everyone's liking

Quoting @llonchj...

As per my understanding, the issue here resides in the fact that when piping a Dockerfile (cat Dockerfile | docker build -) we lose the working directory for ADD and COPY.

In other words, if you were OK with docker build -t <image> -f <path-to-dockerfile>, you can use tar -C <path-to-context> -c . | docker build -t <image> - as a functionally equivalent workaround until we can put a PR together for @shykes proposal.
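
For a non-standard Dockerfile name, the same trick still works by renaming the entry while tarring - a minimal sketch, assuming GNU tar and a hypothetical Dockerfile.dev inside the context:

tar -C <path-to-context> --exclude=./Dockerfile \
    --transform='s|^\./Dockerfile\.dev$|./Dockerfile|' -c . \
  | docker build -t <image> -

The --exclude drops any original Dockerfile so the renamed Dockerfile.dev is the only one the daemon sees.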

I suggest we close this and start up a new discussion elsewhere (mailing list? new issue?).

Thanks to everyone who chimed in on this issue today:

@shykes thanks for the reasoning. I think our reasoning for wanting this (other people can chime in) is, in our experience: every application requires out-of-band information to run in a chosen context. All of our applications have more than one context. The way docker build was structured forced application developers to move the knowledge of that information into their repository structure, or to make custom build scripts overriding the behavior of docker build which contain that information.

@gabrtv I think almost all of the discussion here is related specifically to this issue, and glad to see it finally getting discussed here. The title of the issue and a few of the 'discovery' posts are probably slightly irrelevant, so a new issue or PR with a clean start that references this one is probably good.

The proposal from @shykes' comment sounds like a solution to me; was there previous work done on this somewhere that can be picked up or looked at for reference?

Wouldn't "Dockerfile.frontend-prod" be better so that all the Dockerfiles
in a directory naturally sort together?

Sorry, but that's just silly. If my repo builds 100 services, I certainly don't want to have 100 Dockerfiles at the root. (Example: the dotCloud PAAS repo.)

I don't want one single humongous Dockerfile with 100 sections, either.

I want 100 Dockerfiles, and I want them in separate directories.

(And no, I can't just build each service in its own directory, since almost all services will use common code and libraries living _outside_ of each service's directory.)

Again, I don't understand why having an extra flag to specify the path to the Dockerfile (defaulting to just Dockerfile) is such a big deal?

Additionally, the current proposal (<imagename>.Dockerfile) doesn't map at all with the current design of Automated Builds. If one repo on any of my build systems is compromised, the attacker could overload all my repos...?

@jpetazzo like most reasonable people I welcome disagreement but dislike being called silly. Please make an effort to present your arguments without insulting the intelligence of your peers. Thanks.

_Again, I don't understand why having an extra flag to specify the path to the Dockerfile (defaulting to just Dockerfile) is such a big deal?_

I made a specific argument above. Feel free to read it and present a counter-argument.

Re: clutter. @gabrtv had the same concern on IRC. After some discussion we came up with this alternative:

$ ls Dockerfile Dockerfiles/*
Dockerfile
Dockerfiles/frontend-prod
Dockerfiles/frontend-dev

Same idea, but fewer individual files lying around in the repo root.

Additionally, the current proposal (<imagename>.Dockerfile) doesn't map at all with the current design of Automated Builds. If one repo on any of my build systems is compromised, the attacker could overload all my repos...?

Well, how are they named now? If you have a repo named "abc" with just a plain Dockerfile, then it would make sense for the image generated by it to be named "abc". If you have a repo "xyz" with "Dockerfile.abc" and "Dockerfile.zzz" in it, then it would make sense for you to name builds from those "xyz-abc" and "xyz-zzz".

I am in a similar situation as you (lots of services, each in their own sub-folder, with common parts), and an explicit option to set the Dockerfile to use at build time would make my thinking easier. But I can also see how I could combine the "Dockerfile.*" proposal with the already released "tar file as context" to get what I want: the root Dockerfiles for each service would generate their build images (with compilers and stuff), and when those are run they would output a tar of a context with already-compiled executables and a production Dockerfile (taken from the service folder itself) describing an image with only the runtime dependencies.

In fact I am now even writing this with a single root Dockerfile because my build environment is more or less the same for all services and I can simply pass it a parameter at runtime to tell it which service release context tar I want it to create.
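
A minimal sketch of that two-stage flow (image and parameter names hypothetical): the builder image writes a tarball to stdout - compiled executables plus a runtime Dockerfile at the tar root - and docker build accepts that tar as its context:

docker build -t mycorp/builder .               # root Dockerfile: the shared build environment
docker run mycorp/builder fooservice \
  | docker build -t mycorp/fooservice -        # emitted tar becomes the runtime build context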

@shykes well, if we got as far as having Dockerfiles/*, then why not go a step further and have "docker build" simply search for files named "Dockerfile.*" in all subfolders of the current directory, recursively, and run all of them in the context of the current directory? And while we are at it, supply an option to just build one of them... like "-f"? :)

... dislike being called silly

I didn't know that "silly" could be felt to be offensive; I merely meant "clumsy" maybe, and therefore apologize for misusing the word.

I think your specific argument was "I don't want people to start putting Dockerfiles in random places and systematically using -f when it's not strictly necessary". Is that right? In that case, it makes sense to mandate one Dockerfile at the top of the repo, listing the other Dockerfiles. If that file is absent, the builder can refuse to operate (or issue a warning), and if it lists only one Dockerfile, the builder can also issue a warning. I don't think the parallels with -v and -p are appropriate here.

@aigarius if we did that it would be less obvious how to map each sub-Dockerfile to a child image. The advantage of both ./Dockerfile.* and ./Dockerfiles/* is that they map to a single namespace which we can use to name the resulting images. By contrast, if I have ./foo/Dockerfile.db and ./bar/Dockerfile.db, and I docker build -t shykes/myapp ., which one gets called shykes/myapp/db?

which one gets called shykes/myapp/db?

Neither. In such a scenario you would encode the full path to the Dockerfile into the name, so you'd have "shykes/myapp/bar/db" and "shykes/myapp/foo/db". This _does_ get uncomfortably close to Java namespacing, but I'll leave it to others to decide if that is good or bad.

@shykes: regarding Vagrant, well I, and most likely quite a few others, use Vagrant for some overlapping functionality of Docker. So, for me, I need to be convinced that my current multi-machine builds -- which I do in Vagrant, with multiple machine definitions -- do not become more awkward when using Dockerfiles. The thing is that the "context" is much more flexible in Vagrant, where I can on the fly get the context from any location by shared dirs, and so is the language, so if I just want to build one service (or machine) I can set an environment variable and 'vagrant up/provision' it. I use that method daily to just create or provision a few of the services in my system. And, if I get too irritated by a growing Vagrantfile, I can easily 'load' a sub-Vagrantfile describing an individual service or aspect.

Regarding the concrete suggestions for handling multiple Dockerfiles: yes, that would enable us to partition our logic by individual service. Great. But I often want or need to build one service in isolation, so I am still back to needing to specify one specific Dockerfile...

So, my issues here are two-fold:

  1. Not getting a huge messy file for all services
  2. Being able to build one specific service in that file/cluster of files

Again, Docker is a genius idea and tool, so I WANT people like myself to give up on Vagrant for certain kind of functionalities or use cases.

Sorry if I repeat something treated thoroughly above, but I fail to see how any solution is better than supporting an '-f' option. I.e., modulo implementation problems due to some deep-rooted dependency on the Dockerfile residing in the context root, I don't see any disadvantage with it.

With an '-f' option, one could definitely structure the subdirectories and sub-Dockerfiles however one pleases, such as the Dockerfiles/ approach above. Yes, one would then need a simple bash one-liner to mimic the exact behavior...

There seems to be tension between sharing and flexibility.

If sharing is the most important feature, then yes, a standard Dockerfile makes sense.

I don't think sharing is the most important feature. Sharing is achieved via images.

Further, we use Docker on projects that are explicitly NOT sharable. In these cases flexibility is much more valuable for us. Sounds like many in the thread are in this same position.

The proposed -f option gives us the flexibility we need. On the flip side, being able to specify an absolute path for the build context would also work.

Agreed that -f supplies the most flexibility with minimal "surprise" (I personally assumed there was such a flag and tried it before coming to this thread).
Adding a -c that takes either a directory or a tar, to provide the context as well, also seems like a terrific addition. This would remove constant cd-ing around in large build processes, e.g. at companies.

Furthermore, a README that contains instructions to docker build -f foo/bar/Dockerfile -c foo/ just doesn't sound like it would be off-putting to me.
Also agreed that docker build could easily search for Dockerfiles. This way, a repo structured like this:

-- README
-- foo/
---- Dockerfile
---- etc
-- bar/
---- Dockerfile
---- etc
---- subbar/
------ Dockerfile
-- context/
---- etc

would avoid any possible naming conflicts (repo/bar/subbar), be easy to reason about, and should make sense in many dev contexts. It would allow for easy use of both docker build, which would build many contexts, and docker build -f, which could leave off the extraneous Dockerfile in most contexts.

If -f points to a file rather than a directory containing a Dockerfile, that should work too, so that a developer can have a Dockerfile that is not built when plain docker build is run.

Just so this does not get lost: yesterday in an IRC discussion with @shykes, he had another idea for implementing this (a rough sketch follows the list):

  • There is a master Dockerfile in root of the repository that contains only several lines in a new format such as "INCLUDE foo/bar/Dockerfile AS bar-foo" for each image to be generated
  • Each sub-project (such as the bar module of the foo subproject) that needs a separate image maintains a regular Dockerfile at foo/bar/Dockerfile. Paths in that Dockerfile (for ADD and COPY) are still relative to the context root, which is the root of the repository (where the master Dockerfile resides)
  • Regular call to "docker build -t mycorp/myapp ." will build all images registered in the root Dockerfile and assign them names such as "mycorp/myapp/bar-foo" in this example
  • Additional command line option to "docker build" would need to be introduced to only build some of the declared images, such as "docker build -t mycorp/myapp --only bar-foo,baz-db"
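
A sketch of what that master Dockerfile could look like under this proposal - the INCLUDE syntax here is illustrative only, taken from the bullet points above, and was never merged:

# ./Dockerfile at the repository root
INCLUDE foo/bar/Dockerfile AS bar-foo
INCLUDE baz/db/Dockerfile AS baz-db

"docker build -t mycorp/myapp ." would then produce mycorp/myapp/bar-foo and mycorp/myapp/baz-db, while "docker build -t mycorp/myapp --only bar-foo ." would build just one of them.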

Just in case people are still looking for a workaround:

#!/bin/bash
# Alias the variant to the expected ./Dockerfile name, build, then clean up.
ln -s Dockerfile-dev Dockerfile
docker build -t image/name-dev:latest .
rm Dockerfile

@shykes ping? If there is a core developer approval for a specific design solution, then it is much more likely that someone would try to implement that and make a pull request.

+1 to the -f solution. While I agree that in theory each repo should deploy its own service, some people don't have the luxury of that solution. My particular issue is that I have a chef repo with multiple cookbooks in it, and I need to be able to build many services. But I need to be able to ADD directories from higher up. So keeping the context at the root and pointing to a Dockerfile in a subdirectory is optimal.

"-f" is probably not going to be implemented for the reasons already given.

It is imperative that a build works exactly the same from host to host, and having things that rely on the host being set up a particular way (i.e., Dockerfile in some random place and context in some other place) breaks this contract.

I think #7115 is probably the proper solution for this

There is also #7204 which would also fit much of the use-case for this, I think.

I dislike both of the two proposed solutions involving adding syntax to the Dockerfile. Neither of them seems straightforward. #7115 still doesn't seem like it would solve the use case where I want to generate several types of image from a single repo, as I am doing, for example, in one of my projects to generate a 'test' image and a 'prod' image. #7204 seems like an over-complicated solution that solves a different problem: -f adds _configurability_ to a build, whereas both of these solutions address _sub-builds_.

I'd really like some explanation of "It is imperative that a build works exactly the same from host to host". I don't see why this is such a key principle.

As it is, I'm using make to generate the docker environment as a workaround to this whole issue, which seems like it already violates your host-to-host run-the-same imperative.

To me, -f seems like a pragmatic solution, but if it's not acceptable, then perhaps we can think of a solution that doesn't involve turning the dockerfile into a fully fledged programming syntax.

It is imperative that a build works exactly the same from host to host

I too would like to hear more about why this is key. Our team is currently forced to script the build process - the exact thing you're trying to avoid.

When I download an image from a registry it just works. I don't need to build it. Isn't that achieving host-to-host repeatability? Why do builds also have to be host-to-host repeatable?

As far as I understand, the key idea is that you can pull a git repo, and if you see a Dockerfile there, you can execute "docker build -t mytag ." in it and get the full expected build. The key here is that there are no custom build instructions that people might get wrong or not know about.
What _could_ be a compromise is a way to define multiple Docker images in a single context, where by default all of them are built with pre-defined sub-tags, and where you then have an option to build only one of the many possible images.
This way the common build syntax is preserved while also allowing both: 1) generating a lot of different images from one context; 2) choosing to generate only one of them with an "advanced" option.
The proposed syntax is described in my comment above - https://github.com/docker/docker/issues/2112#issuecomment-48015917

@aigarius thanks, I think at this point we should open a new design proposal for the _INCLUDE_ syntax. For examples of design proposals see #6802 or #6805.

The process looks like this:

  • Submit design proposal
  • Discuss design proposal
  • I approve design proposal
  • Send PR for design proposal

Let's be honest, builds aren't really repeatable now.

If I have RUN apt-get update && apt-get install build-essentials ruby python whatever in my Dockerfile, then I'm already at the mercy of whatever is "latest" in the repos at the point that I build the image. If YOU build the image a week later, you may well get different versions of dependencies to the ones I got.

@codeaholics But that is on you for not pegging a version and is completely out of Docker's control.

Probably a fair comment

@codeaholics you are right that the repeatability of builds is not perfect, although there are options for controlling it (like version pinning) and we should provide more. But at least the build always has the same _entry point_, which means that we _can_ introduce better repeatability down the road. However, once your build depends on a third-party tool upstream of Docker itself, there is no going back: the repeatability of your build is basically zero, and no future improvement of Docker can fix it.

It's not just about repeatability, either. If your build depends on a particular set of arguments, not discoverable by _docker build_, then your custom tool is the only build tool in the World which can build it: you can no longer benefit from the various CI/CD tools and plugins which assume a native, discoverable _docker build_.

@peterbraden in #7115, once you add the ability for multiple PUBLISH calls, you can produce multiple images. Then you can add configurability by allowing selection of which sub-images to build. The key is that configurability takes place _within the set of sub-images discoverable by Docker_. That way you preserve discoverability.

@shykes > once your build depends on a third-party tool upstream of Docker itself ... the repeatability of your build is basically zero.

I agree. I think most of my Dockerfiles depend on apt, and the apt repositories are constantly changing. A build today is not the same as a build yesterday. A built image is the only gold standard, imo.

If repeatability isn't the primary reason for the restriction, CI plugin support seems like a pretty weak fallback. Build tools are designed to support idiosyncratic use cases. A plugin that does nothing more than configure a docker build cmd seems pretty useless. My build tool is already a scripting environment. Worse, shoveling all the potential use cases into the Dockerfile over time seems like a bad way to architect it. The Dockerfile should stay focused on simply producing a new image by adding layers to a base image. Other work seems out of band.

I love Docker so far. Awesome work guys! This particular issue just happens to be a pain point.

@thedeeno the goal of the Dockerfile is to specify how to transform a source code repository into something Docker can run.

How about "the goal of _a_ Dockerfile is to specify how to transform a source code repository into something Docker can run."

I think we can agree that there is a real need for a way to have multiple repo->image mappings. As it is, a lot of us are abusing shell scripts and makefiles to do this. In my experience, I haven't seen many custom-named Makefiles - configuration and defaults are a powerful force. Because of this, I am not convinced that people abusing the conventions is a realistic scenario.

The alternative that you propose, adding a keyword to the Dockerfile syntax, makes creating Dockerfiles more complex. I'm in favor of keeping that syntax as simple as possible. PUBLISH et al. seem inelegant compared to the rest of the Dockerfile syntax.

There's also the idea of following existing tool conventions. As has been mentioned, -f is such a common pattern that people try it without even knowing it works. A UI should be intuitive, and people _get_ -f flags. Grokking new Dockerfile syntax isn't nearly as intuitive.

@peterbraden @shykes I agree on -f being simpler and intuitive. I originally found this issue because I _expected_ Docker to already have this feature and searched for it (presumably when it didn't work by default) on the first day I started testing it for our applications.

This is not an edge case unless you are in a situation where you do not test your software or run your software in multiple environments. Every repo I have touched in the past 10 years has a requirement to be able to run in different contexts. I think a lot of people on this thread have the same issue. We are all in agreement (I think) that having convention is obviously valuable for the default context for your application (assuming you have one).

I read through #7115 (quickly) and do not understand how it solves this problem for us, perhaps this is an issue of documentation on the PR but if it is hard to communicate, it will cause frustration and errors IMO, while -f or a similar flag would require little effort to explain and use correctly.

Created a new proposal with that INCLUDE idea - https://github.com/docker/docker/issues/7277

As a side note - it might be worth it to think about a more efficient "APT" command instead of clunky "RUN apt-get install -y ...." (and then subsequently YUM, EMERGE, PACMAN and whatever else)

Oh, and considering that a lot of people actually just try the "-f" option anyway, it would be cute if, whenever you try to use it, you got a nice error message directing you to a page in the documentation describing the new solution.

I really think we are all ignoring the elephant in the room.

@shykes is saying things like "The fact that you can point at a source code directory, with no other information out-of-band, and build an image out of it exactly to the specification of the upstream developer, is very powerful and a key differentiator of Docker."

I'm pretty sure this is equivalent to saying "typing docker build . in a directory should absolutely always do the right thing", which is equivalent to saying "If you have a project where there is not an obvious right thing for docker build . to do, you're wrong"

I can't speak for the motivations of other people subscribed to this issue, but my own motivations for this issue are exactly those cases where docker build . has no obvious behavior. Did I want a debug version or a release version? Did I want postgres baked-in or should I emit separate app and postgres images? Did I want it loaded with test data or production data? Did I want to run the tests?

Sure, I would like to live in a world where people didn't have to answer these questions to get a build. But I don't live in that world. Of course I could arbitrarily pick some answers to those questions and define a behavior for docker build .. But the fact of the matter is that building something arbitrary when the user types docker build . isn't "doing the right thing".

The reality of the situation is that there are projects where there is no obvious right thing to build. There are a lot of ways to solve that inside docker, like -f support, or making the user choose from a list of targets imported from somewhere else, or various other proposals. But they all have the property that docker build . breaks, by design. There's a fundamental impedance mismatch here, you can't just proposal your way out of it.

The idea that every project can choose something sensible to do for docker build . is a fantasy. Projects that cannot possibly do something sensible exist, and in number. The only question at this point is whether Docker is going to support the multi-buildproduct nature of those projects directly or whether they will fall back to traditional buildtools like make or even shell scripts. This is happening now: for example Discourse, one of the larger distributed-on-Docker projects, uses a homegrown build system in part to solve this problem.

@drewcrawford you see, that is exactly the problem. "docker build ." should build _the_ image. The one and only true image. You are supposed to change the docker run parameters to adjust the behaviour of the one and only image to different environments.
In this context you likely want a release version, just the app, nothing else. All the other things, like "the postgres image" are different things that get linked to your app at runtime. You can take the same release image and run it with a non-default command to execute your tests in it.
Ideally, with multi-image support you don't even have to choose between "release", "debug" and "with tests" builds - you just have all of them, with a base build and 3 small difference builds on top.
If you try to bend Docker to the old way of thinking, you are going to have a bad time. Do not try to hammer a screw with a screwdriver.

@aigarius I think the problem is that this thinking does not apply to real applications, and no one has explained how it does if we are truly supposed to have a paradigm shift here.

Allowing multiple images is the most simple and intuitive way to support the requirement that most real-world network-accessible applications have (multiple environments with different dependencies). Is making the grammar more complicated, in a way that is hard to communicate, a desirable way to solve this? In the end you are just re-engineering and obfuscating the fact that you have > 1 environment with different dependencies - something that could easily be expressed in a simple manner, like multiple Dockerfiles, which needs almost no explanation to end users of the software.

@drewcrawford provided a concrete example for how this is dealt with in a current real world popular application that is distributed with Docker support. How should they be doing this?

@aigarius It's not that I don't understand "the docker way". I just think it's wrong.

For example, the tests may need a different build--an honest-to-god _recompile_--because certain logging code that is too slow for production must be conditionally compiled in.

If this sounds like a job for a make system, you're right--and that is in fact what people like me do, they find ways to script docker builds using actual build systems.

The problem here is that a Dockerfile cannot be both the preferred user interface for building and also substantially less powerful than e.g. make. It can be either the preferred user interface for building, with makelike power, or it can be a low level tool with make etc. as the real user interface. But "real" build systems aren't arbitrarily complicated--they are complicated because people use the features, and no amount of simplified docker builds are going to change that. All that will happen is docker will become the new cc, that people invoke with make.

@shykes @drewcrawford @aigarius I have written up a proposal for a simple -f option in the same fashion as @shykes described earlier in this thread here: #7284

@drewcrawford you're right, to be the preferred user interface for build, docker build does need to be at least as powerful as eg. make. There are 2 ways to do this:

  • 1) We could try to re-implement every feature of every variation of make, as well as every alternative to make out there: Maven, scons, rake, and of course good old shell scripts.

OR

  • 2) We could recognize that these build tools are themselves executable programs, which means they have runtime dependencies, and were themselves built from build dependencies - and so on all the way down. And we could provide a way for you to _actually use_ your favorite build tool. So we wouldn't need to re-implement make because we can actually build and run make.

When you say make, which flavor of make are you referring to? Which version? Built from which svn checkout exactly? With which build options? Which libc is it linked to? Same question for cc, which you also mention. Maybe it doesn't matter - maybe your app will turn out exactly the same regardless of which random version of make I end up using to build it. But again, maybe it does matter. Maybe the only way for me to get exactly the same build as you is to use exactly the same builds of make and cc that you used. And there's no easy way to do that today - but that's what I'd like to do.

I don't really think of docker build as a build tool, exactly - more as a meta-build tool: a known starting point from which you can reconstitute any build environment, reliably. That's the motivation for #7115.

@shykes I don't think it's necessary to really replicate _every_ feature from make, Maven, rake, etc. After all, these tools don't replicate every feature of each other, and somehow they get along.

However, it is necessary (if Docker is to be the preferred user interface for building images) that the Dockerfile become a more expressive language than it is currently.

The proposal in this issue is really rather a modest move along that line: it is saying "We need an equivalent to make's _targets_". The question of "what version of make" doesn't really enter into the picture--the concept of a "target" is common to make, Maven, rake, and every other serious build system. They have different syntax but the fact that the feature itself is universal should be a clue that this is a thing that people frequently do when they build things.

It does not matter if the thing they are building is a C program, a Docker image, or anything in-between--you are still building some target. The idea of a build target is so fundamental that it spans build system implementations, it spans programming languages, it spans operating systems. We are not talking about replicating some obscure GNU Make-specific feature here, we are talking about something that has the broadest possible agreement.

You are trying to introduce some concept of a "meta-build tool" here but there is no such thing. Just as docker can orchestrate make builds, so can make orchestrate docker builds, and in fact make is being used right now in just that way, because there is no way to specify a build target to Docker. But at any rate neither system is inherently more or less meta than the other, they are all just build systems, that build targets, but in Docker's case you're only allowed one, which is the issue.

Proposals like #7115 are interesting but unless I'm missing something they're not an effective substitute for targets. A target tells you what to build. #7115 gives you a more flexible DSL that can still only build one thing. That's something, but it's not what's being requested here.

docker build -t my_project_builder .
# Builds the builder image
docker run my_project_builder my_target | docker build -t my_project/my_target -
# The target is built and tarballed inside the container (with its custom Dockerfile,
# e.g. a renamed Dockerfile_MyTarget, at the tar root), sent to stdout, and piped
# into the new docker build as the context

@drewcrawford just to confirm, if docker allowed something equivalent to Make targets, so that you could 1) specify which target to build, eg. make clean; make foo, 2) default to building _all_ targets, and 3) have a way to enumerate targets... That would be satisfactory to you?

That would be satisfactory... I would prefer defaulting to a developer-chosen target instead of defaulting to all targets, but I would accept either solution

That's fine.

@shykes @drewcrawford :+1:

All that will happen is docker will become the new cc, that people invoke with make.

The proposal in this issue is really rather a modest move along that line: it is saying "We need an equivalent to make's targets". The question of "what version of make" doesn't really enter into the picture--the concept of a "target" is common to make, Maven, rake, and every other serious build system. They have different syntax but the fact that the feature itself is universal should be a clue that this is a thing that people frequently do when they build things.

@drewcrawford :+1: +1,000,000

Here's the situation I keep running into:

I want to build two Docker containers from my repo. Maybe one's the server, one's the database. Or maybe one's the server and the other is a client that runs automated tests against the server via API calls.

In any case, there are some files in the repo I need to ADD for both of these containers. As-is, it's impossible to do this without resorting to copying the separate Dockerfiles one at a time in a bash script, doing the build, then replacing or deleting the Dockerfile.

I need either:

  1. The -f flag to specify the file to be used as a Dockerfile.
  2. The ability to ADD a file not down-path of the Dockerfile.

Since it's been repeatedly said that number one is a no-go, and I assume number two is even more technically difficult, what is the "proper" way to do this?

@ShawnMilo

@shykes' last comment seemed to indicate this is no longer as objectionable as it once was, but I think we are still unclear on which existing proposal would be appropriate, or what would need to change if none are.

@jakehow Thanks for the update. So it seems that an official fix to the problem is still far off. In the meantime, is there a "good way" to do it? The best idea I have is a bash script that copies the files one at a time to "Dockerfile" in the repo root and builds the images. It'll work but it feels dirty.

@ShawnMilo yes, I think most people are using a build tool (make, rake, etc.) or just plain old bash scripting to do this.

Above someone showed a snippet where they have a builder image that deals with this as well.

Here is my way of dealing with this problem. I have to build an image for different versions of a platform, let's say v2 and v3, so I have a Dockerfile.v2 and a Dockerfile.v3. When I want to make a build, I first run ln -s ./Dockerfile.v3 ./Dockerfile, then docker build . In practice I have a script and I just run ./build v3 or ./build v2.
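A minimal sketch of such a wrapper (the my/image tag is just a placeholder):

#!/bin/sh
# ./build: symlink the chosen Dockerfile into place, build, then remove the link
set -e
VERSION=$1                      # e.g. v2 or v3
ln -sf "./Dockerfile.$VERSION" ./Dockerfile
docker build -t "my/image:$VERSION" .
rm ./Dockerfile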

While I am currently using a script that links a specified Dockerfile to ./Dockerfile, this would be a nice feature.

I created a PR #7995 to try to address this. Some of the solutions that are discussed in this (long :-) ) thread seem awfully painful for people to use. One of the biggest selling points of Docker (to me) was how easy it is to use and so asking for people to jump through these kind of hoops for what feels like a pretty easy ask doesn't feel right.

Now, it's possible that I'm missing something, but is there a technical reason why it must be named "Dockerfile"? While creating the PR I couldn't find one, and I was pleasantly surprised at how easy it was to change.

@duglin you have all my support.

This is what we're doing to get around the lack of a -f option in our current project.

# build.sh
...
mv specialDockerfile Dockerfile
docker build -t $(PROJECT) .
mv Dockerfile specialDockerfile  # restore the original name so the file isn't lost

Consider this a +1 to adding -f.

Currently I'm also managing things the way kikito does. I wrote a couple of shell scripts to build different images from the same context, but would love to see a way to specify a Dockerfile via a CLI argument, as suggested already. +1 for this request.

+1000

I came across this today when trying to build a golang project cross-platform using docker. This works, see boot2docker, but I need a Makefile to stitch together the inside and outside of the docker build process, e.g. docker cp the artifacts from inside docker to my build directory.

However if I try to use this in a subdirectory of my repo my GOPATH gets broken, since the .git folder in the repo root is missing. I need to ADD the repo root and then build inside my subdirectory, but this isn't allowed by docker.

After reading through this long thread I saw a concern about not being able to reproduce builds. One way this could be mitigated is to only allow the -f option to point within the build context or downwards. This way you can have multiple Dockerfiles in subdirectories and build them from the repo root. Additionally you could limit this to be within repo boundaries, e.g. at the same level as .git/.hg/.foo and below but not outside the repo.
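For example, under that restriction (hypothetical syntax, since -f did not exist yet):

docker build -f dockerfiles/debug.Dockerfile .   # allowed: inside the build context
docker build -f ../elsewhere/Dockerfile .        # rejected: escapes the build context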

Here's what I'm thinking

FROM scratch
BUILD myname1 path/to/dockerfile/in/context
BUILD myname2 path/to/dockerfile2/in/context

Then, running:

docker build -t myimage .

This would produce 3 images:

  1. The main image, called "myimage", which doesn't really have anything in it for the sake of the example
  2. An image called myimage-myname1
  3. An image called myimage-myname2

The BUILD keyword would take a name and a path to a Dockerfile. This Dockerfile must be within the original context.
Each build instruction would have access to the full context from the main build.
Although it may be worthwhile to limit the context to the dir-tree that the Dockerfile is contained within.

-1

You could use a context feeder to docker build - like dockerfeed for this special case.

@itsafire The point is to bring this type of functionality into Docker so that this can be a supported way to build your project. I'm sure people would love for this to work with automated builds as well.

I am at a loss here. We have several pull requests for this feature, right? And it doesn't look too hard to do... One could even enforce the rule that @maxnordlund mentioned: restrict the path to be relative to the context/root. That would at least make it work more smoothly with boot2docker and such, and make it "portable".

This is by far the most irritating gap in Docker, and as more and more tools build on Docker as part of their functionality, it is important that we quickly add an '-f' flag so those tools can account for it before their designs become too final... Related: having a multi-image Dockerfile doesn't cut it, since these "meta" tools often assume _one_ image per Dockerfile! Also, using '-' doesn't cut it with these tools, and restricts the functionality severely, as we all know.

Can somebody with authority explain why this fix is not merged yet, or at least why this sorely lacking feature is still not there?

Starting to feel like Twilight Zone.

I guess there is a mismatch between how the Docker team uses Docker / wants Docker to be used and how a large part of the community uses Docker. Correct me if I'm wrong.

The docker team wants to build a software repository, not unlike the repository of the Debian project, where people can get their favorite software and run it easily. The build process for the software should be repeatable and obvious.

Others want to automate their existing in-house software deployment, which can already be highly complex, with a lot of build tools, CI systems, etc. For them, Docker is just another tool that has to fit into their infrastructure, and they need a bit more flexibility in how they can use Docker.

I think Docker (like .deb) can satisfy the needs of both parties, but compromises will have to be made.

The basic problem is: where does this end? Turing completeness for the Dockerfile? Probably not. There will always be the need for more flexibility to solve one's special case. Some ideas will enter the Dockerfile language, others won't. Since docker is able to eat a context via stdin on a build, missing functionality can be implemented via a pre-processor.

@itsafire Last time I checked, docker is able to eat a Dockerfile but not a context. In fact, the context is ignored when you supply a Dockerfile that way. If they added support for what you suggested this would be a closed issue.

I haven't looked in a while, but the basic request is this: give us an explicit way to supply both a Dockerfile AND a context during build. Honestly shocked this is like 8 months later and still open.

@thedeeno this is indeed possible
cat mystuff.tar.gz | docker build -

@cpuguy83 but I can't supply an explicit Dockerfile with that command. Right?

@thedeeno yes, you tar up whatever Dockerfile you want with whatever context you want.
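For example (directory and image names here are illustrative only):

# stage the context with the desired Dockerfile in place, then pipe the tarball in
mkdir -p /tmp/ctx
cp -a myproject/. /tmp/ctx/
cp myproject/Dockerfile.dev /tmp/ctx/Dockerfile
tar -C /tmp/ctx -cf - . | docker build -t my/image -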

@cpuguy83 @thedeeno No, that doesn't work, because you then cannot ADD any files, since none are in the "cwd" scope you usually get with a Dockerfile.

Edit: This statement is wrong; I misunderstood @cpuguy83's example.

@ShawnMilo yes it is. Anything in the tar is within context.

@cpuguy83 Sorry, my mistake. I hastily read it as just piping in the Dockerfile, not a whole tarball.

@cpuguy83 Nice! I vote close then if it works like you suggest.

Side question: in my custom solution, timestamps busted the cache when tarring. Is that still an issue? If I tar the same folder multiple times, will it use the cache on build?
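If timestamps are indeed what busts the cache, pinning the tar metadata may help (GNU tar flags shown; whether the daemon keys its cache on mtimes has varied between versions, so this is only a guess):

tar --sort=name --mtime='1970-01-01' --owner=0 --group=0 -cf - . | docker build -t my/image -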

Keep up the great work!

Piping a whole context doesn't alleviate what we talk about above, at all.

What we want and need is a way to use the _same_ context with various Dockerfiles. Such as the aforementioned "build with logging" and "build without logging", as individual images. I have a lot of other use cases where this is needed.

I fail to see how tar-balling a directory would help with this. Yes, one could create a special directory, copy the specific Dockerfile there followed by the whole context directory, or tar the context, append a new Dockerfile, and then gzip it. But how is that easier than the [quite terrible] workaround that we currently have to employ, of having a pre-processor script put the correct Dockerfile in place before running docker?

And, this won't help with the Docker ecology, as I noted above.

Have I missed something?

Again, a simple '-f' option. Please. Please, pretty please. Force it to be a relative path within the context. That is fine.

@davber What you really want is for Docker to handle tarballing the context and the Dockerfile for you.
And I'm not totally against this. Though I think nested builds may be a better solution to this.

@cpuguy83 : yes, I want Docker to handle that for me, which would include picking the context part from one place and the Dockerfile from potentially another place, or with a non-standard name. I.e., to support a separate '-f' flag :-)

Nested builds don't solve the problems we face, which started this thread, and keeps it going.

I.e., we still want to use the same root context.

Yes, we can copy files, and, yes, we do, to work around this (to us) strange coupling of the context with an exact Dockerfile named 'Dockerfile'. But that is not ideal, and setting up rsync to ensure the files are indeed identical to the original ones is just weird.

@cpuguy83 : can you explain how nested builds would help me, or any other of the "whiners" in here? :-)

@davber
My take on it is this:

FROM scratch
BUILD myname1 path/to/dockerfile
BUILD myname2 path/to/another/dockerfile

And this:

docker build -t myimage .

Would yield 3 images, "myimage", "myimage-myname1", "myimage-myname2".
Each inner build would have access to the full build context as absolute paths. Relative paths would be relative to the Dockerfile.
And the "myimage" could have it's own stuff as well beyond just BUILD instructions.

As I mentioned earlier, a lot of the new tools in the greater (and great!) Docker ecology assume that each Dockerfile is associated with exactly one Docker image: the various "Fig"-like orchestrator tools out there, and a lot of the new and old cloud solutions with specific support for Docker, also make this one-to-one assumption. Granted, in the world created by an '-f' option they would have to accept not only a context -- as a tar ball, for instance -- but also a potentially separate Dockerfile. But each such upload would still correspond to exactly one Docker image.

If we go with the route of potentially separating the Dockerfile from the context root, I hope these tools will start to live with this scenario:

Each deployment/use of a Docker image is done with an upload of either:

   1. a Dockerfile solely, when no contextual operations are needed
   2. a context tar ball only, containing the context with a top-level Dockerfile
   3. both a context tar ball and a separate Dockerfile

The longer we stay with this strong coupling of 'Dockerfile' at the top level of the context, the more ingrained that will be in the ecology. I.e., we should act now, as the Docker world moves swiftly, due to the general awesomeness.

And, honestly, that is a reasonable and conceptually attractive assumption, to have an isomorphism between Dockerfiles and images, even though the former would strictly be a product space of context directories (tarred up...) and Dockerfiles, defaulting to (null, file) if only a Dockerfile is provided and (context, context/'Dockerfile') if only a context is provided.

And even for local deployment: say that we want to use Fig for at least local orchestration: how would one go about doing that? What one would have to do is to pre-create the images from such a multi-build Dockerfile, and then refer to those images in Fig. Not optimal.
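For instance, a hypothetical fig.yml for that scenario can only reference the pre-built images, not per-service Dockerfiles:

# fig.yml (image names taken from the nested-build example above)
web:
  image: myimage-myname1
db:
  image: myimage-myname2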

an isomorphism between Dockerfiles and images

This assumption is already broken in how the people in this thread are using Dockerfiles, i.e. using scripting to replace the Dockerfile manually before executing docker build. They probably also have their own orchestration in place. This feature request isn't about changing the docker landscape, it's about making docker work for a specific kind of use.

@hanikesn : two comments:

  1. why is that assumption broken by having to copy Dockerfiles into place before building; it would still be one Dockerfile <-> one image?
  2. what I am arguing is that I want whatever solution we come up with here to _work with_ the existing and growing Docker landscape, without too big changes needed in this landscape; and I think by _keeping_ that isomorphism mentioned, we do so.

Other suggestions here have been to have one multi-image Dockerfile, potentially calling out to sub-Dockerfiles. That wouldn't work with how most tools (Fig etc.) currently use Docker.

@davber

Nested builds don't solve the problems we face, which started this thread, and keeps it going.

I propose this solution I already pointed out earlier:

$ docker-pre-processor [ --options ... ] . | docker build -

where --options are the rules by which the context in the (here) current directory is to be altered before being passed to docker. This has to be done on the fly, by creating a temporary tar archive containing the context; that way the source context can stay untouched. It's easier to change the pre-processor than the Dockerfile syntax.

@itsafire

What about tools expecting Dockerfiles today? They are becoming more plentiful, on Amazon, Google, and the like. And Fig and similar orchestration frameworks.

We would then have to push a standardized 'docker-pre-processor' tool, and the use of such a tool, out to those frameworks, providers, and tools.

It would surely be much easier to have 'docker' proper support at least the option that triggered this thread.

@itsafire everyone with this issue who has solved it is already using some sort of preprocessor or wrapper around docker build to achieve this goal.

The fragmentation around this situation is in conflict with the @docker team's stated goal of 'repeatability'. This discussion and the others are about resolving this issue.

1+ year and 130+ comments and counting for a simple issue affecting most of the users... I'm impressed. Keep up the good work, Docker!

+1

Tools should help people follow their own way, not impose the "right" way. A simple case that brought me to this discussion:

`-- project
     |-- deploy
     |    |-- .dockerignore
     |    |-- Dockerfile
     |    ...
     `-- src

My way is to keep the project root clean. But ADD ../src and -f deploy/Dockerfile don't work. For now I have the Dockerfile and .dockerignore in the project root, but it is a pain for me.

On my side I have built a script which prepares a folder with the required files and executes the standard command line docker build -t my/image ., as I have encountered the issue that the .dockerignore file is ignored by ADD...

+1 sure would like to be able to have multiple Dockerfiles in a single repo. My use case: One image is for production use and deployment, another image is a reporting instance designed to use the same backend tools and database connectivity, but requires no front-end, web, system service, or process supervision...

+1 for this. I need to add files to different images from the same folder, for different servers.

+1 I'm enjoying Docker so far, but due to the way my team is set up I really need a way of building, from one repo, different deployables that share a good chunk of code. I'm not particularly keen to build them all into an uber docker container, as then their deployment/release cycles are needlessly tied together. What's the best practice for getting round this?

@jfgreen: Put multiple Dockerfiles wherever you like, and name them whatever you like. Then have a bash script that copies them one at a time to the repo root as "./Dockerfile," runs "docker build," then deletes them. That's what I do for multiple projects and it works perfectly. I have a "dockerfiles" folder containing files named things like database, base, tests, etc. which are all Dockerfiles.
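A rough sketch of that loop (repo and tag names are placeholders):

#!/bin/bash
# copy each candidate Dockerfile to the repo root, build, then clean up
set -e
for f in dockerfiles/*; do
    cp "$f" ./Dockerfile
    docker build -t "myrepo/$(basename "$f")" .
    rm ./Dockerfile
done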

@ShawnMilo Thanks, that seems like a very reasonable workaround.

+1

+1

+1 to a -f. There is a sane default: if the flag is omitted, Docker will do _the right thing._ In the case that, for whatever reason, the very specific view of _the right thing_ is for a user _the wrong thing_, there is -f. The comparisons above to tools like make are, I think, reasonable. make is also a very opinionated utility, originally written in 1976, and as such it's had a reasonable amount of time to stabilize a feature set. I think it's instructive that in man make the only flag that gets a mention in the very brief synopsis is... -f.

       make [ -f makefile ] [ options ] ... [ targets ] ...

There is no problem with opinionated, UX-centered utilities, but some things like -f are pragmatic nods to the real world users live in. A tool should make your life easier, not harder. If I have to write a Makefile or a shell script to work around a tool lacking -f, that's a failure of the tool. Clearly, from the number of users who have taken the time to comment and weigh in with a +1, the feature has significant utility even a year after it was initially proposed.

+1

+1 (or an official blog post with the recommended workaround)

+1

+1

@crosbymichael I believe this can be closed now due to #9707 being merged

VICTORY.

If you all want to try this new feature out you can download the binaries for master on:

master.dockerproject.com

Thanks @duglin !

:clap:

Thank you @crosbymichael for the binary ! :+1:

Well done guys! :clap:

So it's docker build -f Dockerfile.dev .? edit: yep
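For example, building two images from the same context (the tags are illustrative):

docker build -t my/app -f Dockerfile .
docker build -t my/app:debug -f Dockerfile.debug .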
