Moby: Dockerfile COPY with file globs will copy files from subdirectories to the destination directory

Created on 26 Aug 2015 · 54 Comments · Source: moby/moby

Description of problem:
When using COPY in a Dockerfile and using globs to copy files & folders, docker will (sometimes?) also copy files from subfolders to the destination folder.

$ docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 19:47:52 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 17:17:40 UTC 2015
 OS/Arch:      linux/amd64

$ docker info
Containers: 26
Images: 152
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 204
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.9-boot2docker
Operating System: Boot2Docker 1.8.0 (TCL 6.3); master : 7f12e95 - Tue Aug 11 17:55:16 UTC 2015
CPUs: 4
Total Memory: 3.858 GiB
Name: dev
ID: 7EON:IEHP:Z5QW:KG4Z:PG5J:DV4W:77S4:MJPX:2C5P:Z5UY:O22A:SYNK
Debug mode (server): true
File Descriptors: 42
Goroutines: 95
System Time: 2015-08-26T17:17:34.772268259Z
EventsListeners: 1
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: jfchevrette
Registry: https://index.docker.io/v1/
Labels:
 provider=vmwarefusion

$ uname -a
Darwin cerberus.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64

Environment details:
Local setup on OSX /w boot2docker built with docker-machine

How to Reproduce:

Context

$ tree
.
├── Dockerfile
└── files
    ├── dir
    │   ├── dirfile1
    │   ├── dirfile2
    │   └── dirfile3
    ├── file1
    ├── file2
    └── file3

Dockerfile

FROM busybox

RUN mkdir /test
COPY files/* /test/

Actual Results

$ docker run -it copybug ls -1 /test/
dirfile1
dirfile2
dirfile3
file1
file2
file3

Expected Results
The resulting image should have the same directory structure as the build context


Most helpful comment

Make a new command CP and get it right this time please.

All 54 comments

Updated the original message with output from docker info and uname -a, and reformatted it according to the issue reporting template.

I've had this on 1.6.2 and 1.8
https://gist.github.com/jrabbit/e4f864ca1664ec0dd288 second-level directories are treated as first-level ones should be, for some reason?

For those googling: if you're having issues with COPY * /src, try COPY / /src

@jfchevrette I think I know why this is happening.
You have COPY files/* /test/ which expands to COPY files/dir files/file1 files/file2 files/file /test/. If you split this up into individual COPY commands (e.g. COPY files/dir /test/) you'll see that (for better or worse) COPY will copy the contents of each arg dir into the destination dir. Not the arg dir itself, but the contents. If you added a 3rd level of dirs I bet those will stick around.

I'm not thrilled with the fact that COPY doesn't preserve the top-level dir, but it's been that way for a while now.

You can try to make this less painful by copying one level higher in the src tree, if possible.
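To illustrate (a minimal sketch based on the reporter's layout above):

# COPY files/* /test/ is expanded by the builder to:
COPY files/dir files/file1 files/file2 files/file3 /test/

# A directory argument contributes its *contents*, not itself,
# so dirfile1-3 land directly in /test/ next to file1-3.

# Copying one level higher preserves the tree:
COPY files/ /test/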

I'm pretty confident that @duglin is right, and it could be very risky to change that behavior; many Dockerfiles may break or simply copy unintended stuff.

However, I'd argue that in the long run it would be better if COPY followed the way tools such as cp or rsync handle globs & trailing slashes on folders. It's definitely not expected for COPY to copy files from a subfolder matching dir/* into the destination, IMO.

@jfchevrette yep - first chance we get we should "fix" this.
Closing it for now...

@duglin so, closing means it will not get fixed?

@tugberkugurlu yup, at least for now. There's work underway to redo the entire build infrastructure and when we do that is when we can make COPY (or its new equivalent) act the way it should.

@duglin thanks. Is it possible to keep this issue open and update the status here? Or is there any other issue for this that I can subscribe to?

@tugberkugurlu I thought we had an issue for "client-side builder support" but I can't seem to find it. So all we may have is what the ROADMAP ( https://github.com/docker/docker/blob/master/ROADMAP.md#22-dockerfile-syntax ) says.

As for keeping the issue open, I don't think we can do that. The general rule that Docker has been following is to close any issue that isn't actionable right away. Issues for future work are typically closed and then reopened once the state of things change such that some action (PR) can be taken for the issue.

@duglin This is a very serious issue; you shouldn't just close it because the problem was introduced in the 0.1 release. It would be more appropriate to target this for the 2.0 release (milestones are on GitHub too).

I guess most people use:

COPY . /app

and blacklist all other folders in .dockerignore, or have a single-level directory structure and use COPY (which actually has mv semantics):

COPY src /myapp

It's quite hard for me to imagine that someone would actually use COPY for flattening a directory structure. The other workaround for this is using tar -cf .. & ADD tarfile.tar.gz. Changing at least this would be really helpful. The other thing is respecting trailing slashes in directory names, COPY src /src vs COPY src/ /src (which are currently completely ignored).

duglin closed this on Sep 1, 2015

@duglin This is a ridiculous and infuriating issue and should not be closed. The COPY command behaves specifically in disagreement with the documented usage and examples.

@tjwebb there's still an open issue https://github.com/docker/docker/issues/29211. This can only be looked into if there's a way to fix this that's fully backward compatible. We're open to suggestions if you have a proposal for _how_ this could be implemented (if you _do_, feel free to write it up and open a proposal, linking to this issue). Note that there's already a difference between (for example) OS X and Linux in the way cp is handled:

mkdir -p repro-15858 \
  && cd repro-15858 \
  && mkdir -p source/dir1 source/dir2 \
  && touch source/file1 source/dir1/dir1-file1 \
  && mkdir -p target1 target2 target3 target4 target5 target6

cp -r source target1 \
&& cp -r source/ target2 \
&& cp -r source/ target3/ \
&& cp -r source/* target4/ \
&& cp -r source/dir* target5/ \
&& cp -r source/dir*/ target6/ \
&& tree

OS X:

.
├── source
│   ├── dir1
│   │   └── dir1-file1
│   ├── dir2
│   └── file1
├── target1
│   └── source
│       ├── dir1
│       │   └── dir1-file1
│       ├── dir2
│       └── file1
├── target2
│   ├── dir1
│   │   └── dir1-file1
│   ├── dir2
│   └── file1
├── target3
│   ├── dir1
│   │   └── dir1-file1
│   ├── dir2
│   └── file1
├── target4
│   ├── dir1
│   │   └── dir1-file1
│   ├── dir2
│   └── file1
├── target5
│   ├── dir1
│   │   └── dir1-file1
│   └── dir2
└── target6
    └── dir1-file1

20 directories, 12 files

On Ubuntu (/bin/sh)

.
|-- source
|   |-- dir1
|   |   `-- dir1-file1
|   |-- dir2
|   `-- file1
|-- target1
|   `-- source
|       |-- dir1
|       |   `-- dir1-file1
|       |-- dir2
|       `-- file1
|-- target2
|   `-- source
|       |-- dir1
|       |   `-- dir1-file1
|       |-- dir2
|       `-- file1
|-- target3
|   `-- source
|       |-- dir1
|       |   `-- dir1-file1
|       |-- dir2
|       `-- file1
|-- target4
|   |-- dir1
|   |   `-- dir1-file1
|   |-- dir2
|   `-- file1
|-- target5
|   |-- dir1
|   |   `-- dir1-file1
|   `-- dir2
`-- target6
    |-- dir1
    |   `-- dir1-file1
    `-- dir2

24 directories, 12 files
diff --git a/macos.txt b/ubuntu.txt
index 188d2c3..d776f19 100644
--- a/macos.txt
+++ b/ubuntu.txt
@@ -11,15 +11,17 @@
 │       ├── dir2
 │       └── file1
 ├── target2
-│   ├── dir1
-│   │   └── dir1-file1
-│   ├── dir2
-│   └── file1
+│   └── source
+│       ├── dir1
+│       │   └── dir1-file1
+│       ├── dir2
+│       └── file1
 ├── target3
-│   ├── dir1
-│   │   └── dir1-file1
-│   ├── dir2
-│   └── file1
+│   └── source
+│       ├── dir1
+│       │   └── dir1-file1
+│       ├── dir2
+│       └── file1
 ├── target4
 │   ├── dir1
 │   │   └── dir1-file1
@@ -30,6 +32,8 @@
 │   │   └── dir1-file1
 │   └── dir2
 └── target6
-    └── dir1-file1
+    ├── dir1
+    │   └── dir1-file1
+    └── dir2

-20 directories, 12 files
+24 directories, 12 files

Make a new command CP and get it right this time please.

I would echo the above; this must have wasted countless development hours. It's extremely unintuitive.

+1 from me. This is really stupid behavior and could easily be remedied by just adding a CP command that behaves how COPY should have.

"Backwards compatibility" is a cop out

The TL;DR version:

Don't use COPY * /app; it doesn't do what you'd expect it to do.
Use COPY . /app instead to preserve the directory tree.
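For example, a minimal sketch (the .dockerignore entries are hypothetical; exclude whatever your build shouldn't ship):

# .dockerignore -- exclusion replaces the glob
.git
node_modules
*.log

# Dockerfile
FROM busybox
COPY . /app/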

COPY is only able to copy a folder's contents, not the folder itself.

Just spent countless hours on this... Why does this even work this way?

I'm using Paket and want to copy the following in the right structure:

.
├── .paket/
│   ├── paket.exe
│   ├── paket.bootstrapper.exe
├── paket.dependencies
├── paket.lock
├── projectN/

And by doing COPY *paket* ./ it results in this inside the container:

.
├── paket.dependencies
├── paket.lock
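A hedged workaround for this particular layout: use one COPY per directory and name the destination directory explicitly (since a directory source contributes its contents, the destination must recreate it):

COPY .paket/ ./.paket/
COPY paket.dependencies paket.lock ./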

How about adding a --glob or --recursive flag for COPY and ADD?

COPY . /destination preserves sub-folders.

Three years and this is still an issue :-/

Can we get an ETA on when this will be fixed?

not an issue...
from above...
COPY . /destination preserves sub-folders.

True, no longer an issue after you fume for half a day and end up here. Sure :)
Let's be constructive,


We really need a new _CP_ command or a --recursive flag to _COPY_ so backwards compatibility is preserved.

Top points if we also show a warning on image build when we detect possible misuse, like:
Directory structure not preserved with COPY *, use CP or COPY . instead. More here: <link>.

I'm looking for this for copying across nested lerna package.json files in subdirectories, to better utilise the npm install cache so it only busts when dependencies change. Currently, any changed file causes dependencies to install again.

Something like this would be great:

COPY ["package.json", "packages/*/package.json", "/app/"]

Go check #29211 guys. This one has been closed and no one cares.

@zentby Conversation is here, issue is tracked there (since this one is closed)... It's confusing.

A workaround is to COPY the files and then RUN a cp -R command:

COPY files /tmp/
RUN cp -R /tmp/etc/* /etc/ && rm -rf /tmp/etc

That won't work @instabledesign, as the COPY command busts the cache when a file changes that shouldn't invalidate it (for instance, I only want to copy files relating to npm dependency installation, as those don't change often)

I also needed to copy just a set of files (in my case, *.sln and *.csproj files for dotnet core) to preserve the cache. One workaround is to create a tarball of just the files you want and then ADD the tarball in the Dockerfile. Yeah, now you have to have a shell script in addition to the Dockerfile...

build.sh

#!/bin/bash

# unfortunately there's no easy way to copy just the *.sln and *.csproj (see https://github.com/moby/moby/issues/15858)
# so we generate a tar file containing the required files for the layer

find .. -name '*.csproj' -o -name 'Finomial.InternalServicesCore.sln' -o -name 'nuget.config' | sort | tar cf dotnet-restore.tar -T - 2> /dev/null

docker build -t finomial/iscore-build -f Dockerfile ..

Dockerfile

FROM microsoft/aspnetcore-build:2.0
WORKDIR /src

# set up a layer for dotnet restore 

ADD docker/dotnet-restore.tar ./

RUN dotnet restore

# now copy all the source and do the dotnet build
COPY . ./

RUN dotnet publish --no-restore -c Release -o bin Finomial.InternalServicesCore.sln

You can use multiple COPY commands to do this, but that has the disadvantage of creating multiple image layers and bloating your final image size.

As kayjtea mentioned above, you can also wrap the docker build command in a helper build script to create tarballs that preserve directory structure, and ADD them in, but that adds complexity and breaks things like docker-compose build and Docker Hub automated builds.

Really, COPY should function just like a POSIX compliant /bin/cp -r command, but it seems like that won't happen for 'backwards compatibility,' even though the current behavior is completely unintuitive for anyone with experience in *nix systems.


The best compromise I have found is to use a multi-stage build as a hack:

FROM scratch as project_root
# Use COPY to move individual directories
# and WORKDIR to change directory
WORKDIR /
COPY ./file1 .
COPY ./dir1/ ./dir1/
COPY ./dir2/ .
WORKDIR /newDir
COPY ./file2 .

# The actual final build you end up using/pushing
# Node.js app as example
FROM node
WORKDIR /opt/app

COPY package.json .
RUN npm install

COPY --from=project_root / .
CMD ["npm", "start"]

This is self contained within one Dockerfile, and only creates one layer in the final image, just like how an ADD project.tar would work.

Having a complete COPY command would really help when attempting to preserve the docker build cache. The ROS community develops using nested workspaces of packages, with each one declaring dependencies in its own package.xml file. These files are used by a dependency manager to install any upstream libraries. These package.xml files change relatively infrequently with respect to the code in the packages themselves once the groundwork is set. If the directory tree structure were preserved during a copy, we could simply copy our workspace during the docker build in two stages to maximise caching, e.g.:

# copy project dependency metadata
COPY ./**/package.xml /opt/ws/

# install step that fetches unsatisfied dependency
RUN dependency_manager install --workspace /opt/ws/

# copy the rest of the project's code
COPY ./ /opt/ws/

# compile code with cached dependencies
RUN build_tool build --workspace /opt/ws/

Thus the cache for the dependency install layer above would only bust if the developer happened to change a declared dependency, while a change in the package's code would only bust the compilation layer.

Currently, all the matched package.xml files are being copied on top of each other to the root of the destination directory, with the last globbed file being the only package.xml that persists in the image. This is really quite unintuitive for users! Copied files are overwritten on top of each other, and which one eventually persists in the image is undefined.

This is such a pain in basically every stack that has package management, so it affects so many of us. Can it be fixed? Sheesh. It's been an issue since 2015! The suggestion to add a new CP command is a good one.

Can we reopen this? It's very tedious that the COPY command uses a Go internal function for path matching rather than a real, widely-adopted standard like glob.

For those who'd like to copy via globbing using a workaround with experimental BuildKit syntax, even if the caching isn't as precise or robust, take a look at the comments here: https://github.com/moby/moby/issues/39530#issuecomment-530606189

I'd still like to see this issue re-opened so we can cache on selective glob style copies.

I realized a relatively simple workaround for my example in https://github.com/moby/moby/issues/15858#issuecomment-462017830 via multi-stage builds, and thought many of you here with similar needs may appreciate arbitrary caching on copied artifacts from the build context. Using multi-stage builds, it's possible to filter/preprocess the directory to cache:

# Add prior stage to cache/copy from
FROM ubuntu AS package_cache

# Copy from build context
WORKDIR /tmp
COPY ./ ./src

# Filter or glob files to cache upon
RUN mkdir ./cache && cd ./src && \
    find ./ -name "package.xml" | \
      xargs cp --parents -t ../cache

# Continue with primary stage
FROM ubuntu

# copy project dependency metadata
COPY --from=package_cache /tmp/cache /opt/ws/

# install step that fetches unsatisfied dependency
RUN dependency_manager install --workspace /opt/ws/

# copy the rest of the project's code
COPY ./ /opt/ws/

# compile code with cached dependencies
RUN build_tool build --workspace /opt/ws/

For real world working example, you could also take a look here: https://github.com/ros-planning/navigation2/pull/1122

I'm looking for this for copying across nested lerna package.json files in subdirectories, to better utilise the npm install cache so it only busts when dependencies change. Currently, any changed file causes dependencies to install again.

Something like this would be great:

COPY ["package.json", "packages/*/package.json", "/app/"]

I'm having the exact same use case.

I'm looking for this for copying across nested lerna package.json files in subdirectories, to better utilise the npm install cache so it only busts when dependencies change. Currently, any changed file causes dependencies to install again.

Something like this would be great:

COPY ["package.json", "packages/*/package.json", "/app/"]

This case but for Yarn workspaces.

It's 2020 and this is still not fixed.

If anyone is struggling with this in a dotnet setting, I've solved it for us by writing a dotnet core global tool that restores the directory structure for the *.csproj files, allowing a restore to follow. See documentation on how to do it here.

FYI, theoretically a similar approach could be used in other settings, but essentially the tool is reverse-engineering the folder structure, so I'm not sure how easy or even possible that would be for, say, a lerna or yarn workspaces setup. Happy to investigate it if there's interest. It could even be possible in the same tool if folks were happy to install the dotnet core runtime for it to work; otherwise the same approach would need to be built in a language that doesn't require a new dependency, like node I guess.

It's amazing that implementing a copy command used to be a task for a first-year student, and now it's too complex for skilled programmers with many years of experience...

It's probably not the most embarrassing bug ever, but given that it has been followed by many years of discussion without any output, it certainly ranks near the top.

@benmccallum
FYI, theoretically a similar approach could be used in other settings, but essentially the tool is reverse-engineering the folder structure,

Isn't it easier on most occasions to just do what https://github.com/moby/moby/issues/15858#issuecomment-532016362 suggested and use a multi-stage build to prefilter?

Also for the dotnet restore case it's a relatively easy pattern:

# Prefiltering stage using find -exec and cp --parents to copy out
# the project files in their proper directory structure.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS dotnet-prep
COPY . ./src/
RUN mkdir ./proj && cd ./src && \
  find . -type f -a \( -iname "*.sln" -o -iname "*.csproj" \) \
    -exec cp --parents "{}" ../proj/ \;

# New build stage, independent cache
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS dotnet-build

# Copy only the project files with correct directory structure
# then restore packages
COPY --from=dotnet-prep ./proj ./src/
RUN dotnet restore

# Copy everything else
COPY --from=dotnet-prep ./src ./src/
# etc.

Still, that doesn't excuse Docker never having implemented a decent variant of the COPY command that just follows normal, sane syncing semantics.
I mean, come on!

@rjgotten , I like it! Certainly much easier than what I did and I can't see why this wouldn't work for my needs. I'll give it a go tomorrow and if this works I'll change my doc's to recommend this as a better approach.

I think my initial issue was I was on Windows so probably dismissed that suggestion. I'm not anymore, but do you have an equivalent Windows version for completeness? I wonder if PowerShell is pre-installed in the dotnet core images...

Is there really a need for the additional/repeated FROM ... though? Every time you do a RUN it creates a new layer for caching, right? Maybe I'm missing something though, it's been a while since I've had to think about this!

I wonder if PowerShell is pre-installed in the dotnet core images...

I think it actually is. Which would make it a bit easier to do this in a cross-platform way.

Is there really a need for the additional/repeated FROM ... though?

Isolated build stages get independent layer caching.
The first stage does the preparation work. Since it initially has to copy everything in, it always invalidates its first layer's cache, and thus the layers after it, when _any_ file changes. But that only holds for the layers _within that build stage_.

The second stage starts by _only_ copying in the project-related files and as long as those files are the same - i.e. same file names; same content; etc. - across builds that layer _won't_ invalidate. Which means the dotnet restore layer _also_ won't be invalidated unless those project files actually changed.

Had some time to sit with this and I understand now! Docker is fun; unless you always spend time with it, you forget how all the commands work. Crucially, I'd forgotten that the RUN cmd can only operate on the filesystem of the docker image, not the build context files. So you're forced to COPY everything over before you can do a complex RUN cmd that preserves dirs. And that's why we so desperately need decent COPY globbing!

That approach
The initial COPY cmd is, like you mention, copying _everything_ over and then pulling the .sln and .csproj files out into a separate /proj folder. Any code change will invalidate these steps. Crucially, the limitation this is working around is that the awesome RUN linux cmd can only operate on files _already on the docker image_, brought over by the greedy COPY prior.

Then a new stage is started, and copies the /proj folder contents over, which can then be used for dotnet restore. Since the cache "key" is essentially the file hashes, this will rarely bust this cache layer or the subsequent dotnet restore one, so you avoid the expensive restore. Nice!

My approach
Uses just one build stage for this at the cost of a few more COPY commands to bring over the files that affect a dotnet restore. I specifically flatten all the .csproj files into one dir, and then use my global tool to re-construct the right directory structure from the entry .sln file. It's only after this that I COPY over all the src files, so I can effectively cache layers all the way up to here regularly, rather than always having to COPY over all the src files up front.

Takeaways
I think it's going to depend on people's codebase as to how effective each approach is. For us, in a mono repo, we've got a LOT of shared code that gets copied across in the "all src over" COPY. .dockerignore helps here, but is hard to maintain, so we're pretty "greedy" in that COPY; so it's quite slow. As such, my approach, although slightly more complicated, would probably be faster for us than this alternative.

Thanks for the explanation. Still can't believe we even have to have this conversation haha. I still need to investigate the newer BuildKit stuff to see if this is easier now. Has anyone else done that?

BuildKit investigation
RUN --mount=type=bind - Sounds like this would allow us to do the fancy linux cmd against the build context (rather than RUN just being limited to the image's filesystem). Indeed, it apparently defaults to the build context.

RUN --mount=type=cache - sounds like some kind of re-usable cache directory (between docker builds?) that is preserved? So in essence, we wouldn't even need to worry too much about a cache layer for package restores, because with a re-used cache of packages previously restored it'd already be a hell of a lot faster!?
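For what it's worth, a minimal sketch of the type=cache idea, assuming BuildKit is enabled (DOCKER_BUILDKIT=1) and reusing the dotnet images from earlier in the thread:

# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
WORKDIR /src
COPY . .
# The cache mount persists /root/.nuget/packages across builds,
# so even when a code change busts the layer cache, restore
# mostly hits already-downloaded packages.
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet restore
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet publish -c Release -o /app --no-restore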

I think the issue is not fixed yet because it's "closed" and people are used to the workarounds.

And you wonder why people tend to move to other container types.

Is there another container type I can use? I can't believe this is not supported after so many years. Docker is an open source project; can anyone please have it fixed?

We have a COPY --dir option in Earthly to make copy behave more like cp -r. Perhaps this could be ported to Dockerfile too?
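For reference, a hedged sketch of what that looks like in an Earthfile (the target name and paths are made up):

build:
    FROM alpine
    # --dir copies each directory itself rather than its contents,
    # i.e. like cp -r dir1 dir2 /app/
    COPY --dir dir1 dir2 /app/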

To speed up building images for .NET Core applications, we should create some intermediate container that contains all restored NuGet packages. For a multi-project solution I used this workaround: I just copied all project files to a single folder and ran dotnet restore for each of them. There are some warnings about missing projects because we cannot preserve the folder hierarchy, but it is still a working solution.

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build

# Nugets restore
WORKDIR /src/allprojects          # just temporary storage for .csproj files
COPY */*.csproj ./
RUN for file in $(ls *.csproj); do dotnet restore ${file}; done

# Build/Publish
WORKDIR /src/solution             # actual folder with source code and full hierarchy 
COPY . .
RUN dotnet publish "MyProject/MyProject.csproj" -c Release -o /publish/myproject

# Run Application
FROM mcr.microsoft.com/dotnet/core/runtime:3.1 AS base
WORKDIR /app
COPY --from=build /publish/myproject .
ENTRYPOINT ["dotnet", "MyProject.dll"]

@zatuliveter
There are some warnings about missing projects because we cannot preserve the folder hierarchy, but it is still a working solution.

No; that doesn't work. And here's why:

.NET Core stores package meta-information in the ./obj subdirectory associated with each project. Without that information present, the package will not be deemed installed and ready to use. (Don't believe me? Then throw out your ./obj folder and then e.g. open the project in VSCode and watch it ask you to re-run package restore. Go ahead; give it a try.)

If the project files on which you execute the package restore are in a different directory structure than the following dotnet build or dotnet publish, then those commands won't see the package as restored.

The reason your solution does not outright fail, is because dotnet publish and dotnet build both imply dotnet restore. They actively check for unrestored packages and restore them on-the-fly. To avoid them doing this, you actively have to pass the --no-restore flag, which you are not doing.

So really, what your solution is doing is restoring the packages _TWICE_. The first time is essentially a big waste of time and space, because it doesn't go into the correct directory structure to be re-used. The second time, implicit as part of the publish command, works; but because it is part of the same layer as the build & publish operation, your packages aren't actually being cached separately from your code changes at all.

@rjgotten,
Thank you for your reply and clarifications.
In fact, all NuGet packages are cached in the global-packages folder in the 'build' docker container. In my case it is the /root/.nuget/packages/ folder, and the obj folder just contains small files with references to this global storage, so there is no waste of storage (as you mention).
The second restore during publish is at least 10x faster (in my case) because all NuGets are cached in the container.

@zatuliveter @rjgotten thanks for the info at the end here. I was running into similar issues and came up with the following bit of dockerfile to improve on the examples you gave. Bash certainly isn't my strong suit so go easy on me! Our structure is Project/Project.csproj for all of our projects. This copies all the proj files in, moves them to the correct place, and then does the restore / copy all / publish.

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ./*/*.csproj ./proj/
RUN for file in $(ls ./proj); do mkdir /src/${file%.*} && mv ./proj/${file} /src/${file%.*}/${file}; done
RUN dotnet restore "MyProject/MyProject.csproj"
COPY . .
WORKDIR "/src/MyProject"
RUN dotnet publish "MyProject.csproj" -c Release -o /app --no-restore