Compose: docker-compose up doesn't rebuild image although Dockerfile has changed

Created on 30 May 2015  ·  11 Comments  ·  Source: docker/compose

Very often, docker-compose up doesn't rebuild an image specified with "build:" in the docker-compose.yml even though the respective Dockerfile has changed. Instead, I need to run docker build -t foldername_servicename . manually for the affected service, which actually updates the image.

Is this intended? It's rather annoying: I can never be sure the image is actually up to date, so I have to run docker build manually before every docker-compose up.
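For illustration, the manual workflow looks roughly like this (foldername_servicename is a placeholder for whatever image name Compose generated for the service):

docker build -t foldername_servicename .   # rebuild the affected service's image by hand
docker-compose up                          # then bring the stack up with the fresh image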

kind/question

Most helpful comment

True, docker-compose up never rebuilds an image. This is "intended", but it's something I think we should change: #693

You can run docker-compose build to build the images.

Duplicate of #614

All 11 comments

True, docker-compose up never rebuilds an image. This is "intended", but it's something I think we should change: #693

You can run docker-compose build to build the images.
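For example, a minimal sketch of the build-then-up flow:

docker-compose build        # rebuild all images defined with build:
docker-compose up           # then create and start the containers

You can also pass a service name (e.g. docker-compose build web) to rebuild just one image.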

Duplicate of #614

Hey @dnephin, what a small world!

I'm running into an issue where docker-compose build is not properly rebuilding containers; a stale lock file is preventing a Varnish container from starting.

Based on what I've read elsewhere (e.g. #1195), it seems like docker-compose build is the recommended way to rebuild containers and should prevent problems like these.

╭─wting@nuc ~/code/reddit-mobile ‹python-2.7.12› ‹wting_chan-159_add_varnish_to_2X×ad20b6d›
╰─➤  docker ps                                                                                                              2016.09.15 12:20:46 PDT 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
╭─wting@nuc ~/code/reddit-mobile ‹python-2.7.12› ‹wting_chan-159_add_varnish_to_2X×ad20b6d›
╰─➤  docker-compose --version                                                                                               2016.09.15 12:20:48 PDT 
docker-compose version 1.8.0, build f3628c7
╭─wting@nuc ~/code/reddit-mobile ‹python-2.7.12› ‹wting_chan-159_add_varnish_to_2X×ad20b6d›
╰─➤  docker --version                                                                                                       2016.09.15 12:20:51 PDT 
Docker version 1.12.1, build 23cf638
╭─wting@nuc ~/code/reddit-mobile ‹python-2.7.12› ‹wting_chan-159_add_varnish_to_2X×ad20b6d›
╰─➤  docker-compose build && docker-compose up                                                                          2 ↵ 2016.09.15 12:23:35 PDT 
Building web
Step 1 : FROM reddit/reddit-nodejs:latest
 ---> ee57c186eb35
Step 2 : VOLUME /src
 ---> Using cache
 ---> 3720601d98c8
Step 3 : WORKDIR /src
 ---> Using cache
 ---> d4b9b360ef4e
Step 4 : EXPOSE 4444
 ---> Using cache
 ---> 5e232be73781
Step 5 : ENTRYPOINT npm start
 ---> Using cache
 ---> 1094fc9857bb
Successfully built 1094fc9857bb
Building varnish
Step 1 : FROM quay.io/reddit/varnish-fastly
# Executing 1 build trigger...
Step 1 : COPY default.vcl /etc/varnish/default.vcl
 ---> Using cache
 ---> ac9dadb35674
Step 2 : ENV VARNISH_PORTS 8080
 ---> Using cache
 ---> 3c43e0226f5f
Step 3 : EXPOSE 8080
 ---> Using cache
 ---> c88093c2ff32
Successfully built c88093c2ff32
Starting redditmobile_web_1
Starting redditmobile_varnish_1
Attaching to redditmobile_web_1, redditmobile_varnish_1
varnish_1  | storage_malloc: max size 100 MB.
varnish_1  | SHMFILE owned by running varnishd master (pid=1)  # STALE LOCK FILE
varnish_1  | (Use unique -n arguments if you want multiple instances.)
web_1      | 
web_1      | > [email protected] start /src
web_1      | > NODE_ENV=production npm run server
web_1      | 
redditmobile_varnish_1 exited with code 2
web_1      | 
web_1      | > [email protected] server /src
web_1      | > NODE_ENV=production node ./bin/ProductionServer.js
web_1      | 
web_1      | Started cluster with 4 processes.
web_1      | Started server at PID 31
web_1      | Started server at PID 46
[..]

Hey @wting,

I think what might be happening is that the lock file is in a volume (https://docs.docker.com/compose/overview/#/preserve-volume-data-when-containers-are-created).

You can try docker-compose down to remove the old containers, which will remove the volume reference. The next up should start with fresh volumes.

If it's not in a volume, I guess it could just be that the lock was never removed. Compose will try to start a container if it exists and the config hasn't changed, so running down should fix that as well.
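A minimal sketch of that flow:

docker-compose down      # remove the containers; their anonymous volumes are no longer referenced
docker-compose up        # new containers are created, with fresh anonymous volumes

If you want to delete the old volumes rather than leave them orphaned, docker-compose down -v also removes named volumes declared in the compose file and anonymous volumes attached to the containers.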

Thanks! docker-compose down worked, as did docker-compose up --force-recreate (but not docker-compose build). I suppose it's not intuitive because there's a volume mounted for the web container but not for the Varnish container, yet the Varnish files stick around. Here's the docker-compose.yml file:

version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "4444:4444"
    volumes:
      - .:/src
  varnish:
    build:
      context: .
      dockerfile: Dockerfile.varnish
    ports:
      - "4301:80"
    depends_on:
      - web
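Worth noting: the log above says "Starting redditmobile_varnish_1" rather than "Recreating ...". Every build step hit the cache, so the image ID didn't change, and Compose reused the existing container together with its filesystem and volumes, stale lock file included. Whether the lock actually lives in an anonymous volume depends on the base image (an assumption here, since Dockerfile.varnish's base image isn't shown); you can check a container's mounts with:

docker inspect --format '{{ json .Mounts }}' redditmobile_varnish_1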


Thanks @wting, and everyone who wrote on this issue. I've been "saved" by your comment: "docker-compose up --force-recreate (but not docker-compose build)."

docker-compose up --build


Thanks, that really helped me assign a new Dockerfile and rebuild the container with the latest changes.

FYI: I had the same issue and the same fix as discussed. The interwebs led me here. Thanks for the good discussions!

docker-compose up --build -V

To clarify, here's what the -V parameter does (from the docker-compose up help text):
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving data from the previous containers.

To rebuild a single service's image with docker-compose:

docker-compose up -d --force-recreate --no-deps --build $service

e.g.:

docker-compose up -d --force-recreate --no-deps --build varnish
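For reference, what each flag contributes here (descriptions paraphrased from docker-compose up --help):

docker-compose up -d --force-recreate --no-deps --build varnish
#   -d                 run in the background (detached)
#   --force-recreate   recreate the container even if its configuration and image haven't changed
#   --no-deps          don't also start the services it depends on (web, in the example above)
#   --build            build the image before starting the container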

