Making docker-in-docker builds x2 faster using Docker “cache-from” option
There are many reasons not to use docker-in-docker (DIND) in a CI setup. The biggest one is that you cannot reuse images from the host machine, i.e. every time you make a build, Docker has to download all layers of your image from scratch.
The good news is that Docker v1.13 added a capability to specify images used as a cache source during the build step. These images do not need to have a local parent chain and can be pulled from other registries.
To leverage this feature, add the --cache-from option to your build script. For example, if your CI manifest previously built an image like this:
docker build --tag $CONTAINER_IMAGE:$CI_BUILD_REF .
Then to leverage cache reuse you need to change it to:
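Based on the surrounding text, the updated invocation most likely looks like the following (a sketch; the || true fallback and the second tag are assumptions inferred from the push step described below):

```shell
# Seed the layer cache from the registry; tolerate failure on the very first build,
# when no :latest image exists yet.
docker pull $CONTAINER_IMAGE:latest || true

# Build using the pulled image as a cache source, tagging both the
# commit-specific image and the :latest image that future builds will pull.
docker build \
  --cache-from $CONTAINER_IMAGE:latest \
  --tag $CONTAINER_IMAGE:$CI_BUILD_REF \
  --tag $CONTAINER_IMAGE:latest \
  .
```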
and ensure that you push the $CONTAINER_IMAGE:latest image after every successful build.
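Pushing both tags might look like this (assuming registry authentication has already happened earlier in the job):

```shell
# Publish the commit-specific image, then refresh :latest so the next
# build has an up-to-date cache source to pull.
docker push $CONTAINER_IMAGE:$CI_BUILD_REF
docker push $CONTAINER_IMAGE:latest
```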
Using a different storage driver
While on the subject, in addition to using --cache-from, evaluate an alternative storage driver. By default, docker:dind uses the vfs storage driver, which is designed only for testing and debugging: it performs no copy-on-write, so every layer is a full deep copy, making commits and other layer operations slow and disk-intensive.
The DIND storage driver can be specified using custom daemon flags (see also the "Start a daemon instance" documentation note about picking a storage driver).
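With GitLab CI, for instance, the docker:dind service honors the DOCKER_DRIVER environment variable, so the driver can be switched with a small config fragment like this (a sketch, not taken from the original post):

```yaml
variables:
  # The docker:dind entrypoint passes this value to the daemon
  # as --storage-driver, replacing the slow vfs default.
  DOCKER_DRIVER: overlay2
```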
If you (like me) are using GitLab Enterprise Edition, feel free to have a look at my .gitlab-ci.yml configuration for the above changes. It leverages the --cache-from Docker build option and the overlay2 storage driver.
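A minimal sketch of such a configuration might look like the following (the registry host, image path, and stage name are all hypothetical placeholders, not copied from the referenced file):

```yaml
image: docker:latest

services:
  - docker:dind

variables:
  # Replace the slow vfs default with overlay2.
  DOCKER_DRIVER: overlay2
  # Hypothetical image path; substitute your own registry and project.
  CONTAINER_IMAGE: registry.example.com/group/project

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com
    # Seed the layer cache; tolerate failure on the very first build.
    - docker pull $CONTAINER_IMAGE:latest || true
    - docker build --cache-from $CONTAINER_IMAGE:latest --tag $CONTAINER_IMAGE:$CI_BUILD_REF --tag $CONTAINER_IMAGE:latest .
    - docker push $CONTAINER_IMAGE:$CI_BUILD_REF
    - docker push $CONTAINER_IMAGE:latest
```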
How many times faster, really?
Naturally, the build-time improvement depends on how much of the CI job is spent pulling images.
With Node.js projects, I have observed improvements upwards of 3x (from an average of 20 minutes down to 7 minutes for the build, test, and release stages).