Docker: Building Ceph Images

Written by Michael Sevilla [Updated on 06/24/18 for Luminous!]
This blog was adapted from one of our other blogs [link]

In this post we compile Ceph and package it in a Docker image. This makes it easier to develop and test on a cluster because Docker layering reduces the time it takes nodes to pull new binaries. Because Mantle was merged into Ceph, the build process now includes Mantle by default!

Compiling Ceph and Building an Image

If you want to build with a CloudLab node, instantiate a node with our profile named CephFS-build and set up the node with Docker. For the Wisconsin CloudLab nodes we had to modify user groups and make space for the Docker images; see the CloudLab: Troubleshooting guide for assistance.

Because we are compiling Ceph and packaging it into a container, we use a convenience script that helps us pass the Docker parameters to a command-line invocation of docker run:

~$ wget
~$ .
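As a rough sketch of what such a wrapper does (the function name and mount path here are hypothetical; the real script is the one fetched above), it just accumulates -e VAR=VALUE pairs and hands them to docker run with the working directory mounted as a volume:

```shell
# Hypothetical sketch of a dmake-style wrapper; the real script is fetched
# above. It collects -e VAR=VALUE pairs and builds the docker run invocation,
# mounting the working directory so build artifacts persist on the host.
dmake_cmd() {
    env_flags=""
    while [ "$1" = "-e" ]; do
        env_flags="$env_flags -e $2"
        shift 2
    done
    # Remaining arguments are the builder image and its command.
    echo "docker run --rm -v $(pwd):/ceph$env_flags $*"
}

# Print the invocation that a set of flags would produce:
dmake_cmd -e IMAGE_NAME="ceph/daemon:custom" cephbuilder/ceph:luminous build-cmake
```

The sketch prints the command instead of running it so it is easy to inspect; the real script runs docker directly.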

Now we compile Ceph and build the Docker image with the new binaries:

~$ mkdir ceph; cd ceph
~/ceph$ dmake \
    -e IMAGE_NAME="ceph/daemon:custom" \
    -e GIT_URL="" \
    -e SHA1_OR_REF="remotes/origin/luminous" \
    -e RECONFIGURE="true" \
    -e BUILD_THREADS=`grep processor /proc/cpuinfo | wc -l` \
    -e BASE_DAEMON_IMAGE="ceph/daemon:master-29f1e9c-luminous-ubuntu-16.04-x86_64" \
    cephbuilder/ceph:luminous build-cmake 
[... snip ...]
++ exit 0
created image ceph/daemon:custom

The Ceph source code and compiled binaries end up in the ceph directory. Running dmake with -e passes environment variables to the container. These environment variables are read by the cephbuilder/ceph container; see the scripts directory to figure out what environment variables are available, what they mean, and how to use them. A brief description of the parameters we used for this build:

IMAGE_NAME: the name and tag of the resulting Docker image
GIT_URL: the Ceph repository to clone (left empty here)
SHA1_OR_REF: the git branch, tag, or commit to build (here the luminous branch)
RECONFIGURE: re-run cmake before compiling
BUILD_THREADS: the number of parallel build jobs, set to the number of CPUs
BASE_DAEMON_IMAGE: the ceph/daemon image that the new binaries are layered onto
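The BUILD_THREADS expression in the dmake invocation just counts logical CPUs; counting matching lines with grep -c is equivalent to the grep | wc -l pipeline, and on Linux nproc reports a similar number:

```shell
# Count logical CPUs two ways. grep -c counts matching lines, which is
# equivalent to the `grep processor /proc/cpuinfo | wc -l` pipeline above.
threads_grep=$(grep -c ^processor /proc/cpuinfo)
threads_nproc=$(nproc)
echo "grep: $threads_grep, nproc: $threads_nproc"
```

Note that nproc honors CPU affinity while /proc/cpuinfo lists all CPUs, so the two can differ when the build runs in a CPU-constrained container.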

In the process, the above command pulls the builder image from DockerHub. By default the builder targets the Ceph master branch; our SHA1_OR_REF setting selects luminous instead. Check out the DockerHub page here to see which images are getting pulled. For more information on what is in the image, take a look at our Ceph builder wiki here.

Checking the Image

We should sanity-check the image by making sure that all the Ceph command-line tools work:

~/ceph$ docker run --entrypoint=ceph ceph/daemon:custom
[... snip ...]
~/ceph$ docker run --entrypoint=ceph-fuse ceph/daemon:custom
[... snip ...]
~/ceph$ docker run --entrypoint=rados ceph/daemon:custom
rados: symbol lookup error: rados: undefined symbol: _ZN4ceph7logging3Log12create_entryEiiPm

If a command prints its help menu, it works; the last error is problematic and can happen when building images based on the Ceph master branch. Luckily, this is just a library problem: the rados binary resolves symbols against the stale librados shipped in the base image rather than the freshly compiled one. We can fix it by copying the new libraries into the image:
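The mangled symbol in the error message can be decoded with c++filt (from binutils), which confirms the missing function belongs to Ceph's internal logging code:

```shell
# Demangle the symbol from the rados error message to see what it refers to.
echo '_ZN4ceph7logging3Log12create_entryEiiPm' | c++filt
# Output: ceph::logging::Log::create_entry(int, int, unsigned long*)
```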

~/ceph$ docker run -dit --name=fix --entrypoint=/bin/bash -v `pwd`:/ceph ceph/daemon:custom
~/ceph$ docker exec -it fix cp /ceph/build/lib/*rados* /usr/lib
~/ceph$ docker commit --change='ENTRYPOINT ["/entrypoint.sh"]' fix ceph/daemon:custom
~/ceph$ docker run --entrypoint=rados ceph/daemon:custom
2016-10-31 03:57:02.637232 7fbc821a2a40 -1 did not load config file, using default settings.
rados: you must give an action. Try --help

Great. This is what we want.

After the images are built, you can start using them, so check out the next blog.


We compiled Ceph and packaged it in a Docker image. Hopefully this makes it easier to develop and test on a cluster while ensuring reproducibility and automation.

