In this post we will compile Ceph and package it in a Docker image. This makes it easier to develop and test on a cluster because Docker layering reduces the time it takes for nodes to pull new binaries. Because Mantle was merged into Ceph, the build process now includes Mantle by default!
If you want to build with a CloudLab node, instantiate a node with our profile named CephFS-build and set up the node with Docker. For the Wisconsin CloudLab nodes we had to modify user groups and make space for the Docker images; see the CloudLab: Troubleshooting guide for assistance.
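The exact setup steps depend on the node image; as a rough sketch on an Ubuntu 16.04 node (the spare-disk mount point /mnt below is an assumption and varies by site):
~$ sudo apt-get update && sudo apt-get install -y docker.io
~$ sudo usermod -aG docker $USER    # add yourself to the docker group
~$ sudo systemctl stop docker
~$ sudo mv /var/lib/docker /mnt/docker    # move image storage to the big partition
~$ sudo ln -s /mnt/docker /var/lib/docker
~$ sudo systemctl start docker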
Because we are compiling Ceph and packaging it into a container, we use a convenience script that helps us pass the Docker parameters to a command-line invocation of docker run:
~$ wget https://raw.githubusercontent.com/systemslab/docker-cephdev/luminous/aliases.sh
~$ . aliases.sh
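Although we use aliases.sh as-is, conceptually dmake is just a thin wrapper around docker run; a hypothetical sketch of such an alias (the real definition in aliases.sh may differ):
# Sketch only: mounts the working directory into the container and
# forwards all remaining arguments (e.g., -e VAR=value and the image name).
dmake() {
    docker run --rm -v "$(pwd)":/ceph "$@"
}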
Now we compile Ceph and build the Docker image with the new binaries:
~$ mkdir ceph; cd ceph
~/ceph$ dmake \
-e IMAGE_NAME="ceph/daemon:custom" \
-e GIT_URL="https://github.com/ceph/ceph.git" \
-e SHA1_OR_REF="remotes/origin/luminous" \
-e CONFIGURE_FLAGS="-DWITH_RDMA=OFF -DWITH_TESTS=OFF" \
-e RECONFIGURE="true" \
-e BUILD_THREADS=`grep processor /proc/cpuinfo | wc -l` \
-e BASE_DAEMON_IMAGE="ceph/daemon:master-29f1e9c-luminous-ubuntu-16.04-x86_64" \
cephbuilder/ceph:luminous build-cmake
[... snip ...]
++ exit 0
created image ceph/daemon:custom
The Ceph source code and compiled binaries end up in the ceph directory.
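To double-check the result, we can list the new image and peek at the compiled libraries; these are plain Docker and shell commands, assuming the CMake build tree landed in build/ (the same layout the library fix further below relies on):
~/ceph$ docker images ceph/daemon
~/ceph$ ls build/lib | grep rados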
Running dmake with -e passes environment variables to the container. These environment variables are read by the cephbuilder/ceph container; see the scripts directory to figure out which environment variables are available, what they mean, and how to use them. A brief description of the parameters we used for this build:
IMAGE_NAME: the name of the resulting image; here we follow the naming protocol of the Docker images distributed by Ceph but tag it as a custom build
GIT_URL: where to pull the source code from
SHA1_OR_REF: which branch, commit, or reference to check out
CONFIGURE_FLAGS: compile parameters; in this case we skip building the RDMA libraries and the testing infrastructure
RECONFIGURE: indicates that the build should be done from scratch
BUILD_THREADS: the number of cores to use during compilation; in the example above, we use all available cores
BASE_DAEMON_IMAGE: the stock ceph/daemon image that the freshly compiled binaries are layered onto
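As an aside, the backticked expression we pass to BUILD_THREADS simply counts the processor entries in /proc/cpuinfo; on most Linux systems nproc reports the same number and could be used instead:
~$ grep processor /proc/cpuinfo | wc -l
~$ nproc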
In the process, the above command pulls the builder image from DockerHub. By default the builder image builds the Ceph master branch; here we override it to the luminous branch with SHA1_OR_REF. Check out the DockerHub page here to see which images are getting pulled. For more information on what is in the image, take a look at our Ceph builder wiki here.
We should sanity-check the image by making sure that all of the Ceph command-line tools work:
~/ceph$ docker run --entrypoint=ceph ceph/daemon:custom
[... snip ...]
~/ceph$ docker run --entrypoint=ceph-fuse ceph/daemon:custom
[... snip ...]
~/ceph$ docker run --entrypoint=rados ceph/daemon:custom
rados: symbol lookup error: rados: undefined symbol: _ZN4ceph7logging3Log12create_entryEiiPm
If a command returns its help menu it is fine; the last error is problematic and can happen when building images based on the Ceph master branch, where the freshly compiled binaries reference a symbol that the older librados in the base image does not provide. Luckily, this is just a library problem and it is easy to fix.
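As a quick sanity check that this is indeed a library-version problem, the mangled symbol from the error message can be decoded with c++filt (part of binutils; this step is optional):
~/ceph$ echo _ZN4ceph7logging3Log12create_entryEiiPm | c++filt
ceph::logging::Log::create_entry(int, int, unsigned long*)
The missing symbol belongs to the freshly built librados, so the fix is to copy the new libraries over the stale ones in the image and commit the result: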
~/ceph$ docker run -itd --name=fix --entrypoint=/bin/bash -v `pwd`:/ceph ceph/daemon:custom
~/ceph$ docker exec fix bash -c 'cp /ceph/build/lib/*rados* /usr/lib'
~/ceph$ docker commit --change='ENTRYPOINT ["/entrypoint.sh"]' fix ceph/daemon:custom
~/ceph$ docker run --entrypoint=rados ceph/daemon:custom
2016-10-31 03:57:02.637232 7fbc821a2a40 -1 did not load config file, using default settings.
rados: you must give an action. Try --help
Great. This is what we want.
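With the fixed image committed, the temporary fix container is no longer needed:
~/ceph$ docker rm -f fix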
After the images are built, you can start using them, so check out the next blog post.
We compiled Ceph and packaged it in a Docker image. Hopefully this makes it easier to develop and test on a cluster while ensuring reproducibility and automation.