In this post we will compile Ceph and package it in a Docker image. This makes it easier to develop and test on a cluster because Docker layering reduces the time it takes for nodes to pull new binaries. Because Mantle was merged into Ceph, the build process now includes Mantle by default!
First, get the source code:
```shell
~$ git clone --recursive https://github.com/ceph/ceph.git
```
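The --recursive flag matters because the Ceph repository pulls in its dependencies as git submodules. If you already cloned without it, standard git (nothing specific to this post) can fetch the submodules after the fact:

```shell
# Run inside an existing clone that is missing its submodules;
# fetches and checks out all nested submodules recursively.
git submodule update --init --recursive
```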
We will build a Docker image with our custom Ceph binaries. First, pull in the Ceph image you want to layer your changes onto and tag it so our Docker containers know where to find it:
```shell
~$ docker pull ceph/daemon:tag-build-master-jewel-ubuntu-14.04
tag-build-master-jewel-ubuntu-14.04: Pulling from ceph/daemon
[... snip ...]
~$ docker tag ceph/daemon:tag-build-master-jewel-ubuntu-14.04 ceph/daemon:latest
~$ docker images
REPOSITORY     TAG       IMAGE ID       CREATED       SIZE
ceph/daemon    latest    e68ed703825f   13 days ago   1.078 GB
```
Now we compile Ceph and build the Docker image with the new binaries:
```shell
~$ wget https://raw.githubusercontent.com/systemslab/docker-cephdev/master/aliases.sh
~$ . aliases.sh
~$ mkdir ceph; cd ceph
~/ceph$ dmake \
    -e GIT_URL="https://github.com/ceph/ceph.git" \
    -e RECONFIGURE="true" \
    -e SHA1_OR_REF="remotes/origin/master" \
    -e CONFIGURE_FLAGS="-DWITH_TESTS=OFF" \
    -e BUILD_THREADS=`grep processor /proc/cpuinfo | wc -l` \
    cephbuilder/ceph:latest \
    build-cmake
~/ceph$ docker tag ceph-heads/remotes/origin/master myimg
```
This tells the builder to pull the source code from GIT_URL and check out SHA1_OR_REF. The RECONFIGURE flag ensures that the build directory is re-configured before compiling, and BUILD_THREADS sets the number of cores to use during the compilation; in the example above, we use all available cores. Setting CONFIGURE_FLAGS to -DWITH_TESTS=OFF reduces the final size of the image by skipping the installation of test binaries. The Ceph source code and compiled binaries end up in the current working directory (~/ceph).
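As a quick sanity check of the core count that gets passed as BUILD_THREADS, you can run the same expression on the host (this is just an illustration of what the dmake invocation above computes):

```shell
# Count logical cores the same way the dmake invocation does:
# one "processor" entry per core in /proc/cpuinfo on Linux.
cores=$(grep -c ^processor /proc/cpuinfo)
echo "Building with $cores threads"
```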
In the process, the above commands pull the builder image from DockerHub. The builder image builds the Ceph master branch (we also have one for building Jewel). Check out the cephbuilder page on DockerHub to see which images are getting pulled, and take a look at our Ceph builder wiki for more information on what is in the image.
We should sanity-check the image by making sure that all the Ceph command-line tools work:
```shell
~/ceph$ docker run --entrypoint=ceph myimg
[... snip ...]
~/ceph$ docker run --entrypoint=ceph-fuse myimg
[... snip ...]
~/ceph$ docker run --entrypoint=rados myimg
rados: symbol lookup error: rados: undefined symbol: _ZN4ceph7logging3Log12create_entryEiiPm
```
If the commands return help menus, they are fine; the last error is problematic and can happen when building images based on the Ceph master branch. Luckily, this is just a library problem, and we can fix it by copying the freshly built libraries into the image:
```shell
~/ceph$ docker run -dit --name=fix --entrypoint=/bin/bash -v `pwd`:/ceph myimg
~/ceph$ docker exec -it fix cp /ceph/build/lib/*rados* /usr/lib
~/ceph$ docker commit --change='ENTRYPOINT ["/entrypoint.sh"]' fix myimg
~/ceph$ docker run --entrypoint=rados myimg
2016-10-31 03:57:02.637232 7fbc821a2a40 -1 did not load config file, using default settings.
rados: you must give an action. Try --help
```
Great. This is what we want.
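As an aside, you can decode mangled symbols like the one in the earlier error with c++filt (part of binutils; not something this workflow requires) to see exactly which C++ function failed to resolve:

```shell
# Demangle the missing symbol from the "undefined symbol" error above.
echo '_ZN4ceph7logging3Log12create_entryEiiPm' | c++filt
# prints: ceph::logging::Log::create_entry(int, int, unsigned long*)
```

Knowing the missing function helps confirm that the stale library inside the image, not the rados binary itself, is the culprit.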
After the images are built, you can start using them, so check out the next blog post.
We compiled Ceph and packaged it in a Docker image. Hopefully this makes it easier to develop and test on a cluster while ensuring reproducibility and automation.