# Ceph - a scalable distributed storage system

Please see https://ceph.com/ for current info.

## Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some
miscellaneous code is under a BSD-style license or is in the public domain.
The documentation is licensed under Creative Commons
Attribution Share Alike 3.0 (CC-BY-SA-3.0). There are a handful of headers
included here that are licensed under the GPL. Please see the file
COPYING for a full inventory of licenses by file.

Code contributions must include a valid "Signed-off-by" acknowledging
the license for the modified or contributed file. Please see the file
SubmittingPatches.rst for details on what that means and on how to
generate and submit patches.
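
For example, one common way to add the sign-off is git's `-s`/`--signoff`
flag, which appends a `Signed-off-by:` trailer using your configured git name
and email (the commit message below is only illustrative):

    git commit -s -m "doc/README: fix a typo"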

We do not require assignment of copyright to contribute code; code is
contributed under the terms of the applicable license.

## Checking out the source

You can clone from github with

    git clone git@github.com:ceph/ceph

or, if you are not a github user,

    git clone https://github.com/ceph/ceph.git

Ceph contains many git submodules that need to be checked out with

    git submodule update --init --recursive

## Build Prerequisites

The list of Debian or RPM package dependencies can be installed with:
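
    ./install-deps.sh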

## Building Ceph

Note that these instructions are meant for developers who are
compiling the code for development and testing. To build binaries
suitable for installation we recommend you build deb or rpm packages
or refer to `ceph.spec.in` or `debian/rules` to see which
configuration options are specified for production builds.
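
Build instructions:

    ./do_cmake.sh
    cd build
    ninja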

(Note: `do_cmake.sh` now defaults to creating a debug build of Ceph that can
be up to 5x slower with some workloads. Please pass
"-DCMAKE_BUILD_TYPE=RelWithDebInfo" to `do_cmake.sh` to create a non-debug
release build.)

The number of jobs used by `ninja` is derived from the number of CPU cores of
the building host if unspecified. Use the `-j` option to limit the job number
if the build jobs are running out of memory. On average, each job takes around
2.5 GiB of memory.
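
For example, to limit the build to four parallel jobs on a memory-constrained
machine:

    ninja -j4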

This assumes you make your build dir a subdirectory of the ceph.git
checkout. If you put it elsewhere, just point `CEPH_GIT_DIR` to the correct
path to the checkout. Any additional CMake args can be specified by setting
ARGS before invoking do_cmake. See [cmake options](#cmake-options) for more
details. For example:

    ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh

To build only certain targets use:
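
    ninja [target name]

To install:

    ninja install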

### CMake Options

If you run the `cmake` command by hand, there are many options you can
set with "-D". For example, the option to build the RADOS Gateway
defaults to ON. To build without the RADOS Gateway:

    cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]

Another example below is building with debugging and alternate locations
for a couple of external dependencies:

    cmake -DLEVELDB_PREFIX="/opt/hyperleveldb" \
    -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
    ..

Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By
default, cmake will build these bundled dependencies from source instead of
using libraries that are already installed on the system. You can opt in to
using these system libraries, provided they meet the minimum version required
by Ceph, with cmake options like `WITH_SYSTEM_BOOST`:

    cmake -DWITH_SYSTEM_BOOST=ON [...]

To view an exhaustive list of -D options, you can invoke `cmake` with:
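
    cmake -LH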

If you often pipe `ninja` to `less` and would like to maintain the
diagnostic colors for errors and warnings (and if your compiler
supports it), you can invoke `cmake` with:

    cmake -DDIAGNOSTICS_COLOR=always ...

Then you'll get the diagnostic colors when you execute:
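
    ninja | less -R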

Other available values for 'DIAGNOSTICS_COLOR' are 'auto' (default) and
'never'.

## Building a source tarball

To build a complete source tarball with everything needed to build from
source and/or build a (deb or rpm) package, run
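
    ./make-dist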

This will create a tarball like ceph-$version.tar.bz2 from git.
(Ensure that any changes you want to include in your working directory
are committed to git.)

## Running a test cluster

To run a functional test cluster,

    cd build
    ninja vstart        # builds just enough to run vstart
    ../src/vstart.sh --debug --new -x --localhost --bluestore
    ./bin/ceph -s

Almost all of the usual commands are available in the bin/ directory. For
example,

    ./bin/rados -p rbd bench 30 write
    ./bin/rbd create foo --size 1000

To shut down the test cluster,
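
    ../src/stop.sh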

To start or stop individual daemons, the sysvinit script can be used:

    ./bin/init-ceph restart osd.0

## Running unit tests

To build and run all tests (in parallel using all processors), use `ctest`:
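
    cd build
    ninja
    ctest -j$(nproc)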

(Note: Many targets built from src/test are not run using `ctest`.
Targets starting with "unittest" are run in `ninja check` and thus can
be run with `ctest`. Targets starting with "ceph_test" can not, and should
be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.

To build and run all tests and their dependencies without other
unnecessary targets in Ceph:

    cd build
    ninja check -j$(nproc)

To run an individual test manually, run `ctest` with -R (regex matching):

    ctest -R [regex matching test name(s)]

(Note: `ctest` does not build the test it's running or the dependencies needed
to run it.)

To run an individual test manually and see all of the test's output, run
`ctest` with the -V (verbose) flag:

    ctest -V -R [regex matching test name(s)]

To run tests manually and run the jobs in parallel, run `ctest` with
the `-j` flag:

    ctest -j [number of jobs]

There are many other flags you can give `ctest` for better control
over manual test execution. To view these options run:
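
    ctest --help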

## Building the Documentation

### Prerequisites

The list of package dependencies for building the documentation can be
found in `doc_deps.deb.txt`:

    sudo apt-get install `cat doc_deps.deb.txt`

### Building the Documentation

To build the documentation, ensure that you are in the top-level
`/ceph` directory, and execute the build script. For example:
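
    admin/build-doc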

## Reporting Issues

To report an issue and view existing issues, please visit
https://tracker.ceph.com/projects/ceph.