=================================
Developer Guide (Quick)
=================================

This guide describes how to build and test Ceph for development.

Development
-----------

The ``run-make-check.sh`` script will install Ceph dependencies,
compile everything in debug mode, and run a number of tests to verify
that the result behaves as expected.

.. prompt:: bash $

   ./run-make-check.sh

Optionally, if you want to work on a specific component of Ceph,
install the dependencies and build Ceph in debug mode with the required cmake flags.

Example:

.. prompt:: bash $

   ./install-deps.sh
   ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF

You can also disable the build of core components that are not relevant to
your development:

.. prompt:: bash $

   ./do_cmake.sh ... -DWITH_RBD=OFF -DWITH_KRBD=OFF -DWITH_RADOSGW=OFF

Finally, build Ceph:

.. prompt:: bash $

   cmake --build build [--target <target>...]

Omit the ``--target`` option if you want to do a full build.

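For example, to rebuild only the OSD daemon after a change, name its target explicitly (``ceph-osd`` is the usual executable target name, but check the targets generated in your own build tree):

.. prompt:: bash $

   cmake --build build --target ceph-osd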

Running a development deployment
--------------------------------

Ceph contains a script called ``vstart.sh`` (see also
:doc:`/dev/dev_cluster_deployement`) which allows developers to quickly test
their code using a simple deployment on their development system. Once the build
finishes successfully, start the Ceph deployment using the following commands:

.. prompt:: bash $

   cd build
   ../src/vstart.sh -d -n

You can also configure ``vstart.sh`` to use only one monitor and one metadata server by using the following:

.. prompt:: bash $

   env MON=1 MDS=1 ../src/vstart.sh -d -n -x

Most logs from the cluster can be found in ``build/out``.

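To confirm that the cluster came up, you can, for example, query its status from the ``build`` directory:

.. prompt:: bash $

   bin/ceph -s
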
The system creates two pools on startup: ``cephfs_data_a`` and ``cephfs_metadata_a``. Let's get some stats on
the current pools:

.. code-block:: console

   $ bin/ceph osd pool stats
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   pool cephfs_metadata_a id 2
     nothing is going on

   $ bin/ceph osd pool stats cephfs_data_a
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   $ bin/rados df
   POOL_NAME          USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD  WR_OPS    WR
   cephfs_data_a         0        0       0       0                   0        0         0       0   0       0     0
   cephfs_metadata_a  2246       21       0      63                   0        0         0       0   0      42  8192

   total_objects    21
   total_used       244G
   total_space      1180G

Make a pool and run some benchmarks against it:

.. prompt:: bash $

   bin/ceph osd pool create mypool
   bin/rados -p mypool bench 10 write -b 123

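By default ``rados bench`` deletes the objects it wrote once the run finishes. If you also want to benchmark reads, keep the objects with ``--no-cleanup`` and follow up with a sequential read run; the durations and block size below are only examples:

.. prompt:: bash $

   bin/rados -p mypool bench 10 write -b 123 --no-cleanup
   bin/rados -p mypool bench 10 seq
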
Place a file into the new pool:

.. prompt:: bash $

   bin/rados -p mypool put objectone <somefile>
   bin/rados -p mypool put objecttwo <anotherfile>

List the objects in the pool:

.. prompt:: bash $

   bin/rados -p mypool ls

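You can also read an object back out of the pool; the output path below is just an example:

.. prompt:: bash $

   bin/rados -p mypool get objectone /tmp/objectone.out
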
Once you are done, type the following to stop the development Ceph deployment:

.. prompt:: bash $

   ../src/stop.sh

Resetting your vstart environment
---------------------------------

The vstart script creates ``out/`` and ``dev/`` directories, which contain
the cluster's state. If you want to quickly reset your environment,
you might do something like this:

.. prompt:: bash [build]$

   ../src/stop.sh
   rm -rf out dev
   env MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d

Running a RadosGW development environment
-----------------------------------------

Set the ``RGW`` environment variable when running ``vstart.sh`` to enable the RadosGW.

.. prompt:: bash $

   cd build
   RGW=1 ../src/vstart.sh -d -n -x

You can now use the Swift Python client to communicate with the RadosGW.

.. prompt:: bash $

   swift -A http://localhost:8000/auth -U test:tester -K testing list
   swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
   swift -A http://localhost:8000/auth -U test:tester -K testing list

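You can fetch the uploaded objects back with the same client; by default they are written to the current directory:

.. prompt:: bash $

   swift -A http://localhost:8000/auth -U test:tester -K testing download mycontainer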

Run unit tests
--------------

The tests are located in ``src/test``. To run them, type:

.. prompt:: bash $

   (cd build && ninja check)
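
To run only a subset of the tests, you can also invoke ``ctest`` from the ``build`` directory and filter tests by name; the regular expression below is only an example:

.. prompt:: bash $

   (cd build && ctest -R rados)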