=================================
Developer Guide (Quick)
=================================

This guide describes how to build and test Ceph for development.

Development
-----------

The ``run-make-check.sh`` script installs Ceph's dependencies, compiles
everything in debug mode, and runs a number of tests to verify that the
result behaves as expected.

.. prompt:: bash $

   ./run-make-check.sh

Alternatively, if you want to work on a specific component of Ceph, install
the dependencies and build Ceph in debug mode with the required cmake flags.

Example:

.. prompt:: bash $

   ./install-deps.sh
   ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF

Running a development deployment
--------------------------------

Ceph contains a script called ``vstart.sh`` (see also
:doc:`/dev/dev_cluster_deployement`) which allows developers to quickly test
their code using a simple deployment on their development systems. Once the
build finishes successfully, start the Ceph deployment using the following
commands:

.. prompt:: bash $

   cd ceph/build  # Assuming this is where you ran cmake
   make vstart
   ../src/vstart.sh -d -n -x

You can also configure ``vstart.sh`` to use only one monitor and one metadata
server:

.. prompt:: bash $

   MON=1 MDS=1 ../src/vstart.sh -d -n -x
The system creates two pools on startup: ``cephfs_data_a`` and
``cephfs_metadata_a``. Let's get some stats on the current pools:

.. code-block:: console

   $ bin/ceph osd pool stats
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   pool cephfs_metadata_a id 2
     nothing is going on

   $ bin/ceph osd pool stats cephfs_data_a
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   $ bin/rados df
   POOL_NAME          USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD  WR_OPS    WR
   cephfs_data_a         0        0       0       0                   0        0         0       0   0       0     0
   cephfs_metadata_a  2246       21       0      63                   0        0         0       0   0      42  8192

   total_objects    21
   total_used       244G
   total_space      1180G


Make a pool and run some benchmarks against it:

.. prompt:: bash $

   bin/ceph osd pool create mypool
   bin/rados -p mypool bench 10 write -b 123

Place a file into the new pool:

.. prompt:: bash $

   bin/rados -p mypool put objectone <somefile>
   bin/rados -p mypool put objecttwo <anotherfile>

List the objects in the pool:

.. prompt:: bash $

   bin/rados -p mypool ls

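To confirm that the uploads round-tripped intact, you can read an object back
and byte-compare it with the original file (a minimal sketch;
``/tmp/objectone.out`` is an arbitrary destination path chosen for this
example):

```shell
# Read "objectone" back out of the pool into a local file.
bin/rados -p mypool get objectone /tmp/objectone.out

# cmp exits 0 only when the two files are byte-identical.
cmp <somefile> /tmp/objectone.out && echo "objectone round-tripped intact"
```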
Once you are done, type the following to stop the development Ceph deployment:

.. prompt:: bash $

   ../src/stop.sh

Resetting your vstart environment
---------------------------------

The vstart script creates ``out/`` and ``dev/`` directories which contain the
cluster's state. If you want to quickly reset your environment, you might do
something like this:

.. prompt:: bash [build]$

   ../src/stop.sh
   rm -rf out dev
   MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d

Running a RadosGW development environment
-----------------------------------------

Set the ``RGW`` environment variable when running ``vstart.sh`` to enable the
RadosGW:

.. prompt:: bash $

   cd build
   RGW=1 ../src/vstart.sh -d -n -x

You can now use the Swift Python client to communicate with the RadosGW:

.. prompt:: bash $

   swift -A http://localhost:8000/auth -U test:tester -K testing list
   swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
   swift -A http://localhost:8000/auth -U test:tester -K testing list

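To verify the upload, the same client can report the container's metadata and
fetch the stored objects back (a sketch; ``download`` writes the objects into
the current working directory):

```shell
# Show container metadata (object count, bytes used) for mycontainer.
swift -A http://localhost:8000/auth -U test:tester -K testing stat mycontainer

# Download every object stored in mycontainer into the current directory.
swift -A http://localhost:8000/auth -U test:tester -K testing download mycontainer
```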

Run unit tests
--------------

The tests are located in ``src/tests``. To run them, type:

.. prompt:: bash $

   make check
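
``make check`` runs the suite through CTest, so, assuming a configured build
directory, you can also list the registered tests and run just a matching
subset (``rados`` below is an illustrative name pattern, not a required one):

```shell
cd build          # the cmake build directory

# List the names of all registered tests without running them.
ctest -N

# Run only the tests whose names match a regular expression,
# printing the full output of any test that fails.
ctest -R rados --output-on-failure
```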