=======================
Developing with cephadm
=======================

There are several ways to develop with cephadm. Which you use depends
on what you're trying to accomplish.

vstart --cephadm
================

- Start a cluster with vstart, with cephadm configured
- Manage any additional daemons with cephadm
- Requires compiled ceph binaries

In this case, the mon and manager at a minimum are running in the usual
vstart way, not managed by cephadm. But cephadm is enabled and the local
host is added, so you can deploy additional daemons or add additional hosts.

This works well for developing cephadm itself, because any mgr/cephadm
or cephadm/cephadm code changes can be applied by kicking ceph-mgr
with ``ceph mgr fail x``. (When the mgr (re)starts, it loads the
cephadm/cephadm script into memory.)

::

  MON=1 MGR=1 OSD=0 MDS=0 ../src/vstart.sh -d -n -x --cephadm

- ``~/.ssh/id_dsa[.pub]`` is used as the cluster key. It is assumed that
  this key is authorized to ssh with no passphrase to root@`hostname`.
- cephadm does not try to manage any daemons started by vstart.sh (any
  nonzero number in the environment variables). No service spec is defined
  for mon or mgr.
- You'll see health warnings from cephadm about stray daemons; that's because
  the vstart-launched daemons aren't controlled by cephadm.
- The default image is ``quay.io/ceph-ci/ceph:master``, but you can change
  this by passing ``-o container_image=...`` or ``ceph config set global container_image ...``.
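
For example, a typical edit/reload loop when hacking on mgr/cephadm in a
vstart cluster might look like this (a sketch; ``x`` is the default mgr name
in vstart)::

  # edit the orchestrator code in the source tree...
  vi ../src/pybind/mgr/cephadm/module.py

  # ...then kick the mgr so it reloads the module
  bin/ceph mgr fail x

  # confirm the orchestrator is back and responding
  bin/ceph orch status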

cstart and cpatch
=================

The ``cstart.sh`` script will launch a cluster using cephadm and put the
conf and keyring in your build dir, so that the ``bin/ceph ...`` CLI works
(just like with vstart). The ``ckill.sh`` script will tear it down.

- A unique but stable fsid is stored in ``fsid`` (in the build dir).
- The mon port is random, just like with vstart.
- The container image is ``quay.io/ceph-ci/ceph:$tag`` where $tag is
  the first 8 chars of the fsid.
- If the container image doesn't exist yet when you run cstart for the
  first time, it is built with cpatch.

There are a few advantages here:

- The cluster is a "normal" cephadm cluster that looks and behaves
  just like a user's cluster would. In contrast, vstart and teuthology
  clusters tend to be special in subtle (and not-so-subtle) ways.

To start a test cluster::

  sudo ../src/cstart.sh

The last line of this will be a line you can cut+paste to update the
container image. For instance::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e

By default, cpatch will patch everything it can think of from the local
build dir into the container image. If you are working on a specific
part of the system, though, you can get away with smaller changes so that
cpatch runs faster. For instance::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e --py

will update the mgr modules (minus the dashboard). Or::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e --core

will do most binaries and libraries. Pass ``-h`` to cpatch for all options.

Once the container is updated, you can refresh/restart daemons by bouncing
them with::

  sudo systemctl restart ceph-`cat fsid`.target
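
If you only need to bounce a single daemon, you can restart its individual
systemd unit instead; cephadm names the per-daemon units
``ceph-<fsid>@<daemon-name>.service``. For example (the ``mgr.myhost.abcdef``
daemon name below is hypothetical; use ``bin/ceph orch ps`` to find yours)::

  sudo systemctl restart ceph-`cat fsid`@mgr.myhost.abcdef.service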

When you're done, you can tear down the cluster with::

  sudo ../src/ckill.sh   # or,
  sudo ../src/cephadm/cephadm rm-cluster --force --fsid `cat fsid`

cephadm bootstrap --shared_ceph_folder
======================================

Cephadm can also be used directly without compiled ceph binaries.

Run cephadm like so::

  sudo ./cephadm bootstrap --mon-ip 127.0.0.1 \
    --ssh-private-key /home/<user>/.ssh/id_rsa \
    --skip-mon-network \
    --skip-monitoring-stack --single-host-defaults \
    --skip-dashboard \
    --shared_ceph_folder /home/<user>/path/to/ceph/

- ``~/.ssh/id_rsa`` is used as the cluster key. It is assumed that
  this key is authorized to ssh with no passphrase to root@`hostname`.

Source code changes made in the ``pybind/mgr/`` directory then
require a daemon restart to take effect.
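
For example, after editing a module under ``pybind/mgr/``, one way to restart
the mgr daemons is through the orchestrator (a sketch; any host with a working
``ceph`` CLI and admin keyring will do)::

  sudo ./cephadm shell -- ceph orch restart mgr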

Note regarding network calls from CLI handlers
==============================================

Executing any cephadm CLI commands like ``ceph orch ls`` will block the
mon command handler thread within the MGR, thus preventing any concurrent
CLI calls. Note that pressing ``^C`` will not resolve this situation,
as *only* the client will be aborted, but not the execution of the command
within the orchestrator manager module itself. This means cephadm will
be completely unresponsive until the execution of the CLI handler is
fully completed. Note that even ``ceph orch ps`` will not respond while
another handler is executing.

This means we should do very few synchronous calls to remote hosts.
As a guideline, cephadm should do at most ``O(1)`` network calls in CLI handlers.
Everything else should be done asynchronously in other threads, like ``serve()``.

Note regarding different variables used in the code
===================================================

* a ``service_type`` is something like mon, mgr, alertmanager, etc., as defined
  in ``ServiceSpec``
* a ``service_id`` is the name of the service. Some services don't have
  names.
* a ``service_name`` is ``<service_type>.<service_id>``
* a ``daemon_type`` is the same as the service_type, except for ingress,
  which has the haproxy and keepalived daemon types.
* a ``daemon_id`` is typically ``<service_id>.<hostname>.<random-string>``.
  (Not the case for e.g. OSDs. OSDs are always called OSD.N.)
* a ``daemon_name`` is ``<daemon_type>.<daemon_id>``
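
As a concrete example (all names below are hypothetical), an RGW service called
``foo`` with one daemon on host ``myhost`` would use the following values::

  service_type: rgw
  service_id:   foo
  service_name: rgw.foo
  daemon_type:  rgw
  daemon_id:    foo.myhost.lqwbts
  daemon_name:  rgw.foo.myhost.lqwbts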

Kcli: a virtualization management tool to make orchestrator development easier
================================================================================
`Kcli <https://github.com/karmab/kcli>`_ is meant to interact with existing
virtualization providers (libvirt, KubeVirt, oVirt, OpenStack, VMware vSphere,
GCP and AWS) and to easily deploy and customize VMs from cloud images.

It allows you to set up an environment with several VMs with your preferred
configuration (memory, CPUs, disks) and OS flavor.

Main advantages:
----------------
- It is fast. Typically you can have a completely new Ceph cluster ready for
  debugging and developing orchestrator features in less than 5 minutes.
- It is a "near production" lab. The lab created with kcli is very close to
  "real" clusters in QE labs or even in production, so it is easy to test
  "real things" in an almost "real" environment.
- It is safe and isolated. It does not depend on the things you have installed
  on your machine, and the VMs are isolated from your environment.
- It is an easy "dev" environment. For non-compiled software pieces, for
  example any mgr module, it is an environment that allows you to test your
  changes interactively.

Installation:
-------------
Complete documentation is available at
`kcli installation <https://kcli.readthedocs.io/en/latest/#installation>`_,
but we strongly suggest using the container image approach.

Things to do:

1. Review the `requirements <https://kcli.readthedocs.io/en/latest/#libvirt-hypervisor-requisites>`_
   and install/configure whatever you need to meet them.
2. Get the kcli image and create an alias for executing the kcli command::

    # podman pull quay.io/karmab/kcli
    # alias kcli='podman run --net host -it --rm --security-opt label=disable -v $HOME/.ssh:/root/.ssh -v $HOME/.kcli:/root/.kcli -v /var/lib/libvirt/images:/var/lib/libvirt/images -v /var/run/libvirt:/var/run/libvirt -v $PWD:/workdir -v /var/tmp:/ignitiondir quay.io/karmab/kcli'

.. note:: ``/var/lib/libvirt/images`` can be customized; just be sure you are
   using that folder for your OS images.

.. note:: Once you have used kcli to create and use different labs, we suggest
   that you "save" and use your own kcli image.
   Why? kcli is actively developed and changes over time (and for the moment
   only one tag exists: ``latest``). Because the current functionality is more
   than enough for us, and what we want above all is stability, we suggest
   storing the kcli image you are using in a safe place and updating
   your kcli alias to use your own image.
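
If you decide to do that, a minimal sketch with podman could look like this
(the ``localhost/my-kcli:stable`` name is just an example)::

  # tag the image you are currently using under a name you control...
  podman tag quay.io/karmab/kcli:latest localhost/my-kcli:stable

  # ...and point your alias at it (only the image at the end changes)
  alias kcli='podman run --net host -it --rm --security-opt label=disable -v $HOME/.ssh:/root/.ssh -v $HOME/.kcli:/root/.kcli -v /var/lib/libvirt/images:/var/lib/libvirt/images -v /var/run/libvirt:/var/run/libvirt -v $PWD:/workdir -v /var/tmp:/ignitiondir localhost/my-kcli:stable'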

Test your kcli installation:
----------------------------
See the kcli `basic usage workflow <https://kcli.readthedocs.io/en/latest/#basic-workflow>`_.
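
As a quick smoke test, listing VMs through the alias should work even before
you create any lab (a sketch)::

  # kcli list vms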

Create a Ceph lab cluster
-------------------------
In order to make this task easy, we are going to use a kcli plan.

A kcli plan is a file where you can define the different settings you want to
have in a set of VMs.
You can define hardware parameters (CPU, memory, disks ...) and the operating
system, and it also allows you to automate the installation and configuration
of any software you want to have.

There is a `repository <https://github.com/karmab/kcli-plans>`_ with a collection of
plans that can be used for different purposes, and we have predefined plans to
install Ceph clusters using Ceph ansible or cephadm. Let's create our first Ceph
cluster using cephadm::

  # kcli create plan -u https://github.com/karmab/kcli-plans/blob/master/ceph/ceph_cluster.yml

This will create a set of three VMs using the plan file pointed to by the URL.
After a few minutes (depending on the power of your laptop), let's examine the
cluster:

* Take a look at the VMs created::

    # kcli list vms

* Log in to the bootstrap node::

    # kcli ssh ceph-node-00

* Take a look at the Ceph cluster installed::

    [centos@ceph-node-00 ~]$ sudo -i
    [root@ceph-node-00 ~]# cephadm version
    [root@ceph-node-00 ~]# cephadm shell
    [ceph: root@ceph-node-00 /]# ceph orch host ls
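
From that same ``cephadm shell`` you can keep exploring the cluster with
standard ceph commands, for example (a sketch)::

  [ceph: root@ceph-node-00 /]# ceph -s
  [ceph: root@ceph-node-00 /]# ceph orch ps
  [ceph: root@ceph-node-00 /]# ceph orch ls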

Create a Ceph cluster for easy development of mgr modules (Orchestrators and Dashboard)
------------------------------------------------------------------------------------------
The cephadm kcli plan (and cephadm) are prepared to do exactly that.

The idea behind this method is to replace several of the python mgr folders in
each of the ceph daemons with the source code folders in your host machine.
This "trick" allows you to make changes in any orchestrator or dashboard
module and test them immediately (you only need to disable/enable the mgr
module).

So, in order to create a ceph cluster for development purposes, you must use the
same cephadm plan but with a new parameter pointing to your Ceph source code folder::

  # kcli create plan -u https://github.com/karmab/kcli-plans/blob/master/ceph/ceph_cluster.yml -P ceph_dev_folder=/home/mycodefolder/ceph
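
For example, after changing code under one of the orchestrator modules, a quick
reload could look like this (a sketch, run from inside ``cephadm shell`` on the
bootstrap node)::

  [ceph: root@ceph-node-00 /]# ceph mgr module disable cephadm
  [ceph: root@ceph-node-00 /]# ceph mgr module enable cephadm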

Ceph Dashboard development
--------------------------
The Ceph Dashboard module is not going to be loaded if you have not previously
generated the frontend bundle.

For now, in order to load the Ceph Dashboard module properly and to apply
frontend changes, you have to run "ng build" on your laptop::

  # Start local frontend build with watcher (in background):
  sudo dnf install -y nodejs
  cd <path-to-your-ceph-repo>
  cd src/pybind/mgr/dashboard/frontend
  sudo chown -R <your-user>:root dist node_modules
  NG_CLI_ANALYTICS=false npm ci
  npm run build -- --deleteOutputPath=false --watch &

After saving your changes, the frontend bundle will be built again.
When completed, you'll see::

  "Localized bundle generation complete."

Then you can reload your Dashboard browser tab.
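
For backend (Python) dashboard changes, as with the other mgr modules, one
sketch of reloading the code is to disable and re-enable the module from a
cephadm shell::

  [ceph: root@ceph-node-00 /]# ceph mgr module disable dashboard
  [ceph: root@ceph-node-00 /]# ceph mgr module enable dashboard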