=======================
Developing with cephadm
=======================

There are several ways to develop with cephadm. Which you use depends
on what you're trying to accomplish.

vstart --cephadm
================

- Start a cluster with vstart, with cephadm configured
- Manage any additional daemons with cephadm

In this case, the mon and manager at a minimum are running in the usual
vstart way, not managed by cephadm. But cephadm is enabled and the local
host is added, so you can deploy additional daemons or add additional hosts.

This works well for developing cephadm itself, because any mgr/cephadm
or cephadm/cephadm code changes can be applied by kicking ceph-mgr
with ``ceph mgr fail x``. (When the mgr (re)starts, it loads the
cephadm/cephadm script into memory.)

::

  MON=1 MGR=1 OSD=0 MDS=0 ../src/vstart.sh -d -n -x --cephadm

- ``~/.ssh/id_dsa[.pub]`` is used as the cluster key. It is assumed that
  this key is authorized to ssh with no passphrase to root@`hostname`.
- cephadm does not try to manage any daemons started by vstart.sh (any
  nonzero number in the environment variables). No service spec is defined
  for mon or mgr.
- You'll see health warnings from cephadm about stray daemons--that's because
  the vstart-launched daemons aren't controlled by cephadm.
- The default image is ``quay.io/ceph-ci/ceph:master``, but you can change
  this by passing ``-o container_image=...`` or ``ceph config set global container_image ...``.

cstart and cpatch
=================

The ``cstart.sh`` script will launch a cluster using cephadm and put the
conf and keyring in your build dir, so that the ``bin/ceph ...`` CLI works
(just like with vstart). The ``ckill.sh`` script will tear it down.

- A unique but stable fsid is stored in ``fsid`` (in the build dir).
- The mon port is random, just like with vstart.
- The container image is ``quay.io/ceph-ci/ceph:$tag`` where $tag is
  the first 8 chars of the fsid.
- If the container image doesn't exist yet when you run cstart for the
  first time, it is built with cpatch.
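
The fsid-to-tag relationship above can be sketched as follows (a minimal
illustration; the fsid value below is a made-up sample, not one from a real
cluster):

```python
# Sketch of how the container image tag relates to the cluster fsid:
# the tag is simply the first 8 characters of the fsid (which cstart
# stores in the "fsid" file in the build dir). Sample value only.
fsid = "8f509f4e-0bbb-4f25-a9fb-7cf254b3e3eb"
image = "quay.io/ceph-ci/ceph:" + fsid[:8]
print(image)  # quay.io/ceph-ci/ceph:8f509f4e
```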

There are a few advantages here:

- The cluster is a "normal" cephadm cluster that looks and behaves
  just like a user's cluster would. In contrast, vstart and teuthology
  clusters tend to be special in subtle (and not-so-subtle) ways.

To start a test cluster::

  sudo ../src/cstart.sh

The last line of the output is a command you can cut+paste to update the
container image. For instance::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e

By default, cpatch will patch everything it can think of from the local
build dir into the container image. If you are working on a specific
part of the system, though, you can get away with smaller changes so that
cpatch runs faster. For instance::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e --py

will update the mgr modules (minus the dashboard). Or::

  sudo ../src/script/cpatch -t quay.io/ceph-ci/ceph:8f509f4e --core

will do most binaries and libraries. Pass ``-h`` to cpatch for all options.

Once the container is updated, you can refresh/restart daemons by bouncing
them with::

  sudo systemctl restart ceph-`cat fsid`.target

When you're done, you can tear down the cluster with::

  sudo ../src/ckill.sh  # or,
  sudo ../src/cephadm/cephadm rm-cluster --force --fsid `cat fsid`

Note regarding network calls from CLI handlers
==============================================

Executing any cephadm CLI command like ``ceph orch ls`` will block the
mon command handler thread within the MGR, thus preventing any concurrent
CLI calls. Note that pressing ``^C`` will not resolve this situation,
as *only* the client will be aborted, but not the execution of the command
within the orchestrator manager module itself. This means cephadm will
be completely unresponsive until the execution of the CLI handler is
fully completed. Note that even ``ceph orch ps`` will not respond while
another handler is executing.

This means we should do very few synchronous calls to remote hosts.
As a guideline, cephadm should do at most ``O(1)`` network calls in CLI handlers.
Everything else should be done asynchronously in other threads, like ``serve()``.
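
To illustrate the guideline, here is a hedged toy sketch (not the actual
mgr/cephadm code; the class ``Orchestrator``, the method ``handle_orch_ps``,
and the inventory structure are all hypothetical): the CLI handler performs
zero network calls and only reads cached state, while a background
``serve()``-style thread does the slow remote work.

```python
import threading

# Hypothetical sketch (not the real mgr/cephadm API): CLI handlers do
# no network I/O and only read a cached view of the cluster, while a
# background serve()-style loop performs the slow remote calls.
class Orchestrator:
    def __init__(self):
        self._lock = threading.Lock()
        self._inventory = {}  # cached host -> daemon list

    def _fetch_remote_inventory(self):
        # Stand-in for slow ssh/network calls out to each host.
        return {"host1": ["mon.a", "mgr.x"]}

    def serve(self):
        # One refresh pass; a real serve() would loop forever.
        fresh = self._fetch_remote_inventory()
        with self._lock:
            self._inventory = fresh

    def handle_orch_ps(self):
        # CLI handler: O(1), zero network calls -- just return the cache.
        with self._lock:
            return dict(self._inventory)

orch = Orchestrator()
t = threading.Thread(target=orch.serve)
t.start()
t.join()
print(orch.handle_orch_ps())  # {'host1': ['mon.a', 'mgr.x']}
```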