.. _orchestrator-modules:

.. py:currentmodule:: orchestrator

ceph-mgr orchestrator modules
=============================
This is developer documentation, describing Ceph internals that
are relevant only to people writing ceph-mgr orchestrator modules.
In this context, an *orchestrator* is an external service that
provides the ability to discover devices and create Ceph services. This
includes external projects such as Rook.
An *orchestrator module* is a ceph-mgr module (:ref:`mgr-module-dev`)
that implements common management operations using a particular
orchestrator.

Orchestrator modules subclass the ``Orchestrator`` class: this class is
an interface; it provides only method definitions to be implemented
by subclasses. The purpose of defining this common interface
for different orchestrators is to enable common UI code, such as
the dashboard, to work with various different backends.
.. graphviz::

   digraph G {
       subgraph cluster_1 {
           volumes [label="mgr/volumes"]
           rook [label="mgr/rook"]
           dashboard [label="mgr/dashboard"]
           orchestrator_cli [label="mgr/orchestrator"]
           orchestrator [label="Orchestrator Interface"]
           cephadm [label="mgr/cephadm"]

           label = "ceph-mgr";
       }

       volumes -> orchestrator
       dashboard -> orchestrator
       orchestrator_cli -> orchestrator
       orchestrator -> rook -> rook_io
       orchestrator -> cephadm

       rook_io [label="Rook"]
   }
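The interface pattern described above can be sketched in a few lines. This is an illustrative simplification only: the real ``Orchestrator`` class has many more methods and returns completion objects rather than plain values, and ``RookLikeBackend`` and ``render_hosts`` are hypothetical names used here for demonstration.

```python
# Hedged sketch of the interface pattern: the base class provides only
# method definitions, concrete backends implement them, and common UI
# code is written against the interface alone.
class Orchestrator:
    """Interface: method definitions only, implemented by subclasses."""

    def available(self):
        raise NotImplementedError()

    def get_hosts(self):
        raise NotImplementedError()


class RookLikeBackend(Orchestrator):
    """A hypothetical backend; a real one would query its external system."""

    def available(self):
        return True

    def get_hosts(self):
        # A real backend would read this from the external orchestrator.
        return ["node1", "node2"]


def render_hosts(orch: Orchestrator) -> str:
    # Common UI code (e.g. the dashboard) depends only on the interface.
    return ", ".join(orch.get_hosts())


assert render_hosts(RookLikeBackend()) == "node1, node2"
```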
Behind all the abstraction, the purpose of orchestrator modules is simple:
enable Ceph to do things like discover available hardware, create and
destroy OSDs, and run MDS and RGW services.

A tutorial is not included here: for full and concrete examples, see
the existing implemented orchestrator modules in the Ceph source tree.
Glossary
--------

*stateful service*
  a daemon that uses local storage, such as OSD or mon.

*stateless service*
  a daemon that doesn't use any local storage, such as
  an MDS, RGW, nfs-ganesha, or iSCSI gateway.

*label*
  arbitrary string tags that may be applied by administrators
  to hosts. Typically administrators use labels to indicate
  which hosts should run which kinds of service. Labels are
  advisory (from human input) and do not guarantee that hosts
  have particular physical capabilities.

*drive group*
  a collection of block devices with common/shared OSD
  formatting (typically one or more SSDs acting as
  journals/dbs for a group of HDDs).

*placement*
  choice of which host is used to run a service.
Key Concepts
------------

The underlying orchestrator remains the source of truth for information
about whether a service is running, what is running where, which
hosts are available, etc. Orchestrator modules should avoid taking
any internal copies of this information, and read it directly from
the orchestrator backend as much as possible.

Bootstrapping hosts and adding them to the underlying orchestration
system is outside the scope of Ceph's orchestrator interface. Ceph
can only work on hosts when the orchestrator is already aware of them.
Where possible, placement of stateless services should be left up to the
orchestrator.
Completions and batching
------------------------

All methods that read or modify the state of the system can potentially
be long-running. Therefore the module needs to schedule those operations.

Each orchestrator module implements its own underlying mechanism
for completions. This might involve running the underlying operations
in threads, or batching the operations up before executing them in one
go in the background. If implementing such a batching pattern, the
module would do no work on any operation until it appeared in a list
of completions passed into *process*.
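The batching pattern just described can be sketched as follows. This is a minimal illustration, not the real orchestrator API: ``Completion``, ``BatchingModule``, ``schedule``, and ``process`` are simplified stand-ins chosen for this example.

```python
# Hedged sketch of the batching pattern: scheduling queues a completion
# without doing any work; process() later executes the whole batch.
from typing import Any, Callable, List


class Completion:
    """Wraps a deferred operation; nothing runs until process() reaches it."""

    def __init__(self, on_complete: Callable[[], Any]) -> None:
        self._on_complete = on_complete
        self.result: Any = None
        self.is_finished = False

    def run(self) -> None:
        self.result = self._on_complete()
        self.is_finished = True


class BatchingModule:
    """Queues operations, then executes the batch in one go."""

    def __init__(self) -> None:
        self._queue: List[Completion] = []

    def schedule(self, op: Callable[[], Any]) -> Completion:
        c = Completion(op)
        self._queue.append(c)  # no work happens at scheduling time
        return c

    def process(self) -> None:
        # In a real module this might run periodically in a background thread.
        for c in self._queue:
            c.run()
        self._queue.clear()


mod = BatchingModule()
c = mod.schedule(lambda: "osd.3 created")
assert not c.is_finished  # still queued, no work done yet
mod.process()
assert c.result == "osd.3 created"
```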
Error Handling
--------------

The main goal of error handling within orchestrator modules is to provide
debug information that assists users when dealing with deployment errors.

.. autoclass:: OrchestratorError
.. autoclass:: NoOrchestrator
.. autoclass:: OrchestratorValidationError
In detail, orchestrators need to explicitly deal with different kinds of errors:

1. No orchestrator configured

   See :class:`NoOrchestrator`.

2. An orchestrator doesn't implement a specific method.

   For example, an orchestrator doesn't support ``add_host``.

   In this case, a ``NotImplementedError`` is raised.

3. Missing features within implemented methods.

   E.g. optional parameters to a command that are not supported by the
   backend (e.g. the ``hosts`` field in the :func:`Orchestrator.apply_mon`
   command with the Rook backend).

   See :class:`OrchestratorValidationError`.
4. Input validation errors

   The ``orchestrator`` module and other calling modules are supposed to
   provide meaningful error messages.

   See :class:`OrchestratorValidationError`.

5. Errors when actually executing commands

   The resulting Completion should contain an error string that assists in
   understanding the problem. In addition, :func:`Completion.is_errored` is
   set to ``True``.

6. Invalid configuration in the orchestrator modules

   This can be handled in the same way as case 5.
All other errors are unexpected orchestrator issues and thus should raise an
exception that is then logged in the mgr log file. If there is a completion
object at that point, :func:`Completion.result` may contain an error message.
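The error kinds above can be illustrated with a small sketch. The exception classes below are hypothetical stand-ins mirroring the hierarchy described in this section, and ``describe_error`` is an invented helper showing how a calling module might turn each kind into a meaningful message; it is not part of the real API.

```python
# Hedged sketch: stand-in exception classes for the error kinds above,
# plus a hypothetical helper mapping each kind to a user-facing message.
class OrchestratorError(Exception):
    """General failure while executing commands (cases 5 and 6)."""


class NoOrchestrator(OrchestratorError):
    """No orchestrator configured (case 1)."""


class OrchestratorValidationError(OrchestratorError):
    """Unsupported feature or invalid input (cases 3 and 4)."""


def describe_error(e: Exception) -> str:
    # A caller such as the CLI distinguishes the kinds explicitly.
    if isinstance(e, NoOrchestrator):
        return "No orchestrator configured"
    if isinstance(e, OrchestratorValidationError):
        return f"Invalid request: {e}"
    if isinstance(e, NotImplementedError):
        return "This orchestrator does not implement that method (case 2)"
    if isinstance(e, OrchestratorError):
        return f"Orchestrator failure: {e}"
    # Unexpected errors are re-raised so they end up in the mgr log.
    raise e


assert "No orchestrator" in describe_error(NoOrchestrator())
assert "implement" in describe_error(NotImplementedError())
```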
Excluded functionality
----------------------

- Ceph's orchestrator interface is not a general purpose framework for
  managing Linux servers -- it is deliberately constrained to manage
  the Ceph cluster's services only.
- Multipathed storage is not handled (multipathing is unnecessary for
  Ceph clusters). Each drive is assumed to be visible only on
  a single host.
Host management
---------------

.. automethod:: Orchestrator.add_host
.. automethod:: Orchestrator.remove_host
.. automethod:: Orchestrator.get_hosts
.. automethod:: Orchestrator.update_host_addr
.. automethod:: Orchestrator.add_host_label
.. automethod:: Orchestrator.remove_host_label

.. autoclass:: HostSpec
Devices
-------

.. automethod:: Orchestrator.get_inventory
.. autoclass:: InventoryFilter

.. py:currentmodule:: ceph.deployment.inventory

.. autoclass:: Devices
   :members:

.. autoclass:: Device
   :members:

.. py:currentmodule:: orchestrator
Placement
---------

A :ref:`orchestrator-cli-placement-spec` defines the placement of
daemons of a specific service.

In general, stateless services do not require any specific placement
rules, as they can run anywhere that sufficient system resources
are available. However, some orchestrators may not include the
functionality to choose a location in this way. Optionally, you can
specify a location when creating a stateless service.
.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: PlacementSpec
   :members:

.. py:currentmodule:: orchestrator
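The placement behaviour described above can be sketched in miniature. ``PlacementSketch`` below is a hypothetical stand-in, not the real ``ceph.deployment.service_spec.PlacementSpec``; it only illustrates how an explicit host list, an advisory label, or a bare count might be resolved against the hosts the orchestrator knows about.

```python
# Hedged sketch: resolving a placement specification against known hosts.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class PlacementSketch:
    hosts: List[str] = field(default_factory=list)  # explicit placement
    label: Optional[str] = None                     # advisory host label
    count: Optional[int] = None                     # "run N anywhere"

    def resolve(self, known_hosts: Dict[str, List[str]]) -> List[str]:
        """known_hosts maps hostname -> labels applied to that host."""
        if self.hosts:
            # An explicit host list wins over label/count.
            return self.hosts
        candidates = [h for h, labels in known_hosts.items()
                      if self.label is None or self.label in labels]
        return candidates[: self.count] if self.count else candidates


hosts = {"node1": ["mon"], "node2": ["mon", "mds"], "node3": []}
assert PlacementSketch(label="mon").resolve(hosts) == ["node1", "node2"]
assert PlacementSketch(count=2).resolve(hosts) == ["node1", "node2"]
```

Note that, as the glossary says, labels are advisory: resolving by label only filters on what administrators wrote, it verifies nothing about the hosts' physical capabilities.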
Services
--------

.. autoclass:: ServiceDescription

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec

.. py:currentmodule:: orchestrator

.. automethod:: Orchestrator.describe_service

.. automethod:: Orchestrator.service_action
.. automethod:: Orchestrator.remove_service
Daemons
-------

.. automethod:: Orchestrator.list_daemons
.. automethod:: Orchestrator.remove_daemons
.. automethod:: Orchestrator.daemon_action
OSD management
--------------

.. automethod:: Orchestrator.create_osds

.. automethod:: Orchestrator.blink_device_light
.. autoclass:: DeviceLightLoc
.. _orchestrator-osd-replace:

OSD Replacement
^^^^^^^^^^^^^^^

See :ref:`rados-replacing-an-osd` for the underlying process.
Replacing OSDs is fundamentally a two-staged process, as users need to
physically replace drives. The orchestrator therefore exposes this
two-staged process.

Phase one is a call to :meth:`Orchestrator.remove_daemons` with
``destroy=True`` in order to mark the OSD as destroyed.

Phase two is a call to :meth:`Orchestrator.create_osds` with a Drive Group with

.. py:currentmodule:: ceph.deployment.drive_group

:attr:`DriveGroupSpec.osd_id_claims` set to the destroyed OSD ids.

.. py:currentmodule:: orchestrator
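The two phases can be sketched with stubbed-out calls. This is an illustration of the flow only: the stub functions below merely mimic the shape of ``Orchestrator.remove_daemons`` and ``Orchestrator.create_osds``, and a plain dict stands in for a real ``DriveGroupSpec``.

```python
# Hedged sketch of the two-staged OSD replacement flow with stub functions.
destroyed = []


def remove_daemons(names, destroy=False):
    # Phase one: mark the OSD destroyed so its id can be reused later.
    if destroy:
        destroyed.extend(names)


def create_osds(drive_group):
    # Phase two: after the physical drive swap, redeploy OSDs; the new
    # OSDs claim the destroyed ids via osd_id_claims.
    return list(drive_group["osd_id_claims"])


remove_daemons(["osd.7"], destroy=True)              # phase one
new_ids = create_osds({"osd_id_claims": ["osd.7"]})  # phase two
assert new_ids == ["osd.7"]  # the replacement keeps the old OSD id
```

Keeping the OSD id across the replacement is the point of the two-staged design: the cluster's CRUSH map and the OSD's place in it survive the physical drive swap.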
Services
--------

.. automethod:: Orchestrator.add_daemon
.. automethod:: Orchestrator.apply_mon
.. automethod:: Orchestrator.apply_mgr
.. automethod:: Orchestrator.apply_mds
.. automethod:: Orchestrator.apply_rbd_mirror

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: RGWSpec

.. py:currentmodule:: orchestrator

.. automethod:: Orchestrator.apply_rgw

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: NFSServiceSpec

.. py:currentmodule:: orchestrator

.. automethod:: Orchestrator.apply_nfs
Upgrades
--------

.. automethod:: Orchestrator.upgrade_available
.. automethod:: Orchestrator.upgrade_start
.. automethod:: Orchestrator.upgrade_status
.. autoclass:: UpgradeStatusSpec
Utility
-------

.. automethod:: Orchestrator.available
.. automethod:: Orchestrator.get_feature_set
Client Modules
--------------

.. autoclass:: OrchestratorClientMixin
   :members: