2 .. _orchestrator-cli-module:
8 This module provides a command line interface (CLI) to orchestrator
9 modules (ceph-mgr modules which interface with external orchestration services).
11 As the orchestrator CLI unifies different external orchestrators, a common nomenclature
12 for the orchestrator module is needed.
+--------------------+-----------------------------------------+
| *host*             | hostname (not DNS name) of the          |
|                    | physical host. Not the podname,         |
|                    | container name, or hostname inside      |
|                    | the container.                          |
+--------------------+-----------------------------------------+
| *service type*     | The type of the service. e.g., nfs,     |
|                    | mds, osd, mon, rgw, mgr, iscsi          |
+--------------------+-----------------------------------------+
| *service*          | A logical service. Typically            |
|                    | comprised of multiple service           |
|                    | instances on multiple hosts for HA      |
|                    |                                         |
|                    | * ``fs_name`` for mds type              |
|                    | * ``rgw_zone`` for rgw type             |
|                    | * ``ganesha_cluster_id`` for nfs type   |
+--------------------+-----------------------------------------+
| *daemon*           | A single instance of a service.         |
|                    | Usually a daemon, but maybe not         |
|                    | (e.g., might be a kernel service        |
|                    | like LIO or knfsd or whatever)          |
|                    |                                         |
|                    | This identifier should                  |
|                    | uniquely identify the instance          |
+--------------------+-----------------------------------------+
The relationship between these names is as follows:

* A *service* has a specific *service type*
43 * A *daemon* is a physical instance of a *service type*
Orchestrator modules may only implement a subset of the commands listed below.
The implementation of the commands is also orchestrator-module dependent and will
differ between implementations.
Show the current orchestrator mode and high-level status (whether the module is
able to talk to the configured orchestrator backend).
62 Also show any in-progress actions.
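The status can be queried with::

    ceph orch status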
67 List hosts associated with the cluster::
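
    ceph orch host ls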
71 Add and remove hosts::
73 ceph orch host add <host>
74 ceph orch host rm <host>
82 Print a list of discovered devices, grouped by host and optionally
83 filtered to a particular host:
87 ceph orch device ls [--host=...] [--refresh]
Device Path   Type   Size    Rotates   Available   Model
/dev/sdb      hdd    50.0G   True      True        ATA/QEMU HARDDISK
/dev/sda      hdd    50.0G   True      False       ATA/QEMU HARDDISK

Device Path   Type   Size    Rotates   Available   Model
/dev/sdb      hdd    50.0G   True      True        ATA/QEMU HARDDISK
/dev/sda      hdd    50.0G   True      False       ATA/QEMU HARDDISK
Output from the Ansible orchestrator
108 Create OSDs on a group of devices on a single host::
110 ceph orch osd create <host>:<drive>
111 ceph orch osd create -i <path-to-drive-group.json>
114 The output of ``osd create`` is not specified and may vary between orchestrator backends.
Where ``drive-group.json`` is a JSON file containing the fields defined in
:class:`ceph.deployment_utils.drive_group.DriveGroupSpec`.
121 # ceph orch osd create 192.168.121.206:/dev/sdc
122 {"status": "OK", "msg": "", "data": {"event": "playbook_on_stats", "uuid": "7082f3ba-f5b7-4b7c-9477-e74ca918afcb", "stdout": "\r\nPLAY RECAP *********************************************************************\r\n192.168.121.206 : ok=96 changed=3 unreachable=0 failed=0 \r\n", "counter": 932, "pid": 10294, "created": "2019-05-28T22:22:58.527821", "end_line": 1170, "runner_ident": "083cad3c-8197-11e9-b07a-2016b900e38f", "start_line": 1166, "event_data": {"ignored": 0, "skipped": {"192.168.121.206": 186}, "ok": {"192.168.121.206": 96}, "artifact_data": {}, "rescued": 0, "changed": {"192.168.121.206": 3}, "pid": 10294, "dark": {}, "playbook_uuid": "409364a6-9d49-4e44-8b7b-c28e5b3adf89", "playbook": "add-osd.yml", "failures": {}, "processed": {"192.168.121.206": 1}}, "parent_uuid": "409364a6-9d49-4e44-8b7b-c28e5b3adf89"}}
Output from the Ansible orchestrator
131 ceph orch osd rm <osd-id> [osd-id...]
Removes one or more OSDs from the cluster and the host, if the OSDs are marked as
``destroyed``.
139 {"status": "OK", "msg": "", "data": {"event": "playbook_on_stats", "uuid": "1a16e631-906d-48e0-9e24-fa7eb593cc0a", "stdout": "\r\nPLAY RECAP *********************************************************************\r\n192.168.121.158 : ok=2 changed=0 unreachable=0 failed=0 \r\n192.168.121.181 : ok=2 changed=0 unreachable=0 failed=0 \r\n192.168.121.206 : ok=2 changed=0 unreachable=0 failed=0 \r\nlocalhost : ok=31 changed=8 unreachable=0 failed=0 \r\n", "counter": 240, "pid": 10948, "created": "2019-05-28T22:26:09.264012", "end_line": 308, "runner_ident": "8c093db0-8197-11e9-b07a-2016b900e38f", "start_line": 301, "event_data": {"ignored": 0, "skipped": {"localhost": 37}, "ok": {"192.168.121.181": 2, "192.168.121.158": 2, "192.168.121.206": 2, "localhost": 31}, "artifact_data": {}, "rescued": 0, "changed": {"localhost": 8}, "pid": 10948, "dark": {}, "playbook_uuid": "a12ec40e-bce9-4bc9-b09e-2d8f76a5be02", "playbook": "shrink-osd.yml", "failures": {}, "processed": {"192.168.121.181": 1, "192.168.121.158": 1, "192.168.121.206": 1, "localhost": 1}}, "parent_uuid": "a12ec40e-bce9-4bc9-b09e-2d8f76a5be02"}}
Output from the Ansible orchestrator
149 ceph orch device ident-on <dev_id>
150 ceph orch device ident-on <dev_name> <host>
151 ceph orch device fault-on <dev_id>
152 ceph orch device fault-on <dev_name> <host>
154 ceph orch device ident-off <dev_id> [--force=true]
155 ceph orch device ident-off <dev_id> <host> [--force=true]
156 ceph orch device fault-off <dev_id> [--force=true]
157 ceph orch device fault-off <dev_id> <host> [--force=true]
Where ``dev_id`` is the device id as listed in ``osd metadata``,
``dev_name`` is the name of the device on the system, and ``host`` is the host as
returned by ``ceph orch host ls``.
163 ceph orch osd ident-on {primary,journal,db,wal,all} <osd-id>
164 ceph orch osd ident-off {primary,journal,db,wal,all} <osd-id>
165 ceph orch osd fault-on {primary,journal,db,wal,all} <osd-id>
166 ceph orch osd fault-off {primary,journal,db,wal,all} <osd-id>
Where ``journal`` is the Filestore journal device, ``wal`` is the BlueStore
write-ahead log device, and ``all`` stands for all devices associated with the OSD.
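For example, to turn on the identification LED of all devices associated with a
(hypothetical) OSD ``2``::

    ceph orch osd ident-on all 2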
172 Monitor and manager management
173 ==============================
Create or remove MONs or MGRs from the cluster. The orchestrator may return an
error if it doesn't know how to perform the requested transition.
178 Update the number of monitor hosts::
180 ceph orch apply mon <num> [host, host:network...]
182 Each host can optionally specify a network for the monitor to listen on.
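For example, a sketch following the synopsis above (the host names and the
``10.1.2.0/24`` network are illustrative)::

    ceph orch apply mon 3 host1:10.1.2.0/24 host2:10.1.2.0/24 host3:10.1.2.0/24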
184 Update the number of manager hosts::
186 ceph orch apply mgr <num> [host...]
The host lists given to these commands are the new, complete list of mon/mgr hosts.
Specifying hosts is optional for some orchestrator modules
and mandatory for others (e.g. Ansible).
Print a list of services known to the orchestrator. The list can be limited to
services on a particular host with the optional ``--host`` parameter and/or to
services of a particular type via the optional ``--type`` parameter
(mon, osd, mgr, mds, rgw):
210 ceph orch service ls [--host host] [--svc_type type] [--refresh]
212 Discover the status of a particular service or daemons::
214 ceph orch service ls --svc_type type --svc_id <name> [--refresh]
Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID; for MDS services it is the file system name::
220 ceph orch daemon status <type> <instance-name> [--refresh]
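For example, to query an MDS daemon serving a (hypothetical) file system ``myfs``,
or the OSD with ID ``0``::

    ceph orch daemon status mds myfs
    ceph orch daemon status osd 0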
223 .. _orchestrator-cli-cephfs:
228 In order to set up a :term:`CephFS`, execute::
230 ceph fs volume create <fs_name> <placement spec>
Where ``fs_name`` is the name of the CephFS and ``placement`` is a
:ref:`orchestrator-cli-placement-spec`.
This command will create the required Ceph pools, create the new
CephFS, and deploy MDS daemons.
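For example, assuming three illustrative hosts ``host1``, ``host2`` and ``host3``::

    ceph fs volume create myfs "3 host1 host2 host3"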
238 Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
239 =================================================
241 The orchestrator is not responsible for configuring the services. Please look into the corresponding
242 documentation for details.
244 The ``name`` parameter is an identifier of the group of instances:
246 * a CephFS file system for a group of MDS daemons,
247 * a zone name for a group of RGWs
249 Sizing: the ``size`` parameter gives the number of daemons in the cluster
250 (e.g. the number of MDS daemons for a particular CephFS file system).
252 Creating/growing/shrinking/removing services::
254 ceph orch {mds,rgw} update <name> <size> [host…]
255 ceph orch {mds,rgw} add <name>
256 ceph orch nfs update <name> <size> [host…]
257 ceph orch nfs add <name> <pool> [--namespace=<namespace>]
258 ceph orch {mds,rgw,nfs} rm <name>
260 e.g., ``ceph orch mds update myfs 3 host1 host2 host3``
264 ceph orch service {stop,start,reload} <type> <name>
266 ceph orch daemon {start,stop,reload} <type> <daemon-id>
268 .. _orchestrator-cli-service-spec:
270 Service Specification
271 =====================
A *Service Specification* is a data structure, often represented as YAML,
that specifies the deployment of services. For example:
279 service_id: realm.zone
287 Where the properties of a service specification are the following:
289 * ``service_type`` is the type of the service. Needs to be either a Ceph
290 service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
291 ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), or part of the
  monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or ``prometheus``).
* ``service_id`` is the name of the service. Omit the ``service_id`` for service
  types that do not require one (see below).
295 * ``placement`` is a :ref:`orchestrator-cli-placement-spec`
296 * ``spec``: additional specifications for a specific service.
298 Each service type can have different requirements for the spec.
Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.
A service of type ``nfs`` requires a pool name and may contain
an optional namespace:
316 namespace: mynamespace
318 Where ``pool`` is a RADOS pool where NFS client recovery data is stored
319 and ``namespace`` is a RADOS namespace where NFS client recovery
320 data is stored in the pool.
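A complete specification of this form might look like the following sketch (the
service id, pool, namespace, and host names are illustrative):

.. code-block:: yaml

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      pool: mypool
      namespace: mynamespace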
A service of type ``osd`` is described in detail in :ref:`drivegroups`.
324 Many service specifications can then be applied at once using
325 ``ceph orch apply -i`` by submitting a multi-document YAML file::
327 cat <<EOF | ceph orch apply -i -
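    # Illustrative multi-document sketch: the service types and host patterns
    # below are placeholders, not part of the original example.
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    EOF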
343 .. _orchestrator-cli-placement-spec:
345 Placement Specification
346 =======================
In order to allow the orchestrator to deploy a *service*, it needs to
know how many *daemons* to deploy and where to deploy them. The orchestrator
defines a placement specification that can either be passed as a command line
argument or be given as part of a YAML service specification.
Daemons can be explicitly placed on hosts by simply specifying them::
357 orch apply prometheus "host1 host2 host3"
363 service_type: prometheus
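    # continuing the specification; the host names mirror the CLI example above
    placement:
      hosts:
        - host1
        - host2
        - host3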
370 MONs and other services may require some enhanced network specifications::
372 orch daemon add mon myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name
374 Where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
375 and ``=name`` specifies the name of the new monitor.
Daemons can be explicitly placed on hosts that match a specific label::
382 orch apply prometheus label:mylabel
388 service_type: prometheus
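    # continuing the specification; the label mirrors the CLI example above
    placement:
      label: "mylabel"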
393 Placement by pattern matching
394 -----------------------------
Daemons can also be placed on hosts using a host pattern::
398 orch apply prometheus 'myhost[1-3]'
    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"
408 To place a service on *all* hosts, use ``"*"``::
416 service_type: node-exporter
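    # continuing the specification; "*" selects all hosts, as described above
    placement:
      host_pattern: "*"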
424 By specifying ``count``, only that number of daemons will be created::
426 orch apply prometheus 3
428 To deploy *daemons* on a subset of hosts, also specify the count::
430 orch apply prometheus "2 host1 host2 host3"
If the count is bigger than the number of hosts, cephadm deploys only as many
daemons as there are hosts; here, only two daemons are created::
434 orch apply prometheus "3 host1 host2"
440 service_type: prometheus
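    # continuing the specification; the count mirrors the CLI example above
    placement:
      count: 3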
448 service_type: prometheus
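    # continuing the specification; count and hosts mirror the CLI example above
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3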
457 Configuring the Orchestrator CLI
458 ================================
460 To enable the orchestrator, select the orchestrator module to use
461 with the ``set backend`` command::
463 ceph orch set backend <module>
465 For example, to enable the Rook orchestrator module and use it with the CLI::
467 ceph mgr module enable rook
468 ceph orch set backend rook
Check that the backend is properly configured::
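
    ceph orch status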
474 Disable the Orchestrator
475 ------------------------
477 To disable the orchestrator, use the empty string ``""``::
479 ceph orch set backend ""
480 ceph mgr module disable rook
482 Current Implementation Status
483 =============================
485 This is an overview of the current implementation status of the orchestrators.
487 =================================== ====== =========
489 =================================== ====== =========
 daemon {stop,start,...}             ⚪      ✔
 device {ident,fault}-{on,off}       ⚪      ✔
512 =================================== ====== =========
516 * ⚪ = not yet implemented