.. _orchestrator-cli-module:

================
Orchestrator CLI
================

This module provides a command line interface (CLI) to orchestrator
modules (ceph-mgr modules which interface with external orchestration
services).

As the orchestrator CLI unifies different external orchestrators, a common
nomenclature for the orchestrator module is needed.

+--------------------------------------+---------------------------------------+
| host                                 | hostname (not DNS name) of the        |
|                                      | physical host. Not the podname,       |
|                                      | container name, or hostname inside    |
|                                      | the container.                        |
+--------------------------------------+---------------------------------------+
| service type                         | The type of the service, e.g., nfs,   |
|                                      | mds, osd, mon, rgw, mgr, iscsi        |
+--------------------------------------+---------------------------------------+
| service                              | A logical service, typically          |
|                                      | comprised of multiple service         |
|                                      | instances on multiple hosts for HA:   |
|                                      |                                       |
|                                      | * ``fs_name`` for mds type            |
|                                      | * ``rgw_zone`` for rgw type           |
|                                      | * ``ganesha_cluster_id`` for nfs type |
+--------------------------------------+---------------------------------------+
| service instance                     | A single instance of a service.       |
|                                      | Usually a daemon, but possibly a      |
|                                      | kernel service such as LIO or knfsd.  |
|                                      |                                       |
|                                      | This identifier should uniquely       |
|                                      | identify the instance.                |
+--------------------------------------+---------------------------------------+
| daemon                               | A running process on a host; use      |
|                                      | “service instance” instead.           |
+--------------------------------------+---------------------------------------+

The relation between these names is the following:

* a service belongs to a service type
* a service instance belongs to a service type
* a service instance belongs to a single service group

Configuration
=============

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph orchestrator set backend <module>

For example, to enable the Rook orchestrator module and use it with the CLI::

    ceph mgr module enable rook
    ceph orchestrator set backend rook

You can then check that the backend is properly configured::

    ceph orchestrator status
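
The exact output of ``ceph orchestrator status`` is backend-specific; with the
Rook backend it may look something like the following (illustrative only)::

    Backend: rook
    Available: True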

Disable the Orchestrator
~~~~~~~~~~~~~~~~~~~~~~~~

To disable the orchestrator again, use the empty string ``""``::

    ceph orchestrator set backend ""
    ceph mgr module disable rook

Usage
=====

.. warning::

    The orchestrator CLI is unfinished and a work in progress. Some commands
    will not exist, or may return a different result.

.. note::

    Orchestrator modules may only implement a subset of the commands listed
    below. Also, the implementation of the commands is orchestrator module
    dependent and will differ between implementations.

Status
~~~~~~

::

    ceph orchestrator status

Show the current orchestrator mode and high-level status (whether the module
is able to talk to it).

Also show any in-progress actions.

Host management
~~~~~~~~~~~~~~~

List hosts associated with the cluster::

    ceph orchestrator host ls

Add and remove hosts::

    ceph orchestrator host add <host>
    ceph orchestrator host rm <host>

OSD management
~~~~~~~~~~~~~~

List Devices
^^^^^^^^^^^^

Print a list of discovered devices, grouped by node and optionally
filtered to a particular node::

    ceph orchestrator device ls [--host=...] [--refresh]
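
For example, to list only the devices discovered on a single node and force a
fresh inventory scan (the hostname ``node1`` is illustrative)::

    ceph orchestrator device ls --host=node1 --refresh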

Create OSDs
^^^^^^^^^^^

Create OSDs on a group of devices on a single host::

    ceph orchestrator osd create <host>:<drive>
    ceph orchestrator osd create -i <path-to-drive-group.json>

The output of ``osd create`` is not specified and may vary between
orchestrator backends.

Here, ``<path-to-drive-group.json>`` points to a JSON file containing the
fields defined in :class:`orchestrator.DriveGroupSpec`.
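
As a rough sketch only, such a file might look like the following; the exact
field names are defined by :class:`orchestrator.DriveGroupSpec` in your Ceph
version, so treat ``host_pattern`` and ``data_devices`` below as illustrative
assumptions rather than an authoritative schema::

    {
        "host_pattern": "node1",
        "data_devices": {
            "paths": ["/dev/sdb", "/dev/sdc"]
        }
    }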

Decommission an OSD
^^^^^^^^^^^^^^^^^^^
::

    ceph orchestrator osd rm <osd-id> [osd-id...]

Removes one or more OSDs from the cluster and the host, if the OSDs are
marked as ``destroyed``.
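
For example, to remove two previously destroyed OSDs (the IDs are
illustrative)::

    ceph orchestrator osd rm 0 1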

Blink Device Lights
^^^^^^^^^^^^^^^^^^^
::

    ceph orchestrator device ident-on <host> <devname>
    ceph orchestrator device ident-off <host> <devname>
    ceph orchestrator device fault-on <host> <devname>
    ceph orchestrator device fault-off <host> <devname>

    ceph orchestrator osd ident-on {primary,journal,db,wal,all} <osd-id>
    ceph orchestrator osd ident-off {primary,journal,db,wal,all} <osd-id>
    ceph orchestrator osd fault-on {primary,journal,db,wal,all} <osd-id>
    ceph orchestrator osd fault-off {primary,journal,db,wal,all} <osd-id>

Where ``journal`` is the Filestore journal, ``wal`` is the BlueStore
write-ahead log, and ``all`` stands for all devices associated with the OSD.
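
For example, to blink the identification LEDs of all devices backing OSD 0
and then switch them off again (the OSD id is illustrative)::

    ceph orchestrator osd ident-on all 0
    ceph orchestrator osd ident-off all 0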

Monitor and manager management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Creates or removes MONs or MGRs from the cluster. The orchestrator may return
an error if it doesn't know how to perform this transition.

Update the number of monitor nodes::

    ceph orchestrator mon update <num> [host, host:network...]

Each host can optionally specify a network for the monitor to listen on.

Update the number of manager nodes::

    ceph orchestrator mgr update <num> [host...]

.. note::

    The host lists are the new full list of mon/mgr hosts.

.. note::

    Specifying hosts is optional for some orchestrator modules
    and mandatory for others (e.g. Ansible).
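
For example, to deploy three monitors, pinning one of them to a specific
network, and three managers (host and network names are illustrative)::

    ceph orchestrator mon update 3 host1:10.1.0.0/16 host2 host3
    ceph orchestrator mgr update 3 host1 host2 host3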

Service Status
~~~~~~~~~~~~~~

Print a list of services known to the orchestrator. The list can be limited
to services on a particular host with the optional ``--host`` parameter,
and/or to services of a particular type via the optional ``--svc_type``
parameter (mon, osd, mgr, mds, rgw)::

    ceph orchestrator service ls [--host host] [--svc_type type] [--refresh]
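
For example, to list only the OSD services running on a single host (the
hostname is illustrative)::

    ceph orchestrator service ls --host node1 --svc_type osd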

Discover the status of a particular service::

    ceph orchestrator service ls --svc_type type --svc_id <name> [--refresh]

Query the status of a particular service instance (mon, osd, mds, rgw). For
OSDs the id is the numeric OSD ID; for MDS services it is the filesystem
name::

    ceph orchestrator service-instance status <type> <instance-name> [--refresh]
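
For example, to query OSD 23 or the MDS instances of a filesystem named
``myfs`` (both names are illustrative)::

    ceph orchestrator service-instance status osd 23
    ceph orchestrator service-instance status mds myfs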

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The orchestrator is not responsible for configuring the services. Please see
the corresponding documentation for details.

The ``name`` parameter is an identifier of the group of instances:

* a CephFS filesystem for a group of MDS daemons,
* a zone name for a group of RGWs.

Sizing: the ``size`` parameter gives the number of daemons in the cluster
(e.g. the number of MDS daemons for a particular CephFS filesystem).

Creating/growing/shrinking/removing services::

    ceph orchestrator {mds,rgw} update <name> <size> [host...]
    ceph orchestrator {mds,rgw} add <name>
    ceph orchestrator nfs update <name> <size> [host...]
    ceph orchestrator nfs add <name> <pool> [--namespace=<namespace>]
    ceph orchestrator {mds,rgw,nfs} rm <name>

e.g., ``ceph orchestrator mds update myfs 3 host1 host2 host3``
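
Similarly, an NFS service could be created in a given pool and namespace (the
pool and namespace names are illustrative)::

    ceph orchestrator nfs add mynfs nfs-ganesha --namespace=ganesha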

Start/stop/reload::

    ceph orchestrator service {stop,start,reload} <type> <name>

    ceph orchestrator service-instance {start,stop,reload} <type> <instance-name>
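
For example, to reload all RGW daemons of a zone named ``myzone`` (the zone
name is illustrative)::

    ceph orchestrator service reload rgw myzone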

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ========= ====== ========= =====
 Command                            Ansible   Rook   DeepSea   SSH
=================================== ========= ====== ========= =====
 osd create                         ✔️        ✔️     ⚪        ✔️
 osd device {ident,fault}-{on,off}  ⚪        ⚪     ⚪        ⚪
 device {ident,fault}-{on,off}      ⚪        ⚪     ⚪        ⚪
 device ls                          ✔️        ✔️     ✔️        ✔️
 service-instance status            ⚪        ⚪     ⚪        ⚪
 iscsi {stop,start,reload}          ⚪        ⚪     ⚪        ⚪
 mds {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
 nfs {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
 rbd-mirror {stop,start,reload}     ⚪        ⚪     ⚪        ⚪
 rbd-mirror add                     ⚪        ⚪     ⚪        ⚪
 rbd-mirror rm                      ⚪        ⚪     ⚪        ⚪
 rbd-mirror update                  ⚪        ⚪     ⚪        ⚪
 rgw {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
=================================== ========= ====== ========= =====

where

* ✔️ = implemented
* ⚪ = not yet implemented