.. _orchestrator-cli-module:

================
Orchestrator CLI
================

This module provides a command line interface (CLI) to orchestrator
modules (ceph-mgr modules which interface with external orchestration
services).

As the orchestrator CLI unifies different external orchestrators, a common
nomenclature for the orchestrator module is needed.
+--------------------------------------+---------------------------------------+
| host                                 | hostname (not DNS name) of the        |
|                                      | physical host. Not the pod name,      |
|                                      | container name, or hostname inside    |
|                                      | the container.                        |
+--------------------------------------+---------------------------------------+
| service type                         | The type of the service, e.g. nfs,    |
|                                      | mds, osd, mon, rgw, mgr, iscsi.       |
+--------------------------------------+---------------------------------------+
| service                              | A logical service, typically          |
|                                      | comprised of multiple service         |
|                                      | instances on multiple hosts for HA:   |
|                                      |                                       |
|                                      | * ``fs_name`` for mds type            |
|                                      | * ``rgw_zone`` for rgw type           |
|                                      | * ``ganesha_cluster_id`` for nfs type |
+--------------------------------------+---------------------------------------+
| service instance                     | A single instance of a service.       |
|                                      | Usually a daemon, but may not be      |
|                                      | (e.g., it might be a kernel service   |
|                                      | like LIO or knfsd).                   |
|                                      |                                       |
|                                      | This identifier should uniquely       |
|                                      | identify the instance.                |
+--------------------------------------+---------------------------------------+
| daemon                               | A running process on a host; use      |
|                                      | “service instance” instead.           |
+--------------------------------------+---------------------------------------+

The relation between the names is the following:

* a service belongs to a service type
* a service instance belongs to a service type
* a service instance belongs to a single service

Configuration
=============

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph orchestrator set backend <module>

For example, to enable the Rook orchestrator module and use it with the CLI::

    ceph mgr module enable rook
    ceph orchestrator set backend rook

You can then check that the backend is properly configured::

    ceph orchestrator status
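
If the module can reach the configured backend, the status command reports it
as available. The exact output is backend- and version-dependent; purely as an
illustration, it may look something like::

    Backend: rook
    Available: True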

Disable the Orchestrator
~~~~~~~~~~~~~~~~~~~~~~~~

To disable the orchestrator again, use the empty string ``""``::

    ceph orchestrator set backend ""
    ceph mgr module disable rook

Usage
=====

.. warning::

    The orchestrator CLI is unfinished and a work in progress. Some commands
    may not exist, or may return a different result.

.. note::

    Orchestrator modules may only implement a subset of the commands listed
    below. Also, the implementation of the commands is orchestrator-module
    dependent and will differ between implementations.

Status
~~~~~~

::

    ceph orchestrator status

Show the current orchestrator mode and its high-level status (whether the
module is able to talk to the backend).

Also show any in-progress actions.

Host Management
~~~~~~~~~~~~~~~

List hosts associated with the cluster::

    ceph orchestrator host ls

Add and remove hosts::

    ceph orchestrator host add <host>
    ceph orchestrator host rm <host>
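
For example, with hypothetical hosts ``node1`` and ``node2``::

    ceph orchestrator host add node1
    ceph orchestrator host add node2
    ceph orchestrator host rm node2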

OSD Management
~~~~~~~~~~~~~~

List Devices
^^^^^^^^^^^^

Print a list of discovered devices, grouped by host and optionally
filtered to a particular host:

::

    ceph orchestrator device ls [--host=...] [--refresh]
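
For example, to list only the devices on a single (hypothetical) host, forcing
a refresh rather than relying on possibly cached results::

    ceph orchestrator device ls --host=node1 --refresh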

Create OSDs
^^^^^^^^^^^

Create OSDs on a group of devices on a single host::

    ceph orchestrator osd create <host>:<drive>
    ceph orchestrator osd create -i <path-to-drive-group.json>

Where ``drive-group.json`` is a JSON file containing the fields defined in
:class:`orchestrator.DriveGroupSpec`.

The output of ``osd create`` is not specified and may vary between
orchestrator backends.
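
For illustration only (the hostname and device paths are hypothetical, and the
authoritative field list is defined by :class:`orchestrator.DriveGroupSpec`),
a drive group file that selects two data devices on a single host might look
like::

    {
        "host_pattern": "node1",
        "data_devices": {
            "paths": ["/dev/sdb", "/dev/sdc"]
        }
    }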

Decommission an OSD
^^^^^^^^^^^^^^^^^^^

::

    ceph orchestrator osd rm <osd-id> [osd-id...]

Removes one or more OSDs from the cluster and the host, if the OSDs are
marked as ``destroyed``.
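
For example, assuming OSD 0 has already been marked ``destroyed`` (e.g. with
``ceph osd destroy 0 --yes-i-really-mean-it``)::

    ceph orchestrator osd rm 0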

..
    Blink Device Lights
    ^^^^^^^^^^^^^^^^^^^
    ::

        ceph orchestrator device ident-on <host> <devname>
        ceph orchestrator device ident-off <host> <devname>
        ceph orchestrator device fault-on <host> <devname>
        ceph orchestrator device fault-off <host> <devname>

        ceph orchestrator osd ident-on {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd ident-off {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd fault-on {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd fault-off {primary,journal,db,wal,all} <osd-id>

    Where ``journal`` is the filestore journal, ``wal`` is the write-ahead log
    of bluestore, and ``all`` stands for all devices associated with the OSD.

Monitor and manager management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Creates or removes MONs or MGRs from the cluster. The orchestrator may return
an error if it doesn't know how to perform this transition.

Update the number of monitor nodes::

    ceph orchestrator mon update <num> [host, host:network...]

Each host can optionally specify a network for the monitor to listen on.

Update the number of manager nodes::

    ceph orchestrator mgr update <num> [host...]
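
For example, to run three monitors, pinning one of them to a specific
network, and two managers (hostnames and network are illustrative)::

    ceph orchestrator mon update 3 host1:10.1.2.0/24 host2 host3
    ceph orchestrator mgr update 2 host1 host2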

..
    .. note::

        The host lists are the new full list of mon/mgr hosts.

    .. note::

        Specifying hosts is optional for some orchestrator modules
        and mandatory for others (e.g. Ansible).

Service Status
~~~~~~~~~~~~~~

Print a list of services known to the orchestrator. The list can be limited
to services on a particular host with the optional ``--host`` parameter,
and/or to services of a particular type via the optional ``--svc_type``
parameter (mon, osd, mgr, mds, rgw):

::

    ceph orchestrator service ls [--host host] [--svc_type type] [--refresh]

Discover the status of a particular service::

    ceph orchestrator service ls --svc_type type --svc_id <name> [--refresh]
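
For example, to check on the MDS daemons of a (hypothetical) CephFS
filesystem ``myfs``::

    ceph orchestrator service ls --svc_type mds --svc_id myfs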

Query the status of a particular service instance (mon, osd, mds, rgw). For
OSDs the id is the numeric OSD ID; for MDS services it is the filesystem
name::

    ceph orchestrator service-instance status <type> <instance-name> [--refresh]
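
For example, to query the instance backing OSD 0 (the numeric OSD ID serves
as the instance name here)::

    ceph orchestrator service-instance status osd 0 --refresh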

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The orchestrator is not responsible for configuring the services themselves.
Please refer to the corresponding documentation for details.

The ``name`` parameter is an identifier for the group of instances:

* a CephFS filesystem name for a group of MDS daemons,
* a zone name for a group of RGWs.

Sizing: the ``size`` parameter gives the number of daemons in the cluster
(e.g. the number of MDS daemons for a particular CephFS filesystem).

Creating/growing/shrinking/removing services::

    ceph orchestrator {mds,rgw} update <name> <size> [host…]
    ceph orchestrator {mds,rgw} add <name>
    ceph orchestrator nfs update <name> <size> [host…]
    ceph orchestrator nfs add <name> <pool> [--namespace=<namespace>]
    ceph orchestrator {mds,rgw,nfs} rm <name>

e.g., ``ceph orchestrator mds update myfs 3 host1 host2 host3``
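
Similarly, to create a (hypothetical) NFS service ``mynfs`` backed by a
(likewise hypothetical) pool ``nfs-ganesha`` and RADOS namespace ``mynfs-ns``::

    ceph orchestrator nfs add mynfs nfs-ganesha --namespace=mynfs-ns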

Start/stop/reload::

    ceph orchestrator service {stop,start,reload} <type> <name>

    ceph orchestrator service-instance {start,stop,reload} <type> <instance-name>
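
For example, to reload all RGW daemons of a (hypothetical) zone ``myzone``::

    ceph orchestrator service reload rgw myzone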

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ========= ====== ========= =====
 Command                            Ansible   Rook   DeepSea   SSH
=================================== ========= ====== ========= =====
 host add                           ⚪        ⚪     ⚪        ✔️
 host ls                            ⚪        ⚪     ⚪        ✔️
 host rm                            ⚪        ⚪     ⚪        ✔️
 mgr update                         ⚪        ⚪     ⚪        ✔️
 mon update                         ⚪        ✔️     ⚪        ✔️
 osd create                         ✔️        ✔️     ⚪        ✔️
 osd device {ident,fault}-{on,off}  ⚪        ⚪     ⚪        ⚪
 osd rm                             ✔️        ⚪     ⚪        ⚪
 device {ident,fault}-{on,off}      ⚪        ⚪     ⚪        ⚪
 device ls                          ✔️        ✔️     ✔️        ✔️
 service ls                         ⚪        ✔️     ✔️        ⚪
 service-instance status            ⚪        ⚪     ⚪        ⚪
 iscsi {stop,start,reload}          ⚪        ⚪     ⚪        ⚪
 iscsi add                          ⚪        ⚪     ⚪        ⚪
 iscsi rm                           ⚪        ⚪     ⚪        ⚪
 iscsi update                       ⚪        ⚪     ⚪        ⚪
 mds {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
 mds add                            ⚪        ✔️     ⚪        ⚪
 mds rm                             ⚪        ✔️     ⚪        ⚪
 mds update                         ⚪        ⚪     ⚪        ⚪
 nfs {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
 nfs add                            ⚪        ✔️     ⚪        ⚪
 nfs rm                             ⚪        ✔️     ⚪        ⚪
 nfs update                         ⚪        ⚪     ⚪        ⚪
 rbd-mirror {stop,start,reload}     ⚪        ⚪     ⚪        ⚪
 rbd-mirror add                     ⚪        ⚪     ⚪        ⚪
 rbd-mirror rm                      ⚪        ⚪     ⚪        ⚪
 rbd-mirror update                  ⚪        ⚪     ⚪        ⚪
 rgw {stop,start,reload}            ⚪        ⚪     ⚪        ⚪
 rgw add                            ⚪        ✔️     ⚪        ⚪
 rgw rm                             ⚪        ✔️     ⚪        ⚪
 rgw update                         ⚪        ⚪     ⚪        ⚪
=================================== ========= ====== ========= =====

where

* ⚪ = not yet implemented
* ❌ = not applicable
* ✔️ = implemented