.. _orchestrator-cli-module:

================
Orchestrator CLI
================

This module provides a command line interface (CLI) to orchestrator
modules (ceph-mgr modules which interface with external orchestration
services).

As the orchestrator CLI unifies different external orchestrators, a common
nomenclature for the orchestrator module is needed.

+--------------------------------------+---------------------------------------+
| host                                 | hostname (not DNS name) of the        |
|                                      | physical host. Not the podname,       |
|                                      | container name, or hostname inside    |
|                                      | the container.                        |
+--------------------------------------+---------------------------------------+
| service type                         | The type of the service, e.g., nfs,   |
|                                      | mds, osd, mon, rgw, mgr, iscsi        |
+--------------------------------------+---------------------------------------+
| service                              | A logical service. Typically          |
|                                      | comprised of multiple service         |
|                                      | instances on multiple hosts for HA    |
|                                      |                                       |
|                                      | * ``fs_name`` for mds type            |
|                                      | * ``rgw_zone`` for rgw type           |
|                                      | * ``ganesha_cluster_id`` for nfs type |
+--------------------------------------+---------------------------------------+
| service instance                     | A single instance of a service.       |
|                                      | Usually a daemon, but may not be      |
|                                      | (e.g., it might be a kernel service   |
|                                      | like LIO or knfsd)                    |
|                                      |                                       |
|                                      | This identifier should                |
|                                      | uniquely identify the instance        |
+--------------------------------------+---------------------------------------+
| daemon                               | A running process on a host; use      |
|                                      | “service instance” instead            |
+--------------------------------------+---------------------------------------+

The relation between these terms is as follows:

* a service belongs to a service type
* a service instance belongs to a service type
* a service instance belongs to a single service

Configuration
=============

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph orchestrator set backend <module>

For example, to enable the Rook orchestrator module and use it with the CLI::

    ceph mgr module enable rook
    ceph orchestrator set backend rook

You can then check that the backend is properly configured::

    ceph orchestrator status

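If a backend is set, the status command reports it together with whether the
module can reach it. The exact output is backend dependent and not guaranteed;
an illustrative result might look like::

    Backend: rook
    Available: True
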
Disable the Orchestrator
~~~~~~~~~~~~~~~~~~~~~~~~

To disable the orchestrator again, use the empty string ``""``::

    ceph orchestrator set backend ""
    ceph mgr module disable rook

Usage
=====

.. warning::

    The orchestrator CLI is unfinished and a work in progress. Some commands
    will not exist, or will return a different result.

.. note::

    Orchestrator modules may only implement a subset of the commands listed
    below. Also, the implementation of the commands is orchestrator module
    dependent and will differ between implementations.

Status
~~~~~~

::

    ceph orchestrator status

Show the current orchestrator mode and high-level status (whether the module
is able to talk to it).

Also show any in-progress actions.

Host Management
~~~~~~~~~~~~~~~

List hosts associated with the cluster::

    ceph orchestrator host ls

Add and remove hosts::

    ceph orchestrator host add <host>
    ceph orchestrator host rm <host>
OSD Management
~~~~~~~~~~~~~~

List Devices
^^^^^^^^^^^^

Print a list of discovered devices, grouped by host and optionally
filtered to a particular host:

::

    ceph orchestrator device ls [--host=...] [--refresh]

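For example, to refresh and list only the devices of a hypothetical host
``node1``::

    ceph orchestrator device ls --host=node1 --refresh
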
Create OSDs
^^^^^^^^^^^

Create OSDs on a group of devices on a single host::

    ceph orchestrator osd create <host>:<drive>
    ceph orchestrator osd create -i <path-to-drive-group.json>

where ``drive-group.json`` is a JSON file containing the fields defined in
:class:`orchestrator.DriveGroupSpec`.

The output of ``osd create`` is not specified and may vary between
orchestrator backends.

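As a purely illustrative sketch (the supported field names are dictated by
:class:`orchestrator.DriveGroupSpec`, so check that class before relying on
them), such a file might select two data devices on hosts matching a pattern::

    {
        "host_pattern": "node*",
        "data_devices": {
            "paths": ["/dev/sdb", "/dev/sdc"]
        }
    }
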
Decommission an OSD
^^^^^^^^^^^^^^^^^^^
::

    ceph orchestrator osd rm <osd-id> [osd-id...]

Removes one or more OSDs from the cluster and the host, provided the OSDs
are marked as ``destroyed``.

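For example, to remove OSD 0 after marking it destroyed (the ``ceph osd
destroy`` step is standard Ceph, shown here for context)::

    ceph osd destroy 0 --yes-i-really-mean-it
    ceph orchestrator osd rm 0
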
..
    Blink Device Lights
    ^^^^^^^^^^^^^^^^^^^
    ::

        ceph orchestrator device ident-on <host> <devname>
        ceph orchestrator device ident-off <host> <devname>
        ceph orchestrator device fault-on <host> <devname>
        ceph orchestrator device fault-off <host> <devname>

        ceph orchestrator osd ident-on {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd ident-off {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd fault-on {primary,journal,db,wal,all} <osd-id>
        ceph orchestrator osd fault-off {primary,journal,db,wal,all} <osd-id>

    where ``journal`` is the filestore journal, ``wal`` is the BlueStore
    write-ahead log, and ``all`` stands for all devices associated with the OSD.

Monitor and manager management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Creates or removes MONs or MGRs from the cluster. The orchestrator may return
an error if it doesn't know how to perform this transition.

Update the number of monitor nodes::

    ceph orchestrator mon update <num> [host, host:network...]

Each host can optionally specify a network for the monitor to listen on.

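For example, to grow the monitor cluster to three monitors on hypothetical
hosts ``host1`` through ``host3``, with one listening on a specific network::

    ceph orchestrator mon update 3 host1:10.1.2.0/24 host2 host3
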
Update the number of manager nodes::

    ceph orchestrator mgr update <num> [host...]

..
    .. note::

        The host lists are the new full list of mon/mgr hosts

.. note::

    Specifying hosts is optional for some orchestrator modules
    and mandatory for others (e.g. Ansible).

Service Status
~~~~~~~~~~~~~~

Print a list of services known to the orchestrator. The list can be limited to
services on a particular host with the optional ``--host`` parameter, and/or to
services of a particular type via the optional ``--svc_type`` parameter
(mon, osd, mgr, mds, rgw):

::

    ceph orchestrator service ls [--host host] [--svc_type type] [--refresh]

Discover the status of a particular service::

    ceph orchestrator service ls --svc_type type --svc_id <name> [--refresh]

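For example, to list all monitor services, or to check a single MDS service of
a hypothetical filesystem ``myfs``::

    ceph orchestrator service ls --svc_type mon
    ceph orchestrator service ls --svc_type mds --svc_id myfs
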
Query the status of a particular service instance (mon, osd, mds, rgw). For
OSDs the id is the numeric OSD ID; for MDS services it is the filesystem name::

    ceph orchestrator service-instance status <type> <instance-name> [--refresh]

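For example, to query an MDS instance of the hypothetical filesystem ``myfs``,
or the instance backing OSD 0::

    ceph orchestrator service-instance status mds myfs
    ceph orchestrator service-instance status osd 0
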
Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The orchestrator is not responsible for configuring the services. Please see
the corresponding documentation for details.

The ``name`` parameter is an identifier of the group of instances:

* a CephFS filesystem for a group of MDS daemons,
* a zone name for a group of RGWs

Sizing: the ``size`` parameter gives the number of daemons in the cluster
(e.g. the number of MDS daemons for a particular CephFS filesystem).

Creating/growing/shrinking/removing services::

    ceph orchestrator {mds,rgw} update <name> <size> [host…]
    ceph orchestrator {mds,rgw} add <name>
    ceph orchestrator nfs update <name> <size> [host…]
    ceph orchestrator nfs add <name> <pool> [--namespace=<namespace>]
    ceph orchestrator {mds,rgw,nfs} rm <name>

e.g., ``ceph orchestrator mds update myfs 3 host1 host2 host3``

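Similarly, an NFS service is created against a RADOS pool; the pool and
namespace names here are hypothetical::

    ceph orchestrator nfs add mynfs nfs-ganesha --namespace=nfs-ns
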
Start/stop/reload::

    ceph orchestrator service {stop,start,reload} <type> <name>

    ceph orchestrator service-instance {start,stop,reload} <type> <instance-name>

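For example, to reload all RGW daemons of a hypothetical zone ``myzone``::

    ceph orchestrator service reload rgw myzone
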

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ========= ====== ========= =====
Command                             Ansible   Rook   DeepSea   SSH
=================================== ========= ====== ========= =====
host add                            ⚪        ⚪     ⚪        ✔️
host ls                             ⚪        ⚪     ⚪        ✔️
host rm                             ⚪        ⚪     ⚪        ✔️
mgr update                          ⚪        ⚪     ⚪        ✔️
mon update                          ⚪        ✔️    ⚪        ✔️
osd create                          ✔️        ✔️    ⚪        ✔️
osd device {ident,fault}-{on,off}   ⚪        ⚪     ⚪        ⚪
osd rm                              ✔️        ⚪     ⚪        ⚪
device {ident,fault}-{on,off}       ⚪        ⚪     ⚪        ⚪
device ls                           ✔️        ✔️    ✔️        ✔️
service ls                          ⚪        ✔️    ✔️        ⚪
service-instance status             ⚪        ⚪     ⚪        ⚪
iscsi {stop,start,reload}           ⚪        ⚪     ⚪        ⚪
iscsi add                           ⚪        ⚪     ⚪        ⚪
iscsi rm                            ⚪        ⚪     ⚪        ⚪
iscsi update                        ⚪        ⚪     ⚪        ⚪
mds {stop,start,reload}             ⚪        ⚪     ⚪        ⚪
mds add                             ⚪        ✔️    ⚪        ⚪
mds rm                              ⚪        ✔️    ⚪        ⚪
mds update                          ⚪        ⚪     ⚪        ⚪
nfs {stop,start,reload}             ⚪        ⚪     ⚪        ⚪
nfs add                             ⚪        ✔️    ⚪        ⚪
nfs rm                              ⚪        ✔️    ⚪        ⚪
nfs update                          ⚪        ⚪     ⚪        ⚪
rbd-mirror {stop,start,reload}      ⚪        ⚪     ⚪        ⚪
rbd-mirror add                      ⚪        ⚪     ⚪        ⚪
rbd-mirror rm                       ⚪        ⚪     ⚪        ⚪
rbd-mirror update                   ⚪        ⚪     ⚪        ⚪
rgw {stop,start,reload}             ⚪        ⚪     ⚪        ⚪
rgw add                             ⚪        ✔️    ⚪        ⚪
rgw rm                              ⚪        ✔️    ⚪        ⚪
rgw update                          ⚪        ⚪     ⚪        ⚪
=================================== ========= ====== ========= =====

where

* ⚪ = not yet implemented
* ❌ = not applicable
* ✔️ = implemented