.. _orchestrator-cli-module:

================
Orchestrator CLI
================

This module provides a command line interface (CLI) to orchestrator
modules (``ceph-mgr`` modules which interface with external orchestration
services).

As the orchestrator CLI unifies multiple external orchestrators, a common
nomenclature for the orchestrator module is needed.

+--------------------------------------+---------------------------------------+
| *host*                               | hostname (not DNS name) of the        |
|                                      | physical host. Not the podname,       |
|                                      | container name, or hostname inside    |
|                                      | the container.                        |
+--------------------------------------+---------------------------------------+
| *service type*                       | The type of the service, e.g., nfs,   |
|                                      | mds, osd, mon, rgw, mgr, iscsi        |
+--------------------------------------+---------------------------------------+
| *service*                            | A logical service, typically          |
|                                      | composed of multiple service          |
|                                      | instances on multiple hosts for HA:   |
|                                      |                                       |
|                                      | * ``fs_name`` for mds type            |
|                                      | * ``rgw_zone`` for rgw type           |
|                                      | * ``ganesha_cluster_id`` for nfs type |
+--------------------------------------+---------------------------------------+
| *daemon*                             | A single instance of a service,       |
|                                      | usually a daemon, but not always      |
|                                      | (e.g., it might be a kernel service   |
|                                      | such as LIO or knfsd)                 |
|                                      |                                       |
|                                      | This identifier uniquely              |
|                                      | identifies the instance               |
+--------------------------------------+---------------------------------------+

The relationship between these names is as follows:

* A *service* has a specific *service type*.
* A *daemon* is a physical instance of a *service type*.


.. note::

   Orchestrator modules may only implement a subset of the commands listed
   below. The implementation of a given command may also differ between
   modules.

Status
======

::

    ceph orch status [--detail]

This command shows the current orchestrator mode and its high-level status
(i.e., whether the orchestrator plugin is available and operational).

..
    Turn On Device Lights
    ^^^^^^^^^^^^^^^^^^^^^
    ::

        ceph orch device ident-on <dev_id>
        ceph orch device ident-on <dev_name> <host>
        ceph orch device fault-on <dev_id>
        ceph orch device fault-on <dev_name> <host>

        ceph orch device ident-off <dev_id> [--force=true]
        ceph orch device ident-off <dev_name> <host> [--force=true]
        ceph orch device fault-off <dev_id> [--force=true]
        ceph orch device fault-off <dev_name> <host> [--force=true]

    where ``dev_id`` is the device id as listed in ``osd metadata``,
    ``dev_name`` is the name of the device on the system, and ``host`` is the
    host as returned by ``orchestrator host ls``::

        ceph orch osd ident-on {primary,journal,db,wal,all} <osd-id>
        ceph orch osd ident-off {primary,journal,db,wal,all} <osd-id>
        ceph orch osd fault-on {primary,journal,db,wal,all} <osd-id>
        ceph orch osd fault-off {primary,journal,db,wal,all} <osd-id>

    where ``journal`` is the filestore journal device, ``wal`` is the
    bluestore write-ahead log device, and ``all`` stands for all devices
    associated with the OSD

.. _orchestrator-cli-stateless-services:

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
=================================================

(Please note: The orchestrator will not configure the services. Please refer
to the corresponding documentation for service configuration details.)

The ``name`` parameter is an identifier for the group of instances:

* a CephFS file system for a group of MDS daemons,
* a zone name for a group of RGWs

Creating/growing/shrinking/removing services::

    ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
    ceph orch apply rgw <name> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
    ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
    ceph orch rm <service_name> [--force]

where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.

e.g., ``ceph orch apply mds myfs --placement="3 host1 host2 host3"``

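A service can also be described declaratively as a service specification in
YAML and applied with ``ceph orch apply -i``. A minimal sketch, roughly
equivalent to the MDS example above (the file name and host names are
illustrative)::

    # mds.yaml -- roughly equivalent to:
    #   ceph orch apply mds myfs --placement="3 host1 host2 host3"
    service_type: mds
    service_id: myfs
    placement:
      count: 3
      hosts:
        - host1
        - host2
        - host3

Apply it with ``ceph orch apply -i mds.yaml``; adding ``--dry-run`` previews
the resulting placement without deploying anything.
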
Service Commands::

    ceph orch <start|stop|restart|redeploy|reconfig> <service_name>

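For example, to restart every daemon of the hypothetical MDS service from the
placement example above::

    ceph orch restart mds.myfs

Loosely speaking, ``stop``, ``start``, and ``restart`` act on the running
daemons of the service, while ``redeploy`` recreates them and ``reconfig``
only regenerates their configuration.
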
Configuring the Orchestrator CLI
================================

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph orch set backend <module>

For example, to enable the Rook orchestrator module and use it with the CLI::

    ceph mgr module enable rook
    ceph orch set backend rook

Check that the backend is properly configured::

    ceph orch status

Disable the Orchestrator
------------------------

To disable the orchestrator, set the backend to the empty string ``""``::

    ceph orch set backend ""
    ceph mgr module disable rook

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ====== =========
Command                             Rook   Cephadm
=================================== ====== =========
apply iscsi                         ⚪      ✔
apply mds                           ✔      ✔
apply mgr                           ⚪      ✔
apply mon                           ✔      ✔
apply nfs                           ✔      ✔
apply osd                           ✔      ✔
apply rbd-mirror                    ✔      ✔
apply cephfs-mirror                 ⚪      ✔
apply grafana                       ⚪      ✔
apply prometheus                    ❌      ✔
apply alertmanager                  ❌      ✔
apply node-exporter                 ❌      ✔
apply rgw                           ✔      ✔
apply container                     ⚪      ✔
apply snmp-gateway                  ❌      ✔
host add                            ⚪      ✔
host ls                             ✔      ✔
host rm                             ⚪      ✔
host maintenance enter              ❌      ✔
host maintenance exit               ❌      ✔
daemon status                       ⚪      ✔
daemon {stop,start,...}             ⚪      ✔
device {ident,fault}-{on,off}       ⚪      ✔
device ls                           ✔      ✔
iscsi add                           ⚪      ✔
mds add                             ⚪      ✔
nfs add                             ⚪      ✔
rbd-mirror add                      ⚪      ✔
rgw add                             ⚪      ✔
ls                                  ✔      ✔
ps                                  ✔      ✔
status                              ✔      ✔
upgrade                             ❌      ✔
=================================== ====== =========

where

* ⚪ = not yet implemented
* ❌ = not applicable
* ✔ = implemented