.. _orchestrator-cli-module:

================
Orchestrator CLI
================

This module provides a command line interface (CLI) to orchestrator
modules (ceph-mgr modules which interface with external orchestration services).

As the orchestrator CLI unifies different external orchestrators, a common nomenclature
for the orchestrator module is needed.

+----------------+----------------------------------------+
| *host*         | hostname (not DNS name) of the         |
|                | physical host. Not the podname,        |
|                | container name, or hostname inside     |
|                | the container.                         |
+----------------+----------------------------------------+
| *service type* | The type of the service. e.g., nfs,    |
|                | mds, osd, mon, rgw, mgr, iscsi         |
+----------------+----------------------------------------+
| *service*      | A logical service, typically           |
|                | comprised of multiple service          |
|                | instances on multiple hosts for HA     |
|                |                                        |
|                | * ``fs_name`` for mds type             |
|                | * ``rgw_zone`` for rgw type            |
|                | * ``ganesha_cluster_id`` for nfs type  |
+----------------+----------------------------------------+
| *daemon*       | A single instance of a service.        |
|                | Usually a daemon, but possibly a       |
|                | kernel service such as LIO or knfsd.   |
|                |                                        |
|                | This identifier should                 |
|                | uniquely identify the instance.        |
+----------------+----------------------------------------+

The relation between the names is the following:

* A *service* has a specific *service type*
* A *daemon* is a physical instance of a *service type*


.. note::

    Orchestrator modules may only implement a subset of the commands listed below.
    Also, the implementation of the commands may differ between modules.

Status
======

::

    ceph orch status

Show current orchestrator mode and high-level status (whether the orchestrator
plugin is available and operational).

.. _orchestrator-cli-host-management:

Host Management
===============

List hosts associated with the cluster::

    ceph orch host ls

Add and remove hosts::

    ceph orch host add <hostname> [<addr>] [<labels>...]
    ceph orch host rm <hostname>

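For example, to add and later remove a hypothetical host ``node-04`` with the
address ``10.10.0.104`` and an ``example1`` label::

    ceph orch host add node-04 10.10.0.104 example1
    ceph orch host rm node-04
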
For cephadm, see also :ref:`cephadm-fqdn` and :ref:`cephadm-removing-hosts`.

Host Specification
------------------

Many hosts can be added at once using
``ceph orch apply -i`` by submitting a multi-document YAML file::

    ---
    service_type: host
    addr: node-00
    hostname: node-00
    labels:
    - example1
    - example2
    ---
    service_type: host
    addr: node-01
    hostname: node-01
    labels:
    - grafana
    ---
    service_type: host
    addr: node-02
    hostname: node-02

This can be combined with service specifications (below) to create a cluster spec file that deploys a whole cluster in one command. See also ``cephadm bootstrap --apply-spec`` to do this during bootstrap. The cluster SSH key must be copied to hosts before adding them.
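
A combined cluster spec file is simply a multi-document YAML file that mixes
host specifications with service specifications; a minimal sketch (the
hostnames and the ``mon`` label are illustrative)::

    ---
    service_type: host
    addr: node-00
    hostname: node-00
    labels:
    - mon
    ---
    service_type: host
    addr: node-01
    hostname: node-01
    labels:
    - mon
    ---
    service_type: mon
    placement:
      label: "mon"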

OSD Management
==============

List Devices
------------

Print a list of discovered devices, grouped by host and optionally
filtered to a particular host:

::

    ceph orch device ls [--host=...] [--refresh]

Example::

    HOST    PATH      TYPE  SIZE   DEVICE  AVAIL  REJECT REASONS
    master  /dev/vda  hdd   42.0G          False  locked
    node1   /dev/vda  hdd   42.0G          False  locked
    node1   /dev/vdb  hdd   8192M  387836  False  locked, LVM detected, Insufficient space (<5GB) on vgs
    node1   /dev/vdc  hdd   8192M  450575  False  locked, LVM detected, Insufficient space (<5GB) on vgs
    node3   /dev/vda  hdd   42.0G          False  locked
    node3   /dev/vdb  hdd   8192M  395145  False  LVM detected, locked, Insufficient space (<5GB) on vgs
    node3   /dev/vdc  hdd   8192M  165562  False  LVM detected, locked, Insufficient space (<5GB) on vgs
    node2   /dev/vda  hdd   42.0G          False  locked
    node2   /dev/vdb  hdd   8192M  672147  False  LVM detected, Insufficient space (<5GB) on vgs, locked
    node2   /dev/vdc  hdd   8192M  228094  False  LVM detected, Insufficient space (<5GB) on vgs, locked

Erase Devices (Zap Devices)
---------------------------

Erase (zap) a device so that it can be reused. ``zap`` calls ``ceph-volume zap`` on the remote host.

::

    orch device zap <hostname> <path>

Example command::

    ceph orch device zap my_hostname /dev/sdx

.. note::
    The cephadm orchestrator will automatically deploy drives that match the DriveGroup in your OSDSpec if the ``unmanaged`` flag is unset.
    For example, if you used the ``all-available-devices`` option when creating OSDs, zapping a device will cause the cephadm orchestrator to automatically create a new OSD on that device.
    To disable this behavior, see :ref:`orchestrator-cli-create-osds`.

.. _orchestrator-cli-create-osds:

Create OSDs
-----------

Create OSDs on a set of devices on a single host::

    ceph orch daemon add osd <host>:device1,device2

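For example, using hypothetical host and device names::

    ceph orch daemon add osd node1:/dev/vdb,/dev/vdc
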
Another way is to use the ``apply`` interface::

    ceph orch apply osd -i <json_file/yaml_file> [--dry-run]

where the ``json_file/yaml_file`` is a DriveGroup specification.
For a more in-depth guide to DriveGroups, please refer to :ref:`drivegroups`.

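A minimal sketch of such a specification, mirroring the multi-document example
shown later in this document (the service id and host pattern are hypothetical)::

    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
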
``dry-run`` will cause the orchestrator to present a preview of what will happen
without actually creating the OSDs.

Example::

    # ceph orch apply osd --all-available-devices --dry-run
    NAME                   HOST   DATA      DB  WAL
    all-available-devices  node1  /dev/vdb  -   -
    all-available-devices  node2  /dev/vdc  -   -
    all-available-devices  node3  /dev/vdd  -   -

When the parameter ``all-available-devices`` or a DriveGroup specification is used, a cephadm service is created.
This service guarantees that all available devices or devices included in the DriveGroup will be used for OSDs.
Note that the effect of ``--all-available-devices`` is persistent; that is, drives which are added to the system
or become available (say, by zapping) after the command is complete will be automatically found and added to the cluster.

That is, after using::

    ceph orch apply osd --all-available-devices

* If you add new disks to the cluster, they will automatically be used to create new OSDs.
* A new OSD will be created automatically if you remove an OSD and clean the LVM physical volume.

If you want to avoid this behavior (disable automatic creation of OSDs on available devices), use the ``unmanaged`` parameter::

    ceph orch apply osd --all-available-devices --unmanaged=true

Remove an OSD
-------------
::

    ceph orch osd rm <osd_id(s)> [--replace] [--force]

Evacuates PGs from an OSD and removes it from the cluster.

Example::

    # ceph orch osd rm 0
    Scheduled OSD(s) for removal


OSDs that are not safe-to-destroy will be rejected.

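If you want to check beforehand whether a particular OSD can be destroyed
safely, you can query the cluster directly (``0`` is a hypothetical OSD id)::

    ceph osd safe-to-destroy 0
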
You can query the state of the operation with::

    # ceph orch osd rm status
    OSD_ID  HOST         STATE                    PG_COUNT  REPLACE  FORCE  STARTED_AT
    2       cephadm-dev  done, waiting for purge  0         True     False  2020-07-17 13:01:43.147684
    3       cephadm-dev  draining                 17        False    True   2020-07-17 13:01:45.162158
    4       cephadm-dev  started                  42        False    True   2020-07-17 13:01:45.162158

When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.

.. note::
    After removing an OSD, if you wipe the LVM physical volume in the device used by the removed OSD, a new OSD will be created.
    Read information about the ``unmanaged`` parameter in :ref:`orchestrator-cli-create-osds`.

Stopping OSD Removal
--------------------

You can stop the queued OSD removal operation with

::

    ceph orch osd rm stop <svc_id(s)>

Example::

    # ceph orch osd rm stop 4
    Stopped OSD(s) removal

This resets the OSD to its initial state and takes it off the removal queue.


Replace an OSD
--------------
::

    orch osd rm <svc_id(s)> --replace [--force]

Example::

    # ceph orch osd rm 4 --replace
    Scheduled OSD(s) for replacement


This follows the same procedure as in the "Remove an OSD" section, with the exception that the OSD is not permanently
removed from the CRUSH hierarchy, but is instead assigned a ``destroyed`` flag.

**Preserving the OSD ID**

The previously-set ``destroyed`` flag is used to determine OSD ids that will be reused in the next OSD deployment.

If you use OSDSpecs for OSD deployment, your newly added disks will be assigned the OSD ids of their replaced
counterparts, assuming the new disks still match the OSDSpecs.

For assistance in this process you can use the ``--dry-run`` feature.

Tip: The name of your OSDSpec can be retrieved with ``ceph orch ls``.

Alternatively, you can use your OSDSpec file::

    ceph orch apply osd -i <osd_spec_file> --dry-run
    NAME                HOST   DATA      DB  WAL
    <name_of_osd_spec>  node1  /dev/vdb  -   -


If this matches your anticipated behavior, just omit the ``--dry-run`` flag to execute the deployment.

..
  Turn On Device Lights
  ^^^^^^^^^^^^^^^^^^^^^
  ::

    ceph orch device ident-on <dev_id>
    ceph orch device ident-on <dev_name> <host>
    ceph orch device fault-on <dev_id>
    ceph orch device fault-on <dev_name> <host>

    ceph orch device ident-off <dev_id> [--force=true]
    ceph orch device ident-off <dev_id> <host> [--force=true]
    ceph orch device fault-off <dev_id> [--force=true]
    ceph orch device fault-off <dev_id> <host> [--force=true]

  where ``dev_id`` is the device id as listed in ``osd metadata``,
  ``dev_name`` is the name of the device on the system and ``host`` is the host as
  returned by ``orchestrator host ls``

    ceph orch osd ident-on {primary,journal,db,wal,all} <osd-id>
    ceph orch osd ident-off {primary,journal,db,wal,all} <osd-id>
    ceph orch osd fault-on {primary,journal,db,wal,all} <osd-id>
    ceph orch osd fault-off {primary,journal,db,wal,all} <osd-id>

  where ``journal`` is the filestore journal device, ``wal`` is the bluestore
  write ahead log device, and ``all`` stands for all devices associated with the OSD


Monitor and manager management
==============================

Creates or removes MONs or MGRs from the cluster. The orchestrator may return an
error if it doesn't know how to do this transition.

Update the number of monitor hosts::

    ceph orch apply mon --placement=<placement> [--dry-run]

Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
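
For example, to place one monitor on each of three hypothetical hosts::

    ceph orch apply mon --placement="3 host1 host2 host3"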

Each host can optionally specify a network for the monitor to listen on.

Update the number of manager hosts::

    ceph orch apply mgr --placement=<placement> [--dry-run]

Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
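
For example, to let the orchestrator pick the hosts for two manager daemons
(a plain count is a valid placement)::

    ceph orch apply mgr --placement=2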

..
  .. note::

    The host lists are the new full list of mon/mgr hosts

  .. note::

    specifying hosts is optional for some orchestrator modules
    and mandatory for others (e.g. Ansible).


Service Status
==============

Print a list of services known to the orchestrator. The list can be limited to
services of a particular type with the optional ``--service_type`` parameter
(mon, osd, mgr, mds, rgw) and/or to a particular service with the optional
``--service_name`` parameter:

::

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemons::

    ceph orch ls --service_type type --service_name <name> [--refresh]

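For example, to list only MDS services and force a refresh of the cached
state (an illustrative invocation)::

    ceph orch ls --service_type mds --refresh
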
Export the service specs known to the orchestrator as YAML, in a format
that is compatible with ``ceph orch apply -i``::

    ceph orch ls --export


Daemon Status
=============

Print a list of all daemons known to the orchestrator::

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID; for MDS services it is the file system name::

    ceph orch ps --daemon_type osd --daemon_id 0


.. _orchestrator-cli-cephfs:

Deploying CephFS
================

In order to set up a :term:`CephFS`, execute::

    ceph fs volume create <fs_name> <placement spec>

where ``fs_name`` is the name of the CephFS and ``placement spec`` is a
:ref:`orchestrator-cli-placement-spec`.
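
For example, with a hypothetical file system name and hosts::

    ceph fs volume create myfs "3 host1 host2 host3"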

This command will create the required Ceph pools, create the new
CephFS, and deploy mds servers.


.. _orchestrator-cli-stateless-services:

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
==================================================

(Please note: The orchestrator will not configure the services. Please look into the corresponding
documentation for service configuration details.)

The ``name`` parameter is an identifier of the group of instances:

* a CephFS file system for a group of MDS daemons,
* a zone name for a group of RGWs

Creating/growing/shrinking/removing services::

    ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
    ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
    ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
    ceph orch rm <service_name> [--force]

where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.

e.g., ``ceph orch apply mds myfs --placement="3 host1 host2 host3"``

Service Commands::

    ceph orch <start|stop|restart|redeploy|reconfig> <service_name>

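For example, to restart all daemons of an MDS service created as above
(assuming its service name is ``mds.myfs``)::

    ceph orch restart mds.myfs
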
Deploying custom containers
===========================

The orchestrator enables custom containers to be deployed using a YAML file.
A corresponding :ref:`orchestrator-cli-service-spec` must look like:

.. code-block:: yaml

    service_type: container
    service_id: foo
    placement:
      ...
    image: docker.io/library/foo:latest
    entrypoint: /usr/bin/foo
    uid: 1000
    gid: 1000
    args:
      - "--net=host"
      - "--cpus=2"
    ports:
      - 8080
      - 8443
    envs:
      - SECRET=mypassword
      - PORT=8080
      - PUID=1000
      - PGID=1000
    volume_mounts:
      CONFIG_DIR: /etc/foo
    bind_mounts:
      - ['type=bind', 'source=lib/modules', 'destination=/lib/modules', 'ro=true']
    dirs:
      - CONFIG_DIR
    files:
      CONFIG_DIR/foo.conf:
        - refresh=true
        - username=xyz
        - "port: 1234"

where the properties of a service specification are:

* ``service_id``
  A unique name of the service.
* ``image``
  The name of the Docker image.
* ``uid``
  The UID to use when creating directories and files in the host system.
* ``gid``
  The GID to use when creating directories and files in the host system.
* ``entrypoint``
  Overwrite the default ENTRYPOINT of the image.
* ``args``
  A list of additional Podman/Docker command line arguments.
* ``ports``
  A list of TCP ports to open in the host firewall.
* ``envs``
  A list of environment variables.
* ``bind_mounts``
  When you use a bind mount, a file or directory on the host machine
  is mounted into the container. Relative `source=...` paths will be
  located below `/var/lib/ceph/<cluster-fsid>/<daemon-name>`.
* ``volume_mounts``
  When you use a volume mount, a new directory is created within
  Docker’s storage directory on the host machine, and Docker manages
  that directory’s contents. Relative source paths will be located below
  `/var/lib/ceph/<cluster-fsid>/<daemon-name>`.
* ``dirs``
  A list of directories that are created below
  `/var/lib/ceph/<cluster-fsid>/<daemon-name>`.
* ``files``
  A dictionary, where the key is the relative path of the file and the
  value the file content. The content must be double quoted when using
  a string. Use '\\n' for line breaks in that case. Otherwise define
  multi-line content as list of strings. The given files will be created
  below the directory `/var/lib/ceph/<cluster-fsid>/<daemon-name>`.
  The absolute path of the directory where the file will be created must
  exist. Use the `dirs` property to create them if necessary.

.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure represented as YAML
to specify the deployment of services. For example:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    unmanaged: false
    ...

where the properties of a service specification are:

* ``service_type``
  The type of the service. Needs to be either a Ceph
  service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
  ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
  monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
  ``prometheus``), or ``container`` for custom containers.
* ``service_id``
  The name of the service.
* ``placement``
  See :ref:`orchestrator-cli-placement-spec`.
* ``unmanaged``
  If set to ``true``, the orchestrator will not deploy nor
  remove any daemon associated with this service. Placement and all other
  properties will be ignored. This is useful if this service should
  temporarily not be managed.

Each service type can have additional service specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

A service of type ``nfs`` requires a pool name and may contain
an optional namespace:

.. code-block:: yaml

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      pool: mypool
      namespace: mynamespace

where ``pool`` is a RADOS pool where NFS client recovery data is stored
and ``namespace`` is a RADOS namespace where NFS client recovery
data is stored in the pool.

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using
``ceph orch apply -i`` by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-placement-spec:


Placement Specification
=======================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line arguments
or in a YAML file.

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them::

    orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications::

    orch daemon add mon --placement="myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

Placement by labels
-------------------

Daemons can be explicitly placed on hosts that match a specific label::

    orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"


Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts that match a host pattern::

    orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``::

    orch apply crash --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"


Setting a limit
---------------

By specifying ``count``, only that number of daemons will be created::

    orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, also specify the count::

    orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host::

    orch apply prometheus --placement="3 host1 host2"

results in two Prometheus daemons.

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

Or with hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

Updating Service Specifications
===============================

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification.

1. List the current ``ServiceSpec``::

    ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the YAML file::

    vi myservice.yaml

3. Apply the new ``ServiceSpec``::

    ceph orch apply -i myservice.yaml [--dry-run]

Configuring the Orchestrator CLI
================================

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph orch set backend <module>

For example, to enable the Rook orchestrator module and use it with the CLI::

    ceph mgr module enable rook
    ceph orch set backend rook

Check that the backend is properly configured::

    ceph orch status

Disable the Orchestrator
------------------------

To disable the orchestrator, use the empty string ``""``::

    ceph orch set backend ""
    ceph mgr module disable rook

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ====== =========
 Command                            Rook   Cephadm
=================================== ====== =========
 apply iscsi                        ⚪      ✔
 apply mds                          ✔      ✔
 apply mgr                          ⚪      ✔
 apply mon                          ✔      ✔
 apply nfs                          ✔      ✔
 apply osd                          ✔      ✔
 apply rbd-mirror                   ✔      ✔
 apply rgw                          ⚪      ✔
 apply container                    ⚪      ✔
 host add                           ⚪      ✔
 host ls                            ✔      ✔
 host rm                            ⚪      ✔
 daemon status                      ⚪      ✔
 daemon {stop,start,...}            ⚪      ✔
 device {ident,fault}-{on,off}      ⚪      ✔
 device ls                          ✔      ✔
 iscsi add                          ⚪      ✔
 mds add                            ⚪      ✔
 nfs add                            ✔      ✔
 rbd-mirror add                     ⚪      ✔
 rgw add                            ⚪      ✔
 ps                                 ✔      ✔
=================================== ====== =========

where

* ⚪ = not yet implemented
* ❌ = not applicable
* ✔ = implemented