Show current orchestrator mode and high-level status (whether the module is able
to talk to it)
-Also show any in-progress actions.
-
Host Management
===============
ceph orch host add <hostname> [<addr>] [<labels>...]
ceph orch host rm <hostname>
+For cephadm, see also :ref:`cephadm-fqdn`.
+
Host Specification
------------------
ceph orch device zap my_hostname /dev/sdx
+.. note::
+ The cephadm orchestrator will automatically deploy drives that match the DriveGroup in your OSDSpec if the ``unmanaged`` flag is unset.
+ For example, if you use the ``all-available-devices`` option when creating OSDs, then when you ``zap`` a device the cephadm orchestrator will automatically create a new OSD on that device.
+ To disable this behavior, see :ref:`orchestrator-cli-create-osds`.
+
+.. _orchestrator-cli-create-osds:
Create OSDs
-----------
-Create OSDs on a group of devices on a single host::
+Create OSDs on a set of devices on a single host::
ceph orch daemon add osd <host>:device1,device2
-or::
-
- ceph orch apply osd -i <json_file/yaml_file>
-
-
-or::
-
- ceph orch apply osd --all-available-devices
+Alternatively, you can use the ``apply`` interface::
+ ceph orch apply osd -i <json_file/yaml_file> [--dry-run]
+Where ``json_file/yaml_file`` is a DriveGroup specification.
For a more in-depth guide to DriveGroups, please refer to :ref:`drivegroups`.
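+
+For illustration, a minimal DriveGroup specification might look like the
+following sketch (the service id ``example_drvgrp`` and the device selection
+are placeholders, not a recommendation):
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: example_drvgrp
+    placement:
+      host_pattern: '*'
+    data_devices:
+      all: true
+
+Such a file could then be passed to ``ceph orch apply osd -i example_drvgrp.yaml --dry-run``.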
-Example::
-
- # ceph orch daemon add osd node1:/dev/vdd
- Created osd(s) 6 on host 'node1'
-
-
-If the 'apply' method is used. You will be presented with a preview of what will happen.
+If the ``--dry-run`` option is used with the ``apply`` interface, a preview of
+what will happen is presented.
Example::
- # ceph orch apply osd --all-available-devices
+ # ceph orch apply osd --all-available-devices --dry-run
NAME HOST DATA DB WAL
all-available-devices node1 /dev/vdb - -
all-available-devices node2 /dev/vdc - -
all-available-devices node3 /dev/vdd - -
-
.. note::
- Output form Cephadm orchestrator
+ Example output from cephadm orchestrator
+
+When the parameter ``all-available-devices`` or a DriveGroup specification is used, a cephadm service is created.
+This service guarantees that all available devices or the devices included in the DriveGroup will be used for OSDs.
+Be aware of the implications of this behavior, which is automatic and enabled by default.
+
+For example, after running::
+
+ ceph orch apply osd --all-available-devices
+
+* If you add new disks to the cluster, they will automatically be used to create new OSDs.
+* If you remove an OSD and clean the LVM physical volume, a new OSD will be created automatically.
+
+If you want to avoid this behavior (disable the automatic creation of OSDs on available devices), use the ``unmanaged`` parameter::
+
+ ceph orch apply osd --all-available-devices --unmanaged=true
Remove an OSD
--------------------
+-------------
::
- ceph orch osd rm <svc_id>... [--replace] [--force]
+ ceph orch osd rm <svc_id(s)> [--replace] [--force]
-Removes one or more OSDs from the cluster.
+Evacuates PGs from an OSD and removes it from the cluster.
Example::
- # ceph orch osd rm 4
+ # ceph orch osd rm 0
Scheduled OSD(s) for removal
You can query the state of the operation with::
# ceph orch osd rm status
- NAME HOST PGS STARTED_AT
- osd.7 node1 55 2020-04-22 19:28:38.785761
- osd.5 node3 3 2020-04-22 19:28:34.201685
- osd.3 node2 0 2020-04-22 19:28:34.201695
+ OSD_ID HOST STATE PG_COUNT REPLACE FORCE STARTED_AT
+ 2 cephadm-dev done, waiting for purge 0 True False 2020-07-17 13:01:43.147684
+ 3 cephadm-dev draining 17 False True 2020-07-17 13:01:45.162158
+ 4 cephadm-dev started 42 False True 2020-07-17 13:01:45.162158
When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
+.. note::
+ After removing an OSD, if you wipe the LVM physical volume of the device used by the removed OSD, a new OSD will be created.
+ For more information, see the ``unmanaged`` parameter in :ref:`orchestrator-cli-create-osds`.
+
+Stopping OSD Removal
+--------------------
+
+You can stop the operation with
+
+::
+
+ ceph orch osd rm stop <svc_id(s)>
+
+Example::
+
+ # ceph orch osd rm stop 4
+ Stopped OSD(s) removal
+
+This resets the OSD to its initial state and removes it from the removal queue.
+
Replace an OSD
-------------------
::
- orch osd rm <svc_id>... --replace [--force]
+ orch osd rm <svc_id(s)> --replace [--force]
Example::
If you use OSDSpecs for OSD deployment, your newly added disks will be assigned the OSD IDs of their replaced
counterparts, provided the new disks still match the OSDSpecs.
-For assistance in this process you can use the 'preview' feature:
-
-Example::
-
-
- ceph orch apply osd --service-name <name_of_osd_spec> --preview
- NAME HOST DATA DB WAL
- <name_of_osd_spec> node1 /dev/vdb - -
+For assistance in this process, you can use the ``--dry-run`` feature:
Tip: The name of your OSDSpec can be retrieved from **ceph orch ls**
Alternatively, you can use your OSDSpec file::
- ceph orch apply osd -i <osd_spec_file> --preview
+ ceph orch apply osd -i <osd_spec_file> --dry-run
NAME HOST DATA DB WAL
<name_of_osd_spec> node1 /dev/vdb - -
-If this matches your anticipated behavior, just omit the --preview flag to execute the deployment.
+If this matches your anticipated behavior, just omit the ``--dry-run`` flag to execute the deployment.
..
Update the number of monitor hosts::
- ceph orch apply mon <num> [host, host:network...]
+ ceph orch apply mon <num> [host, host:network...] [--dry-run]
Each host can optionally specify a network for the monitor to listen on.
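+
+For example, following the placement syntax used elsewhere in this document,
+three monitors could be placed on specific hosts (the hostnames below are
+placeholders)::
+
+    ceph orch apply mon "3 host1 host2 host3"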
Update the number of manager hosts::
- ceph orch apply mgr <num> [host...]
+ ceph orch apply mgr <num> [host...] [--dry-run]
..
.. note::
Discover the status of a particular service or daemons::
ceph orch ls --service_type <type> --service_name <name> [--refresh]
-
+
Export the service specs known to the orchestrator as YAML, in a format
that is compatible with ``ceph orch apply -i``::
Print a list of all daemons known to the orchestrator::
ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
-
+
Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID, for MDS services it is the file system name::
.. _orchestrator-cli-cephfs:
-
+
Deploying CephFS
================
In order to set up a :term:`CephFS`, execute::
ceph fs volume create <fs_name> <placement spec>
-
-Where ``name`` is the name of the CephFS, ``placement`` is a
+
+Where ``fs_name`` is the name of the CephFS and ``placement`` is a
:ref:`orchestrator-cli-placement-spec`.
-
-This command will create the required Ceph pools, create the new
+
+This command will create the required Ceph pools, create the new
CephFS, and deploy mds servers.
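+
+For example, a CephFS called ``myfs`` served by three MDS daemons might be
+created with (the file system name and hosts are placeholders)::
+
+    ceph fs volume create myfs "3 host1 host2 host3"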
Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
* a CephFS file system for a group of MDS daemons,
* a zone name for a group of RGWs
-Sizing: the ``size`` parameter gives the number of daemons in the cluster
-(e.g. the number of MDS daemons for a particular CephFS file system).
-
Creating/growing/shrinking/removing services::
- ceph orch {mds,rgw} update <name> <size> [host…]
- ceph orch {mds,rgw} add <name>
- ceph orch nfs update <name> <size> [host…]
- ceph orch nfs add <name> <pool> [--namespace=<namespace>]
- ceph orch {mds,rgw,nfs} rm <name>
+ ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
+ ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
+ ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
+ ceph orch rm <service_name> [--force]
-e.g., ``ceph orch mds update myfs 3 host1 host2 host3``
+Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
-Start/stop/reload::
+e.g., ``ceph orch apply mds myfs --placement="3 host1 host2 host3"``
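+
+Similarly, following the ``apply rgw`` signature above, a hypothetical RGW
+service for realm ``myrealm`` and zone ``myzone`` could be created with::
+
+    ceph orch apply rgw myrealm myzone --placement="2 host1 host2"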
- ceph orch service {stop,start,reload} <type> <name>
+Service Commands::
+
+ ceph orch <start|stop|restart|redeploy|reconfig> <service_name>
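+
+For example, to restart all daemons of a hypothetical MDS service named
+``mds.myfs``::
+
+    ceph orch restart mds.myfs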
- ceph orch daemon {start,stop,reload} <type> <daemon-id>
-
.. _orchestrator-cli-service-spec:
-
+
Service Specification
=====================
-As *Service Specification* is a data structure often represented as YAML
+A *Service Specification* is a data structure often represented as YAML
to specify the deployment of services. For example:
.. code-block:: yaml
service_type: rgw
service_id: realm.zone
- placement:
- hosts:
+ placement:
+ hosts:
- host1
- host2
- host3
spec: ...
unmanaged: false
-
+
Where the properties of a service specification are the following:
* ``service_type`` is the type of the service. Needs to be either a Ceph
- service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
+ service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
``rbd-mirror``), a gateway (``nfs`` or ``rgw``), or part of the
monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
``prometheus``).
service_type: nfs
service_id: mynfs
- placement:
- hosts:
+ placement:
+ hosts:
- host1
- host2
spec:
host_pattern: "mgr*"
---
service_type: osd
+ service_id: default_drive_group
placement:
host_pattern: "osd*"
data_devices:
EOF
.. _orchestrator-cli-placement-spec:
-
+
Placement Specification
=======================
In order to allow the orchestrator to deploy a *service*, it needs to
-know how many and where it should deploy *daemons*. The orchestrator
+know how many and where it should deploy *daemons*. The orchestrator
defines a placement specification that can be passed as a command line argument.
Explicit placements
Daemons can be explicitly placed on hosts by simply specifying them::
orch apply prometheus "host1 host2 host3"
-
+
Or in yaml:
.. code-block:: yaml
-
+
service_type: prometheus
placement:
- hosts:
+ hosts:
- host1
- host2
- host3
-
+
MONs and other services may require some enhanced network specifications::
orch daemon add mon myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name
-
+
Where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.
placement:
host_pattern: "*"
-
+
Setting a limit
---------------
By specifying ``count``, only that number of daemons will be created::
orch apply prometheus 3
-
+
To deploy *daemons* on a subset of hosts, also specify the count::
orch apply prometheus "2 host1 host2 host3"
-
+
If the count is bigger than the number of hosts, cephadm deploys only as many daemons as there are hosts (here, two)::
orch apply prometheus "3 host1 host2"
service_type: prometheus
placement:
count: 3
-
+
Or with hosts:
.. code-block:: yaml
service_type: prometheus
placement:
count: 2
- hosts:
+ hosts:
- host1
- host2
- - host3
+ - host3
+
+Updating Service Specifications
+===============================
+
+The Ceph Orchestrator maintains a declarative state of each
+service in a ``ServiceSpec``. For certain operations, like updating
+the RGW HTTP port, we need to update the existing
+specification.
+
+1. List the current ``ServiceSpec``::
+
+ ceph orch ls --service_name=<service-name> --export > myservice.yaml
+
+2. Update the yaml file::
+
+ vi myservice.yaml
+
+3. Apply the new ``ServiceSpec``::
+
+ ceph orch apply -i myservice.yaml [--dry-run]
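+
+As a sketch of step 2, the edited ``myservice.yaml`` for an RGW service might
+change the HTTP port; the service id and the ``rgw_frontend_port`` field below
+are illustrative assumptions:
+
+.. code-block:: yaml
+
+    service_type: rgw
+    service_id: myrealm.myzone
+    placement:
+      hosts:
+        - host1
+        - host2
+    spec:
+      rgw_frontend_port: 8080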
Configuring the Orchestrator CLI
================================