Monitoring Stack with Cephadm
=============================
Ceph Dashboard uses `Prometheus <https://prometheus.io/>`_, `Grafana
<https://grafana.com/>`_, and related tools to store and visualize detailed
metrics on cluster utilization and performance. Ceph users have three options:

#. Have cephadm deploy and configure these services. This is the default
   when bootstrapping a new cluster unless the ``--skip-monitoring-stack``
   option is specified.
#. Deploy and configure these services manually. This is recommended for users
   with existing prometheus services in their environment (and in cases where
   Ceph is running in Kubernetes with Rook).
#. Skip the monitoring stack completely. Some Ceph dashboard graphs will
   not be available.

The monitoring stack consists of `Prometheus <https://prometheus.io/>`_,
Prometheus exporters (:ref:`mgr-prometheus`, `Node exporter
<https://prometheus.io/docs/guides/node-exporter/>`_), `Prometheus Alert
Manager <https://prometheus.io/docs/alerting/alertmanager/>`_ and `Grafana
<https://grafana.com/>`_.
Prometheus' security model presumes that untrusted users have access to the
Prometheus HTTP endpoint and logs. Untrusted users have access to all the
(meta)data Prometheus collects that is contained in the database, plus a
variety of operational and debugging information.

However, Prometheus' HTTP API is limited to read-only operations.
Configurations can *not* be changed using the API and secrets are not
exposed. Moreover, Prometheus has some built-in measures to mitigate the
impact of denial of service attacks.

Please see `Prometheus' Security model
<https://prometheus.io/docs/operating/security/>`_ for more detailed
information.
By default, bootstrap will deploy a basic monitoring stack. If you
did not do this (by passing ``--skip-monitoring-stack``), or if you
converted an existing cluster to cephadm management, you can set up
monitoring by following the steps below.
#. Enable the prometheus module in the ceph-mgr daemon. This exposes the
   internal Ceph metrics so that prometheus can scrape them.

   .. code-block:: bash

     ceph mgr module enable prometheus

#. Deploy a node-exporter service on every node of the cluster. The
   node-exporter provides host-level metrics like CPU and memory utilization.

   .. code-block:: bash

     ceph orch apply node-exporter '*'

#. Deploy alertmanager.

   .. code-block:: bash

     ceph orch apply alertmanager 1

#. Deploy prometheus. A single prometheus instance is sufficient, but
   for HA you may want to deploy two.

   .. code-block:: bash

     ceph orch apply prometheus 1    # or 2

#. Deploy grafana.

   .. code-block:: bash

     ceph orch apply grafana 1
Cephadm takes care of the configuration of Prometheus, Grafana, and
Alertmanager automatically.
However, there is one exception to this rule. In some setups, the Dashboard
user's browser might not be able to access the Grafana URL configured in Ceph
Dashboard. One such scenario is when the cluster and the accessing user are each
in a different DNS zone.
For this case, there is an extra configuration option for Ceph Dashboard, which
can be used to configure the URL for accessing Grafana by the user's browser.
This value will never be altered by cephadm. To set this configuration option,
issue the following command::

  $ ceph dashboard set-grafana-frontend-api-url <grafana-server-api>
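For example, if browsers reach Grafana under a different DNS name than the one
the cluster uses internally, the command might look like the following (the
hostname and port below are placeholders, not defaults)::

  $ ceph dashboard set-grafana-frontend-api-url https://grafana.mydomain.example:3000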
It may take a minute or two for services to be deployed. Once
completed, you should see something like this from ``ceph orch ls``:

.. code-block:: console

  NAME           RUNNING  REFRESHED  IMAGE NAME                                      IMAGE ID      SPEC
  alertmanager       1/1  6s ago     docker.io/prom/alertmanager:latest              0881eb8f169f  present
  crash              2/2  6s ago     docker.io/ceph/daemon-base:latest-master-devel  mix           present
  grafana            1/1  0s ago     docker.io/pcuzner/ceph-grafana-el8:latest       f77afcf0bcf6  absent
  node-exporter      2/2  6s ago     docker.io/prom/node-exporter:latest             e5a616e4b9cf  present
  prometheus         1/1  6s ago     docker.io/prom/prometheus:latest                e935122ab143  present
Configuring SSL/TLS for Grafana
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``cephadm`` will deploy Grafana using the certificate defined in the ceph
key/value store. If a certificate is not specified, ``cephadm`` will generate a
self-signed certificate during deployment of the Grafana service.

A custom certificate can be configured using the following commands.

.. code-block:: bash

  ceph config-key set mgr/cephadm/grafana_key -i $PWD/key.pem
  ceph config-key set mgr/cephadm/grafana_crt -i $PWD/certificate.pem
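If you do not already have a certificate, one way to create a self-signed
key/certificate pair for testing is with ``openssl``; the subject CN below is
a placeholder and should be replaced with the name browsers will use to reach
Grafana.

.. code-block:: bash

  # generate a self-signed certificate (CN is a placeholder example)
  openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout key.pem -out certificate.pem -subj "/CN=grafana.example.com"

  # store it for cephadm to use when deploying Grafana
  ceph config-key set mgr/cephadm/grafana_key -i $PWD/key.pem
  ceph config-key set mgr/cephadm/grafana_crt -i $PWD/certificate.pem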
The ``cephadm`` manager module needs to be restarted to be able to read updates
to these keys.

.. code-block:: bash

  ceph orch restart mgr
If you already deployed Grafana, you need to redeploy the service for the
configuration to be updated.

.. code-block:: bash

  ceph orch redeploy grafana
The ``redeploy`` command also takes care of setting the right URL for Ceph
Dashboard.

Using custom images
~~~~~~~~~~~~~~~~~~~
It is possible to install or upgrade monitoring components based on other
images. To do so, the name of the image to be used needs to be stored in the
configuration first. The following configuration options are available.

- ``container_image_prometheus``
- ``container_image_grafana``
- ``container_image_alertmanager``
- ``container_image_node_exporter``
Custom images can be set with the ``ceph config`` command

.. code-block:: bash

  ceph config set mgr mgr/cephadm/<option_name> <value>

For example

.. code-block:: bash

  ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v1.4.1
.. note::

  By setting a custom image, the default value will be overridden (but not
  overwritten). The default value changes when updates become available.
  By setting a custom image, you will not be able to update the component
  you have set the custom image for automatically. You will need to
  manually update the configuration (image name and tag) to be able to
  install updates.
If you choose to go with the recommendations instead, you can reset the
custom image you have set before. After that, the default value will be
used again. Use ``ceph config rm`` to reset the configuration option

.. code-block:: bash

  ceph config rm mgr mgr/cephadm/<option_name>

For example

.. code-block:: bash

  ceph config rm mgr mgr/cephadm/container_image_prometheus
Using custom configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By overriding cephadm templates, it is possible to completely customize the
configuration files for monitoring services.

Internally, cephadm already uses `Jinja2
<https://jinja.palletsprojects.com/en/2.11.x/>`_ templates to generate the
configuration files for all monitoring components. To be able to customize the
configuration of Prometheus, Grafana or the Alertmanager it is possible to store
a Jinja2 template for each service that will be used for configuration
generation instead. This template will be evaluated every time a service of that
kind is deployed or reconfigured. That way, the custom configuration is
preserved and automatically applied on future deployments of these services.
.. note::

  The configuration of the custom template is also preserved when the default
  configuration of cephadm changes. If the updated configuration is to be used,
  the custom template needs to be migrated *manually*.
Option names
^^^^^^^^^^^^

The following templates for files that will be generated by cephadm can be
overridden. These are the names to be used when storing with ``ceph config-key
set``:

- ``alertmanager_alertmanager.yml``
- ``grafana_ceph-dashboard.yml``
- ``grafana_grafana.ini``
- ``prometheus_prometheus.yml``
You can look up the file templates that are currently used by cephadm in
``src/pybind/mgr/cephadm/templates``:

- ``services/alertmanager/alertmanager.yml.j2``
- ``services/grafana/ceph-dashboard.yml.j2``
- ``services/grafana/grafana.ini.j2``
- ``services/prometheus/prometheus.yml.j2``
Usage
^^^^^

The following command applies a single line value:

.. code-block:: bash

  ceph config-key set mgr/cephadm/<option_name> <value>

To set contents of files as template use the ``-i`` argument:

.. code-block:: bash

  ceph config-key set mgr/cephadm/<option_name> -i $PWD/<filename>
.. note::

  When using files as input to ``config-key`` an absolute path to the file
  must be used.
243 has been set. Then the configuration file for the service needs to be recreated.
244 This is done using `redeploy`. For more details see the following example.
.. code-block:: bash

  # set the contents of ./prometheus.yml.j2 as template
  ceph config-key set mgr/cephadm/services_prometheus_prometheus.yml \
    -i $PWD/prometheus.yml.j2

  # restart cephadm mgr module
  ceph orch restart mgr

  # redeploy the prometheus service
  ceph orch redeploy prometheus
Disabling monitoring
--------------------

If you have deployed monitoring and would like to remove it, you can do
so with the following commands:

.. code-block:: bash

  ceph orch rm grafana
  ceph orch rm prometheus --force   # this will delete metrics data collected so far
  ceph orch rm node-exporter
  ceph orch rm alertmanager
  ceph mgr module disable prometheus
Deploying monitoring manually
-----------------------------
If you have an existing prometheus monitoring infrastructure, or would like
to manage it yourself, you need to configure it to integrate with your Ceph
cluster.

* Enable the prometheus module in the ceph-mgr daemon

  .. code-block:: bash

    ceph mgr module enable prometheus

  By default, ceph-mgr presents prometheus metrics on port 9283 on each host
  running a ceph-mgr daemon. Configure prometheus to scrape these.

* To enable the dashboard's prometheus-based alerting, see :ref:`dashboard-alerting`.

* To enable dashboard integration with Grafana, see :ref:`dashboard-grafana`.
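As a sketch, a minimal scrape job for an external Prometheus might look like
the following fragment of ``prometheus.yml``. The hostnames are placeholders;
list every host that may run a ceph-mgr daemon, since only the active mgr
serves metrics at any given time.

.. code-block:: yaml

  # prometheus.yml (fragment) -- target hostnames are examples only
  scrape_configs:
    - job_name: 'ceph'
      static_configs:
        - targets: ['mgr-host1.example.com:9283', 'mgr-host2.example.com:9283']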
Enabling RBD-Image monitoring
-----------------------------

Because of its performance impact, monitoring of RBD images is disabled by
default. For more information please see :ref:`prometheus-rbd-io-statistics`.
If disabled, the overview and details dashboards will stay empty in Grafana
and the metrics will not be visible in Prometheus.
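To collect these metrics, the :ref:`mgr-prometheus` module can be told which
pools to gather per-image RBD statistics from. As a sketch (the pool names
below are examples, not defaults)::

  ceph config set mgr mgr/prometheus/rbd_stats_pools "rbd,rbd2"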