.. _mgr-dashboard:

Ceph Dashboard
==============

Overview
--------

The Ceph Dashboard is a built-in, web-based Ceph management and monitoring
application used to administer various aspects and objects of the cluster. It
is implemented as a :ref:`ceph-manager-daemon` module.

The original Ceph Dashboard that shipped with Ceph Luminous started out as a
simple, read-only view into various run-time information and performance data
of a Ceph cluster. It used a very simple architecture to achieve that goal.
However, there was growing demand for more web-based management capabilities,
to make Ceph easier to administer for users who prefer a WebUI over the
command line.

The new :term:`Ceph Dashboard` module replaces the previous one and adds a
built-in, web-based monitoring and administration application to the Ceph
Manager. The architecture and functionality of this new module are derived
from and inspired by the `openATTIC Ceph management and monitoring tool
<https://openattic.org/>`_. Development is actively driven by the team behind
openATTIC at `SUSE <https://www.suse.com/>`_, with a lot of support from
companies like `Red Hat <https://redhat.com/>`_ and other members of the Ceph
community.

The dashboard module's backend code uses the CherryPy framework and a custom
REST API implementation. The WebUI implementation is based on
Angular/TypeScript, merging functionality from the original dashboard with new
functionality originally developed for the standalone version of openATTIC.
The Ceph Dashboard module is implemented as a web application that visualizes
information and statistics about the Ceph cluster using a web server hosted by
``ceph-mgr``.

Feature Overview
^^^^^^^^^^^^^^^^

The dashboard provides the following features:

* **Multi-User and Role Management**: The dashboard supports multiple user
  accounts with different permissions (roles). The user accounts and roles
  can be modified on both the command line and via the WebUI.
  See :ref:`dashboard-user-role-management` for details.
* **Single Sign-On (SSO)**: The dashboard supports authentication
  via an external identity provider using the SAML 2.0 protocol. See
  :ref:`dashboard-sso-support` for details.
* **SSL/TLS support**: All HTTP communication between the web browser and the
  dashboard is secured via SSL. A self-signed certificate can be created with
  a built-in command, but it's also possible to import custom certificates
  signed and issued by a CA. See :ref:`dashboard-ssl-tls-support` for details.
* **Auditing**: The dashboard backend can be configured to log all ``PUT``,
  ``POST`` and ``DELETE`` API requests in the Ceph audit log. See
  :ref:`dashboard-auditing` for instructions on how to enable this feature.
* **Internationalization (I18N)**: The dashboard can be used in different
  languages.

Currently, Ceph Dashboard is capable of monitoring and managing the following
aspects of your Ceph cluster:

* **Overall cluster health**: Displays the overall cluster status, performance
  and capacity metrics.
* **Embedded Grafana Dashboards**: Ceph Dashboard is capable of embedding
  `Grafana <https://grafana.com>`_ dashboards in many locations, to display
  additional information and performance metrics gathered by the
  :ref:`mgr-prometheus`. See :ref:`dashboard-grafana` for details on how to
  configure this functionality.
* **Cluster logs**: Displays the latest updates to the cluster's event and
  audit log files.
* **Hosts**: Provides a list of all hosts associated with the cluster, which
  services are running on each host, and which version of Ceph is installed.
* **Performance counters**: Displays detailed service-specific statistics for
  each running service.
* **Monitors**: Lists all MONs, their quorum status and open sessions.
* **Configuration Editor**: View all available configuration options, their
  descriptions, types and default values, and edit the current values.
* **Pools**: Lists all Ceph pools and their details (e.g. applications,
  placement groups, replication size, EC profile, CRUSH ruleset, etc.)
* **OSDs**: Lists all OSDs, their status and usage statistics, as well as
  detailed information like attributes (OSD map), metadata, performance
  counters and usage histograms for read/write operations. Mark OSDs as
  up/down/out and perform scrub operations. Select between different recovery
  profiles to adjust the level of backfilling activity.
* **iSCSI**: Lists all hosts that run the TCMU runner service, displaying all
  images and their performance characteristics (read/write ops, traffic).
  Create, modify and delete iSCSI targets (via ``ceph-iscsi``). See
  :ref:`dashboard-iscsi-management` for instructions on how to configure this
  feature.
* **RBD**: Lists all RBD images and their properties (size, objects, features).
  Create, copy, modify and delete RBD images. Create, delete and roll back
  snapshots of selected images, and protect/unprotect these snapshots against
  modification. Copy or clone snapshots, and flatten cloned images.
* **RBD mirroring**: Enable and configure RBD mirroring to a remote Ceph
  server. Lists all active sync daemons and their status, as well as pools and
  RBD images, including their synchronization state.
* **CephFS**: Lists all active filesystem clients and associated pools,
  including their usage statistics.
* **Object Gateway**: Lists all active object gateways and their performance
  counters. Display and manage (add/edit/delete) object gateway users and
  their details (e.g. quotas), as well as the users' buckets and their details
  (e.g. owner, quotas). See :ref:`dashboard-enabling-object-gateway` for
  configuration instructions.
* **NFS**: Manage NFS exports of CephFS filesystems and RGW S3 buckets via NFS
  Ganesha. See :ref:`dashboard-nfs-ganesha-management` for details on how to
  enable this functionality.
* **Ceph Manager Modules**: Enable and disable Ceph Manager modules, and
  change module-specific configuration settings.


Supported Browsers
^^^^^^^^^^^^^^^^^^

Ceph Dashboard is primarily tested and developed using the following web
browsers:

+----------------------------------------------+----------+
| Browser                                      | Versions |
+==============================================+==========+
| `Chrome <https://www.google.com/chrome/>`_   | 68+      |
+----------------------------------------------+----------+
| `Firefox <http://www.mozilla.org/firefox/>`_ | 61+      |
+----------------------------------------------+----------+

While Ceph Dashboard might work in older browsers, we cannot guarantee this
and recommend that you keep your browser updated to the latest version.

Enabling
--------

If you have installed ``ceph-mgr-dashboard`` from distribution packages, the
package management system should have taken care of installing all required
dependencies.

If you're installing Ceph from source and want to start the dashboard from
your development environment, please see the files ``README.rst`` and
``HACKING.rst`` in the ``src/pybind/mgr/dashboard`` directory of the source
code.

Within a running Ceph cluster, the Ceph Dashboard is enabled with::

  $ ceph mgr module enable dashboard
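
To verify that the module is now active, you can check the list of manager
modules; ``dashboard`` should appear among the enabled modules. Filtering the
output with ``grep``, as shown here, is just one convenient way to check::

  $ ceph mgr module ls | grep dashboard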

Configuration
-------------

.. _dashboard-ssl-tls-support:

SSL/TLS Support
^^^^^^^^^^^^^^^

All HTTP connections to the dashboard are secured with SSL/TLS by default.

To get the dashboard up and running quickly, you can generate and install a
self-signed certificate using the following built-in command::

  $ ceph dashboard create-self-signed-cert

Note that most web browsers will complain about such self-signed certificates
and require explicit confirmation before establishing a secure connection to
the dashboard.

To properly secure a deployment and to remove the certificate warning, a
certificate that is issued by a certificate authority (CA) should be used.

For example, a key pair can be generated with a command similar to::

  $ openssl req -new -nodes -x509 \
    -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 \
    -keyout dashboard.key -out dashboard.crt -extensions v3_ca

The ``dashboard.crt`` file should then be signed by a CA. Once that is done,
you can enable it for all Ceph manager instances by running the following
commands::

  $ ceph config-key set mgr/dashboard/crt -i dashboard.crt
  $ ceph config-key set mgr/dashboard/key -i dashboard.key

If different certificates are desired for each manager instance for some
reason, the name of the instance can be included as follows (where ``$name``
is the name of the ``ceph-mgr`` instance, usually the hostname)::

  $ ceph config-key set mgr/dashboard/$name/crt -i dashboard.crt
  $ ceph config-key set mgr/dashboard/$name/key -i dashboard.key
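
If you want to confirm which certificate is currently stored, it can be
retrieved again with ``ceph config-key get``. This is shown here for the
cluster-wide key; for per-instance keys, include the instance name in the key
path as above::

  $ ceph config-key get mgr/dashboard/crt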

SSL can also be disabled by setting this configuration value::

  $ ceph config set mgr mgr/dashboard/ssl false

This might be useful if the dashboard will be running behind a proxy that does
not support SSL for its upstream servers, or in other situations where SSL is
not wanted or required.

.. warning::

   Use caution when disabling SSL, as usernames and passwords will be sent to
   the dashboard unencrypted.

.. note::

   You need to restart the Ceph manager processes manually after changing the
   SSL certificate and key. This can be accomplished by either running ``ceph
   mgr fail mgr`` or by disabling and re-enabling the dashboard module (which
   also triggers the manager to respawn itself)::

     $ ceph mgr module disable dashboard
     $ ceph mgr module enable dashboard

Host Name and Port
^^^^^^^^^^^^^^^^^^

Like most web applications, the dashboard binds to a TCP/IP address and TCP
port.

By default, the ``ceph-mgr`` daemon hosting the dashboard (i.e., the currently
active manager) will bind to TCP port 8443, or to port 8080 when SSL is
disabled.

If no specific address has been configured, the web app will bind to ``::``,
which corresponds to all available IPv4 and IPv6 addresses.

These defaults can be changed via the configuration key facility on a
cluster-wide level (so they apply to all manager instances) as follows::

  $ ceph config set mgr mgr/dashboard/server_addr $IP
  $ ceph config set mgr mgr/dashboard/server_port $PORT
  $ ceph config set mgr mgr/dashboard/ssl_server_port $PORT

Since each ``ceph-mgr`` instance hosts its own instance of the dashboard, it
may also be necessary to configure them separately. The IP address and port
for a specific manager instance can be changed with the following commands::

  $ ceph config set mgr mgr/dashboard/$name/server_addr $IP
  $ ceph config set mgr mgr/dashboard/$name/server_port $PORT
  $ ceph config set mgr mgr/dashboard/$name/ssl_server_port $PORT

Replace ``$name`` with the ID of the ``ceph-mgr`` instance hosting the
dashboard web app.

.. note::

   The command ``ceph mgr services`` will show you all endpoints that are
   currently configured. Look for the ``dashboard`` key to obtain the URL for
   accessing the dashboard.
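
With the dashboard enabled, the output of ``ceph mgr services`` may look
similar to the following (the host name and port shown here are examples; your
endpoint will reflect your own configuration)::

  $ ceph mgr services
  {
      "dashboard": "https://ceph-mgr-node:8443/"
  }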

Username and Password
^^^^^^^^^^^^^^^^^^^^^

In order to be able to log in, you need to create a user account and associate
it with at least one role. We provide a set of predefined *system roles* that
you can use. For more details please refer to the `User and Role Management`_
section.

To create a user with the administrator role, you can use the following
command::

  $ ceph dashboard ac-user-create <username> <password> administrator
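
For example, to create an administrator account named ``admin`` (the username
and password below are placeholders; choose your own)::

  $ ceph dashboard ac-user-create admin mypassword administrator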

.. _dashboard-enabling-object-gateway:

Enabling the Object Gateway Management Frontend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To use the Object Gateway management functionality of the dashboard, you will
need to provide the login credentials of a user with the ``system`` flag
enabled.

If you do not already have a user that can be used to provide those
credentials, you will need to create one::

  $ radosgw-admin user create --uid=<user_id> --display-name=<display_name> \
      --system

Take note of the keys ``access_key`` and ``secret_key`` in the output of this
command.

The credentials of an existing user can also be obtained by using
``radosgw-admin``::

  $ radosgw-admin user info --uid=<user_id>

Finally, provide the credentials to the dashboard::

  $ ceph dashboard set-rgw-api-access-key <access_key>
  $ ceph dashboard set-rgw-api-secret-key <secret_key>

In a typical default configuration with a single RGW endpoint, this is all you
have to do to get the Object Gateway management functionality working. The
dashboard will try to determine the host and port of the Object Gateway
automatically, by obtaining this information from the Ceph Manager's service
map.

If multiple zones are used, the dashboard will automatically determine the
host within the master zone group and master zone. This should be sufficient
for most setups, but in some circumstances you might want to set the host and
port manually::

  $ ceph dashboard set-rgw-api-host <host>
  $ ceph dashboard set-rgw-api-port <port>

In addition to the settings mentioned so far, the following settings also
exist, and you may find yourself in a situation where you have to use them::

  $ ceph dashboard set-rgw-api-scheme <scheme>  # http or https
  $ ceph dashboard set-rgw-api-admin-resource <admin_resource>
  $ ceph dashboard set-rgw-api-user-id <user_id>

If you are using a self-signed certificate in your Object Gateway setup, you
should disable certificate verification in the dashboard to avoid refused
connections, e.g. those caused by certificates signed by an unknown CA or not
matching the host name::

  $ ceph dashboard set-rgw-api-ssl-verify False

If the Object Gateway takes too long to process requests and the dashboard
runs into timeouts, you can adjust the timeout value to your needs::

  $ ceph dashboard set-rest-requests-timeout <seconds>

The default value is 45 seconds.

.. _dashboard-iscsi-management:

Enabling iSCSI Management
^^^^^^^^^^^^^^^^^^^^^^^^^

The Ceph Dashboard can manage iSCSI targets using the REST API provided by the
``rbd-target-api`` service of :ref:`ceph-iscsi`. Please make sure that it is
installed and enabled on the iSCSI gateways.

.. note::

   The iSCSI management functionality of Ceph Dashboard depends on the latest
   version 3 of the `ceph-iscsi <https://github.com/ceph/ceph-iscsi>`_
   project. Make sure that your operating system provides the correct version,
   otherwise the dashboard won't enable the management features.

If the ceph-iscsi REST API is configured in HTTPS mode and is using a
self-signed certificate, you need to configure the dashboard to avoid SSL
certificate verification when accessing the ceph-iscsi API.

To disable API SSL verification, run the following command::

  $ ceph dashboard set-iscsi-api-ssl-verification false

The available iSCSI gateways must be defined using the following commands::

  $ ceph dashboard iscsi-gateway-list
  $ ceph dashboard iscsi-gateway-add <scheme>://<username>:<password>@<host>[:port]
  $ ceph dashboard iscsi-gateway-rm <gateway_name>
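
For example, to register a gateway whose ``rbd-target-api`` service listens
over plain HTTP (the host name, credentials and port below are placeholders
for your own ``rbd-target-api`` settings)::

  $ ceph dashboard iscsi-gateway-add http://admin:admin@iscsi-gw-1:5000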


.. _dashboard-grafana:

Enabling the Embedding of Grafana Dashboards
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Grafana and Prometheus will likely be bundled and installed alongside Ceph by
some orchestration tools in the near future, but currently you have to install
and configure both manually. After you have installed Prometheus and Grafana
on your preferred hosts, proceed with the following steps.

#. Enable the Ceph Exporter, which comes as a Ceph Manager module, by
   running::

     $ ceph mgr module enable prometheus

   More details can be found in the `documentation
   <http://docs.ceph.com/docs/master/mgr/prometheus/>`_ of the prometheus
   module.

#. Add the corresponding scrape configuration to Prometheus. This may look
   like::

     global:
       scrape_interval: 5s

     scrape_configs:
       - job_name: 'prometheus'
         static_configs:
           - targets: ['localhost:9090']
       - job_name: 'ceph'
         static_configs:
           - targets: ['localhost:9283']
       - job_name: 'node-exporter'
         static_configs:
           - targets: ['localhost:9100']

#. Add Prometheus as a data source to Grafana.

#. Install the ``vonage-status-panel`` and ``grafana-piechart-panel`` plugins
   using::

     grafana-cli plugins install vonage-status-panel
     grafana-cli plugins install grafana-piechart-panel

#. Add the dashboards to Grafana:

   Dashboards can be added to Grafana by importing dashboard JSON files. The
   following command can be used to download a JSON file::

     wget https://raw.githubusercontent.com/ceph/ceph/master/monitoring/grafana/dashboards/<Dashboard-name>.json

   You can find all the dashboard JSON files `here
   <https://github.com/ceph/ceph/tree/master/monitoring/grafana/dashboards>`_.

   For example, for the Ceph cluster overview you can use::

     wget https://raw.githubusercontent.com/ceph/ceph/master/monitoring/grafana/dashboards/ceph-cluster.json

#. Configure anonymous mode in ``/etc/grafana/grafana.ini``::

     [auth.anonymous]
     enabled = true
     org_name = Main Org.
     org_role = Viewer

After you have set up Grafana and Prometheus, you will need to configure the
connection information that the Ceph Dashboard will use to access Grafana.

You need to tell the dashboard the URL where the Grafana instance is running
or deployed::

  $ ceph dashboard set-grafana-api-url <grafana-server-url>  # default: ''

The format of the URL is ``<protocol>://<IP-address>:<port>``.
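
For example, if Grafana listens on its default port 3000 on a dedicated host
(the host name below is a placeholder for your environment)::

  $ ceph dashboard set-grafana-api-url 'https://grafana-host:3000'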

.. note::

   Ceph Dashboard embeds the Grafana dashboards via ``iframe`` HTML elements.
   If Grafana is configured without SSL/TLS support, most browsers will block
   the embedding of insecure content in a secured web page if SSL support is
   enabled in the dashboard (which is the default configuration). If you can't
   see the embedded Grafana dashboards after enabling them as outlined above,
   check your browser's documentation on how to unblock mixed content.
   Alternatively, consider enabling SSL/TLS support in Grafana.

You can also access the Grafana instance directly to monitor your cluster.

.. _dashboard-sso-support:

Enabling Single Sign-On (SSO)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Ceph Dashboard supports external authentication of users via the
`SAML 2.0 <https://en.wikipedia.org/wiki/SAML_2.0>`_ protocol. You need to
create the user accounts and associate them with the desired roles first, as
authorization is still performed by the dashboard. However, the authentication
process can be performed by an existing Identity Provider (IdP).

.. note::

   Ceph Dashboard SSO support relies on OneLogin's
   `python-saml <https://pypi.org/project/python-saml/>`_ library. Please
   ensure that this library is installed on your system, either by using your
   distribution's package management or via Python's ``pip`` installer.

To configure SSO on Ceph Dashboard, you should use the following command::

  $ ceph dashboard sso setup saml2 <ceph_dashboard_base_url> <idp_metadata> {<idp_username_attribute>} {<idp_entity_id>} {<sp_x_509_cert>} {<sp_private_key>}

Parameters:

* **<ceph_dashboard_base_url>**: Base URL where Ceph Dashboard is accessible (e.g., ``https://cephdashboard.local``)
* **<idp_metadata>**: URL, file path or content of the IdP metadata XML (e.g., ``https://myidp/metadata``)
* **<idp_username_attribute>** *(optional)*: Attribute used to get the username from the authentication response. Defaults to ``uid``.
* **<idp_entity_id>** *(optional)*: Use this when more than one entity ID exists in the IdP metadata.
* **<sp_x_509_cert> / <sp_private_key>** *(optional)*: File path or content of the certificate that should be used by Ceph Dashboard (Service Provider) for signing and encryption.

.. note::

   The issuer value of SAML requests will follow this pattern:
   **<ceph_dashboard_base_url>**/auth/saml2/metadata
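
For example, a minimal setup that only passes the two mandatory parameters
might look like this (both URLs are placeholders for your own dashboard and
IdP endpoints)::

  $ ceph dashboard sso setup saml2 https://cephdashboard.local https://myidp/metadata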

To display the current SAML 2.0 configuration, use the following command::

  $ ceph dashboard sso show saml2

.. note::

   For more information about ``onelogin_settings``, please check the
   `onelogin documentation <https://github.com/onelogin/python-saml>`_.

To disable SSO::

  $ ceph dashboard sso disable

To check if SSO is enabled::

  $ ceph dashboard sso status

To enable SSO::

  $ ceph dashboard sso enable saml2

Enabling Prometheus Alerting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To use Prometheus for monitoring, you have to define `alerting rules
<https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules>`_.
To manage them, you need to use the `Alertmanager
<https://prometheus.io/docs/alerting/alertmanager>`_. If you are not using the
Alertmanager yet, please `install it
<https://github.com/prometheus/alertmanager#install>`_, as it's mandatory in
order to receive and manage alerts from Prometheus.

The Alertmanager capabilities can be consumed by the dashboard in three
different ways:

#. Use the notification receiver of the dashboard.

#. Use the Prometheus Alertmanager API.

#. Use both sources simultaneously.

All three methods notify you about alerts. You won't be notified twice if you
use both sources.

#. Use the notification receiver of the dashboard:

   This allows you to get notifications as `configured
   <https://prometheus.io/docs/alerting/configuration/>`_ from the
   Alertmanager. You will get notified inside the dashboard once a
   notification is sent out, but you are not able to manage alerts.

   Add the dashboard receiver and the new route to your Alertmanager
   configuration. This should look like::

     route:
       receiver: 'ceph-dashboard'
     ...
     receivers:
     - name: 'ceph-dashboard'
       webhook_configs:
       - url: '<url-to-dashboard>/api/prometheus_receiver'

   Please make sure that the Alertmanager considers your SSL certificate for
   the dashboard as valid. For more information about the correct
   configuration, check out the `<http_config> documentation
   <https://prometheus.io/docs/alerting/configuration/#%3Chttp_config%3E>`_.

#. Use the API of the Prometheus Alertmanager:

   This allows you to manage alerts. You will see all alerts that the
   Alertmanager currently knows of in the alerts listing. It can be found in
   the *Cluster* submenu as *Alerts*. The alerts can be sorted by name, job,
   severity, state and start time. Unfortunately it's not possible to know
   when an alert was sent out through a notification by the Alertmanager based
   on your configuration; that's why the dashboard will notify the user on any
   visible change to an alert.

   Currently it's not yet possible to silence an alert or expire a silenced
   alert, but this is work in progress and will be added in a future release.

   To use it, specify the host and port of the Alertmanager server::

     $ ceph dashboard set-alertmanager-api-host <alertmanager-host:port>  # default: ''

   For example::

     $ ceph dashboard set-alertmanager-api-host 'http://localhost:9093'

#. Use both methods:

   The different behaviors of both methods are configured in a way that they
   should not disturb each other through duplicated notifications popping up.
553 | ||
81eedcae | 554 | Accessing the Dashboard |
11fdf7f2 TL |
555 | ^^^^^^^^^^^^^^^^^^^^^^^ |
556 | ||
557 | You can now access the dashboard using your (JavaScript-enabled) web browser, by | |
558 | pointing it to any of the host names or IP addresses and the selected TCP port | |
559 | where a manager instance is running: e.g., ``httpS://<$IP>:<$PORT>/``. | |
560 | ||
561 | You should then be greeted by the dashboard login page, requesting your | |
562 | previously defined username and password. Select the **Keep me logged in** | |
563 | checkbox if you want to skip the username/password request when accessing the | |
564 | dashboard in the future. | |
565 | ||

.. _dashboard-user-role-management:

User and Role Management
------------------------

User Accounts
^^^^^^^^^^^^^

Ceph Dashboard supports managing multiple user accounts. Each user account
consists of a username, a password (stored in hashed form using ``bcrypt``),
an optional name, and an optional email address.

User accounts are stored in the MON's configuration database, and are globally
shared across all ceph-mgr instances.

We provide a set of CLI commands to manage user accounts:

- *Show User(s)*::

    $ ceph dashboard ac-user-show [<username>]

- *Create User*::

    $ ceph dashboard ac-user-create <username> [<password>] [<rolename>] [<name>] [<email>]

- *Delete User*::

    $ ceph dashboard ac-user-delete <username>

- *Change Password*::

    $ ceph dashboard ac-user-set-password <username> <password>

- *Modify User (name and email)*::

    $ ceph dashboard ac-user-set-info <username> <name> <email>
603 | ||
604 | User Roles and Permissions | |
605 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
606 | ||
607 | User accounts are also associated with a set of roles that define which | |
608 | dashboard functionality can be accessed by the user. | |
609 | ||
610 | The Dashboard functionality/modules are grouped within a *security scope*. | |
611 | Security scopes are predefined and static. The current available security | |
612 | scopes are: | |
613 | ||
614 | - **hosts**: includes all features related to the ``Hosts`` menu | |
615 | entry. | |
616 | - **config-opt**: includes all features related to management of Ceph | |
617 | configuration options. | |
618 | - **pool**: includes all features related to pool management. | |
619 | - **osd**: includes all features related to OSD management. | |
620 | - **monitor**: includes all features related to Monitor management. | |
621 | - **rbd-image**: includes all features related to RBD image | |
622 | management. | |
623 | - **rbd-mirroring**: includes all features related to RBD-Mirroring | |
624 | management. | |
625 | - **iscsi**: includes all features related to iSCSI management. | |
626 | - **rgw**: includes all features related to Rados Gateway management. | |
627 | - **cephfs**: includes all features related to CephFS management. | |
628 | - **manager**: include all features related to Ceph Manager | |
629 | management. | |
630 | - **log**: include all features related to Ceph logs management. | |
631 | - **grafana**: include all features related to Grafana proxy. | |
632 | - **prometheus**: include all features related to Prometheus alert management. | |
633 | - **dashboard-settings**: allows to change dashboard settings. | |
634 | ||
A *role* specifies a set of mappings between a *security scope* and a set of
*permissions*. There are four types of permissions:

- **read**
- **create**
- **update**
- **delete**

See below for an example of a role specification based on a Python dictionary::

  # example of a role
  {
    'role': 'my_new_role',
    'description': 'My new role',
    'scopes_permissions': {
      'pool': ['read', 'create'],
      'rbd-image': ['read', 'create', 'update', 'delete']
    }
  }

The above role dictates that a user has *read* and *create* permissions for
features related to pool management, and has full permissions for
features related to RBD image management.

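The role dictionary above can be exercised directly in Python. The following is a minimal sketch of how the scope/permission lookup behaves; note that ``has_permission`` is a hypothetical helper written for this example only and is not part of the Dashboard API:

```python
# Illustrative sketch of the Dashboard's role/permission model.
# NOTE: 'has_permission' is a hypothetical helper for this example
# only; it simply mirrors the 'scopes_permissions' lookup above.

role = {
    'role': 'my_new_role',
    'description': 'My new role',
    'scopes_permissions': {
        'pool': ['read', 'create'],
        'rbd-image': ['read', 'create', 'update', 'delete'],
    },
}


def has_permission(role, scope, permission):
    """Return True if 'role' grants 'permission' on 'scope'."""
    return permission in role['scopes_permissions'].get(scope, [])


print(has_permission(role, 'pool', 'create'))   # True
print(has_permission(role, 'pool', 'delete'))   # False
print(has_permission(role, 'osd', 'read'))      # False: scope not granted
```

A permission check thus fails both when the permission is missing from a granted scope and when the scope is not granted at all.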
The Dashboard already provides a set of predefined roles that we call
*system roles*, which can be used right away in a fresh Ceph Dashboard
installation.

The list of system roles is:

- **administrator**: provides full permissions for all security scopes.
- **read-only**: provides *read* permission for all security scopes except
  the dashboard settings.
- **block-manager**: provides full permissions for the *rbd-image*,
  *rbd-mirroring*, and *iscsi* scopes.
- **rgw-manager**: provides full permissions for the *rgw* scope.
- **cluster-manager**: provides full permissions for the *hosts*, *osd*,
  *monitor*, *manager*, and *config-opt* scopes.
- **pool-manager**: provides full permissions for the *pool* scope.
- **cephfs-manager**: provides full permissions for the *cephfs* scope.

The list of currently available roles can be retrieved with the following
command::

  $ ceph dashboard ac-role-show [<rolename>]

It is also possible to create new roles using CLI commands. The available
commands to manage roles are the following:

- *Create Role*::

    $ ceph dashboard ac-role-create <rolename> [<description>]

- *Delete Role*::

    $ ceph dashboard ac-role-delete <rolename>

- *Add Scope Permissions to Role*::

    $ ceph dashboard ac-role-add-scope-perms <rolename> <scopename> <permission> [<permission>...]

- *Delete Scope Permission from Role*::

    $ ceph dashboard ac-role-del-perms <rolename> <scopename>

To associate roles with users, the following CLI commands are available:

- *Set User Roles*::

    $ ceph dashboard ac-user-set-roles <username> <rolename> [<rolename>...]

- *Add Roles To User*::

    $ ceph dashboard ac-user-add-roles <username> <rolename> [<rolename>...]

- *Delete Roles from User*::

    $ ceph dashboard ac-user-del-roles <username> <rolename> [<rolename>...]

Example of User and Custom Role Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this section we show a full example of the commands needed to create a
user account that is able to manage RBD images, view and create Ceph pools,
and has read-only access to all other scopes.

1. *Create the user*::

     $ ceph dashboard ac-user-create bob mypassword

2. *Create role and specify scope permissions*::

     $ ceph dashboard ac-role-create rbd/pool-manager
     $ ceph dashboard ac-role-add-scope-perms rbd/pool-manager rbd-image read create update delete
     $ ceph dashboard ac-role-add-scope-perms rbd/pool-manager pool read create

3. *Associate roles to user*::

     $ ceph dashboard ac-user-set-roles bob rbd/pool-manager read-only


Proxy Configuration
-------------------

In a Ceph cluster with multiple ceph-mgr instances, only the dashboard running
on the currently active ceph-mgr daemon will serve incoming requests. Accessing
the dashboard's TCP port on any of the other ceph-mgr instances that are
currently on standby will perform an HTTP redirect (303) to the currently active
manager's dashboard URL. This way, you can point your browser to any of the
ceph-mgr instances in order to access the dashboard.

If you want to establish a fixed URL to reach the dashboard or if you don't want
to allow direct connections to the manager nodes, you could set up a proxy that
automatically forwards incoming requests to the currently active ceph-mgr
instance.

.. note::
   Putting the dashboard behind a load-balancing proxy like `HAProxy
   <https://www.haproxy.org/>`_ currently has some limitations, particularly if
   you require the traffic between the proxy and the dashboard to be encrypted
   via SSL/TLS. See `BUG#24662 <https://tracker.ceph.com/issues/24662>`_ for
   details.

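Subject to the limitations mentioned in the note above, such a forwarding proxy could be sketched with HAProxy roughly as follows. All hostnames, ports, and TLS settings below are placeholders; the idea is that standby managers answer with a 303 redirect, so a health check expecting a 200 response routes traffic to the active manager only:

```text
defaults
    mode http
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend dashboard_front
    bind *:8080
    default_backend dashboard_back

backend dashboard_back
    # Only the active ceph-mgr returns 200; standby instances answer
    # with a 303 redirect, so the health check marks them as down.
    option httpchk GET /
    http-check expect status 200
    server mgr1 mgr1.example.com:8443 check ssl verify none
    server mgr2 mgr2.example.com:8443 check ssl verify none
    server mgr3 mgr3.example.com:8443 check ssl verify none
```

This is only a starting point, not a hardened configuration; certificate verification in particular is disabled here for brevity.
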
Configuring a URL Prefix
^^^^^^^^^^^^^^^^^^^^^^^^

If you are accessing the dashboard via a reverse proxy configuration,
you may wish to serve it under a URL prefix. To get the dashboard
to use hyperlinks that include your prefix, you can set the
``url_prefix`` setting:

::

  ceph config set mgr mgr/dashboard/url_prefix $PREFIX

so you can access the dashboard at ``http://$IP:$PORT/$PREFIX/``.

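As an illustration, assuming ``url_prefix`` has been set to ``ceph-dashboard`` and the dashboard serves plain HTTP on ``192.168.0.10:8080`` (both values are placeholders), a matching nginx location block might look like this:

```text
location /ceph-dashboard/ {
    proxy_pass http://192.168.0.10:8080/ceph-dashboard/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

Because ``url_prefix`` makes the dashboard emit links containing the prefix, the proxy only needs to forward the prefixed path unchanged.
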

.. _dashboard-auditing:

Auditing API Requests
---------------------

The REST API is capable of logging PUT, POST and DELETE requests to the Ceph
audit log. This feature is disabled by default, but can be enabled with the
following command::

  $ ceph dashboard set-audit-api-enabled <true|false>

If enabled, the following parameters are logged for each request:

* from - The origin of the request, e.g. https://[::1]:44410
* path - The REST API path, e.g. /api/auth
* method - e.g. PUT, POST or DELETE
* user - The name of the user, otherwise 'None'

The logging of the request payload (the arguments and their values) is enabled
by default. Execute the following command to disable this behaviour::

  $ ceph dashboard set-audit-api-log-payload <true|false>

A log entry may look like this::

  2018-10-22 15:27:01.302514 mgr.x [INF] [DASHBOARD] from='https://[::ffff:127.0.0.1]:37022' path='/api/rgw/user/klaus' method='PUT' user='admin' params='{"max_buckets": "1000", "display_name": "Klaus Mustermann", "uid": "klaus", "suspended": "0", "email": "klaus.mustermann@ceph.com"}'

.. _dashboard-nfs-ganesha-management:

NFS-Ganesha Management
----------------------

Ceph Dashboard can manage `NFS Ganesha <http://nfs-ganesha.github.io/>`_ exports that use
CephFS or RadosGW as their backstore.

To enable this feature in Ceph Dashboard there are some assumptions that need
to be met regarding the way NFS-Ganesha services are configured.

The dashboard manages NFS-Ganesha config files stored in RADOS objects on the Ceph cluster.
NFS-Ganesha must store part of its configuration in the Ceph cluster.

These configuration files must follow some conventions.
Each export block must be stored in its own RADOS object named
``export-<id>``, where ``<id>`` must match the ``Export_ID`` attribute of the
export configuration. Then, for each NFS-Ganesha service daemon there should
exist a RADOS object named ``conf-<daemon_id>``, where ``<daemon_id>`` is an
arbitrary string that should uniquely identify the daemon instance (e.g., the
hostname where the daemon is running).
Each ``conf-<daemon_id>`` object contains the RADOS URLs of the exports that
the NFS-Ganesha daemon should serve. These URLs are of the form::

  %url rados://<pool_name>[/<namespace>]/export-<id>

Both the ``conf-<daemon_id>`` and ``export-<id>`` objects must be stored in the
same RADOS pool/namespace.
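
As a hedged illustration of these conventions (the pool name ``nfs-ganesha``, the namespace ``ganesha``, the daemon id ``node1``, and the local file names are all made up for this example), the objects could be stored manually with the ``rados`` CLI as follows:

```shell
# Store the export block (Export_ID = 1, written locally in
# export-1.conf) as a RADOS object named "export-1":
$ rados -p nfs-ganesha -N ganesha put export-1 export-1.conf

# Store the per-daemon object "conf-node1", which contains only the
# %url line(s) pointing at the export objects this daemon serves:
$ echo '%url rados://nfs-ganesha/ganesha/export-1' > conf-node1.conf
$ rados -p nfs-ganesha -N ganesha put conf-node1 conf-node1.conf
```

Note that both objects end up in the same pool/namespace, as required above.
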


Configuring NFS-Ganesha in the Dashboard
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To enable the management of NFS-Ganesha exports in Ceph Dashboard, we only
need to tell the Dashboard in which RADOS pool and namespace the
configuration objects are stored. Then, Ceph Dashboard can access the objects
by following the naming convention described above.

The Dashboard command to configure the location of the NFS-Ganesha
configuration objects is::

  $ ceph dashboard set-ganesha-clusters-rados-pool-namespace <pool_name>[/<namespace>]

After running the above command, Ceph Dashboard is able to find the NFS-Ganesha
configuration objects and we can start managing the exports through the Web UI.


Support for Multiple NFS-Ganesha Clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Ceph Dashboard also supports the management of NFS-Ganesha exports belonging
to different NFS-Ganesha clusters. An NFS-Ganesha cluster is a group of
NFS-Ganesha service daemons sharing the same exports. Different NFS-Ganesha
clusters are independent and don't share their exports configuration with
each other.

Each NFS-Ganesha cluster should store its configuration objects in a
different RADOS pool/namespace to isolate the configurations from each other.

To specify the locations of the configuration of each NFS-Ganesha cluster we
can use the same command as above but with a different value pattern::

  $ ceph dashboard set-ganesha-clusters-rados-pool-namespace <cluster_id>:<pool_name>[/<namespace>](,<cluster_id>:<pool_name>[/<namespace>])*

The ``<cluster_id>`` is an arbitrary string that should uniquely identify the
NFS-Ganesha cluster.
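
For instance, a concrete invocation for two hypothetical clusters named ``clusterA`` and ``clusterB``, each using its own pool and namespace (all names below are made up), could look like this:

```shell
$ ceph dashboard set-ganesha-clusters-rados-pool-namespace clusterA:nfs-a/ganesha,clusterB:nfs-b/ganesha
```
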

When configuring the Ceph Dashboard with multiple NFS-Ganesha clusters, the
Web UI will automatically allow you to choose the cluster an export belongs to.


Plug-ins
--------

Dashboard Plug-ins extend the functionality of the dashboard in a modular
and loosely coupled fashion.

.. include:: dashboard_plugins/feature_toggles.inc.rst