This module is a :ref:`Ceph orchestrator <orchestrator-modules>` module that uses `Ansible Runner Service <https://github.com/pcuzner/ansible-runner-service>`_ (a RESTful API server) to execute Ansible playbooks that carry out the supported operations.
Currently, these operations are:
- Get an inventory of the Ceph cluster nodes and all the storage devices present in each node
Enable the module::

    # ceph mgr module enable ansible
Disable the module::

    # ceph mgr module disable ansible
Enable the Ansible orchestrator module and use it with the :ref:`CLI <orchestrator-cli-module>`::

    ceph mgr module enable ansible
    ceph orchestrator set backend ansible
Configuration must be set after the module is enabled for the first time.

This can be done on any monitor node via the configuration key facility, at
cluster-wide level (so the settings apply to all manager instances), as follows::
    # ceph config set mgr mgr/ansible/server_addr <ip_address/server_name>
    # ceph config set mgr mgr/ansible/server_port <port>
    # ceph config set mgr mgr/ansible/username <username>
    # ceph config set mgr mgr/ansible/password <password>
    # ceph config set mgr mgr/ansible/verify_server <verify_server_value>
Where:

* <ip_address/server_name>: The IP address or hostname of the server where the Ansible Runner Service is available.
* <port>: The port number on which the Ansible Runner Service is listening.
* <username>: The username of an authorized user of the Ansible Runner Service.
* <password>: The password of the authorized user.
* <verify_server_value>: Either a boolean, in which case it controls whether the server's TLS certificate is verified, or a string, in which case it must be a path to a CA bundle to use in the verification. Defaults to ``True``.
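As a minimal sketch, a complete configuration might look like the following. Every value here is an illustrative placeholder (hostname, port, credentials and CA bundle path are assumptions for an example deployment, not defaults):

```shell
# Illustrative placeholder values only; substitute your own
# Ansible Runner Service host, port, credentials and CA bundle.
ceph config set mgr mgr/ansible/server_addr ars.example.com
ceph config set mgr mgr/ansible/server_port 5001
ceph config set mgr mgr/ansible/username admin
ceph config set mgr mgr/ansible/password secret
ceph config set mgr mgr/ansible/verify_server /etc/pki/tls/certs/ca-bundle.crt
```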
Any incident with this orchestrator module can be debugged using the Ceph manager logs.
Set the right log level in order to debug properly. Remember that the Python log levels ``debug``, ``info``, ``warn`` and ``err`` are mapped to the Ceph severities 20, 4, 1 and 0 respectively.
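That level mapping can be written down as a small sketch (the dictionary and helper below are purely illustrative and not part of the module's code):

```python
import logging

# Python log levels mapped to Ceph severities, per the text above.
# Illustrative only; not part of the ansible orchestrator module.
PY_TO_CEPH_SEVERITY = {
    logging.DEBUG: 20,
    logging.INFO: 4,
    logging.WARNING: 1,
    logging.ERROR: 0,
}

def ceph_severity(py_level: int) -> int:
    """Return the Ceph severity corresponding to a Python log level."""
    return PY_TO_CEPH_SEVERITY[py_level]
```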
Use the "active" manager node (the ``ceph -s`` command on any monitor node gives you this information):
* Check current debug level::

    [root@mgr0 ~]# ceph daemon mgr.mgr0 config show | grep debug_mgr
* Change the log level to "debug"::

    [root@mgr0 ~]# ceph daemon mgr.mgr0 config set debug_mgr 20/5
* Restore "info" log level::

    [root@mgr0 ~]# ceph daemon mgr.mgr0 config set debug_mgr 1/5
Get the list of storage devices installed on all the cluster nodes. The output format is::

    [host:
        device_name (type_of_device, size_in_bytes)]
Example::

    [root@mon0 ~]# ceph orchestrator device ls
    <node1>:
        vda (hdd, 44023414784b)
        sda (hdd, 53687091200b)
        sdb (hdd, 53687091200b)
        sdc (hdd, 53687091200b)
    <node2>:
        vda (hdd, 44023414784b)
    <node3>:
        vda (hdd, 44023414784b)
    <node4>:
        vda (hdd, 44023414784b)
        sda (hdd, 53687091200b)
        sdb (hdd, 53687091200b)
        sdc (hdd, 53687091200b)
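A consumer of this output could extract the device fields with a small parser such as the sketch below. The regular expression is an assumption inferred from the example output above; it is not an interface the module guarantees:

```python
import re

# Matches device lines such as "vda (hdd, 44023414784b)".
# The pattern is inferred from the example output above (assumption).
DEVICE_RE = re.compile(r"^\s*(\w+)\s+\((\w+)\s*,\s*(\d+)b\)\s*$")

def parse_device_line(line):
    """Return (device_name, device_type, size_in_bytes) for a device
    line, or None for anything else (e.g. a host header line)."""
    m = DEVICE_RE.match(line)
    if not m:
        return None
    name, dev_type, size = m.groups()
    return name, dev_type, int(size)
```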