==========================================
Configuring the iSCSI Target using Ansible
==========================================

The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client
node. The Ceph iSCSI gateway can be a standalone node or be colocated on
a Ceph Object Store Disk (OSD) node. Completing the following steps will
install and configure the Ceph iSCSI gateway for basic operation.

**Requirements:**

- A running Ceph Luminous (12.2.x) cluster or newer
- RHEL/CentOS 7.5; Linux kernel v4.17 or newer; or the `Ceph iSCSI client test kernel <https://shaman.ceph.com/repos/kernel/ceph-iscsi-test>`_

- The ``ceph-iscsi-config`` package installed on all the iSCSI gateway nodes

**Installing:**

#. On the Ansible installer node, which could be either the administration node
   or a dedicated deployment node, perform the following steps:

   #. As ``root``, install the ``ceph-ansible`` package:

      ::

          # yum install ceph-ansible

   #. Add an entry in the ``/etc/ansible/hosts`` file for the gateway group:

      ::

          [ceph-iscsi-gw]
          ceph-igw-1
          ceph-igw-2

.. note::
    If co-locating the iSCSI gateway with an OSD node, then add the OSD node to the
    ``[ceph-iscsi-gw]`` section.

**Configuring:**

The ``ceph-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
directory called ``ceph-iscsi-gw.sample``. Create a copy of this sample file named
``ceph-iscsi-gw.yml``. Review the following Ansible variables and descriptions,
and update accordingly.

+--------------------------------------+--------------------------------------+
| Variable                             | Meaning/Purpose                      |
+======================================+======================================+
| ``seed_monitor``                     | Each gateway needs access to the     |
|                                      | Ceph cluster for rados and rbd       |
|                                      | calls. This means the iSCSI gateway  |
|                                      | must have an appropriate             |
|                                      | ``/etc/ceph/`` directory defined.    |
|                                      | The ``seed_monitor`` host is used to |
|                                      | populate the iSCSI gateway's         |
|                                      | ``/etc/ceph/`` directory.            |
+--------------------------------------+--------------------------------------+
| ``cluster_name``                     | Define a custom storage cluster      |
|                                      | name.                                |
+--------------------------------------+--------------------------------------+
| ``gateway_keyring``                  | Define a custom keyring name.        |
+--------------------------------------+--------------------------------------+
| ``deploy_settings``                  | If set to ``true``, then deploy the  |
|                                      | settings when the playbook is run.   |
+--------------------------------------+--------------------------------------+
| ``perform_system_checks``            | This is a boolean value that checks  |
|                                      | for multipath and lvm configuration  |
|                                      | settings on each gateway. It must be |
|                                      | set to true for at least the first   |
|                                      | run to ensure multipathd and lvm are |
|                                      | configured properly.                 |
+--------------------------------------+--------------------------------------+
| ``gateway_iqn``                      | This is the iSCSI IQN that all the   |
|                                      | gateways will expose to clients.     |
|                                      | This means each client will see the  |
|                                      | gateway group as a single subsystem. |
+--------------------------------------+--------------------------------------+
| ``gateway_ip_list``                  | The IP list defines the IP addresses |
|                                      | that will be used on the front end   |
|                                      | network for iSCSI traffic. This IP   |
|                                      | will be bound to the active target   |
|                                      | portal group on each node, and is    |
|                                      | the access point for iSCSI traffic.  |
|                                      | Each IP should correspond to an IP   |
|                                      | available on the hosts defined in    |
|                                      | the ``ceph-iscsi-gw`` host group in  |
|                                      | ``/etc/ansible/hosts``.              |
+--------------------------------------+--------------------------------------+
| ``rbd_devices``                      | This section defines the RBD images  |
|                                      | that will be controlled and managed  |
|                                      | within the iSCSI gateway             |
|                                      | configuration. Parameters like       |
|                                      | ``pool`` and ``image`` are self      |
|                                      | explanatory. Here are the other      |
|                                      | parameters: ``size`` = This defines  |
|                                      | the size of the RBD. You may         |
|                                      | increase the size later, by simply   |
|                                      | changing this value, but shrinking   |
|                                      | the size of an RBD is not supported  |
|                                      | and is ignored. ``host`` = This is   |
|                                      | the iSCSI gateway host name that     |
|                                      | will be responsible for the rbd      |
|                                      | allocation/resize. Every defined     |
|                                      | ``rbd_device`` entry must have a     |
|                                      | host assigned. ``state`` = This is   |
|                                      | typical Ansible syntax for whether   |
|                                      | the resource should be defined or    |
|                                      | removed. A request with a state of   |
|                                      | absent will first be checked to      |
|                                      | ensure the RBD is not mapped to any  |
|                                      | client. If the RBD is unallocated,   |
|                                      | it will be removed from the iSCSI    |
|                                      | gateway and deleted from the         |
|                                      | configuration.                       |
+--------------------------------------+--------------------------------------+
| ``client_connections``               | This section defines the iSCSI       |
|                                      | client connection details together   |
|                                      | with the LUN (RBD image) masking.    |
|                                      | Currently only CHAP is supported as  |
|                                      | an authentication mechanism. Each    |
|                                      | connection defines an ``image_list`` |
|                                      | which is a comma separated list of   |
|                                      | the form                             |
|                                      | ``pool.rbd_image[,pool.rbd_image]``. |
|                                      | RBD images can be added and removed  |
|                                      | from this list, to change the client |
|                                      | masking. Note that there are no      |
|                                      | checks done to limit RBD sharing     |
|                                      | across client connections.           |
+--------------------------------------+--------------------------------------+
133 | ||
.. note::
    When using the ``gateway_iqn`` variable for Red Hat Enterprise Linux
    clients, installing the ``iscsi-initiator-utils`` package is required for
    retrieving the gateway's IQN name. The iSCSI initiator name is located in the
    ``/etc/iscsi/initiatorname.iscsi`` file.

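For reference, that file contains a single ``InitiatorName`` line; the IQN
shown here is an example value only::

    # cat /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.1994-05.com.redhat:rh7-client
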
**Deploying:**

On the Ansible installer node, perform the following steps.

#. As ``root``, execute the Ansible playbook:

   ::

       # cd /usr/share/ceph-ansible
       # ansible-playbook ceph-iscsi-gw.yml

   .. note::
       The Ansible playbook will handle RPM dependencies, RBD creation
       and Linux IO configuration.

#. Verify the configuration from an iSCSI gateway node:

   ::

       # gwcli ls

   .. note::
       For more information on using the ``gwcli`` command to install and configure
       a Ceph iSCSI gateway, see the `Configuring the iSCSI Target using the Command Line Interface`_
       section.

.. important::
    Attempting to use the ``targetcli`` tool to change the configuration will
    result in issues, such as ALUA misconfiguration and path failover
    problems. There is the potential to corrupt data, to have mismatched
    configuration across iSCSI gateways, and to have mismatched WWN information,
    which will lead to client multipath problems.

**Service Management:**

The ``ceph-iscsi-config`` package installs the configuration management
logic and a Systemd service called ``rbd-target-gw``. When the Systemd
service is enabled, the ``rbd-target-gw`` service will start at boot time
and will restore the Linux IO state. The Ansible playbook disables the
target service during the deployment. Below are the outcomes of
interacting with the ``rbd-target-gw`` Systemd service.

::

    # systemctl <start|stop|restart|reload> rbd-target-gw

- ``reload``

  A reload request will force ``rbd-target-gw`` to reread the
  configuration and apply it to the current running environment. This
  is normally not required, since changes are deployed in parallel from
  Ansible to all iSCSI gateway nodes.

- ``stop``

  A stop request will close the gateway's portal interfaces, dropping
  connections to clients, and wipe the current LIO configuration from
  the kernel. This returns the iSCSI gateway to a clean state. When
  clients are disconnected, active I/O is rescheduled to the other
  iSCSI gateways by the client-side multipathing layer.

**Administration:**

Within the ``/usr/share/ceph-ansible/group_vars/ceph-iscsi-gw`` file
there are a number of operational workflows that the Ansible playbook
supports.

.. warning::
    Before removing RBD images from the iSCSI gateway configuration,
    follow the standard procedures for removing a storage device from
    the operating system.

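As a sketch of that standard procedure on a Linux initiator, assuming a
multipath device named ``mpatha`` backed by path devices ``sdb`` and ``sdc``
(all placeholder names), and that the device is already unmounted and out of
use, the client-side cleanup might look like this::

    # multipath -ll                           # identify the map and its path devices
    # multipath -f mpatha                     # flush the multipath device map
    # echo 1 > /sys/block/sdb/device/delete   # remove each underlying SCSI path device
    # echo 1 > /sys/block/sdc/device/delete
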
+--------------------------------------+--------------------------------------+
| I want to…                           | Update the ``ceph-iscsi-gw`` file    |
|                                      | by…                                  |
+======================================+======================================+
| Add more RBD images                  | Adding another entry to the          |
|                                      | ``rbd_devices`` section with the new |
|                                      | image.                               |
+--------------------------------------+--------------------------------------+
| Resize an existing RBD image         | Updating the size parameter within   |
|                                      | the ``rbd_devices`` section. Client  |
|                                      | side actions are required to pick up |
|                                      | the new size of the disk.            |
+--------------------------------------+--------------------------------------+
| Add a client                         | Adding an entry to the               |
|                                      | ``client_connections`` section.      |
+--------------------------------------+--------------------------------------+
| Add another RBD to a client          | Adding the relevant RBD              |
|                                      | ``pool.image`` name to the           |
|                                      | ``image_list`` variable for the      |
|                                      | client.                              |
+--------------------------------------+--------------------------------------+
| Remove an RBD from a client          | Removing the RBD ``pool.image`` name |
|                                      | from the client's ``image_list``     |
|                                      | variable.                            |
+--------------------------------------+--------------------------------------+
| Remove an RBD from the system        | Changing the RBD entry state         |
|                                      | variable to ``absent``. The RBD      |
|                                      | image must be unallocated from the   |
|                                      | operating system first for this to   |
|                                      | succeed.                             |
+--------------------------------------+--------------------------------------+
| Change the client's CHAP credentials | Updating the relevant CHAP details   |
|                                      | in ``client_connections``. This will |
|                                      | need to be coordinated with the      |
|                                      | clients. For example, the client     |
|                                      | issues an iSCSI logout, the          |
|                                      | credentials are changed by the       |
|                                      | Ansible playbook, the credentials    |
|                                      | are changed at the client, then the  |
|                                      | client performs an iSCSI login.      |
+--------------------------------------+--------------------------------------+
| Remove a client                      | Updating the relevant                |
|                                      | ``client_connections`` item with a   |
|                                      | state of ``absent``. Once the        |
|                                      | Ansible playbook is run, the client  |
|                                      | will be purged from the system, but  |
|                                      | the disks will remain defined to     |
|                                      | Linux IO for potential reuse.        |
+--------------------------------------+--------------------------------------+

Once a change has been made, rerun the Ansible playbook to apply the
change across the iSCSI gateway nodes.

::

    # ansible-playbook ceph-iscsi-gw.yml

**Removing the Configuration:**

The ``ceph-ansible`` package provides an Ansible playbook to
remove the iSCSI gateway configuration and related RBD images. The
Ansible playbook is ``/usr/share/ceph-ansible/purge_gateways.yml``. When
this Ansible playbook is run, it prompts for the type of purge to
perform:

*lio* :

In this mode the LIO configuration is purged on all iSCSI gateways that
are defined. Disks that were created are left untouched within the Ceph
storage cluster.

*all* :

When ``all`` is chosen, the LIO configuration is removed together with
**all** RBD images that were defined within the iSCSI gateway
environment; other unrelated RBD images will not be removed. Ensure the
correct mode is chosen, because this operation will delete data.

.. warning::
    A purge operation is a destructive action against your iSCSI gateway
    environment.

.. warning::
    A purge operation will fail if RBD images have snapshots or clones
    and are exported through the Ceph iSCSI gateway.

::

    [root@rh7-iscsi-client ceph-ansible]# ansible-playbook purge_gateways.yml
    Which configuration elements should be purged? (all, lio or abort) [abort]: all


    PLAY [Confirm removal of the iSCSI gateway configuration] *********************


    GATHERING FACTS ***************************************************************
    ok: [localhost]


    TASK: [Exit playbook if user aborted the purge] *******************************
    skipping: [localhost]


    TASK: [set_fact ] *************************************************************
    ok: [localhost]


    PLAY [Removing the gateway configuration] *************************************


    GATHERING FACTS ***************************************************************
    ok: [ceph-igw-1]
    ok: [ceph-igw-2]


    TASK: [igw_purge | purging the gateway configuration] *************************
    changed: [ceph-igw-1]
    changed: [ceph-igw-2]


    TASK: [igw_purge | deleting configured rbd devices] ***************************
    changed: [ceph-igw-1]
    changed: [ceph-igw-2]


    PLAY RECAP ********************************************************************
    ceph-igw-1    : ok=3    changed=2    unreachable=0    failed=0
    ceph-igw-2    : ok=3    changed=2    unreachable=0    failed=0
    localhost     : ok=2    changed=0    unreachable=0    failed=0


.. _Configuring the iSCSI Target using the Command Line Interface: ../iscsi-target-cli