.. _multisite:

==========
Multi-Site
==========

.. versionadded:: Jewel

A single zone configuration typically consists of one zone group containing one
zone and one or more ``ceph-radosgw`` instances among which you may load-balance
gateway client requests. In a single zone configuration, multiple gateway
instances typically point to a single Ceph storage cluster. However, Kraken
supports several multi-site configuration options for the Ceph Object Gateway:

- **Multi-zone:** A more advanced configuration consists of one zone group and
  multiple zones, each zone with one or more ``ceph-radosgw`` instances. Each
  zone is backed by its own Ceph Storage Cluster. Multiple zones in a zone group
  provide disaster recovery for the zone group should one of the zones
  experience a significant failure. In Kraken, each zone is active and may
  receive write operations. In addition to disaster recovery, multiple active
  zones may also serve as a foundation for content delivery networks.

- **Multi-zone-group:** The Ceph Object Gateway can also support multiple zone
  groups (formerly called 'regions'), each zone group with one or more zones.
  Objects stored to zones in one zone group within the same realm as another
  zone group share a global object namespace, ensuring unique object IDs across
  zone groups and zones.

- **Multiple Realms:** In Kraken, the Ceph Object Gateway supports the notion
  of realms. A realm may contain a single zone group or multiple zone groups,
  and defines a globally unique namespace for its contents. Multiple realms
  provide the ability to support numerous configurations and namespaces.

Replicating object data between zones within a zone group looks something
like this:

.. image:: ../images/zone-sync2.png
   :align: center

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.

Functional Changes from Infernalis
==================================

In Kraken, you can configure each Ceph Object Gateway to
work in an active-active zone configuration, allowing for writes to
non-master zones.

The multi-site configuration is stored within a container called a
"realm." The realm stores zone groups, zones, and a time "period" with
multiple epochs for tracking changes to the configuration. In Kraken,
the ``ceph-radosgw`` daemons handle the synchronization,
eliminating the need for a separate synchronization agent. Additionally,
the new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of
"active-passive".

Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters,
preferably each with a distinct cluster name, and at least two Ceph Object
Gateway instances, one for each Ceph storage cluster.

This guide assumes at least two Ceph storage clusters are in geographically
separate locations; however, the configuration can work on the same
site. This guide also assumes two Ceph object gateway servers named
``rgw1`` and ``rgw2``.

.. important:: Running a single Ceph storage cluster is NOT recommended unless
   you have low latency WAN connections.

A multi-site configuration requires a master zone group and a master
zone. Additionally, each zone group requires a master zone. Zone groups
may have one or more secondary or non-master zones.

In this guide, the ``rgw1`` host will serve as the master zone of the
master zone group; and, the ``rgw2`` host will serve as the secondary zone
of the master zone group.

See `Pools`_ for instructions on creating and tuning pools for Ceph
Object Storage.

See `Sync Policy Config`_ for instructions on defining fine-grained bucket sync
policy rules.

.. _master-zone-label:

Configuring a Master Zone
=========================

All gateways in a multi-site configuration will retrieve their
configuration from a ``ceph-radosgw`` daemon on a host within the master
zone group and master zone. To configure your gateways in a multi-site
configuration, choose a ``ceph-radosgw`` instance to configure the
master zone group and master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zone groups and zones
and also serves to enforce a globally unique namespace within the realm.

Create a new realm for the multi-site configuration by opening a command
line interface on a host identified to serve in the master zone group
and zone. Then, execute the following:

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

If the cluster will have a single realm, specify the ``--default`` flag.
If ``--default`` is specified, ``radosgw-admin`` will use this realm by
default. If ``--default`` is not specified, adding zone groups and zones
requires specifying either the ``--rgw-realm`` flag or the
``--realm-id`` flag to identify the realm when adding zone groups and
zones.

After creating the realm, ``radosgw-admin`` will echo back the realm
configuration. For example:

::

    {
        "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
        "name": "movies",
        "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
        "epoch": 1
    }

.. note:: Ceph generates a unique ID for the realm, which allows the renaming
   of a realm if the need arises.
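
Renaming uses the ``realm rename`` subcommand followed by a period commit; the
new name ``films`` below is purely illustrative:

::

    # radosgw-admin realm rename --rgw-realm=movies --realm-new-name=films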

Create a Master Zone Group
--------------------------

A realm must have at least one zone group, which will serve as the
master zone group for the realm.

Create a new master zone group for the multi-site configuration by
opening a command line interface on a host identified to serve in the
master zone group and zone. Then, execute the following:

::

    # radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

If the realm will only have a single zone group, specify the
``--default`` flag. If ``--default`` is specified, ``radosgw-admin``
will use this zone group by default when adding new zones. If
``--default`` is not specified, adding zones will require either the
``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
zone group when adding or modifying zones.

After creating the master zone group, ``radosgw-admin`` will echo back
the zone group configuration. For example:

::

    {
        "id": "f1a233f5-c354-4107-b36c-df66126475a6",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3webzone": [],
        "master_zone": "",
        "zones": [],
        "placement_targets": [],
        "default_placement": "",
        "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
    }

Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a new master zone for the multi-site configuration by opening a
command line interface on a host identified to serve in the master zone
group and zone. Then, execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --master --default \
                                --endpoints={http://fqdn}[,{http://fqdn}]


For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                                --master --default \
                                --endpoints=http://rgw1:80


.. note:: The ``--access-key`` and ``--secret`` aren't specified. These
   settings will be added to the zone once the user is created in the
   next section.

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren't storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be deleted and unrecoverable.

Delete Default Zone Group and Zone
----------------------------------

Delete the ``default`` zone if it exists. Make sure to remove it from
the default zone group first.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zone rm --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zonegroup delete --rgw-zonegroup=default
    # radosgw-admin period update --commit

Finally, delete the ``default`` pools in your Ceph storage cluster if
they exist.

.. important:: The following step assumes a multi-site configuration using newly
   installed systems that aren't currently storing data. DO NOT DELETE
   the ``default`` zone group if you are already using it to store
   data.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Create a System User
--------------------

The ``ceph-radosgw`` daemons must authenticate before pulling realm and
period information. In the master zone, create a system user to
facilitate authentication between daemons.

::

    # radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

For example:

::

    # radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

Make a note of the ``access_key`` and ``secret_key``, as the secondary
zones will require them to authenticate with the master zone.
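
If the keys are misplaced, they can be displayed again at any time with
``user info``; for example:

::

    # radosgw-admin user info --uid="synchronization-user"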

Finally, add the system user to the master zone.

::

    # radosgw-admin zone modify --rgw-zone=us-east --access-key={access-key} --secret={secret}
    # radosgw-admin period update --commit

Update the Period
-----------------

After updating the master zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw1]
    host = rgw1
    rgw frontends = "civetweb port=80"
    rgw_zone=us-east

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

.. _secondary-zone-label:

Configure Secondary Zones
=========================

Zones within a zone group replicate all data to ensure that each zone
has the same data. When creating the secondary zone, execute all of the
following operations on a host identified to serve the secondary zone.

.. note:: To add a third zone, follow the same procedures as for adding the
   secondary zone. Use a different zone name.

.. important:: You must execute metadata operations, such as user creation, on a
   host within the master zone. The master zone and the secondary zone
   can receive bucket operations, but the secondary zone redirects
   bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.

Pull the Realm
--------------

Using the URL path, access key and secret of the master zone in the
master zone group, pull the realm configuration to the host. To pull a
non-default realm, specify the realm using the ``--rgw-realm`` or
``--realm-id`` configuration options.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

.. note:: Pulling the realm also retrieves the remote's current period
   configuration, and makes it the current period on this host as well.

If this realm is the default realm or the only realm, make the realm the
default realm.

::

    # radosgw-admin realm default --rgw-realm={realm-name}

Create a Secondary Zone
-----------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a secondary zone for the multi-site configuration by opening a
command line interface on a host identified to serve the secondary zone.
Specify the zone group name, the new zone name and an endpoint for the
zone. **DO NOT** use the ``--master`` or ``--default`` flags. In Kraken,
all zones run in an active-active configuration by
default; that is, a gateway client may write data to any zone and the
zone will replicate the data to all other zones within the zone group.
If the secondary zone should not accept write operations, specify the
``--read-only`` flag to create an active-passive configuration between
the master zone and the secondary zone. Additionally, provide the
``access_key`` and ``secret_key`` of the generated system user stored in
the master zone of the master zone group. Execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://{fqdn}:80 \
                                [--read-only]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren't storing data. **DO NOT DELETE** the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be lost and unrecoverable.

Delete the default zone if needed.

::

    # radosgw-admin zone rm --rgw-zone=default

Finally, delete the default pools in your Ceph storage cluster if
needed.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on the secondary zone hosts by adding
the ``rgw_zone`` configuration option and the name of the secondary zone
to the instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw2]
    host = rgw2
    rgw frontends = "civetweb port=80"
    rgw_zone=us-west

Update the Period
-----------------

After updating the secondary zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

Check Synchronization Status
----------------------------

Once the secondary zone is up and running, check the synchronization
status. Synchronization copies users and buckets created in the master
zone to the secondary zone.

::

    # radosgw-admin sync status

The output will provide the status of synchronization operations. For
example:

::

            realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
        zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
             zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
    metadata sync syncing
                  full sync: 0/64 shards
                  metadata is caught up with master
                  incremental sync: 64/64 shards
        data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize
   with the master zone to receive the result of the bucket operations.
   If the master zone is down, bucket operations executed on the
   secondary zone will fail, but object operations should succeed.
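
Synchronization progress can also be examined for an individual bucket; the
bucket name below is a placeholder:

::

    # radosgw-admin bucket sync status --bucket={bucket-name}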

Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with::

    $ radosgw-admin sync status
            realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
        zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
             zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
    metadata sync syncing
                  full sync: 0/64 shards
                  incremental sync: 64/64 shards
                  metadata is behind on 1 shards
                  oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
        data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source
                  source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source

Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing which zone is the metadata
   master. If a zone has not finished syncing metadata from the current master
   zone, it will be unable to serve any remaining entries when promoted to
   master and those changes will be lost. For this reason, waiting for a
   zone's ``radosgw-admin sync status`` to catch up on metadata sync before
   promoting it to master is recommended.

   Similarly, if changes to metadata are being processed by the current master
   zone while another zone is being promoted to master, those changes are
   likely to be lost. To avoid this, shutting down any ``radosgw`` instances
   on the previous master zone is recommended. After promoting another zone,
   its new period can be fetched with ``radosgw-admin period pull`` and the
   gateway(s) can be restarted.
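
The "wait until caught up" advice above can be sketched as a small shell
helper. This is an illustrative sketch, not part of the Ceph tooling; it simply
matches the ``metadata is caught up with master`` line shown in the sample
output::

    # Illustrative helper: succeeds when the status text on stdin reports
    # caught-up metadata.
    metadata_caught_up() {
        grep -q "metadata is caught up with master"
    }

    # Poll every 30 seconds before promoting the zone, e.g.:
    #   until radosgw-admin sync status | metadata_caught_up; do sleep 30; done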
551 | ||
552 | To promote a zone (for example, zone ``us-2`` in zonegroup ``us``) to metadata | |
553 | master, run the following commands on that zone:: | |
554 | ||
555 | $ radosgw-admin zone modify --rgw-zone=us-2 --master | |
556 | $ radosgw-admin zonegroup modify --rgw-zonegroup=us --master | |
557 | $ radosgw-admin period update --commit | |
558 | ||
559 | This will generate a new period, and the radosgw instance(s) in zone ``us-2`` | |
560 | will send this period to other zones. | |
561 | ||
562 | Failover and Disaster Recovery | |
563 | ============================== | |
564 | ||
565 | If the master zone should fail, failover to the secondary zone for | |
566 | disaster recovery. | |
567 | ||
568 | 1. Make the secondary zone the master and default zone. For example: | |
569 | ||
570 | :: | |
571 | ||
572 | # radosgw-admin zone modify --rgw-zone={zone-name} --master --default | |
573 | ||
574 | By default, Ceph Object Gateway will run in an active-active | |
575 | configuration. If the cluster was configured to run in an | |
576 | active-passive configuration, the secondary zone is a read-only zone. | |
577 | Remove the ``--read-only`` status to allow the zone to receive write | |
578 | operations. For example: | |
579 | ||
580 | :: | |
581 | ||
582 | # radosgw-admin zone modify --rgw-zone={zone-name} --master --default \ | |
494da23a | 583 | --read-only=false |
7c673cae FG |
584 | |
585 | 2. Update the period to make the changes take effect. | |
586 | ||
587 | :: | |
588 | ||
589 | # radosgw-admin period update --commit | |
590 | ||
591 | 3. Finally, restart the Ceph Object Gateway. | |
592 | ||
593 | :: | |
594 | ||
595 | # systemctl restart ceph-radosgw@rgw.`hostname -s` | |
596 | ||
597 | If the former master zone recovers, revert the operation. | |
598 | ||
494da23a TL |
599 | 1. From the recovered zone, pull the latest realm configuration |
600 | from the current master zone. | |
7c673cae FG |
601 | |
602 | :: | |
603 | ||
494da23a TL |
604 | # radosgw-admin realm pull --url={url-to-master-zone-gateway} \ |
605 | --access-key={access-key} --secret={secret} | |
7c673cae FG |
606 | |
607 | 2. Make the recovered zone the master and default zone. | |
608 | ||
609 | :: | |
610 | ||
611 | # radosgw-admin zone modify --rgw-zone={zone-name} --master --default | |
612 | ||
613 | 3. Update the period to make the changes take effect. | |
614 | ||
615 | :: | |
616 | ||
617 | # radosgw-admin period update --commit | |
618 | ||
619 | 4. Then, restart the Ceph Object Gateway in the recovered zone. | |
620 | ||
621 | :: | |
622 | ||
623 | # systemctl restart ceph-radosgw@rgw.`hostname -s` | |
624 | ||
625 | 5. If the secondary zone needs to be a read-only configuration, update | |
626 | the secondary zone. | |
627 | ||
628 | :: | |
629 | ||
630 | # radosgw-admin zone modify --rgw-zone={zone-name} --read-only | |
631 | ||
632 | 6. Update the period to make the changes take effect. | |
633 | ||
634 | :: | |
635 | ||
636 | # radosgw-admin period update --commit | |
637 | ||
638 | 7. Finally, restart the Ceph Object Gateway in the secondary zone. | |
639 | ||
640 | :: | |
641 | ||
642 | # systemctl restart ceph-radosgw@rgw.`hostname -s` | |
643 | ||
644 | Migrating a Single Site System to Multi-Site | |
645 | ============================================ | |
646 | ||
647 | To migrate from a single site system with a ``default`` zone group and | |
648 | zone to a multi site system, use the following steps: | |
649 | ||
650 | 1. Create a realm. Replace ``<name>`` with the realm name. | |
651 | ||
652 | :: | |
653 | ||
654 | # radosgw-admin realm create --rgw-realm=<name> --default | |
655 | ||
656 | 2. Rename the default zone and zonegroup. Replace ``<name>`` with the | |
657 | zonegroup or zone name. | |
658 | ||
659 | :: | |
660 | ||
661 | # radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name> | |
662 | # radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name> | |
663 | ||
664 | 3. Configure the master zonegroup. Replace ``<name>`` with the realm or | |
665 | zonegroup name. Replace ``<fqdn>`` with the fully qualified domain | |
666 | name(s) in the zonegroup. | |
667 | ||
668 | :: | |
669 | ||
670 | # radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default | |
671 | ||
672 | 4. Configure the master zone. Replace ``<name>`` with the realm, | |
673 | zonegroup or zone name. Replace ``<fqdn>`` with the fully qualified | |
674 | domain name(s) in the zonegroup. | |
675 | ||
676 | :: | |
677 | ||
678 | # radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \ | |
679 | --rgw-zone=<name> --endpoints http://<fqdn>:80 \ | |
680 | --access-key=<access-key> --secret=<secret-key> \ | |
681 | --master --default | |
682 | ||
683 | 5. Create a system user. Replace ``<user-id>`` with the username. | |
684 | Replace ``<display-name>`` with a display name. It may contain | |
685 | spaces. | |
686 | ||
687 | :: | |
688 | ||
689 | # radosgw-admin user create --uid=<user-id> --display-name="<display-name>"\ | |
690 | --access-key=<access-key> --secret=<secret-key> --system | |
691 | ||
692 | 6. Commit the updated configuration. | |
693 | ||
694 | :: | |
695 | ||
696 | # radosgw-admin period update --commit | |
697 | ||
698 | 7. Finally, restart the Ceph Object Gateway. | |
699 | ||
700 | :: | |
701 | ||
702 | # systemctl restart ceph-radosgw@rgw.`hostname -s` | |
703 | ||
704 | After completing this procedure, proceed to `Configure a Secondary | |
705 | Zone <#configure-secondary-zones>`__ to create a secondary zone | |
706 | in the master zone group. | |
707 | ||
708 | ||
709 | Multi-Site Configuration Reference | |
710 | ================================== | |
711 | ||
712 | The following sections provide additional details and command-line | |
713 | usage for realms, periods, zone groups and zones. | |
714 | ||
715 | Realms | |
716 | ------ | |
717 | ||
718 | A realm represents a globally unique namespace consisting of one or more | |
719 | zonegroups containing one or more zones, and zones containing buckets, | |
720 | which in turn contain objects. A realm enables the Ceph Object Gateway | |
721 | to support multiple namespaces and their configuration on the same | |
722 | hardware. | |
723 | ||
724 | A realm contains the notion of periods. Each period represents the state | |
725 | of the zone group and zone configuration in time. Each time you make a | |
726 | change to a zonegroup or zone, update the period and commit it. | |
727 | ||
728 | By default, the Ceph Object Gateway does not create a realm | |
729 | for backward compatibility with Infernalis and earlier releases. | |
730 | However, as a best practice, we recommend creating realms for new | |
731 | clusters. | |
732 | ||
733 | Create a Realm | |
734 | ~~~~~~~~~~~~~~ | |
735 | ||
736 | To create a realm, execute ``realm create`` and specify the realm name. | |
737 | If the realm is the default, specify ``--default``. | |
738 | ||
739 | :: | |
740 | ||
741 | # radosgw-admin realm create --rgw-realm={realm-name} [--default] | |
742 | ||
743 | For example: | |
744 | ||
745 | :: | |
746 | ||
747 | # radosgw-admin realm create --rgw-realm=movies --default | |
748 | ||
749 | By specifying ``--default``, the realm will be called implicitly with | |
750 | each ``radosgw-admin`` call unless ``--rgw-realm`` and the realm name | |
751 | are explicitly provided. | |
752 | ||
753 | Make a Realm the Default | |
754 | ~~~~~~~~~~~~~~~~~~~~~~~~ | |
755 | ||
756 | One realm in the list of realms should be the default realm. There may | |
757 | be only one default realm. If there is only one realm and it wasn’t | |
758 | specified as the default realm when it was created, make it the default | |
759 | realm. Alternatively, to change which realm is the default, execute: | |
760 | ||
761 | :: | |
762 | ||
763 | # radosgw-admin realm default --rgw-realm=movies | |
764 | ||
31f18b77 FG |
765 | .. note:: When the realm is default, the command line assumes |
766 | ``--rgw-realm=<realm-name>`` as an argument. | |
7c673cae FG |

Delete a Realm
~~~~~~~~~~~~~~

To delete a realm, execute ``realm delete`` and specify the realm name.

::

    # radosgw-admin realm delete --rgw-realm={realm-name}

For example:

::

    # radosgw-admin realm delete --rgw-realm=movies

Get a Realm
~~~~~~~~~~~

To get a realm, execute ``realm get`` and specify the realm name.

::

    # radosgw-admin realm get --rgw-realm=<name>

For example:

::

    # radosgw-admin realm get --rgw-realm=movies [> filename.json]

The CLI will echo a JSON object with the realm properties.

::

    {
        "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
        "name": "movies",
        "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
        "epoch": 1
    }

Use ``>`` and an output file name to output the JSON object to a file.
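
The JSON emitted by ``realm get`` is easy to consume programmatically. A
minimal sketch, using the sample object shown above:

```python
import json

# Sample output of `radosgw-admin realm get`, copied from above.
realm_json = """
{
    "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
    "name": "movies",
    "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
    "epoch": 1
}
"""

realm = json.loads(realm_json)
# `current_period` ties the realm to its committed configuration;
# `epoch` increments as the period is updated.
print(realm["name"], realm["current_period"], realm["epoch"])
```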

Set a Realm
~~~~~~~~~~~

To set a realm, execute ``realm set``, specify the realm name, and
``--infile=`` with an input file name.

::

    # radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>

For example:

::

    # radosgw-admin realm set --rgw-realm=movies --infile=filename.json

List Realms
~~~~~~~~~~~

To list realms, execute ``realm list``.

::

    # radosgw-admin realm list

List Realm Periods
~~~~~~~~~~~~~~~~~~

To list realm periods, execute ``realm list-periods``.

::

    # radosgw-admin realm list-periods

Pull a Realm
~~~~~~~~~~~~

To pull a realm from the node containing the master zone group and
master zone to a node containing a secondary zone group or zone, execute
``realm pull`` on the node that will receive the realm configuration.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

Rename a Realm
~~~~~~~~~~~~~~

A realm is not part of the period. Consequently, renaming a realm is
only applied locally, and will not get pulled with ``realm pull``. When
renaming a realm with multiple zones, run the command on each zone. To
rename a realm, execute the following:

::

    # radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>

.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. That
   changes the internal name only. Specifying ``--rgw-realm`` would
   still use the old realm name.

Zone Groups
-----------

The Ceph Object Gateway supports multi-site deployments and a global
namespace by using the notion of zone groups. Formerly called a region
in Infernalis, a zone group defines the geographic location of one or
more Ceph Object Gateway instances within one or more zones.

Configuring zone groups differs from typical configuration procedures,
because not all of the settings end up in a Ceph configuration file. You
can list zone groups, get a zone group configuration, and set a zone
group configuration.

Create a Zone Group
~~~~~~~~~~~~~~~~~~~

Creating a zone group consists of specifying the zone group name.
Creating a zone group assumes it will live in the default realm unless
``--rgw-realm=<realm-name>`` is specified. If the zonegroup is the
default zonegroup, specify the ``--default`` flag. If the zonegroup is
the master zonegroup, specify the ``--master`` flag. For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>] [--master] [--default]


.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
   an existing zone group’s settings.

Make a Zone Group the Default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One zonegroup in the list of zonegroups should be the default zonegroup.
There may be only one default zonegroup. If there is only one zonegroup
and it wasn’t specified as the default zonegroup when it was created,
make it the default zonegroup. Alternatively, to change which zonegroup
is the default, execute:

::

    # radosgw-admin zonegroup default --rgw-zonegroup=comedy

.. note:: When a zonegroup is the default, ``radosgw-admin`` assumes
   ``--rgw-zonegroup=<zonegroup-name>`` implicitly; there is no need to
   specify it on the command line.

Then, update the period:

::

    # radosgw-admin period update --commit

Add a Zone to a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~

To add a zone to a zonegroup, execute the following:

::

    # radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Remove a Zone from a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To remove a zone from a zonegroup, execute the following:

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone Group
~~~~~~~~~~~~~~~~~~~

To rename a zonegroup, execute the following:

::

    # radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone Group
~~~~~~~~~~~~~~~~~~~

To delete a zonegroup, execute the following:

::

    # radosgw-admin zonegroup delete --rgw-zonegroup=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

List Zone Groups
~~~~~~~~~~~~~~~~

A Ceph cluster contains a list of zone groups. To list the zone groups,
execute:

::

    # radosgw-admin zonegroup list

``radosgw-admin`` returns a JSON-formatted list of zone groups.

::

    {
        "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "zonegroups": [
            "us"
        ]
    }

Get a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

To list the details of each zone group, execute:

::

    # radosgw-admin zonegroup-map get

.. note:: If you receive a ``failed to read zonegroup map`` error, run
   ``radosgw-admin zonegroup-map update`` as ``root`` first.

Get a Zone Group
~~~~~~~~~~~~~~~~

To view the configuration of a zone group, execute:

::

    # radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]

The zone group configuration looks like this:

::

    {
        "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3website": [],
        "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
        "zones": [
            {
                "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                "name": "us-east",
                "endpoints": [
                    "http:\/\/rgw1"
                ],
                "log_meta": "true",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            },
            {
                "id": "d1024e59-7d28-49d1-8222-af101965a939",
                "name": "us-west",
                "endpoints": [
                    "http:\/\/rgw2:80"
                ],
                "log_meta": "false",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": []
            }
        ],
        "default_placement": "default-placement",
        "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
    }

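Note that ``master_zone`` holds a zone ID, not a zone name, so it has to be
resolved through the ``zones`` list. A sketch, with field names taken from
the sample output above (trimmed to the relevant fields):

```python
import json

# Field names and IDs as in the sample `zonegroup get` output above.
zonegroup = json.loads("""
{
    "name": "us",
    "is_master": "true",
    "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
    "zones": [
        {"id": "9248cab2-afe7-43d8-a661-a40bf316665e", "name": "us-east"},
        {"id": "d1024e59-7d28-49d1-8222-af101965a939", "name": "us-west"}
    ]
}
""")

# Map zone IDs to names, then resolve the master zone by its ID.
zones_by_id = {z["id"]: z["name"] for z in zonegroup["zones"]}
master_name = zones_by_id[zonegroup["master_zone"]]
print(master_name)
```
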
Set a Zone Group
~~~~~~~~~~~~~~~~

Defining a zone group consists of creating a JSON object, specifying at
least the required settings:

1. ``name``: The name of the zone group. Required.

2. ``api_name``: The API name for the zone group. Optional.

3. ``is_master``: Determines if the zone group is the master zone group.
   Required. **note:** You can only have one master zone group.

4. ``endpoints``: A list of all the endpoints in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Remember to escape the forward slashes (``\/``). You may also
   specify a port (``fqdn:port``) for each endpoint. Optional.

5. ``hostnames``: A list of all the hostnames in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Optional. The ``rgw dns name`` setting will automatically be
   included in this list. You should restart the gateway daemon(s) after
   changing this setting.

6. ``master_zone``: The master zone for the zone group. Optional. Uses
   the default zone if not specified. **note:** You can only have one
   master zone per zone group.

7. ``zones``: A list of all zones within the zone group. Each zone has a
   name (required), a list of endpoints (optional), and whether or not
   the gateway will log metadata and data operations (false by default).

8. ``placement_targets``: A list of placement targets (optional). Each
   placement target contains a name (required) for the placement target
   and a list of tags (optional) so that only users with the tag can use
   the placement target (i.e., the user’s ``placement_tags`` field in
   the user info).

9. ``default_placement``: The default placement target for the object
   index and object data. Set to ``default-placement`` by default. You
   may also set a per-user default placement in the user info for each
   user.

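The field list above can be assembled into a minimal ``zonegroup.json``. A
sketch under the assumption that you normally start from ``zonegroup get``
output and edit it; all values here are illustrative, and the ``\/``
escaping seen in the gateway's own output is optional JSON escaping (plain
slashes parse identically):

```python
import json

# Illustrative minimal zone group; numbers refer to the field list above.
zonegroup = {
    "name": "us",                          # 1. required
    "api_name": "us",                      # 2. optional
    "is_master": "true",                   # 3. only one master zone group
    "endpoints": ["http://rgw1:80"],       # 4. optional
    "hostnames": [],                       # 5. optional
    "master_zone": "us-east",              # 6. one master zone per group
    "zones": [                             # 7. name required per zone
        {"name": "us-east", "endpoints": ["http://rgw1"],
         "log_meta": "true", "log_data": "true"},
    ],
    "placement_targets": [                 # 8. optional
        {"name": "default-placement", "tags": []},
    ],
    "default_placement": "default-placement",  # 9.
}

with open("zonegroup.json", "w") as f:
    json.dump(zonegroup, f, indent=4)
```
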
To set a zone group, create a JSON object consisting of the required
fields, save the object to a file (e.g., ``zonegroup.json``); then,
execute the following command:

::

    # radosgw-admin zonegroup set --infile zonegroup.json

Where ``zonegroup.json`` is the JSON file you created.

.. important:: The ``default`` zone group ``is_master`` setting is ``true`` by
   default. If you create a new zone group and want to make it the
   master zone group, you must either set the ``default`` zone group
   ``is_master`` setting to ``false``, or delete the ``default`` zone
   group.

Finally, update the period:

::

    # radosgw-admin period update --commit

Set a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

Setting a zone group map consists of creating a JSON object consisting
of one or more zone groups, and setting the ``master_zonegroup`` for the
cluster. Each zone group in the zone group map consists of a key/value
pair, where the ``key`` setting is equivalent to the ``name`` setting
for an individual zone group configuration, and the ``val`` is a JSON
object consisting of an individual zone group configuration.

You may only have one zone group with ``is_master`` equal to ``true``,
and it must be specified as the ``master_zonegroup`` at the end of the
zone group map. The following JSON object is an example of a default
zone group map.

::

    {
        "zonegroups": [
            {
                "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                "val": {
                    "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                    "name": "us",
                    "api_name": "us",
                    "is_master": "true",
                    "endpoints": [
                        "http:\/\/rgw1:80"
                    ],
                    "hostnames": [],
                    "hostnames_s3website": [],
                    "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
                    "zones": [
                        {
                            "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                            "name": "us-east",
                            "endpoints": [
                                "http:\/\/rgw1"
                            ],
                            "log_meta": "true",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        },
                        {
                            "id": "d1024e59-7d28-49d1-8222-af101965a939",
                            "name": "us-west",
                            "endpoints": [
                                "http:\/\/rgw2:80"
                            ],
                            "log_meta": "false",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        }
                    ],
                    "placement_targets": [
                        {
                            "name": "default-placement",
                            "tags": []
                        }
                    ],
                    "default_placement": "default-placement",
                    "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
                }
            }
        ],
        "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    }

To set a zone group map, execute the following:

::

    # radosgw-admin zonegroup-map set --infile zonegroupmap.json

Where ``zonegroupmap.json`` is the JSON file you created. Ensure that
you have zones created for the ones specified in the zone group map.
Finally, update the period.

::

    # radosgw-admin period update --commit

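The two invariants stated above (exactly one zone group with ``is_master``
set to ``"true"``, and ``master_zonegroup`` referencing it) can be checked
before running ``zonegroup-map set``. A sketch; the helper name is ours,
and the sample map is trimmed to the relevant fields:

```python
def check_zonegroup_map(zgmap):
    """Verify the zone group map invariants described above."""
    masters = [e["key"] for e in zgmap["zonegroups"]
               if e["val"].get("is_master") == "true"]
    assert len(masters) == 1, "exactly one zone group must have is_master true"
    assert zgmap["master_zonegroup"] == masters[0], \
        "master_zonegroup must reference the master zone group's key"
    return masters[0]

# Trimmed version of the example zone group map above.
sample = {
    "zonegroups": [
        {"key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
         "val": {"name": "us", "is_master": "true"}},
    ],
    "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
}
print(check_zonegroup_map(sample))
```
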
Zones
-----

Ceph Object Gateway supports the notion of zones. A zone defines a
logical group consisting of one or more Ceph Object Gateway instances.

Configuring zones differs from typical configuration procedures, because
not all of the settings end up in a Ceph configuration file. You can
list zones, get a zone configuration and set a zone configuration.

Create a Zone
~~~~~~~~~~~~~

To create a zone, specify a zone name. If it is a master zone, specify
the ``--master`` option. Only one zone in a zone group may be a master
zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup``
option with the zonegroup name.

::

    # radosgw-admin zone create --rgw-zone=<name> \
                [--rgw-zonegroup=<zonegroup-name>] \
                [--endpoints=<endpoint>[,<endpoint>]] \
                [--master] [--default] \
                --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone
~~~~~~~~~~~~~

To delete a zone, first remove it from the zonegroup.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Next, delete the zone. Execute the following:

::

    # radosgw-admin zone rm --rgw-zone=<name>

Finally, update the period:

::

    # radosgw-admin period update --commit

.. important:: Do not delete a zone without removing it from a zone group first.
   Otherwise, updating the period will fail.

If the pools for the deleted zone will not be used anywhere else,
consider deleting the pools. Replace ``<del-zone>`` in the example below
with the deleted zone’s name.

.. important:: Only delete the pools with prepended zone names. Deleting the root
   pool, such as ``.rgw.root``, will remove all of the system’s
   configuration.

.. important:: Once the pools are deleted, all of the data within them is deleted
   in an unrecoverable manner. Only delete the pools if the pool
   contents are no longer needed.

::

    # ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.data.root <del-zone>.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.gc <del-zone>.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.users.uid <del-zone>.rgw.users.uid --yes-i-really-really-mean-it

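The deletion commands above follow one pattern per pool. A small helper can
generate the full list for a given zone name (a sketch; the pool suffixes
are the ones shown above and may differ in your deployment):

```python
# Pool suffixes used in the deletion example above; adjust to match
# the pools that actually exist for your zone.
SUFFIXES = ["rgw.control", "rgw.data.root", "rgw.gc", "rgw.log", "rgw.users.uid"]

def pool_rm_commands(zone):
    """Build `ceph osd pool rm` commands for a deleted zone's pools."""
    return [
        f"ceph osd pool rm {zone}.{s} {zone}.{s} --yes-i-really-really-mean-it"
        for s in SUFFIXES
    ]

for cmd in pool_rm_commands("us-west"):
    print(cmd)
```
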
Modify a Zone
~~~~~~~~~~~~~

To modify a zone, specify the zone name and the parameters you wish to
modify.

::

    # radosgw-admin zone modify [options]

Where ``[options]``:

- ``--access-key=<key>``
- ``--secret/--secret-key=<key>``
- ``--master``
- ``--default``
- ``--endpoints=<list>``

Then, update the period:

::

    # radosgw-admin period update --commit

List Zones
~~~~~~~~~~

As ``root``, to list the zones in a cluster, execute:

::

    # radosgw-admin zone list

Get a Zone
~~~~~~~~~~

As ``root``, to get the configuration of a zone, execute:

::

    # radosgw-admin zone get [--rgw-zone=<zone>]

The ``default`` zone looks like this:

::

    { "domain_root": ".rgw",
      "control_pool": ".rgw.control",
      "gc_pool": ".rgw.gc",
      "log_pool": ".log",
      "intent_log_pool": ".intent-log",
      "usage_log_pool": ".usage",
      "user_keys_pool": ".users",
      "user_email_pool": ".users.email",
      "user_swift_pool": ".users.swift",
      "user_uid_pool": ".users.uid",
      "system_key": { "access_key": "", "secret_key": ""},
      "placement_pools": [
        { "key": "default-placement",
          "val": { "index_pool": ".rgw.buckets.index",
                   "data_pool": ".rgw.buckets"}
        }
      ]
    }

Set a Zone
~~~~~~~~~~

Configuring a zone involves specifying a series of Ceph Object Gateway
pools. For consistency, we recommend using a pool prefix that is the
same as the zone name. See
`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
for details of configuring pools.

To set a zone, create a JSON object consisting of the pools, save the
object to a file (e.g., ``zone.json``); then, execute the following
command, replacing ``{zone-name}`` with the name of the zone:

::

    # radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json

Where ``zone.json`` is the JSON file you created.
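
Following the recommendation above to prefix pool names with the zone name,
a ``zone.json`` can be sketched from the default zone layout shown earlier.
The function name and zone name below are illustrative, and the field list
is trimmed:

```python
import json

def zone_config(zone):
    """Sketch a zone's pool layout, prefixing each pool with the zone
    name as recommended above. Pool names are illustrative."""
    def p(suffix):
        return f"{zone}{suffix}"
    return {
        "domain_root": p(".rgw"),
        "control_pool": p(".rgw.control"),
        "gc_pool": p(".rgw.gc"),
        "log_pool": p(".log"),
        "usage_log_pool": p(".usage"),
        "user_uid_pool": p(".users.uid"),
        "system_key": {"access_key": "", "secret_key": ""},
        "placement_pools": [
            {"key": "default-placement",
             "val": {"index_pool": p(".rgw.buckets.index"),
                     "data_pool": p(".rgw.buckets")}}
        ],
    }

with open("zone.json", "w") as f:
    json.dump(zone_config("us-west"), f, indent=4)
```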

Then, as ``root``, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone
~~~~~~~~~~~~~

To rename a zone, specify the zone name and the new zone name.

::

    # radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Zone Group and Zone Settings
----------------------------

When configuring a default zone group and zone, the pool name includes
the zone name. For example:

- ``default.rgw.control``

To change the defaults, include the following settings in your Ceph
configuration file under each ``[client.radosgw.{instance-name}]``
instance.

+-------------------------------------+-----------------------------------+---------+-----------------------+
| Name                                | Description                       | Type    | Default               |
+=====================================+===================================+=========+=======================+
| ``rgw_zone``                        | The name of the zone for the      | String  | None                  |
|                                     | gateway instance.                 |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup``                   | The name of the zone group for    | String  | None                  |
|                                     | the gateway instance.             |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup_root_pool``         | The root pool for the zone group. | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zone_root_pool``              | The root pool for the zone.       | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_default_zone_group_info_oid`` | The OID for storing the default   | String  | ``default.zonegroup`` |
|                                     | zone group. We do not recommend   |         |                       |
|                                     | changing this setting.            |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
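
For example, a gateway instance pinned to a ``us-east`` zone in a ``us``
zone group might carry the following fragment (a sketch; the instance,
zone, and zone group names are illustrative):

```ini
[client.radosgw.us-east-1]
rgw_zone = us-east
rgw_zonegroup = us
```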


.. _`Pools`: ../pools
.. _`Sync Policy Config`: ../multisite-sync-policy