.. _multisite:

==========
Multi-Site
==========

Single-zone Configurations and Multi-site Configurations
========================================================

Single-zone Configurations
--------------------------

A single-zone configuration typically consists of two things:

#. One "zonegroup", which contains one zone.
#. One or more `ceph-radosgw` instances that have `ceph-radosgw` client
   requests load-balanced between them.

In a typical single-zone configuration, there are multiple `ceph-radosgw`
instances that make use of a single Ceph storage cluster.

Varieties of Multi-site Configuration
-------------------------------------

.. versionadded:: Jewel

Beginning with the Kraken release, Ceph supports several multi-site
configurations for the Ceph Object Gateway:

- **Multi-zone:** A more advanced topology, the "multi-zone" configuration, is
  possible. A multi-zone configuration consists of one zonegroup and multiple
  zones, with each zone consisting of one or more `ceph-radosgw` instances.
  **Each zone is backed by its own Ceph Storage Cluster.**

  The presence of multiple zones in a given zonegroup provides disaster
  recovery for that zonegroup in the event that one of the zones experiences a
  significant failure. Beginning with the Kraken release, each zone is active
  and can receive write operations. A multi-zone configuration that contains
  multiple active zones enhances disaster recovery and can also be used as a
  foundation for content delivery networks.

- **Multi-zonegroups:** Ceph Object Gateway supports multiple zonegroups
  (which were formerly called "regions"). Each zonegroup contains one or more
  zones. If two zones are in the same zonegroup, and if that zonegroup is in
  the same realm as a second zonegroup, then the objects stored in the two
  zones share a global object namespace. This global object namespace ensures
  unique object IDs across zonegroups and zones.

- **Multiple Realms:** Beginning with the Kraken release, the Ceph Object
  Gateway supports "realms", which are containers for zonegroups. Realms make
  it possible to set policies that apply to multiple zonegroups. Realms have a
  globally unique namespace and can contain either a single zonegroup or
  multiple zonegroups. If you choose to make use of multiple realms, you can
  define multiple namespaces and multiple configurations (this means that each
  realm can have a configuration that is distinct from the configuration of
  other realms).

Diagram - Replication of Object Data Between Zones
--------------------------------------------------

The replication of object data between zones within a zonegroup looks
something like this:

.. image:: ../images/zone-sync.svg
   :align: center

At the top of this diagram, we see two applications (also known as "clients").
The application on the right is both writing data to and reading data from the
Ceph Cluster, by means of the RADOS Gateway (RGW). The application on the left
is only *reading* data from the Ceph Cluster, by means of an instance of RADOS
Gateway (RGW). In both cases (read-and-write and read-only), the transmission
of data is handled RESTfully.

In the middle of this diagram, we see two zones, each of which contains an
instance of RADOS Gateway (RGW). These instances of RGW are handling the
movement of data from the applications to the zonegroup. The arrow from the
master zone (US-EAST) to the secondary zone (US-WEST) represents an act of
data synchronization.

At the bottom of this diagram, we see the data distributed into the Ceph
Storage Cluster.

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.

Functional Changes from Infernalis
==================================

Beginning with Kraken, each Ceph Object Gateway can be configured to work in
an active-active zone mode. This makes it possible to write to non-master
zones.

The multi-site configuration is stored within a container called a "realm".
The realm stores zonegroups, zones, and a time "period" with multiple epochs,
which are used to track changes to the configuration.

Beginning with Kraken, the ``ceph-radosgw`` daemons handle the synchronization
of data across zones, which eliminates the need for a separate synchronization
agent. This new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of with an
"active-passive" configuration.

Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters. The
multi-site configuration must have at least two Ceph object gateway instances
(one for each Ceph storage cluster).

This guide assumes that at least two Ceph storage clusters are in
geographically separate locations; however, the configuration can work on the
same site. This guide also assumes two Ceph object gateway servers named
``rgw1`` and ``rgw2``.

.. important:: Running a single geographically-distributed Ceph storage
   cluster is NOT recommended unless you have low latency WAN connections.

A multi-site configuration requires a master zonegroup and a master zone. Each
zonegroup requires a master zone. Zonegroups may have one or more secondary
or non-master zones.

In this guide, the ``rgw1`` host will serve as the master zone of the master
zonegroup, and the ``rgw2`` host will serve as the secondary zone of the
master zonegroup.

See `Pools`_ for instructions on creating and tuning pools for Ceph Object
Storage.

See `Sync Policy Config`_ for instructions on defining fine-grained bucket
sync policy rules.

.. _master-zone-label:

Configuring a Master Zone
=========================

All gateways in a multi-site configuration retrieve their configurations from
a ``ceph-radosgw`` daemon that is on a host within both the master zonegroup
and the master zone. To configure your gateways in a multi-site configuration,
choose a ``ceph-radosgw`` instance to configure the master zonegroup and
master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zonegroups and zones. The
realm enforces a globally unique namespace within itself.

#. Create a new realm for the multi-site configuration by opening a command
   line interface on a host that will serve in the master zonegroup and zone.
   Then run the following command:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm={realm-name} [--default]

   For example:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm=movies --default

   .. note:: If you intend the cluster to have a single realm, specify the
      ``--default`` flag.

   If ``--default`` is specified, ``radosgw-admin`` uses this realm by
   default.

   If ``--default`` is not specified, you must specify either the
   ``--rgw-realm`` flag or the ``--realm-id`` flag to identify the realm when
   adding zonegroups and zones.

#. After the realm has been created, ``radosgw-admin`` echoes back the realm
   configuration. For example:

   ::

      {
          "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
          "name": "movies",
          "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
          "epoch": 1
      }

.. note:: Ceph generates a unique ID for the realm, which makes it possible to
   rename the realm if the need arises.
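Because the realm is identified by this stable ID rather than by its name, a later rename is safe. A hedged sketch (``movies`` is the example realm above; ``films`` is a hypothetical new name; in a multi-site deployment the rename may need to be repeated on each cluster, since the realm name is stored locally rather than propagated through the period):

.. prompt:: bash #

   radosgw-admin realm rename --rgw-realm=movies --realm-new-name=films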

Create a Master Zonegroup
-------------------------

A realm must have at least one zonegroup, which serves as the master zonegroup
for the realm.

#. To create a new master zonegroup for the multi-site configuration, open a
   command-line interface on a host in the master zonegroup and zone. Then
   run the following command:

   .. prompt:: bash #

      radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

   For example:

   .. prompt:: bash #

      radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

   .. note:: If the realm will have only a single zonegroup, specify the
      ``--default`` flag.

   If ``--default`` is specified, ``radosgw-admin`` uses this zonegroup by
   default when adding new zones.

   If ``--default`` is not specified, you must use either the
   ``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
   zonegroup when adding or modifying zones.

#. After creating the master zonegroup, ``radosgw-admin`` echoes back the
   zonegroup configuration. For example:

   ::

      {
          "id": "f1a233f5-c354-4107-b36c-df66126475a6",
          "name": "us",
          "api_name": "us",
          "is_master": "true",
          "endpoints": [
              "http:\/\/rgw1:80"
          ],
          "hostnames": [],
          "hostnames_s3website": [],
          "master_zone": "",
          "zones": [],
          "placement_targets": [],
          "default_placement": "",
          "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
      }

Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will
   be within the zone.

Create a new master zone for the multi-site configuration by opening a command
line interface on a host that serves in the master zonegroup and zone. Then
run the following command:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                             --rgw-zone={zone-name} \
                             --master --default \
                             --endpoints={http://fqdn}[,{http://fqdn}]

For example:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                             --master --default \
                             --endpoints=http://rgw1:80

.. note:: The ``--access-key`` and ``--secret`` aren't specified. These
   settings will be added to the zone once the user is created in the
   next section.

.. important:: The following steps assume a multi-site configuration that uses
   newly installed systems that aren't storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using the zone to store
   data, or the data will be deleted and unrecoverable.

Delete Default Zonegroup and Zone
---------------------------------

#. Delete the ``default`` zone if it exists. Remove it from the default
   zonegroup first.

   .. prompt:: bash #

      radosgw-admin zonegroup delete --rgw-zonegroup=default --rgw-zone=default
      radosgw-admin period update --commit
      radosgw-admin zone delete --rgw-zone=default
      radosgw-admin period update --commit
      radosgw-admin zonegroup delete --rgw-zonegroup=default
      radosgw-admin period update --commit

#. Delete the ``default`` pools in your Ceph storage cluster if they exist.

   .. important:: The following step assumes a multi-site configuration that
      uses newly installed systems that aren't currently storing data. DO NOT
      DELETE the ``default`` zonegroup if you are already using it to store
      data.

   .. prompt:: bash #

      ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
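The five pool deletions above differ only in the pool-name suffix, so they can be expressed as a loop. Note that the monitors refuse pool deletion unless it is explicitly allowed; a sketch that enables ``mon_allow_pool_delete`` for the duration and disables it again afterwards:

.. prompt:: bash #

   ceph config set mon mon_allow_pool_delete true
   for suffix in control data.root gc log users.uid; do
       ceph osd pool rm "default.rgw.${suffix}" "default.rgw.${suffix}" --yes-i-really-really-mean-it
   done
   ceph config set mon mon_allow_pool_delete false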

Create a System User
--------------------

#. The ``ceph-radosgw`` daemons must authenticate before pulling realm and
   period information. In the master zone, create a "system user" to
   facilitate authentication between daemons.

   .. prompt:: bash #

      radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

   For example:

   .. prompt:: bash #

      radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

#. Make a note of the ``access_key`` and ``secret_key``. The secondary zones
   require them to authenticate against the master zone.

#. Add the system user to the master zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --access-key={access-key} --secret={secret}
      radosgw-admin period update --commit
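If the keys were not recorded when the user was created, they can be retrieved again from the user metadata. A sketch (assumes the ``jq`` utility is installed; ``synchronization-user`` is the example user above):

.. prompt:: bash #

   radosgw-admin user info --uid="synchronization-user" \
       | jq -r '.keys[0].access_key, .keys[0].secret_key'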

Update the Period
-----------------

After updating the master zone configuration, update the period.

.. prompt:: bash #

   radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.
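To confirm that the commit took effect, the current period can be inspected; depending on what changed, the output should show either a new period ``id`` or an incremented ``epoch``:

.. prompt:: bash #

   radosgw-admin period get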

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry.

::

   [client.rgw.{instance-name}]
   ...
   rgw_zone={zone-name}

For example:

::

   [client.rgw.rgw1]
   host = rgw1
   rgw frontends = "civetweb port=80"
   rgw_zone=us-east

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

.. prompt:: bash #

   systemctl start ceph-radosgw@rgw.`hostname -s`
   systemctl enable ceph-radosgw@rgw.`hostname -s`
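To verify that the gateway came up and is serving requests, check the service status and send an anonymous request to the configured endpoint (``rgw1:80`` is the endpoint from the examples above; an anonymous request normally returns an XML ``ListAllMyBucketsResult`` document):

.. prompt:: bash #

   systemctl status ceph-radosgw@rgw.`hostname -s`
   curl http://rgw1:80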

.. _secondary-zone-label:

Configuring Secondary Zones
===========================

Zones that are within a zonegroup replicate all data in order to ensure that
every zone has the same data. When creating a secondary zone, run the
following operations on a host identified to serve the secondary zone.

.. note:: To add a second secondary zone (that is, a second non-master zone
   within a zonegroup that already contains a secondary zone), follow
   :ref:`the same procedures that are used for adding a secondary
   zone<radosgw-multisite-secondary-zone-creating>`. Be sure to specify a
   different zone name than the name of the first secondary zone.

.. important:: Metadata operations (for example, user creation) must be
   run on a host within the master zone. Bucket operations can be received
   by the master zone or the secondary zone, but the secondary zone will
   redirect bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.

Pulling the Realm Configuration
-------------------------------

The URL path, access key, and secret of the master zone in the master
zonegroup are used to pull the realm configuration to the host. When pulling
the configuration of a non-default realm, specify the realm using the
``--rgw-realm`` or ``--realm-id`` configuration options.

.. prompt:: bash #

   radosgw-admin realm pull --url={url-to-master-zone-gateway} \
       --access-key={access-key} --secret={secret}

.. note:: Pulling the realm configuration also retrieves the remote's current
   period configuration, and makes it the current period on this host as well.

If this realm is the only realm, run the following command to make it the
default realm:

.. prompt:: bash #

   radosgw-admin realm default --rgw-realm={realm-name}

.. _radosgw-multisite-secondary-zone-creating:

Creating a Secondary Zone
-------------------------

.. important:: When a zone is created, it must be on a Ceph Object Gateway
   node within the zone.

In order to create a secondary zone for the multi-site configuration, open a
command line interface on a host identified to serve the secondary zone.
Specify the zonegroup ID, the new zone name, and an endpoint for the zone.
**DO NOT** use the ``--master`` or ``--default`` flags. Beginning in Kraken,
all zones run in an active-active configuration by default, which means that a
gateway client may write data to any zone and the zone will replicate the data
to all other zones within the zonegroup. If you want to prevent the secondary
zone from accepting write operations, include the ``--read-only`` flag in the
command in order to create an active-passive configuration between the master
zone and the secondary zone. In any case, don't forget to provide the
``access_key`` and ``secret_key`` of the generated system user that is stored
in the master zone of the master zonegroup. Run the following command:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                             --rgw-zone={zone-name} \
                             --access-key={system-key} --secret={secret} \
                             --endpoints=http://{fqdn}:80 \
                             [--read-only]

For example:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                             --access-key={system-key} --secret={secret} \
                             --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration that uses
   newly installed systems that have not yet begun storing data. DO NOT DELETE
   the ``default`` zone or its pools if you are already using it to store
   data, or the data will be irretrievably lost.

Delete the default zone if needed:

.. prompt:: bash #

   radosgw-admin zone delete --rgw-zone=default

Finally, delete the default pools in your Ceph storage cluster if needed:

.. prompt:: bash #

   ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Updating the Ceph Configuration File
------------------------------------

To update the Ceph configuration file on the secondary zone hosts, add the
``rgw_zone`` configuration option and the name of the secondary zone to the
instance entry.

::

   [client.rgw.{instance-name}]
   ...
   rgw_zone={zone-name}

For example:

::

   [client.rgw.rgw2]
   host = rgw2
   rgw frontends = "civetweb port=80"
   rgw_zone=us-west

Updating the Period
-------------------

After updating the secondary zone configuration, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Starting the Gateway
--------------------

To start the gateway, start and enable the Ceph Object Gateway service by
running the following commands on the object gateway host:

.. prompt:: bash #

   systemctl start ceph-radosgw@rgw.`hostname -s`
   systemctl enable ceph-radosgw@rgw.`hostname -s`

Checking Synchronization Status
-------------------------------

After the secondary zone is up and running, you can check the synchronization
status. Synchronization copies users and buckets that were created in the
master zone to the secondary zone.

.. prompt:: bash #

   radosgw-admin sync status

The output reports the status of synchronization operations. For example:

::

            realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
        zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
             zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
    metadata sync syncing
                  full sync: 0/64 shards
                  metadata is caught up with master
                  incremental sync: 64/64 shards
        data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize with the
   master zone to receive the result of the bucket operations. If the master
   zone is down, bucket operations executed on the secondary zone will fail,
   but object operations should succeed.

Verifying an Object
-------------------

By default, after the successful synchronization of an object there is no
subsequent verification of the object. However, you can enable verification by
setting :confval:`rgw_sync_obj_etag_verify` to ``true``. After this value is
set to true, an MD5 checksum is used to verify the integrity of the data that
was transferred from the source to the destination. This ensures the integrity
of any object that has been fetched from a remote server over HTTP (including
multisite sync). This option may decrease the performance of your RGW because
it requires more computation.
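For example, the option can be enabled for all gateways through the centralized configuration database (a sketch; depending on the release, the gateways may need to be restarted to pick up the change):

.. prompt:: bash #

   ceph config set client.rgw rgw_sync_obj_etag_verify true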

Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with:

.. prompt:: bash $

   radosgw-admin sync status

::

           realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
       zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
            zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
   metadata sync syncing
                 full sync: 0/64 shards
                 incremental sync: 64/64 shards
                 metadata is behind on 1 shards
                 oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
       data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                         syncing
                         full sync: 0/128 shards
                         incremental sync: 128/128 shards
                         data is caught up with source
                 source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
                         syncing
                         full sync: 0/128 shards
                         incremental sync: 128/128 shards
                         data is caught up with source

The output can differ, depending on the sync status. During sync, the shards
are of two types:

- **Behind shards** are shards that require a data sync (either a full data
  sync or an incremental data sync) in order to be brought up to date.

- **Recovery shards** are shards that encountered an error during sync and
  have been marked for retry. Such errors mostly arise from minor issues, such
  as failing to acquire a lock on a bucket, and typically resolve on their
  own.

Check the Logs
--------------

For multi-site deployments only, you can examine the metadata log (``mdlog``),
the bucket index log (``bilog``), and the data log (``datalog``). You can list
them and also trim them. Trimming is not needed in most cases because
:confval:`rgw_sync_log_trim_interval` is set to 20 minutes by default. It
should not be necessary to trim the logs unless
:confval:`rgw_sync_log_trim_interval` has been manually set to 0.
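As a sketch, the three logs can be listed as follows (``{bucket-name}`` is a placeholder; the bucket index log is maintained per bucket):

.. prompt:: bash #

   radosgw-admin mdlog list
   radosgw-admin datalog list
   radosgw-admin bilog list --bucket={bucket-name}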

Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing the metadata master zone by
   promoting a zone to master. A zone that isn't finished syncing metadata
   from the current master zone will be unable to serve any remaining entries
   if it is promoted to master, and those metadata changes will be lost. For
   this reason, we recommend waiting for a zone's ``radosgw-admin sync
   status`` to complete the process of synchronizing the metadata before
   promoting the zone to master.

Similarly, if the current master zone is processing changes to metadata at the
same time that another zone is being promoted to master, these changes are
likely to be lost. To avoid losing these changes, we recommend shutting down
any ``radosgw`` instances on the previous master zone. After the new master
zone has been promoted, the previous master zone's new period can be fetched
with ``radosgw-admin period pull`` and the gateway(s) can be restarted.

To promote a zone to metadata master, run the following commands on that zone
(in this example, the zone is zone ``us-2`` in zonegroup ``us``):

.. prompt:: bash $

   radosgw-admin zone modify --rgw-zone=us-2 --master
   radosgw-admin zonegroup modify --rgw-zonegroup=us --master
   radosgw-admin period update --commit

This generates a new period, and the radosgw instance(s) in zone ``us-2`` send
this period to the other zones.

Failover and Disaster Recovery
==============================

Setting Up Failover to the Secondary Zone
-----------------------------------------

If the master zone fails, you can fail over to the secondary zone for
disaster recovery by following these steps:

#. Make the secondary zone the master and default zone. For example:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default

   By default, Ceph Object Gateway runs in an active-active configuration.
   However, if the cluster is configured to run in an active-passive
   configuration, the secondary zone is a read-only zone. To allow the
   secondary zone to receive write operations, remove its ``--read-only``
   status. For example:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
          --read-only=false

#. Update the period to make the changes take effect.

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Finally, restart the Ceph Object Gateway.

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

Reverting from Failover
-----------------------

If the former master zone recovers, you can revert the failover operation by
following these steps:

#. From within the recovered zone, pull the latest realm configuration
   from the current master zone:

   .. prompt:: bash #

      radosgw-admin realm pull --url={url-to-master-zone-gateway} \
          --access-key={access-key} --secret={secret}

#. Make the recovered zone the master and default zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default

#. Update the period so that the changes take effect:

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Restart the Ceph Object Gateway in the recovered zone:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

#. If the secondary zone needs to be a read-only configuration, update
   the secondary zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --read-only

#. Update the period so that the changes take effect:

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Restart the Ceph Object Gateway in the secondary zone:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

.. _rgw-multisite-migrate-from-single-site:

Migrating a Single-Site Deployment to Multi-Site
================================================

To migrate from a single-site deployment with a ``default`` zonegroup and zone
to a multi-site system, follow these steps:

1. Create a realm. Replace ``<name>`` with the realm name:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm=<name> --default

2. Rename the default zonegroup and zone. Replace ``<name>`` with the
   zonegroup name. In this example, the zone is renamed to ``us-east-1``:

   .. prompt:: bash #

      radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
      radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name>

3. Configure the master zonegroup. Replace ``<name>`` with the realm name or
   zonegroup name. Replace ``<fqdn>`` with the fully qualified domain name(s)
   in the zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default

4. Configure the master zone. Replace ``<name>`` with the realm name, zone
   name, or zonegroup name. Replace ``<fqdn>`` with the fully qualified
   domain name(s) in the zonegroup:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
          --rgw-zone=<name> --endpoints http://<fqdn>:80 \
          --access-key=<access-key> --secret=<secret-key> \
          --master --default

5. Create a system user. Replace ``<user-id>`` with the username. Replace
   ``<display-name>`` with a display name. The display name is allowed to
   contain spaces:

   .. prompt:: bash #

      radosgw-admin user create --uid=<user-id> \
          --display-name="<display-name>" \
          --access-key=<access-key> \
          --secret=<secret-key> --system

6. Commit the updated configuration:

   .. prompt:: bash #

      radosgw-admin period update --commit

7. Restart the Ceph Object Gateway:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

After completing this procedure, proceed to :ref:`secondary-zone-label` and
create a secondary zone in the master zonegroup.
799
800 Multi-Site Configuration Reference
801 ==================================
802
803 The following sections provide additional details and command-line
804 usage for realms, periods, zonegroups and zones.
805
806 For more details on every available configuration option, see
807 ``src/common/options/rgw.yaml.in``.
808
809 Alternatively, go to the :ref:`mgr-dashboard` configuration page (found under
810 `Cluster`), where you can view and set all of the options. While on the page,
811 set the level to ``advanced`` and search for RGW to see all basic and advanced
812 configuration options.
813
814 .. _rgw-realms:
815
816 Realms
817 ------
818
819 A realm is a globally unique namespace that consists of one or more zonegroups.
820 Zonegroups contain one or more zones. Zones contain buckets. Buckets contain
821 objects.
822
823 Realms make it possible for the Ceph Object Gateway to support multiple
824 namespaces and their configurations on the same hardware.
825
826 Each realm is associated with a "period". A period represents the state
827 of the zonegroup and zone configuration in time. Each time you make a
828 change to a zonegroup or zone, you should update and commit the period.
829
830 To ensure backward compatibility with Infernalis and earlier releases, the Ceph
831 Object Gateway does not by default create a realm. However, as a best practice,
832 we recommend that you create realms when creating new clusters.
833
834 Create a Realm
835 ~~~~~~~~~~~~~~
836
837 To create a realm, run ``realm create`` and specify the realm name.
838 If the realm is the default, specify ``--default``.
839
840 .. prompt:: bash #
841
842 radosgw-admin realm create --rgw-realm={realm-name} [--default]
843
844 For example:
845
846 .. prompt:: bash #
847
848 radosgw-admin realm create --rgw-realm=movies --default
849
By specifying ``--default``, the realm is used implicitly with each
``radosgw-admin`` call unless ``--rgw-realm`` and the realm name are
explicitly provided.
853
854 Make a Realm the Default
855 ~~~~~~~~~~~~~~~~~~~~~~~~
856
One realm in the list of realms should be the default realm, and there can be
only one default realm. If there is only one realm and it was not designated
the default realm when it was created, make it the default realm. To do so, or
to change which realm is the default, run the following command:
861
862 .. prompt:: bash #
863
864 radosgw-admin realm default --rgw-realm=movies
865
.. note:: When a realm is the default, ``radosgw-admin`` commands assume
   ``--rgw-realm=<realm-name>`` implicitly.
868
869 Delete a Realm
870 ~~~~~~~~~~~~~~
871
872 To delete a realm, run ``realm rm`` and specify the realm name:
873
874 .. prompt:: bash #
875
876 radosgw-admin realm rm --rgw-realm={realm-name}
877
878 For example:
879
880 .. prompt:: bash #
881
882 radosgw-admin realm rm --rgw-realm=movies
883
884 Get a Realm
885 ~~~~~~~~~~~
886
887 To get a realm, run ``realm get`` and specify the realm name:
888
889 .. prompt:: bash #
890
891 radosgw-admin realm get --rgw-realm=<name>
892
893 For example:
894
895 .. prompt:: bash #
896
897 radosgw-admin realm get --rgw-realm=movies [> filename.json]
898
899 ::
900
901 {
902 "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
903 "name": "movies",
904 "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
905 "epoch": 1
906 }
907
908 Set a Realm
909 ~~~~~~~~~~~
910
911 To set a realm, run ``realm set``, specify the realm name, and use the
912 ``--infile=`` option (make sure that the ``--infile`` option has an input file
913 name as an argument):
914
915 .. prompt:: bash #
916
917 radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>
918
919 For example:
920
921 .. prompt:: bash #
922
923 radosgw-admin realm set --rgw-realm=movies --infile=filename.json
924
925 List Realms
926 ~~~~~~~~~~~
927
928 To list realms, run ``realm list``:
929
930 .. prompt:: bash #
931
932 radosgw-admin realm list
933
934 List Realm Periods
935 ~~~~~~~~~~~~~~~~~~
936
937 To list realm periods, run ``realm list-periods``:
938
939 .. prompt:: bash #
940
941 radosgw-admin realm list-periods
942
943 Pull a Realm
944 ~~~~~~~~~~~~
945
946 To pull a realm from the node that contains both the master zonegroup and
947 master zone to a node that contains a secondary zonegroup or zone, run ``realm
948 pull`` on the node that will receive the realm configuration:
949
950 .. prompt:: bash #
951
952 radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
953
954 Rename a Realm
955 ~~~~~~~~~~~~~~
956
A realm is not part of the period. Consequently, any renaming of the realm is
applied only locally, and will therefore not get pulled when you run ``realm
pull``. If you are renaming a realm that contains multiple zones, run the
``rename`` command in each zone's cluster.
961
962 To rename a realm, run the following:
963
964 .. prompt:: bash #
965
966 radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>
967
968 .. note:: DO NOT use ``realm set`` to change the ``name`` parameter. Doing so
969 changes the internal name only. If you use ``realm set`` to change the
970 ``name`` parameter, then ``--rgw-realm`` still expects the realm's old name.
971
972 Zonegroups
973 -----------
974
975 Zonegroups make it possible for the Ceph Object Gateway to support multi-site
976 deployments and a global namespace. Zonegroups were formerly called "regions"
977 (in releases prior to and including Infernalis).
978
979 A zonegroup defines the geographic location of one or more Ceph Object Gateway
980 instances within one or more zones.
981
The configuration of zonegroups differs from typical configuration procedures
because not all of the zonegroup configuration settings are stored in a
configuration file.
985
986 You can list zonegroups, get a zonegroup configuration, and set a zonegroup
987 configuration.
988
989 Creating a Zonegroup
990 ~~~~~~~~~~~~~~~~~~~~
991
Creating a zonegroup consists of specifying the zonegroup name. Newly created
zonegroups reside in the default realm unless a different realm is specified
by using the option ``--rgw-realm=<realm-name>``.
995
996 If the zonegroup is the default zonegroup, specify the ``--default`` flag. If
997 the zonegroup is the master zonegroup, specify the ``--master`` flag. For
998 example:
999
1000 .. prompt:: bash #
1001
1002 radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>][--master] [--default]
1003
1004
1005 .. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
1006 an existing zonegroup’s settings.
1007
1008 Making a Zonegroup the Default
1009 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1010
One zonegroup in the list of zonegroups must be the default zonegroup, and
there can be only one default zonegroup. If the only zonegroup was not
designated the default zonegroup when it was created, use the following
command to make it the default zonegroup. Commands of this form can also be
used to change which zonegroup is the default.
1016
1017 #. Designate a zonegroup as the default zonegroup:
1018
1019 .. prompt:: bash #
1020
1021 radosgw-admin zonegroup default --rgw-zonegroup=comedy
1022
   .. note:: When a zonegroup is the default, ``radosgw-admin`` commands
      assume ``--rgw-zonegroup=<zonegroup-name>`` implicitly.
1024
1025 #. Update the period:
1026
1027 .. prompt:: bash #
1028
1029 radosgw-admin period update --commit
1030
1031 Adding a Zone to a Zonegroup
1032 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1033
1034 This procedure explains how to add a zone to a zonegroup.
1035
1036 #. Run the following command to add a zone to a zonegroup:
1037
1038 .. prompt:: bash #
1039
1040 radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>
1041
1042 #. Update the period:
1043
1044 .. prompt:: bash #
1045
1046 radosgw-admin period update --commit
1047
1048 Removing a Zone from a Zonegroup
1049 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1050
1051 #. Run this command to remove a zone from a zonegroup:
1052
1053 .. prompt:: bash #
1054
1055 radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
1056
1057 #. Update the period:
1058
1059 .. prompt:: bash #
1060
1061 radosgw-admin period update --commit
1062
1063 Renaming a Zonegroup
1064 ~~~~~~~~~~~~~~~~~~~~
1065
1066 #. Run this command to rename the zonegroup:
1067
1068 .. prompt:: bash #
1069
1070 radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>
1071
1072 #. Update the period:
1073
1074 .. prompt:: bash #
1075
1076 radosgw-admin period update --commit
1077
1078 Deleting a Zonegroup
1079 ~~~~~~~~~~~~~~~~~~~~
1080
1081 #. To delete a zonegroup, run the following command:
1082
1083 .. prompt:: bash #
1084
1085 radosgw-admin zonegroup delete --rgw-zonegroup=<name>
1086
1087 #. Update the period:
1088
1089 .. prompt:: bash #
1090
1091 radosgw-admin period update --commit
1092
1093 Listing Zonegroups
1094 ~~~~~~~~~~~~~~~~~~
1095
A Ceph cluster contains a list of zonegroups. To list the zonegroups, run
this command:
1098
1099 .. prompt:: bash #
1100
1101 radosgw-admin zonegroup list
1102
``radosgw-admin`` returns a JSON-formatted list of zonegroups.
1104
1105 ::
1106
1107 {
1108 "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
1109 "zonegroups": [
1110 "us"
1111 ]
1112 }
1113
1114 Getting a Zonegroup Map
1115 ~~~~~~~~~~~~~~~~~~~~~~~~
1116
1117 To list the details of each zonegroup, run this command:
1118
1119 .. prompt:: bash #
1120
1121 radosgw-admin zonegroup-map get
1122
1123 .. note:: If you receive a ``failed to read zonegroup map`` error, run
1124 ``radosgw-admin zonegroup-map update`` as ``root`` first.
1125
1126 Getting a Zonegroup
1127 ~~~~~~~~~~~~~~~~~~~~
1128
1129 To view the configuration of a zonegroup, run this command:
1130
1131 .. prompt:: bash #
1132
1133 radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]
1134
1135 The zonegroup configuration looks like this:
1136
1137 ::
1138
1139 {
1140 "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
1141 "name": "us",
1142 "api_name": "us",
1143 "is_master": "true",
1144 "endpoints": [
1145 "http:\/\/rgw1:80"
1146 ],
1147 "hostnames": [],
1148 "hostnames_s3website": [],
1149 "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
1150 "zones": [
1151 {
1152 "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
1153 "name": "us-east",
1154 "endpoints": [
1155 "http:\/\/rgw1"
1156 ],
1157 "log_meta": "true",
1158 "log_data": "true",
1159 "bucket_index_max_shards": 0,
1160 "read_only": "false"
1161 },
1162 {
1163 "id": "d1024e59-7d28-49d1-8222-af101965a939",
1164 "name": "us-west",
1165 "endpoints": [
1166 "http:\/\/rgw2:80"
1167 ],
1168 "log_meta": "false",
1169 "log_data": "true",
1170 "bucket_index_max_shards": 0,
1171 "read_only": "false"
1172 }
1173 ],
1174 "placement_targets": [
1175 {
1176 "name": "default-placement",
1177 "tags": []
1178 }
1179 ],
1180 "default_placement": "default-placement",
1181 "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
1182 }
1183
1184 Setting a Zonegroup
1185 ~~~~~~~~~~~~~~~~~~~~
1186
The process of defining a zonegroup consists of creating a JSON object and
specifying its settings. The available settings are listed below:
1189
1190 1. ``name``: The name of the zonegroup. Required.
1191
1192 2. ``api_name``: The API name for the zonegroup. Optional.
1193
1194 3. ``is_master``: Determines whether the zonegroup is the master zonegroup.
1195 Required. **note:** You can only have one master zonegroup.
1196
1197 4. ``endpoints``: A list of all the endpoints in the zonegroup. For example,
1198 you may use multiple domain names to refer to the same zonegroup. Remember
1199 to escape the forward slashes (``\/``). You may also specify a port
1200 (``fqdn:port``) for each endpoint. Optional.
1201
1202 5. ``hostnames``: A list of all the hostnames in the zonegroup. For example,
1203 you may use multiple domain names to refer to the same zonegroup. Optional.
1204 The ``rgw dns name`` setting will be included in this list automatically.
1205 Restart the gateway daemon(s) after changing this setting.
1206
1207 6. ``master_zone``: The master zone for the zonegroup. Optional. Uses
1208 the default zone if not specified. **note:** You can only have one
1209 master zone per zonegroup.
1210
1211 7. ``zones``: A list of all zones within the zonegroup. Each zone has a name
1212 (required), a list of endpoints (optional), and a setting that determines
1213 whether the gateway will log metadata and data operations (false by
1214 default).
1215
1216 8. ``placement_targets``: A list of placement targets (optional). Each
1217 placement target contains a name (required) for the placement target
1218 and a list of tags (optional) so that only users with the tag can use
1219 the placement target (that is, the user’s ``placement_tags`` field in
1220 the user info).
1221
1222 9. ``default_placement``: The default placement target for the object index and
1223 object data. Set to ``default-placement`` by default. It is also possible
1224 to set a per-user default placement in the user info for each user.
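
Taken together, a minimal ``zonegroup.json`` can be assembled with a short
script before running ``radosgw-admin zonegroup set``. The sketch below is
illustrative only: the names, endpoints, and zone entries are placeholders,
and a real configuration may carry additional fields.

```python
import json

# All names and endpoints below are placeholders for illustration.
zonegroup = {
    "name": "us",                        # required
    "api_name": "us",                    # optional
    "is_master": "true",                 # only one master zonegroup
    "endpoints": ["http://rgw1:80"],     # optional; fqdn:port allowed
    "hostnames": [],                     # optional
    "master_zone": "us-east",            # only one master zone per zonegroup
    "zones": [
        {
            "name": "us-east",           # required for each zone
            "endpoints": ["http://rgw1:80"],
            "log_meta": "true",
            "log_data": "true",
        },
    ],
    "placement_targets": [
        {"name": "default-placement", "tags": []},
    ],
    "default_placement": "default-placement",
}

with open("zonegroup.json", "w") as f:
    json.dump(zonegroup, f, indent=4)
```

The resulting file can then be passed to ``radosgw-admin zonegroup set
--infile zonegroup.json`` as described in the procedure below.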
1225
1226 Setting a Zonegroup - Procedure
1227 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1228
1229 #. To set a zonegroup, create a JSON object that contains the required fields,
1230 save the object to a file (for example, ``zonegroup.json``), and run the
1231 following command:
1232
1233 .. prompt:: bash #
1234
1235 radosgw-admin zonegroup set --infile zonegroup.json
1236
1237 Where ``zonegroup.json`` is the JSON file you created.
1238
   .. important:: The ``default`` zonegroup ``is_master`` setting is ``true``
      by default. If you create an additional zonegroup and want to make it
      the master zonegroup, you must either set the ``default`` zonegroup
      ``is_master`` setting to ``false`` or delete the ``default`` zonegroup.
1240
1241 #. Update the period:
1242
1243 .. prompt:: bash #
1244
1245 radosgw-admin period update --commit
1246
1247 Setting a Zonegroup Map
1248 ~~~~~~~~~~~~~~~~~~~~~~~~
1249
The process of setting a zonegroup map comprises (1) creating a JSON object
that consists of one or more zonegroups, and (2) setting the
``master_zonegroup`` for the cluster. Each zonegroup in the zonegroup map
consists of a key/value pair, where the ``key`` identifies the zonegroup (its
ID in the example below) and the ``val`` is a JSON object consisting of an
individual zonegroup configuration.
1256
1257 You may only have one zonegroup with ``is_master`` equal to ``true``, and it
1258 must be specified as the ``master_zonegroup`` at the end of the zonegroup map.
1259 The following JSON object is an example of a default zonegroup map:
1260
1261 ::
1262
1263 {
1264 "zonegroups": [
1265 {
1266 "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
1267 "val": {
1268 "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
1269 "name": "us",
1270 "api_name": "us",
1271 "is_master": "true",
1272 "endpoints": [
1273 "http:\/\/rgw1:80"
1274 ],
1275 "hostnames": [],
1276 "hostnames_s3website": [],
1277 "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
1278 "zones": [
1279 {
1280 "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
1281 "name": "us-east",
1282 "endpoints": [
1283 "http:\/\/rgw1"
1284 ],
1285 "log_meta": "true",
1286 "log_data": "true",
1287 "bucket_index_max_shards": 0,
1288 "read_only": "false"
1289 },
1290 {
1291 "id": "d1024e59-7d28-49d1-8222-af101965a939",
1292 "name": "us-west",
1293 "endpoints": [
1294 "http:\/\/rgw2:80"
1295 ],
1296 "log_meta": "false",
1297 "log_data": "true",
1298 "bucket_index_max_shards": 0,
1299 "read_only": "false"
1300 }
1301 ],
1302 "placement_targets": [
1303 {
1304 "name": "default-placement",
1305 "tags": []
1306 }
1307 ],
1308 "default_placement": "default-placement",
1309 "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
1310 }
1311 }
1312 ],
1313 "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
1314 "bucket_quota": {
1315 "enabled": false,
1316 "max_size_kb": -1,
1317 "max_objects": -1
1318 },
1319 "user_quota": {
1320 "enabled": false,
1321 "max_size_kb": -1,
1322 "max_objects": -1
1323 }
1324 }
1325
1326 #. To set a zonegroup map, run the following command:
1327
1328 .. prompt:: bash #
1329
1330 radosgw-admin zonegroup-map set --infile zonegroupmap.json
1331
   In this command, ``zonegroupmap.json`` is the JSON file you created. Ensure
   that all of the zones specified in the zonegroup map have been created.
1334
1335 #. Update the period:
1336
1337 .. prompt:: bash #
1338
1339 radosgw-admin period update --commit
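
Because an inconsistent map is easy to produce by hand, it can help to check
the two invariants described above before running ``zonegroup-map set``:
exactly one zonegroup has ``is_master`` set to ``true``, and that zonegroup is
the one named by ``master_zonegroup``. The following is a minimal sketch, not
an official tool, assuming the key/``val`` layout shown in the example above:

```python
def check_zonegroup_map(zgmap: dict) -> None:
    """Verify that exactly one zonegroup has is_master == "true" and that
    its key matches the map's master_zonegroup setting."""
    masters = [entry["key"] for entry in zgmap["zonegroups"]
               if entry["val"].get("is_master") == "true"]
    if len(masters) != 1:
        raise ValueError(f"expected exactly one master zonegroup, found {len(masters)}")
    if masters[0] != zgmap["master_zonegroup"]:
        raise ValueError("master_zonegroup does not match the is_master zonegroup")

# Trimmed example using the ID from the sample map above:
zgmap = {
    "zonegroups": [
        {"key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
         "val": {"name": "us", "is_master": "true"}},
    ],
    "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
}
check_zonegroup_map(zgmap)  # passes silently; raises ValueError otherwise
```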
1340
1341 .. _radosgw-zones:
1342
1343 Zones
1344 -----
1345
A zone defines a logical group that consists of one or more Ceph Object
Gateway instances.
1348
1349 The procedure for configuring zones differs from typical configuration
1350 procedures, because not all of the settings end up in a Ceph configuration
1351 file.
1352
1353 Zones can be listed. You can "get" a zone configuration and "set" a zone
1354 configuration.
1355
1356 Creating a Zone
1357 ~~~~~~~~~~~~~~~
1358
1359 To create a zone, specify a zone name. If you are creating a master zone,
1360 specify the ``--master`` flag. Only one zone in a zonegroup may be a master
1361 zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup`` option
1362 with the zonegroup name.
1363
1364 .. prompt:: bash #
1365
   radosgw-admin zone create --rgw-zone=<name> \
                   [--rgw-zonegroup=<zonegroup-name>] \
                   [--endpoints=<endpoint>[,<endpoint>]] \
                   [--master] [--default] \
                   --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY
1371
1372 After you have created the zone, update the period:
1373
1374 .. prompt:: bash #
1375
1376 radosgw-admin period update --commit
1377
1378 Deleting a Zone
1379 ~~~~~~~~~~~~~~~
1380
1381 To delete a zone, first remove it from the zonegroup:
1382
1383 .. prompt:: bash #
1384
   radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
1387
1388 Then, update the period:
1389
1390 .. prompt:: bash #
1391
1392 radosgw-admin period update --commit
1393
1394 Next, delete the zone:
1395
1396 .. prompt:: bash #
1397
   radosgw-admin zone delete --rgw-zone=<name>
1399
1400 Finally, update the period:
1401
1402 .. prompt:: bash #
1403
1404 radosgw-admin period update --commit
1405
1406 .. important:: Do not delete a zone without removing it from a zonegroup first.
1407 Otherwise, updating the period will fail.
1408
1409 If the pools for the deleted zone will not be used anywhere else,
1410 consider deleting the pools. Replace ``<del-zone>`` in the example below
1411 with the deleted zone’s name.
1412
1413 .. important:: Only delete the pools with prepended zone names. Deleting the
1414 root pool (for example, ``.rgw.root``) will remove all of the system’s
1415 configuration.
1416
.. important:: When the pools are deleted, all of the data within them is
   deleted in an unrecoverable manner. Delete the pools only if their
   contents are no longer needed.
1420
1421 .. prompt:: bash #
1422
1423 ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
1424 ceph osd pool rm <del-zone>.rgw.meta <del-zone>.rgw.meta --yes-i-really-really-mean-it
1425 ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
1426 ceph osd pool rm <del-zone>.rgw.otp <del-zone>.rgw.otp --yes-i-really-really-mean-it
1427 ceph osd pool rm <del-zone>.rgw.buckets.index <del-zone>.rgw.buckets.index --yes-i-really-really-mean-it
1428 ceph osd pool rm <del-zone>.rgw.buckets.non-ec <del-zone>.rgw.buckets.non-ec --yes-i-really-really-mean-it
1429 ceph osd pool rm <del-zone>.rgw.buckets.data <del-zone>.rgw.buckets.data --yes-i-really-really-mean-it
1430
1431 Modifying a Zone
1432 ~~~~~~~~~~~~~~~~
1433
1434 To modify a zone, specify the zone name and the parameters you wish to
1435 modify.
1436
1437 .. prompt:: bash #
1438
1439 radosgw-admin zone modify [options]
1440
Where ``[options]`` can include:
1442
1443 - ``--access-key=<key>``
1444 - ``--secret/--secret-key=<key>``
1445 - ``--master``
1446 - ``--default``
1447 - ``--endpoints=<list>``
1448
1449 Then, update the period:
1450
1451 .. prompt:: bash #
1452
1453 radosgw-admin period update --commit
1454
1455 Listing Zones
1456 ~~~~~~~~~~~~~
1457
1458 As ``root``, to list the zones in a cluster, run the following command:
1459
1460 .. prompt:: bash #
1461
1462 radosgw-admin zone list
1463
1464 Getting a Zone
1465 ~~~~~~~~~~~~~~
1466
1467 As ``root``, to get the configuration of a zone, run the following command:
1468
1469 .. prompt:: bash #
1470
1471 radosgw-admin zone get [--rgw-zone=<zone>]
1472
1473 The ``default`` zone looks like this:
1474
1475 ::
1476
1477 { "domain_root": ".rgw",
1478 "control_pool": ".rgw.control",
1479 "gc_pool": ".rgw.gc",
1480 "log_pool": ".log",
1481 "intent_log_pool": ".intent-log",
1482 "usage_log_pool": ".usage",
1483 "user_keys_pool": ".users",
1484 "user_email_pool": ".users.email",
1485 "user_swift_pool": ".users.swift",
1486 "user_uid_pool": ".users.uid",
1487 "system_key": { "access_key": "", "secret_key": ""},
1488 "placement_pools": [
1489 { "key": "default-placement",
1490 "val": { "index_pool": ".rgw.buckets.index",
1491 "data_pool": ".rgw.buckets"}
1492 }
1493 ]
1494 }
1495
1496 Setting a Zone
1497 ~~~~~~~~~~~~~~
1498
1499 Configuring a zone involves specifying a series of Ceph Object Gateway
1500 pools. For consistency, we recommend using a pool prefix that is the
1501 same as the zone name. See
1502 `Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
1503 for details of configuring pools.
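
The zone-name prefix convention can be applied mechanically when generating
``zone.json``. The sketch below is illustrative only: it prefixes the pool
names from the sample configuration above with a hypothetical zone name, and
a complete ``zone.json`` contains more fields than shown here.

```python
import json

def zone_pools(zone: str) -> dict:
    """Build a partial zone configuration whose pool names all share the
    zone name as a prefix. Illustrative sketch, not a complete zone.json."""
    return {
        "domain_root": f"{zone}.rgw",
        "control_pool": f"{zone}.rgw.control",
        "gc_pool": f"{zone}.rgw.gc",
        "log_pool": f"{zone}.log",
        "placement_pools": [
            {"key": "default-placement",
             "val": {"index_pool": f"{zone}.rgw.buckets.index",
                     "data_pool": f"{zone}.rgw.buckets"}},
        ],
    }

with open("zone.json", "w") as f:
    json.dump(zone_pools("us-east"), f, indent=4)
```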
1504
1505 To set a zone, create a JSON object consisting of the pools, save the
1506 object to a file (e.g., ``zone.json``); then, run the following
1507 command, replacing ``{zone-name}`` with the name of the zone:
1508
1509 .. prompt:: bash #
1510
1511 radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json
1512
1513 Where ``zone.json`` is the JSON file you created.
1514
1515 Then, as ``root``, update the period:
1516
1517 .. prompt:: bash #
1518
1519 radosgw-admin period update --commit
1520
1521 Renaming a Zone
1522 ~~~~~~~~~~~~~~~
1523
1524 To rename a zone, specify the zone name and the new zone name.
1525
1526 .. prompt:: bash #
1527
1528 radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>
1529
1530 Then, update the period:
1531
1532 .. prompt:: bash #
1533
1534 radosgw-admin period update --commit
1535
1536 Zonegroup and Zone Settings
1537 ----------------------------
1538
1539 When configuring a default zonegroup and zone, the pool name includes
1540 the zone name. For example:
1541
1542 - ``default.rgw.control``
1543
1544 To change the defaults, include the following settings in your Ceph
1545 configuration file under each ``[client.radosgw.{instance-name}]``
1546 instance.
1547
1548 +-------------------------------------+-----------------------------------+---------+-----------------------+
1549 | Name | Description | Type | Default |
1550 +=====================================+===================================+=========+=======================+
1551 | ``rgw_zone`` | The name of the zone for the | String | None |
1552 | | gateway instance. | | |
1553 +-------------------------------------+-----------------------------------+---------+-----------------------+
1554 | ``rgw_zonegroup`` | The name of the zonegroup for | String | None |
1555 | | the gateway instance. | | |
1556 +-------------------------------------+-----------------------------------+---------+-----------------------+
1557 | ``rgw_zonegroup_root_pool`` | The root pool for the zonegroup. | String | ``.rgw.root`` |
1558 +-------------------------------------+-----------------------------------+---------+-----------------------+
1559 | ``rgw_zone_root_pool`` | The root pool for the zone. | String | ``.rgw.root`` |
1560 +-------------------------------------+-----------------------------------+---------+-----------------------+
1561 | ``rgw_default_zone_group_info_oid`` | The OID for storing the default | String | ``default.zonegroup`` |
1562 | | zonegroup. We do not recommend | | |
1563 | | changing this setting. | | |
1564 +-------------------------------------+-----------------------------------+---------+-----------------------+
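
For example, a hypothetical gateway instance named ``rgw1`` serving a zone
named ``us-east`` in zonegroup ``us`` might carry the following fragment in
its Ceph configuration file (the instance, zone, and zonegroup names here are
placeholders):

```ini
[client.radosgw.rgw1]
rgw_zone = us-east
rgw_zonegroup = us
; rgw_zone_root_pool and rgw_zonegroup_root_pool default to .rgw.root
```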
1565
1566
1567 Zone Features
1568 =============
1569
Some multisite features require support from all zones before they can be
enabled. Each zone lists its ``supported_features``, and each zonegroup lists
its ``enabled_features``. Before a feature can be enabled in the zonegroup, it
must be supported by all of its zones.

On creation of new zones and zonegroups, all known features are
supported/enabled. After upgrading an existing multisite configuration,
however, new features must be enabled manually.
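
The rule above amounts to a set intersection: a feature is eligible for
``enabled_features`` in the zonegroup only if it appears in the
``supported_features`` of every zone. A small sketch with made-up zone and
feature names:

```python
def enableable_features(supported_by_zone: dict) -> set:
    """Return the features that every zone supports, i.e. the features
    eligible to be enabled at the zonegroup level."""
    feature_sets = [set(s) for s in supported_by_zone.values()]
    eligible = feature_sets[0]
    for s in feature_sets[1:]:
        eligible &= s
    return eligible

# Hypothetical example: only "resharding" is supported by both zones.
supported = {
    "us-east": {"resharding", "some-new-feature"},
    "us-west": {"resharding"},
}
print(sorted(enableable_features(supported)))  # ['resharding']
```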
1573
1574 Supported Features
1575 ------------------
1576
1577 +---------------------------+---------+
1578 | Feature | Release |
1579 +===========================+=========+
1580 | :ref:`feature_resharding` | Reef |
1581 +---------------------------+---------+
1582
1583 .. _feature_resharding:
1584
1585 Resharding
1586 ~~~~~~~~~~
1587
1588 This feature allows buckets to be resharded in a multisite configuration
1589 without interrupting the replication of their objects. When
1590 ``rgw_dynamic_resharding`` is enabled, it runs on each zone independently, and
1591 zones may choose different shard counts for the same bucket. When buckets are
1592 resharded manually with ``radosgw-admin bucket reshard``, only that zone's
1593 bucket is modified. A zone feature should only be marked as supported after all
1594 of its RGWs and OSDs have upgraded.
1595
1596 .. note:: Dynamic resharding is not supported in multisite deployments prior to
1597 the Reef release.
1598
1599
1600 Commands
1601 --------
1602
1603 Add support for a zone feature
1604 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1605
1606 On the cluster that contains the given zone:
1607
1608 .. prompt:: bash $
1609
1610 radosgw-admin zone modify --rgw-zone={zone-name} --enable-feature={feature-name}
1611 radosgw-admin period update --commit
1612
1613
1614 Remove support for a zone feature
1615 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1616
1617 On the cluster that contains the given zone:
1618
1619 .. prompt:: bash $
1620
1621 radosgw-admin zone modify --rgw-zone={zone-name} --disable-feature={feature-name}
1622 radosgw-admin period update --commit
1623
1624 Enable a zonegroup feature
1625 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1626
1627 On any cluster in the realm:
1628
1629 .. prompt:: bash $
1630
1631 radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --enable-feature={feature-name}
1632 radosgw-admin period update --commit
1633
1634 Disable a zonegroup feature
1635 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
1636
1637 On any cluster in the realm:
1638
1639 .. prompt:: bash $
1640
1641 radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --disable-feature={feature-name}
1642 radosgw-admin period update --commit
1643
1644
1645 .. _`Pools`: ../pools
1646 .. _`Sync Policy Config`: ../multisite-sync-policy