Single-zone Configurations and Multi-site Configurations
========================================================

Single-zone Configurations
--------------------------

A single-zone configuration typically consists of two things:

#. One "zonegroup", which contains one zone.
#. One or more `ceph-radosgw` instances that have `ceph-radosgw` client
   requests load-balanced between them.

In a typical single-zone configuration, there are multiple `ceph-radosgw`
instances that make use of a single Ceph storage cluster.
Varieties of Multi-site Configuration
-------------------------------------

.. versionadded:: Jewel

Beginning with the Kraken release, Ceph supports several multi-site
configurations for the Ceph Object Gateway:

- **Multi-zone:** A more advanced topology, the "multi-zone" configuration, is
  possible. A multi-zone configuration consists of one zonegroup and multiple
  zones, with each zone consisting of one or more `ceph-radosgw` instances.
  **Each zone is backed by its own Ceph Storage Cluster.**

  The presence of multiple zones in a given zonegroup provides disaster
  recovery for that zonegroup in the event that one of the zones experiences a
  significant failure. Beginning with the Kraken release, each zone is active
  and can receive write operations. A multi-zone configuration that contains
  multiple active zones enhances disaster recovery and can also be used as a
  foundation for content delivery networks.

- **Multi-zonegroups:** Ceph Object Gateway supports multiple zonegroups
  (which were formerly called "regions"). Each zonegroup contains one or more
  zones. If two zones are in the same zonegroup, and if that zonegroup is in
  the same realm as a second zonegroup, then the objects stored in the two
  zones share a global object namespace. This global object namespace ensures
  unique object IDs across zonegroups and zones.

- **Multiple Realms:** Beginning with the Kraken release, the Ceph Object
  Gateway supports "realms", which are containers for zonegroups. Realms make
  it possible to set policies that apply to multiple zonegroups. Realms have a
  globally unique namespace and can contain either a single zonegroup or
  multiple zonegroups. If you choose to make use of multiple realms, you can
  define multiple namespaces and multiple configurations (this means that each
  realm can have a configuration that is distinct from the configuration of
  other realms).
Diagram - Replication of Object Data Between Zones
--------------------------------------------------

The replication of object data between zones within a zonegroup looks
something like this:

.. image:: ../images/zone-sync.svg

At the top of this diagram, we see two applications (also known as "clients").
The application on the right is both writing and reading data from the Ceph
Cluster, by means of the RADOS Gateway (RGW). The application on the left is
only *reading* data from the Ceph Cluster, by means of an instance of RADOS
Gateway (RGW). In both cases (read-and-write and read-only), the transmission
of data is handled RESTfully.

In the middle of this diagram, we see two zones, each of which contains an
instance of RADOS Gateway (RGW). These instances of RGW are handling the
movement of data from the applications to the zonegroup. The arrow from the
master zone (US-EAST) to the secondary zone (US-WEST) represents an act of
data synchronization.

At the bottom of this diagram, we see the data distributed into the Ceph
storage clusters that back the two zones.

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.
Functional Changes from Infernalis
==================================

Beginning with Kraken, each Ceph Object Gateway can be configured to work in an
active-active zone mode. This makes it possible to write to non-master zones.

The multi-site configuration is stored within a container called a "realm".
The realm stores zonegroups, zones, and a time "period" with multiple epochs,
which are used for tracking changes to the configuration.

Beginning with Kraken, the ``ceph-radosgw`` daemons handle the synchronization
of data across zones, which eliminates the need for a separate synchronization
agent. This new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of with an
"active-passive" configuration.
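The current period and its epoch can be inspected on any gateway host with
``radosgw-admin``. This is a read-only command; with no ``--rgw-realm``
argument it assumes a default realm has been set:

.. prompt:: bash #

   radosgw-admin period get

The output is a JSON document describing the committed zonegroup and zone
configuration, including the period ``id`` and ``epoch`` fields that change
each time ``period update --commit`` is run.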
Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters. The
multi-site configuration must have at least two Ceph object gateway instances
(one for each Ceph storage cluster).

This guide assumes that at least two Ceph storage clusters are in
geographically separate locations; however, the configuration can work on the
same site. This guide also assumes two Ceph object gateway servers named
``rgw1`` and ``rgw2``.

.. important:: Running a single geographically-distributed Ceph storage cluster
   is NOT recommended unless you have low latency WAN connections.

A multi-site configuration requires a master zonegroup and a master zone. Each
zonegroup requires a master zone. Zonegroups may have one or more secondary
zones.

In this guide, the ``rgw1`` host will serve as the master zone of the master
zonegroup, and the ``rgw2`` host will serve as the secondary zone of the
master zonegroup.

See `Pools`_ for instructions on creating and tuning pools for Ceph Object
Storage.

See `Sync Policy Config`_ for instructions on defining fine-grained bucket
sync policies.
.. _master-zone-label:

Configuring a Master Zone
=========================

All gateways in a multi-site configuration retrieve their configurations from a
``ceph-radosgw`` daemon that is on a host within both the master zonegroup and
the master zone. To configure your gateways in a multi-site configuration,
choose a ``ceph-radosgw`` instance to configure the master zonegroup and
master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zonegroups and zones. The
realm enforces a globally unique namespace within itself.

#. Create a new realm for the multi-site configuration by opening a command
   line interface on a host that will serve in the master zonegroup and zone.
   Then run the following command:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm={realm-name} [--default]

   For example:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm=movies --default

   .. note:: If you intend the cluster to have a single realm, specify the
      ``--default`` flag.

      If ``--default`` is specified, ``radosgw-admin`` uses this realm by
      default.

      If ``--default`` is not specified, you must specify either the
      ``--rgw-realm`` flag or the ``--realm-id`` flag to identify the realm
      when adding zonegroups and zones.

#. After the realm has been created, ``radosgw-admin`` echoes back the realm
   configuration. For example:

   ::

      {
          "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
          "name": "movies",
          "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
          "epoch": 1
      }

   .. note:: Ceph generates a unique ID for the realm, which can be used to
      rename the realm if the need arises.
Create a Master Zonegroup
-------------------------

A realm must have at least one zonegroup, which will serve as the master
zonegroup for the realm.

#. To create a new master zonegroup for the multi-site configuration, open a
   command-line interface on a host in the master zonegroup and zone. Then
   run the following command:

   .. prompt:: bash #

      radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

   For example:

   .. prompt:: bash #

      radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

   .. note:: If the realm will have only a single zonegroup, specify the
      ``--default`` flag.

      If ``--default`` is specified, ``radosgw-admin`` uses this zonegroup by
      default when adding new zones.

      If ``--default`` is not specified, you must use either the
      ``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
      zonegroup when adding or modifying zones.

#. After creating the master zonegroup, ``radosgw-admin`` echoes back the
   zonegroup configuration. For example:

   ::

      {
          "id": "f1a233f5-c354-4107-b36c-df66126475a6",
          "name": "us",
          "endpoints": [
              "http://rgw1:80"
          ],
          "hostnames": [],
          "hostnames_s3website": [],
          "master_zone": "",
          "zones": [],
          "placement_targets": [],
          "default_placement": "",
          "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
      }
Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a new master zone for the multi-site configuration by opening a command
line interface on a host that serves in the master zonegroup and zone. Then
run the following command:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                             --rgw-zone={zone-name} \
                             --master --default \
                             --endpoints={http://fqdn}[,{http://fqdn}]

For example:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                             --master --default \
                             --endpoints={http://fqdn}[,{http://fqdn}]

.. note:: The ``--access-key`` and ``--secret`` aren't specified here. These
   settings will be added to the zone once the system user has been created in
   a later step.

.. important:: The following steps assume a multi-site configuration that uses
   newly installed systems that aren't storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using the zone to store
   data, or the data will be deleted and unrecoverable.
Delete Default Zonegroup and Zone
---------------------------------

#. Delete the ``default`` zone if it exists. Remove it from the default
   zonegroup first:

   .. prompt:: bash #

      radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
      radosgw-admin period update --commit
      radosgw-admin zone delete --rgw-zone=default
      radosgw-admin period update --commit
      radosgw-admin zonegroup delete --rgw-zonegroup=default
      radosgw-admin period update --commit

#. Delete the ``default`` pools in your Ceph storage cluster if they exist.

   .. important:: The following step assumes a multi-site configuration that
      uses newly installed systems that aren't currently storing data. DO NOT
      DELETE the ``default`` zonegroup if you are already using it to store
      data.

   .. prompt:: bash #

      ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
      ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
Create a System User
--------------------

#. The ``ceph-radosgw`` daemons must authenticate before pulling realm and
   period information. In the master zone, create a "system user" to
   facilitate authentication between daemons:

   .. prompt:: bash #

      radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

   For example:

   .. prompt:: bash #

      radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

#. Make a note of the ``access_key`` and ``secret_key``. The secondary zones
   require them to authenticate against the master zone.

#. Add the system user to the master zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --access-key={access-key} --secret={secret}
      radosgw-admin period update --commit

Update the Period
-----------------

After updating the master zone configuration, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.
Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry. For example:

::

   [client.rgw.{instance-name}]
   ...
   rgw_zone={zone-name}
   rgw frontends = "civetweb port=80"

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway service:

.. prompt:: bash #

   systemctl start ceph-radosgw@rgw.`hostname -s`
   systemctl enable ceph-radosgw@rgw.`hostname -s`
.. _secondary-zone-label:

Configuring Secondary Zones
===========================

Zones that are within a zonegroup replicate all data in order to ensure that
every zone has the same data. When creating a secondary zone, run the
following operations on a host identified to serve the secondary zone.

.. note:: To add a second secondary zone (that is, a second non-master zone
   within a zonegroup that already contains a secondary zone), follow :ref:`the
   same procedures that are used for adding a secondary
   zone<radosgw-multisite-secondary-zone-creating>`. Be sure to specify a
   different zone name than the name of the first secondary zone.

.. important:: Metadata operations (for example, user creation) must be
   run on a host within the master zone. Bucket operations can be received
   by the master zone or the secondary zone, but the secondary zone will
   redirect bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.
Pulling the Realm Configuration
-------------------------------

The URL path, access key, and secret of the master zone in the master
zonegroup are used to pull the realm configuration to the host. When pulling
the configuration of a non-default realm, specify the realm using the
``--rgw-realm`` or ``--realm-id`` configuration options:

.. prompt:: bash #

   radosgw-admin realm pull --url={url-to-master-zone-gateway} \
                            --access-key={access-key} --secret={secret}

.. note:: Pulling the realm configuration also retrieves the remote's current
   period configuration, and makes it the current period on this host as well.

If this realm is the only realm, run the following command to make it the
default realm:

.. prompt:: bash #

   radosgw-admin realm default --rgw-realm={realm-name}
.. _radosgw-multisite-secondary-zone-creating:

Creating a Secondary Zone
-------------------------

.. important:: When a zone is created, it must be created on a Ceph Object
   Gateway node that will be within the zone.

In order to create a secondary zone for the multi-site configuration, open a
command line interface on a host identified to serve the secondary zone.
Specify the zonegroup ID, the new zone name, and an endpoint for the zone.
**DO NOT** use the ``--master`` or ``--default`` flags. Beginning in Kraken,
all zones run in an active-active configuration by default, which means that a
gateway client may write data to any zone and the zone will replicate the data
to all other zones within the zonegroup. If you want to prevent the secondary
zone from accepting write operations, include the ``--read-only`` flag in the
command in order to create an active-passive configuration between the master
zone and the secondary zone. In any case, don't forget to provide the
``access_key`` and ``secret_key`` of the generated system user that is stored
in the master zone of the master zonegroup. Run the following command:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                             --rgw-zone={zone-name} \
                             --access-key={system-key} --secret={secret} \
                             --endpoints=http://{fqdn}:80 \
                             [--read-only]

For example:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                             --access-key={system-key} --secret={secret} \
                             --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration that uses
   newly installed systems that have not yet begun storing data. **DO NOT
   DELETE the** ``default`` **zone or its pools** if you are already using it
   to store data, or the data will be irretrievably lost.

Delete the default zone if needed:

.. prompt:: bash #

   radosgw-admin zone delete --rgw-zone=default

Finally, delete the default pools in your Ceph storage cluster if needed:

.. prompt:: bash #

   ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
   ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
Updating the Ceph Configuration File
------------------------------------

To update the Ceph configuration file on the secondary zone hosts, add the
``rgw_zone`` configuration option and the name of the secondary zone to the
instance entry. For example:

::

   [client.rgw.{instance-name}]
   ...
   rgw_zone={zone-name}
   rgw frontends = "civetweb port=80"

Update the Period
-----------------

After updating the secondary zone configuration, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Start the Gateway
-----------------

To start the gateway, start and enable the Ceph Object Gateway service by
running the following commands on the object gateway host:

.. prompt:: bash #

   systemctl start ceph-radosgw@rgw.`hostname -s`
   systemctl enable ceph-radosgw@rgw.`hostname -s`
Checking Synchronization Status
-------------------------------

After the secondary zone is up and running, you can check the synchronization
status. The synchronization process copies users and buckets that were created
in the master zone from the master zone to the secondary zone.

.. prompt:: bash #

   radosgw-admin sync status

The output reports the status of synchronization operations. For example:

::

             realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
         zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
              zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
     metadata sync syncing
                   full sync: 0/64 shards
                   metadata is caught up with master
                   incremental sync: 64/64 shards
         data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
                           syncing
                           full sync: 0/128 shards
                           incremental sync: 128/128 shards
                           data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize with the
   master zone to receive the result of the bucket operations. If the master
   zone is down, bucket operations executed on the secondary zone will fail,
   but object operations should succeed.
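Because the ``sync status`` output is plain text, it can be scanned by a
monitoring script. The following sketch is illustrative only: the sample
output is hard-coded so that the snippet is self-contained, whereas a real
deployment would capture the output of ``radosgw-admin sync status`` instead.

.. code-block:: sh

   # Hypothetical monitoring sketch: flag a zone whose sync is behind.
   # Hard-coded sample output; in production use:
   #   status=$(radosgw-admin sync status)
   status='metadata sync syncing
                 metadata is behind on 1 shards
       data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                 data is caught up with source'

   # Count the lines that report a shard backlog.
   behind=$(printf '%s\n' "$status" | grep -c 'behind on')
   if [ "$behind" -gt 0 ]; then
       result="sync is behind"
   else
       result="sync is caught up"
   fi
   echo "$result"

A script like this could feed an alerting system; the exact strings to match
should be verified against the output of your Ceph release.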
Verifying Object Integrity
--------------------------

By default, after the successful synchronization of an object there is no
subsequent verification of the object. However, you can enable verification by
setting :confval:`rgw_sync_obj_etag_verify` to ``true``. After this value is
set to true, an MD5 checksum is used to verify the integrity of the data that
was transferred from the source to the destination. This ensures the integrity
of any object that has been fetched from a remote server over HTTP (including
multisite sync). This option may decrease the performance of your RGW because
it requires more computation.
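As a sketch, on releases that support the centralized configuration database,
the option could be enabled for all RGW daemons as follows (verify the exact
scope and restart requirements for your deployment):

.. prompt:: bash #

   ceph config set client.rgw rgw_sync_obj_etag_verify true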
Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with:

.. prompt:: bash #

   radosgw-admin sync status

::

             realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
         zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
              zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
     metadata sync syncing
                   full sync: 0/64 shards
                   incremental sync: 64/64 shards
                   metadata is behind on 1 shards
                   oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
         data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                           syncing
                           full sync: 0/128 shards
                           incremental sync: 128/128 shards
                           data is caught up with source
                   source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
                           syncing
                           full sync: 0/128 shards
                           incremental sync: 128/128 shards
                           data is caught up with source

The output might differ depending on the sync status. During sync, the shards
are of two types:

- **Behind shards** are shards that require a data sync (either a full data
  sync or an incremental data sync) in order to be brought up to date.

- **Recovery shards** are shards that encountered an error during sync and
  have been marked for retry. The error occurs mostly on minor issues, such as
  acquiring a lock on a bucket. Errors of this kind typically resolve on their
  own.
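Shards marked for retry can be investigated through the sync error log. For
example:

.. prompt:: bash #

   radosgw-admin sync error list

Once the underlying issue has been resolved, old entries can be cleared with
``radosgw-admin sync error trim``.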
For multi-site deployments only, you can examine the metadata log (``mdlog``),
the bucket index log (``bilog``), and the data log (``datalog``). You can list
them and also trim them. Trimming is not needed in most cases because
:confval:`rgw_sync_log_trim_interval` is set to 20 minutes by default. It
should not be necessary to trim the logs unless
:confval:`rgw_sync_log_trim_interval` has been manually set to 0.
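For example, the logs can be listed as follows (``{bucket-name}`` is a
placeholder, since ``bilog list`` operates on a single bucket):

.. prompt:: bash #

   radosgw-admin mdlog list
   radosgw-admin datalog list
   radosgw-admin bilog list --bucket={bucket-name}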
Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing the metadata master zone by
   promoting a zone to master. A zone that isn't finished syncing metadata
   from the current master zone will be unable to serve any remaining entries
   if it is promoted to master, and those metadata changes will be lost. For
   this reason, we recommend waiting for a zone's ``radosgw-admin sync
   status`` to complete the process of synchronizing the metadata before
   promoting the zone to master.

Similarly, if the current master zone is processing changes to metadata at the
same time that another zone is being promoted to master, these changes are
likely to be lost. To avoid losing these changes, we recommend shutting down
any ``radosgw`` instances on the previous master zone. After the new master
zone has been promoted, the previous master zone's new period can be fetched
with ``radosgw-admin period pull`` and the gateway(s) can be restarted.

To promote a zone to metadata master, run the following commands on that zone
(in this example, the zone is zone ``us-2`` in zonegroup ``us``):

.. prompt:: bash #

   radosgw-admin zone modify --rgw-zone=us-2 --master
   radosgw-admin zonegroup modify --rgw-zonegroup=us --master
   radosgw-admin period update --commit

This generates a new period, and the radosgw instance(s) in zone ``us-2`` send
this period to the other zones.
Failover and Disaster Recovery
==============================

Setting Up Failover to the Secondary Zone
-----------------------------------------

If the master zone fails, you can fail over to the secondary zone for
disaster recovery by following these steps:

#. Make the secondary zone the master and default zone. For example:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default

   By default, Ceph Object Gateway runs in an active-active configuration.
   However, if the cluster is configured to run in an active-passive
   configuration, the secondary zone is a read-only zone. To allow the
   secondary zone to receive write operations, remove its ``--read-only``
   status. For example:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
                                --read-only=false

#. Update the period to make the changes take effect:

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Finally, restart the Ceph Object Gateway:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`
Reverting from Failover
-----------------------

If the former master zone recovers, you can revert the failover operation by
following these steps:

#. From within the recovered zone, pull the latest realm configuration
   from the current master zone:

   .. prompt:: bash #

      radosgw-admin realm pull --url={url-to-master-zone-gateway} \
                               --access-key={access-key} --secret={secret}

#. Make the recovered zone the master and default zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --master --default

#. Update the period so that the changes take effect:

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Restart the Ceph Object Gateway in the recovered zone:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

#. If the secondary zone needs to be a read-only configuration, update the
   secondary zone:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-zone={zone-name} --read-only

#. Update the period so that the changes take effect:

   .. prompt:: bash #

      radosgw-admin period update --commit

#. Restart the Ceph Object Gateway in the secondary zone:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`
.. _rgw-multisite-migrate-from-single-site:

Migrating a Single-Site Deployment to Multi-Site
================================================

To migrate from a single-site deployment with a ``default`` zonegroup and zone
to a multi-site system, follow these steps:

1. Create a realm. Replace ``<name>`` with the realm name:

   .. prompt:: bash #

      radosgw-admin realm create --rgw-realm=<name> --default

2. Rename the default zonegroup and zone. Replace ``<name>`` with the
   zonegroup or zone name:

   .. prompt:: bash #

      radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
      radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name>

3. Configure the master zonegroup. Replace ``<name>`` with the realm name or
   zonegroup name. Replace ``<fqdn>`` with the fully qualified domain name(s)
   in the zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default

4. Configure the master zone. Replace ``<name>`` with the realm name, zone
   name, or zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
   name(s) in the zonegroup:

   .. prompt:: bash #

      radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
                                --rgw-zone=<name> --endpoints http://<fqdn>:80 \
                                --access-key=<access-key> --secret=<secret-key> \
                                --master --default

5. Create a system user. Replace ``<user-id>`` with the username. Replace
   ``<display-name>`` with a display name. The display name is allowed to
   contain spaces:

   .. prompt:: bash #

      radosgw-admin user create --uid=<user-id> \
                                --display-name="<display-name>" \
                                --access-key=<access-key> \
                                --secret=<secret-key> --system

6. Commit the updated configuration:

   .. prompt:: bash #

      radosgw-admin period update --commit

7. Restart the Ceph Object Gateway:

   .. prompt:: bash #

      systemctl restart ceph-radosgw@rgw.`hostname -s`

After completing this procedure, proceed to :ref:`Configuring Secondary Zones
<secondary-zone-label>` and create a secondary zone in the master zonegroup.
Multi-Site Configuration Reference
==================================

The following sections provide additional details and command-line usage for
realms, periods, zonegroups, and zones.

For more details on every available configuration option, see
``src/common/options/rgw.yaml.in``.

Alternatively, go to the :ref:`mgr-dashboard` configuration page (found under
`Cluster`), where you can view and set all of the options. While on the page,
set the level to ``advanced`` and search for RGW to see all basic and advanced
configuration options.
Realms
------

A realm is a globally unique namespace that consists of one or more zonegroups.
Zonegroups contain one or more zones. Zones contain buckets. Buckets contain
objects.

Realms make it possible for the Ceph Object Gateway to support multiple
namespaces and their configurations on the same hardware.

Each realm is associated with a "period". A period represents the state
of the zonegroup and zone configuration in time. Each time you make a
change to a zonegroup or zone, you should update and commit the period.

To ensure backward compatibility with Infernalis and earlier releases, the Ceph
Object Gateway does not by default create a realm. However, as a best practice,
we recommend that you create realms when creating new clusters.
Create a Realm
~~~~~~~~~~~~~~

To create a realm, run ``realm create`` and specify the realm name.
If the realm is the default, specify ``--default``:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=movies --default

By specifying ``--default``, the realm will be used implicitly with each
``radosgw-admin`` call unless ``--rgw-realm`` and the realm name are
explicitly provided.
Make a Realm the Default
~~~~~~~~~~~~~~~~~~~~~~~~

One realm in the list of realms should be the default realm. There may be only
one default realm. If there is only one realm and it wasn't specified as the
default realm when it was created, make it the default realm. Alternatively, to
change which realm is the default, run the following command:

.. prompt:: bash #

   radosgw-admin realm default --rgw-realm=movies

.. note:: When the realm is the default, the command line assumes
   ``--rgw-realm=<realm-name>`` as an argument.
Delete a Realm
~~~~~~~~~~~~~~

To delete a realm, run ``realm rm`` and specify the realm name:

.. prompt:: bash #

   radosgw-admin realm rm --rgw-realm={realm-name}

For example:

.. prompt:: bash #

   radosgw-admin realm rm --rgw-realm=movies
Get a Realm
~~~~~~~~~~~

To get a realm, run ``realm get`` and specify the realm name:

.. prompt:: bash #

   radosgw-admin realm get --rgw-realm=<name>

For example:

.. prompt:: bash #

   radosgw-admin realm get --rgw-realm=movies [> filename.json]

The realm configuration is returned in JSON format:

::

   {
       "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
       "name": "movies",
       "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
       "epoch": 1
   }
Set a Realm
~~~~~~~~~~~

To set a realm, run ``realm set``, specify the realm name, and use the
``--infile=`` option (make sure that the ``--infile`` option has an input file
name as an argument):

.. prompt:: bash #

   radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>

For example:

.. prompt:: bash #

   radosgw-admin realm set --rgw-realm=movies --infile=filename.json
List Realms
~~~~~~~~~~~

To list realms, run ``realm list``:

.. prompt:: bash #

   radosgw-admin realm list

List Periods
~~~~~~~~~~~~

To list realm periods, run ``realm list-periods``:

.. prompt:: bash #

   radosgw-admin realm list-periods
Pull a Realm
~~~~~~~~~~~~

To pull a realm from the node that contains both the master zonegroup and
master zone to a node that contains a secondary zonegroup or zone, run ``realm
pull`` on the node that will receive the realm configuration:

.. prompt:: bash #

   radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
Rename a Realm
~~~~~~~~~~~~~~

A realm is not part of the period. Consequently, any renaming of the realm is
applied only locally, and will therefore not get pulled when you run ``realm
pull``. If you are renaming a realm that contains multiple zones, run the
``rename`` command on each zone.

To rename a realm, run the following:

.. prompt:: bash #

   radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>

.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. Doing so
   changes the internal name only. If you use ``realm set`` to change the
   ``name`` parameter, then ``--rgw-realm`` still expects the realm's old
   name.
Zonegroups
----------

Zonegroups make it possible for the Ceph Object Gateway to support multi-site
deployments and a global namespace. Zonegroups were formerly called "regions"
(in releases prior to and including Infernalis).

A zonegroup defines the geographic location of one or more Ceph Object Gateway
instances within one or more zones.

The configuration of zonegroups differs from typical configuration procedures,
because not all of the zonegroup configuration settings are stored to a
configuration file.

You can list zonegroups, get a zonegroup configuration, and set a zonegroup
configuration.

Create a Zonegroup
~~~~~~~~~~~~~~~~~~

Creating a zonegroup consists of specifying the zonegroup name. Newly created
zones reside in the default realm unless a different realm is specified by
using the option ``--rgw-realm=<realm-name>``.

If the zonegroup is the default zonegroup, specify the ``--default`` flag. If
the zonegroup is the master zonegroup, specify the ``--master`` flag. For
example:

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>] [--master] [--default]

.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
   an existing zonegroup's settings.
Making a Zonegroup the Default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One zonegroup in the list of zonegroups must be the default zonegroup. There
can be only one default zonegroup. If there is only one zonegroup and it was
not designated the default zonegroup when it was created, use the following
command to make it the default zonegroup. Commands of this form can be used to
change which zonegroup is the default.

#. Designate a zonegroup as the default zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup default --rgw-zonegroup=comedy

   .. note:: When the zonegroup is the default, the command line assumes that
      the name of the zonegroup will be the argument of the
      ``--rgw-zonegroup=<zonegroup-name>`` option. (In this example,
      ``<zonegroup-name>`` has been retained for the sake of consistency and
      legibility.)

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit
Adding a Zone to a Zonegroup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This procedure explains how to add a zone to a zonegroup.

#. Run the following command to add a zone to a zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit

Removing a Zone from a Zonegroup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Run this command to remove a zone from a zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit

Renaming a Zonegroup
~~~~~~~~~~~~~~~~~~~~

#. Run this command to rename the zonegroup:

   .. prompt:: bash #

      radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit

Deleting a Zonegroup
~~~~~~~~~~~~~~~~~~~~

#. To delete a zonegroup, run the following command:

   .. prompt:: bash #

      radosgw-admin zonegroup delete --rgw-zonegroup=<name>

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit
Listing Zonegroups
~~~~~~~~~~~~~~~~~~

A Ceph cluster contains a list of zonegroups. To list the zonegroups, run
this command:

.. prompt:: bash #

   radosgw-admin zonegroup list

``radosgw-admin`` returns a JSON-formatted list of zonegroups:

::

    {
        "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "zonegroups": [
            "us"
        ]
    }
Getting a Zonegroup Map
~~~~~~~~~~~~~~~~~~~~~~~

To list the details of each zonegroup, run this command:

.. prompt:: bash #

   radosgw-admin zonegroup-map get

.. note:: If you receive a ``failed to read zonegroup map`` error, run
   ``radosgw-admin zonegroup-map update`` as ``root`` first.
Getting a Zonegroup
~~~~~~~~~~~~~~~~~~~

To view the configuration of a zonegroup, run this command:

.. prompt:: bash #

   radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]

The zonegroup configuration looks like this:

::

    {
        "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3website": [],
        "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
        "zones": [
            {
                "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                "name": "us-east",
                "endpoints": [
                    "http:\/\/rgw1"
                ],
                "log_meta": "true",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            },
            {
                "id": "d1024e59-7d28-49d1-8222-af101965a939",
                "name": "us-west",
                "endpoints": [
                    "http:\/\/rgw2:80"
                ],
                "log_meta": "false",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": []
            }
        ],
        "default_placement": "default-placement",
        "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
    }
Setting a Zonegroup
~~~~~~~~~~~~~~~~~~~

The process of defining a zonegroup consists of creating a JSON object and
specifying its settings. Here is a list of the settings:

1. ``name``: The name of the zonegroup. Required.

2. ``api_name``: The API name for the zonegroup. Optional.

3. ``is_master``: Determines whether the zonegroup is the master zonegroup.
   Required. **note:** You can only have one master zonegroup.

4. ``endpoints``: A list of all the endpoints in the zonegroup. For example,
   you may use multiple domain names to refer to the same zonegroup. Remember
   to escape the forward slashes (``\/``). You may also specify a port
   (``fqdn:port``) for each endpoint. Optional.

5. ``hostnames``: A list of all the hostnames in the zonegroup. For example,
   you may use multiple domain names to refer to the same zonegroup. Optional.
   The ``rgw dns name`` setting will be included in this list automatically.
   Restart the gateway daemon(s) after changing this setting.

6. ``master_zone``: The master zone for the zonegroup. Optional. Uses
   the default zone if not specified. **note:** You can only have one
   master zone per zonegroup.

7. ``zones``: A list of all zones within the zonegroup. Each zone has a name
   (required), a list of endpoints (optional), and a setting that determines
   whether the gateway will log metadata and data operations (false by
   default).

8. ``placement_targets``: A list of placement targets (optional). Each
   placement target contains a name (required) for the placement target
   and a list of tags (optional) so that only users with the tag can use
   the placement target (that is, the user's ``placement_tags`` field in
   the user info).

9. ``default_placement``: The default placement target for the object index
   and object data. Set to ``default-placement`` by default. It is also
   possible to set a per-user default placement in the user info for each
   user.
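The settings above can be assembled programmatically before handing the file
to ``radosgw-admin``. The following is a minimal sketch, not an exhaustive
schema; the zonegroup, zone, and endpoint names are illustrative placeholders
(``json.dump`` handles quoting, so the forward slashes are not escaped by
hand here):

```python
import json

# Minimal zonegroup configuration built from the settings listed above.
# All names and URLs are illustrative placeholders.
zonegroup = {
    "name": "us",                     # required
    "api_name": "us",                 # optional
    "is_master": "true",              # only one master zonegroup per realm
    "endpoints": ["http://rgw1:80"],  # a port may be given as fqdn:port
    "hostnames": [],
    "master_zone": "us-east",         # only one master zone per zonegroup
    "zones": [
        {"name": "us-east", "endpoints": ["http://rgw1:80"],
         "log_meta": "true", "log_data": "true"},
        {"name": "us-west", "endpoints": ["http://rgw2:80"],
         "log_meta": "false", "log_data": "true"},
    ],
    "placement_targets": [{"name": "default-placement", "tags": []}],
    "default_placement": "default-placement",
}

# Save to a file suitable for `radosgw-admin zonegroup set --infile`.
with open("zonegroup.json", "w") as f:
    json.dump(zonegroup, f, indent=4)
```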
Setting a Zonegroup - Procedure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. To set a zonegroup, create a JSON object that contains the required
   fields, save the object to a file (for example, ``zonegroup.json``), and
   run the following command:

   .. prompt:: bash #

      radosgw-admin zonegroup set --infile zonegroup.json

   Here ``zonegroup.json`` is the JSON file you created.

   .. important:: The ``default`` zonegroup ``is_master`` setting is ``true``
      by default. If you create an additional zonegroup and want to make it
      the master zonegroup, you must either set the ``default`` zonegroup's
      ``is_master`` setting to ``false`` or delete the ``default`` zonegroup.

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit
Setting a Zonegroup Map
~~~~~~~~~~~~~~~~~~~~~~~

Setting a zonegroup map comprises (1) creating a JSON object that consists of
one or more zonegroups, and (2) setting the ``master_zonegroup`` for the
cluster. Each zonegroup in the zonegroup map consists of a key/value pair,
where the ``key`` setting is equivalent to the ``name`` setting for an
individual zonegroup configuration and the ``val`` is a JSON object
consisting of an individual zonegroup configuration.

You may only have one zonegroup with ``is_master`` equal to ``true``, and it
must be specified as the ``master_zonegroup`` at the end of the zonegroup
map. The following JSON object is an example of a default zonegroup map:

::

    {
        "zonegroups": [
            {
                "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                "val": {
                    "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                    "name": "us",
                    "api_name": "us",
                    "is_master": "true",
                    "endpoints": [
                        "http:\/\/rgw1:80"
                    ],
                    "hostnames": [],
                    "hostnames_s3website": [],
                    "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
                    "zones": [
                        {
                            "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                            "name": "us-east",
                            "endpoints": [
                                "http:\/\/rgw1"
                            ],
                            "log_meta": "true",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        },
                        {
                            "id": "d1024e59-7d28-49d1-8222-af101965a939",
                            "name": "us-west",
                            "endpoints": [
                                "http:\/\/rgw2:80"
                            ],
                            "log_meta": "false",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        }
                    ],
                    "placement_targets": [
                        {
                            "name": "default-placement",
                            "tags": []
                        }
                    ],
                    "default_placement": "default-placement",
                    "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
                }
            }
        ],
        "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    }
#. To set a zonegroup map, run the following command:

   .. prompt:: bash #

      radosgw-admin zonegroup-map set --infile zonegroupmap.json

   In this command, ``zonegroupmap.json`` is the JSON file you created.
   Ensure that zones have been created for all of the zones specified in the
   zonegroup map.

#. Update the period:

   .. prompt:: bash #

      radosgw-admin period update --commit
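The single-master constraint described above can be checked before committing
the period. The following is a sketch under stated assumptions: the
``validate_zonegroup_map`` helper and the zonegroup IDs are illustrative and
not part of ``radosgw-admin``:

```python
def validate_zonegroup_map(zgmap):
    """Check the constraints described above: exactly one zonegroup may
    have is_master == "true", and its id must match master_zonegroup."""
    masters = [zg["val"] for zg in zgmap["zonegroups"]
               if zg["val"].get("is_master") == "true"]
    if len(masters) != 1:
        raise ValueError("exactly one zonegroup must have is_master=true")
    if masters[0]["id"] != zgmap.get("master_zonegroup"):
        raise ValueError("master_zonegroup must reference the master zonegroup")
    return True

# Example map with one master zonegroup (IDs are illustrative).
zgmap = {
    "zonegroups": [
        {"key": "us", "val": {"id": "zg-1", "name": "us", "is_master": "true"}},
        {"key": "eu", "val": {"id": "zg-2", "name": "eu", "is_master": "false"}},
    ],
    "master_zonegroup": "zg-1",
}
validate_zonegroup_map(zgmap)  # passes
```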
Zones
-----

Ceph Object Gateway supports the notion of zones. A zone defines a logical
group that consists of one or more Ceph Object Gateway instances.

The procedure for configuring zones differs from typical configuration
procedures, because not all of the settings end up in a Ceph configuration
file.

Zones can be listed. You can "get" a zone configuration and "set" a zone
configuration.

Creating a Zone
~~~~~~~~~~~~~~~

To create a zone, specify a zone name. If you are creating a master zone,
specify the ``--master`` flag. Only one zone in a zonegroup may be a master
zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup`` option
with the zonegroup name:

.. prompt:: bash #

   radosgw-admin zone create --rgw-zone=<name> \
                             [--rgw-zonegroup=<zonegroup-name>] \
                             [--endpoints=<endpoint>[,<endpoint>]] \
                             [--master] [--default] \
                             --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY

After you have created the zone, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit
Deleting a Zone
~~~~~~~~~~~~~~~

To delete a zone, first remove it from the zonegroup:

.. prompt:: bash #

   radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit

Next, delete the zone:

.. prompt:: bash #

   radosgw-admin zone delete --rgw-zone=<name>

Finally, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit

.. important:: Do not delete a zone without removing it from a zonegroup
   first. Otherwise, updating the period will fail.

If the pools for the deleted zone will not be used anywhere else, consider
deleting the pools. Replace ``<del-zone>`` in the example below with the
deleted zone's name.

.. important:: Only delete the pools with prepended zone names. Deleting the
   root pool (for example, ``.rgw.root``) will remove all of the system's
   configuration.

.. important:: When the pools are deleted, all of the data within them is
   deleted in an unrecoverable manner. Delete the pools only if the pools'
   contents are no longer needed.

.. prompt:: bash #

   ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.meta <del-zone>.rgw.meta --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.otp <del-zone>.rgw.otp --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.buckets.index <del-zone>.rgw.buckets.index --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.buckets.non-ec <del-zone>.rgw.buckets.non-ec --yes-i-really-really-mean-it
   ceph osd pool rm <del-zone>.rgw.buckets.data <del-zone>.rgw.buckets.data --yes-i-really-really-mean-it
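When deleting the pools of several zones, it can help to expand the
``<del-zone>`` placeholder into the full command list for review before
running anything. The following is a small sketch; the ``pool_rm_commands``
helper is illustrative, and the pool suffixes mirror the example above
(adjust them if your zone uses non-default pool names):

```python
# Pool-name suffixes from the example above (default layout).
POOL_SUFFIXES = [
    "rgw.control", "rgw.meta", "rgw.log", "rgw.otp",
    "rgw.buckets.index", "rgw.buckets.non-ec", "rgw.buckets.data",
]

def pool_rm_commands(zone):
    """Expand <del-zone> into the `ceph osd pool rm` commands shown above."""
    return [
        f"ceph osd pool rm {zone}.{s} {zone}.{s} --yes-i-really-really-mean-it"
        for s in POOL_SUFFIXES
    ]

# Print the commands for review instead of executing them.
for cmd in pool_rm_commands("us-west"):
    print(cmd)
```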
Modifying a Zone
~~~~~~~~~~~~~~~~

To modify a zone, specify the zone name and the parameters you wish to
modify:

.. prompt:: bash #

   radosgw-admin zone modify [options]

where the options include:

- ``--access-key=<key>``
- ``--secret/--secret-key=<key>``
- ``--master``
- ``--default``
- ``--endpoints=<list>``

Then, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit
Listing Zones
~~~~~~~~~~~~~

As ``root``, to list the zones in a cluster, run the following command:

.. prompt:: bash #

   radosgw-admin zone list
Getting a Zone
~~~~~~~~~~~~~~

As ``root``, to get the configuration of a zone, run the following command:

.. prompt:: bash #

   radosgw-admin zone get [--rgw-zone=<zone>]

The ``default`` zone looks like this:

::

    { "domain_root": ".rgw",
      "control_pool": ".rgw.control",
      "gc_pool": ".rgw.gc",
      "log_pool": ".log",
      "intent_log_pool": ".intent-log",
      "usage_log_pool": ".usage",
      "user_keys_pool": ".users",
      "user_email_pool": ".users.email",
      "user_swift_pool": ".users.swift",
      "user_uid_pool": ".users.uid",
      "system_key": { "access_key": "", "secret_key": ""},
      "placement_pools": [
          { "key": "default-placement",
            "val": { "index_pool": ".rgw.buckets.index",
                     "data_pool": ".rgw.buckets"}
          }
      ]
    }
Setting a Zone
~~~~~~~~~~~~~~

Configuring a zone involves specifying a series of Ceph Object Gateway pools.
For consistency, we recommend using a pool prefix that is the same as the
zone name. See
`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
for details on configuring pools.

To set a zone, create a JSON object consisting of the pools, save the object
to a file (for example, ``zone.json``), and then run the following command,
replacing ``{zone-name}`` with the name of the zone:

.. prompt:: bash #

   radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json

Here ``zone.json`` is the JSON file you created.

Then, as ``root``, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit
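Following the pool-prefix recommendation above, a ``zone.json`` can be
generated from the zone name. This is a sketch, not an exhaustive schema: the
``make_zone_config`` helper is illustrative, and its field set simply mirrors
the ``default`` zone example shown earlier with the zone name as the pool
prefix:

```python
import json

def make_zone_config(zone):
    """Build a zone configuration whose pool names share the zone-name
    prefix, as recommended above. The fields mirror the `default` zone
    example; adjust them for your deployment."""
    return {
        "domain_root": f"{zone}.rgw",
        "control_pool": f"{zone}.rgw.control",
        "gc_pool": f"{zone}.rgw.gc",
        "log_pool": f"{zone}.log",
        "intent_log_pool": f"{zone}.intent-log",
        "usage_log_pool": f"{zone}.usage",
        "user_keys_pool": f"{zone}.users",
        "user_email_pool": f"{zone}.users.email",
        "user_swift_pool": f"{zone}.users.swift",
        "user_uid_pool": f"{zone}.users.uid",
        "system_key": {"access_key": "", "secret_key": ""},
        "placement_pools": [
            {"key": "default-placement",
             "val": {"index_pool": f"{zone}.rgw.buckets.index",
                     "data_pool": f"{zone}.rgw.buckets"}}
        ],
    }

# Write a file suitable for `radosgw-admin zone set --infile zone.json`.
with open("zone.json", "w") as f:
    json.dump(make_zone_config("us-west"), f, indent=2)
```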
Renaming a Zone
~~~~~~~~~~~~~~~

To rename a zone, specify the zone name and the new zone name:

.. prompt:: bash #

   radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>

Then, update the period:

.. prompt:: bash #

   radosgw-admin period update --commit
Zonegroup and Zone Settings
---------------------------

When configuring a default zonegroup and zone, the pool name includes the
zone name. For example:

- ``default.rgw.control``

To change the defaults, include the following settings in your Ceph
configuration file under each ``[client.radosgw.{instance-name}]`` instance:
+-------------------------------------+-----------------------------------+---------+-----------------------+
| Name                                | Description                       | Type    | Default               |
+=====================================+===================================+=========+=======================+
| ``rgw_zone``                        | The name of the zone for the      | String  | None                  |
|                                     | gateway instance.                 |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup``                   | The name of the zonegroup for     | String  | None                  |
|                                     | the gateway instance.             |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup_root_pool``         | The root pool for the zonegroup.  | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zone_root_pool``              | The root pool for the zone.       | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_default_zone_group_info_oid`` | The OID for storing the default   | String  | ``default.zonegroup`` |
|                                     | zonegroup. We do not recommend    |         |                       |
|                                     | changing this setting.            |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
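For example, a gateway instance section that overrides the zone and zonegroup
names might look like the following. This is an illustrative fragment: the
instance name ``rgw1`` and the zone/zonegroup names are placeholders, not
required values:

```ini
; Hypothetical instance section in the Ceph configuration file.
[client.radosgw.rgw1]
rgw_zone = us-east
rgw_zonegroup = us
```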
Zone Features
-------------

Some multisite features require support from all zones before they can be
enabled. Each zone lists its ``supported_features``, and each zonegroup lists
its ``enabled_features``. Before a feature can be enabled in the zonegroup,
it must be supported by all of its zones.

On creation of new zones and zonegroups, all known features are
supported/enabled. After upgrading an existing multisite configuration,
however, new features must be enabled manually.

Supported Features
~~~~~~~~~~~~~~~~~~

+---------------------------+---------+
| Feature                   | Release |
+===========================+=========+
| :ref:`feature_resharding` | Reef    |
+---------------------------+---------+
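The rule that a zonegroup feature must be supported by every zone amounts to
a set intersection over the zones' ``supported_features`` lists. The
following is a sketch of that rule; the ``enableable_features`` helper and
the zone/feature data are illustrative, not ``radosgw-admin`` output:

```python
def enableable_features(zones):
    """Return the features that may be enabled in the zonegroup: per the
    rule above, only features supported by every zone qualify.
    `zones` maps zone name -> set of supported feature names."""
    supported = [set(features) for features in zones.values()]
    return set.intersection(*supported) if supported else set()

# Illustrative zone data: only "resharding" is supported everywhere.
zones = {
    "us-east": {"resharding", "compress-encrypted"},
    "us-west": {"resharding"},
}
print(sorted(enableable_features(zones)))  # ['resharding']
```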
.. _feature_resharding:

Resharding
~~~~~~~~~~

This feature allows buckets to be resharded in a multisite configuration
without interrupting the replication of their objects. When
``rgw_dynamic_resharding`` is enabled, it runs on each zone independently,
and zones may choose different shard counts for the same bucket. When buckets
are resharded manually with ``radosgw-admin bucket reshard``, only that
zone's bucket is modified. A zone feature should only be marked as supported
after all of its RGWs and OSDs have upgraded.

.. note:: Dynamic resharding is not supported in multisite deployments prior
   to the Reef release.
Commands
--------

Add support for a zone feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On the cluster that contains the given zone:

.. prompt:: bash #

   radosgw-admin zone modify --rgw-zone={zone-name} --enable-feature={feature-name}
   radosgw-admin period update --commit

Remove support for a zone feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On the cluster that contains the given zone:

.. prompt:: bash #

   radosgw-admin zone modify --rgw-zone={zone-name} --disable-feature={feature-name}
   radosgw-admin period update --commit

Enable a zonegroup feature
~~~~~~~~~~~~~~~~~~~~~~~~~~

On any cluster in the realm:

.. prompt:: bash #

   radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --enable-feature={feature-name}
   radosgw-admin period update --commit

Disable a zonegroup feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~

On any cluster in the realm:

.. prompt:: bash #

   radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --disable-feature={feature-name}
   radosgw-admin period update --commit
.. _`Pools`: ../pools
.. _`Sync Policy Config`: ../multisite-sync-policy