.. _multisite:

==========
Multi-Site
==========

.. versionadded:: Jewel

A single zone configuration typically consists of one zone group containing one
zone and one or more `ceph-radosgw` instances, with gateway client requests
load-balanced between the instances. In a single zone configuration, multiple
gateway instances typically point to a single Ceph storage cluster. However, Kraken
supports several multi-site configuration options for the Ceph Object Gateway:

- **Multi-zone:** A more advanced configuration consists of one zone group and
  multiple zones, each zone with one or more `ceph-radosgw` instances. Each zone
  is backed by its own Ceph Storage Cluster. Multiple zones in a zone group
  provide disaster recovery for the zone group should one of the zones experience
  a significant failure. In Kraken, each zone is active and may receive write
  operations. In addition to disaster recovery, multiple active zones may also
  serve as a foundation for content delivery networks.

- **Multi-zone-group:** Formerly called 'regions', the Ceph Object Gateway can also
  support multiple zone groups, each zone group with one or more zones. Objects
  stored to zones in one zone group share a global object namespace with zones
  in any other zone group within the same realm, ensuring unique object IDs
  across zone groups and zones.

- **Multiple Realms:** In Kraken, the Ceph Object Gateway supports the notion
  of realms. A realm can contain a single zone group or multiple zone groups,
  and provides a globally unique namespace for its contents. Multiple realms
  make it possible to support numerous configurations and namespaces.

Replicating object data between zones within a zone group looks something
like this:

.. image:: ../images/zone-sync2.png
   :align: center

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.

Functional Changes from Infernalis
==================================

In Kraken, you can configure each Ceph Object Gateway to
work in an active-active zone configuration, allowing for writes to
non-master zones.

The multi-site configuration is stored within a container called a
"realm." The realm stores zone groups, zones, and a time "period" with
multiple epochs for tracking changes to the configuration. In Kraken,
the ``ceph-radosgw`` daemons handle the synchronization,
eliminating the need for a separate synchronization agent. Additionally,
the new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of
"active-passive".

Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters,
preferably each with a distinct cluster name, and at least two Ceph Object
Gateway instances, one for each Ceph storage cluster.

This guide assumes at least two Ceph storage clusters are in geographically
separate locations; however, the configuration can work on the same
site. This guide also assumes two Ceph object gateway servers named
``rgw1`` and ``rgw2``.

.. important:: Running a single Ceph storage cluster is NOT recommended unless
   you have low latency WAN connections.

A multi-site configuration requires a master zone group and a master
zone. Additionally, each zone group requires a master zone. Zone groups
may have one or more secondary or non-master zones.

In this guide, the ``rgw1`` host will serve as the master zone of the
master zone group, and the ``rgw2`` host will serve as the secondary zone
of the master zone group.

See `Pools`_ for instructions on creating and tuning pools for Ceph
Object Storage.

See `Sync Policy Config`_ for instructions on defining fine-grained bucket sync
policy rules.

.. _master-zone-label:

Configuring a Master Zone
=========================

All gateways in a multi-site configuration will retrieve their
configuration from a ``ceph-radosgw`` daemon on a host within the master
zone group and master zone. To configure your gateways in a multi-site
configuration, choose a ``ceph-radosgw`` instance to configure the
master zone group and master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zone groups and zones
and also serves to enforce a globally unique namespace within the realm.

Create a new realm for the multi-site configuration by opening a command
line interface on a host identified to serve in the master zone group
and zone. Then, execute the following:

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

If the cluster will have a single realm, specify the ``--default`` flag.
If ``--default`` is specified, ``radosgw-admin`` will use this realm by
default. If ``--default`` is not specified, adding zone groups and zones
requires specifying either the ``--rgw-realm`` flag or the
``--realm-id`` flag to identify the realm.

After creating the realm, ``radosgw-admin`` will echo back the realm
configuration. For example:

::

    {
        "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
        "name": "movies",
        "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
        "epoch": 1
    }

.. note:: Ceph generates a unique ID for the realm, which allows the renaming
   of a realm if the need arises.

Create a Master Zone Group
--------------------------

A realm must have at least one zone group, which will serve as the
master zone group for the realm.

Create a new master zone group for the multi-site configuration by
opening a command line interface on a host identified to serve in the
master zone group and zone. Then, execute the following:

::

    # radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

If the realm will only have a single zone group, specify the
``--default`` flag. If ``--default`` is specified, ``radosgw-admin``
will use this zone group by default when adding new zones. If
``--default`` is not specified, adding zones will require either the
``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
zone group when adding or modifying zones.

After creating the master zone group, ``radosgw-admin`` will echo back
the zone group configuration. For example:

::

    {
        "id": "f1a233f5-c354-4107-b36c-df66126475a6",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3website": [],
        "master_zone": "",
        "zones": [],
        "placement_targets": [],
        "default_placement": "",
        "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
    }

Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a new master zone for the multi-site configuration by opening a
command line interface on a host identified to serve in the master zone
group and zone. Then, execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --master --default \
                                --endpoints={http://fqdn}[,{http://fqdn}]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                                --master --default \
                                --endpoints=http://rgw1:80

.. note:: The ``--access-key`` and ``--secret`` aren't specified. These
   settings will be added to the zone once the user is created in the
   next section.

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren't storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be deleted and unrecoverable.

Delete Default Zone Group and Zone
----------------------------------

Delete the ``default`` zone if it exists. Make sure to remove it from
the default zone group first.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zone rm --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zonegroup delete --rgw-zonegroup=default
    # radosgw-admin period update --commit

Finally, delete the ``default`` pools in your Ceph storage cluster if
they exist.

.. important:: The following step assumes a multi-site configuration using newly
   installed systems that aren't currently storing data. DO NOT DELETE
   the ``default`` zone group if you are already using it to store
   data.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Create a System User
--------------------

The ``ceph-radosgw`` daemons must authenticate before pulling realm and
period information. In the master zone, create a system user to
facilitate authentication between daemons.

::

    # radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

For example:

::

    # radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

Make a note of the ``access_key`` and ``secret_key``, as the secondary
zones will require them to authenticate with the master zone.

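``radosgw-admin`` prints the new user's configuration as JSON. The relevant
portion looks like the following sketch; the key values shown here are
illustrative placeholders, not real credentials:

::

    "keys": [
        {
            "user": "synchronization-user",
            "access_key": "{access-key}",
            "secret_key": "{secret}"
        }
    ],
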
Finally, add the system user to the master zone.

::

    # radosgw-admin zone modify --rgw-zone=us-east --access-key={access-key} --secret={secret}
    # radosgw-admin period update --commit

Update the Period
-----------------

After updating the master zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw1]
    host = rgw1
    rgw frontends = "civetweb port=80"
    rgw_zone=us-east

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

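To confirm that the gateway started successfully, you can check the service
status (standard ``systemctl`` usage):

::

    # systemctl status ceph-radosgw@rgw.`hostname -s`
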
.. _secondary-zone-label:

Configure Secondary Zones
=========================

Zones within a zone group replicate all data to ensure that each zone
has the same data. When creating the secondary zone, execute all of the
following operations on a host identified to serve the secondary zone.

.. note:: To add a third zone, follow the same procedures as for adding the
   secondary zone. Use a different zone name.

.. important:: You must execute metadata operations, such as user creation, on a
   host within the master zone. The master zone and the secondary zone
   can receive bucket operations, but the secondary zone redirects
   bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.

Pull the Realm
--------------

Using the URL path, access key and secret of the master zone in the
master zone group, pull the realm configuration to the host. To pull a
non-default realm, specify the realm using the ``--rgw-realm`` or
``--realm-id`` configuration options.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

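For example, using the master zone endpoint assumed in this guide:

::

    # radosgw-admin realm pull --url=http://rgw1:80 --access-key={access-key} --secret={secret}
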
.. note:: Pulling the realm also retrieves the remote's current period
   configuration, and makes it the current period on this host as well.

If this realm is the default realm or the only realm, make the realm the
default realm.

::

    # radosgw-admin realm default --rgw-realm={realm-name}

Create a Secondary Zone
-----------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a secondary zone for the multi-site configuration by opening a
command line interface on a host identified to serve the secondary zone.
Specify the zone group ID, the new zone name and an endpoint for the
zone. **DO NOT** use the ``--master`` or ``--default`` flags. In Kraken,
all zones run in an active-active configuration by
default; that is, a gateway client may write data to any zone and the
zone will replicate the data to all other zones within the zone group.
If the secondary zone should not accept write operations, specify the
``--read-only`` flag to create an active-passive configuration between
the master zone and the secondary zone. Additionally, provide the
``access_key`` and ``secret_key`` of the generated system user stored in
the master zone of the master zone group. Execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://{fqdn}:80 \
                                [--read-only]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren't storing data. **DO NOT DELETE** the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be lost and unrecoverable.

Delete the default zone if needed.

::

    # radosgw-admin zone rm --rgw-zone=default

Finally, delete the default pools in your Ceph storage cluster if
needed.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on the secondary zone hosts by adding
the ``rgw_zone`` configuration option and the name of the secondary zone
to the instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw2]
    host = rgw2
    rgw frontends = "civetweb port=80"
    rgw_zone=us-west

Update the Period
-----------------

After updating the master zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

Check Synchronization Status
----------------------------

Once the secondary zone is up and running, check the synchronization
status. Synchronization copies users and buckets created in the master
zone to the secondary zone.

::

    # radosgw-admin sync status

The output will provide the status of synchronization operations. For
example:

::

      realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
          zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
               zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
      metadata sync syncing
                    full sync: 0/64 shards
                    metadata is caught up with master
                    incremental sync: 64/64 shards
          data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
                            syncing
                            full sync: 0/128 shards
                            incremental sync: 128/128 shards
                            data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize
   with the master zone to receive the result of the bucket operations.
   If the master zone is down, bucket operations executed on the
   secondary zone will fail, but object operations should succeed.

Verification of an Object
-------------------------

By default, objects are not verified again after they have been synchronized
successfully. To enable verification, set :confval:`rgw_sync_obj_etag_verify`
to ``true``. Once the option is enabled, an additional MD5 checksum is
computed on both the source and the destination for every object synchronized
from then on, and the two checksums are compared. This ensures the integrity
of objects fetched from a remote server over HTTP, including multisite sync.
Note that this option can decrease the performance of your RGW, as more
computation is needed.

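For example, one way to enable the option for all RGW daemons is through the
centralized configuration database (a sketch; adjust the ``client.rgw``
target to match your deployment):

::

    # ceph config set client.rgw rgw_sync_obj_etag_verify true
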

Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with::

  $ radosgw-admin sync status
          realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
      zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
           zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is behind on 1 shards
                oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
      data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
                source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

The output can differ depending on the sync status. The shards are described
as two different types during sync:

- **Behind shards** are shards that need a full data sync and shards needing
  an incremental data sync because they are not up-to-date.

- **Recovery shards** are shards that encountered an error during sync and are
  marked for retry. The errors are mostly minor, such as failing to acquire a
  lock on a bucket, and typically resolve themselves.

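If shards stay in recovery, the errors that were recorded for retry can be
inspected with the sync error log::

  $ radosgw-admin sync error list
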
Check the logs
--------------

For multi-site deployments only, you can inspect the metadata log (``mdlog``),
the bucket index log (``bilog``) and the data log (``datalog``).
You can list them and also trim them. Trimming is not needed in most cases,
because :confval:`rgw_sync_log_trim_interval` is set to 20 minutes by default.
Unless it has been manually set to 0, you should not have to trim the logs at
any time; manual trimming can otherwise cause side effects.

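For example, to list entries from each of these logs (a sketch; ``bilog
list`` operates on a single bucket)::

  $ radosgw-admin mdlog list
  $ radosgw-admin datalog list
  $ radosgw-admin bilog list --bucket={bucket-name}
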
Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing which zone is the metadata
   master. If a zone has not finished syncing metadata from the current master
   zone, it will be unable to serve any remaining entries when promoted to
   master and those changes will be lost. For this reason, waiting for a
   zone's ``radosgw-admin sync status`` to catch up on metadata sync before
   promoting it to master is recommended.

   Similarly, if changes to metadata are being processed by the current master
   zone while another zone is being promoted to master, those changes are
   likely to be lost. To avoid this, shutting down any ``radosgw`` instances
   on the previous master zone is recommended. After promoting another zone,
   its new period can be fetched with ``radosgw-admin period pull`` and the
   gateway(s) can be restarted.

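A sketch of that fetch-and-restart sequence, run on a host in the demoted
zone using the system user's credentials::

  $ radosgw-admin period pull --url={url-to-new-master-zone-gateway} --access-key={access-key} --secret={secret}
  $ systemctl restart ceph-radosgw@rgw.`hostname -s`
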
To promote a zone (for example, zone ``us-2`` in zonegroup ``us``) to metadata
master, run the following commands on that zone::

  $ radosgw-admin zone modify --rgw-zone=us-2 --master
  $ radosgw-admin zonegroup modify --rgw-zonegroup=us --master
  $ radosgw-admin period update --commit

This will generate a new period, and the radosgw instance(s) in zone ``us-2``
will send this period to other zones.

Failover and Disaster Recovery
==============================

If the master zone should fail, fail over to the secondary zone for
disaster recovery.

1. Make the secondary zone the master and default zone. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

   By default, Ceph Object Gateway will run in an active-active
   configuration. If the cluster was configured to run in an
   active-passive configuration, the secondary zone is a read-only zone.
   Remove the ``--read-only`` status to allow the zone to receive write
   operations. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
                                   --read-only=false

2. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

3. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

If the former master zone recovers, revert the operation.

1. From the recovered zone, pull the latest realm configuration
   from the current master zone.

   ::

       # radosgw-admin realm pull --url={url-to-master-zone-gateway} \
                                  --access-key={access-key} --secret={secret}

2. Make the recovered zone the master and default zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

3. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

4. Then, restart the Ceph Object Gateway in the recovered zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

5. If the secondary zone needs to be a read-only configuration, update
   the secondary zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --read-only

6. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway in the secondary zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

.. _rgw-multisite-migrate-from-single-site:

Migrating a Single Site System to Multi-Site
============================================

To migrate from a single site system with a ``default`` zone group and
zone to a multi-site system, use the following steps:

1. Create a realm. Replace ``<name>`` with the realm name.

   ::

       # radosgw-admin realm create --rgw-realm=<name> --default

2. Rename the default zone and zonegroup. Replace ``<name>`` with the
   zonegroup or zone name.

   ::

       # radosgw-admin zonegroup rename --rgw-zonegroup=default --zonegroup-new-name=<name>
       # radosgw-admin zone rename --rgw-zone=default --zone-new-name=<name> --rgw-zonegroup=<name>

3. Configure the master zonegroup. Replace ``<name>`` with the realm or
   zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
   name(s) in the zonegroup.

   ::

       # radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default

4. Configure the master zone. Replace ``<name>`` with the realm,
   zonegroup or zone name. Replace ``<fqdn>`` with the fully qualified
   domain name(s) in the zonegroup.

   ::

       # radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
                                   --rgw-zone=<name> --endpoints http://<fqdn>:80 \
                                   --access-key=<access-key> --secret=<secret-key> \
                                   --master --default

5. Create a system user. Replace ``<user-id>`` with the username.
   Replace ``<display-name>`` with a display name. It may contain
   spaces.

   ::

       # radosgw-admin user create --uid=<user-id> --display-name="<display-name>" \
                                   --access-key=<access-key> --secret=<secret-key> --system

6. Commit the updated configuration.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

After completing this procedure, proceed to `Configure a Secondary
Zone <#configure-secondary-zones>`__ to create a secondary zone
in the master zone group.

Multi-Site Configuration Reference
==================================

The following sections provide additional details and command-line
usage for realms, periods, zone groups and zones.

For more details on every available configuration option, please check out
``src/common/options/rgw.yaml.in``, or go to the more comfortable :ref:`mgr-dashboard`
configuration page (found under `Cluster`), where you can view and set all
options easily. On the page, set the level to ``advanced`` and search for RGW
to see all basic and advanced configuration options with a short description.
Expand the details of an option to reveal a longer description.

Realms
------

A realm represents a globally unique namespace consisting of one or more
zonegroups containing one or more zones, and zones containing buckets,
which in turn contain objects. A realm enables the Ceph Object Gateway
to support multiple namespaces and their configuration on the same
hardware.

A realm contains the notion of periods. Each period represents the state
of the zone group and zone configuration in time. Each time you make a
change to a zonegroup or zone, update the period and commit it.

By default, the Ceph Object Gateway does not create a realm
for backward compatibility with Infernalis and earlier releases.
However, as a best practice, we recommend creating realms for new
clusters.

Create a Realm
~~~~~~~~~~~~~~

To create a realm, execute ``realm create`` and specify the realm name.
If the realm is the default, specify ``--default``.

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

By specifying ``--default``, the realm will be implied with each
``radosgw-admin`` call unless ``--rgw-realm`` and the realm name
are explicitly provided.

Make a Realm the Default
~~~~~~~~~~~~~~~~~~~~~~~~

One realm in the list of realms should be the default realm. There may
be only one default realm. If there is only one realm and it wasn't
specified as the default realm when it was created, make it the default
realm. Alternatively, to change which realm is the default, execute:

::

    # radosgw-admin realm default --rgw-realm=movies

.. note:: When the realm is default, the command line assumes
   ``--rgw-realm=<realm-name>`` as an argument.

Delete a Realm
~~~~~~~~~~~~~~

To delete a realm, execute ``realm rm`` and specify the realm name.

::

    # radosgw-admin realm rm --rgw-realm={realm-name}

For example:

::

    # radosgw-admin realm rm --rgw-realm=movies

Get a Realm
~~~~~~~~~~~

To get a realm, execute ``realm get`` and specify the realm name.

::

    # radosgw-admin realm get --rgw-realm=<name>

For example:

::

    # radosgw-admin realm get --rgw-realm=movies [> filename.json]

The CLI will echo a JSON object with the realm properties.

::

    {
        "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
        "name": "movies",
        "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
        "epoch": 1
    }

Use ``>`` and an output file name to output the JSON object to a file.

Set a Realm
~~~~~~~~~~~

To set a realm, execute ``realm set``, specify the realm name, and
``--infile=`` with an input file name.

::

    # radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>

For example:

::

    # radosgw-admin realm set --rgw-realm=movies --infile=filename.json

List Realms
~~~~~~~~~~~

To list realms, execute ``realm list``.

::

    # radosgw-admin realm list

List Realm Periods
~~~~~~~~~~~~~~~~~~

To list realm periods, execute ``realm list-periods``.

::

    # radosgw-admin realm list-periods

Pull a Realm
~~~~~~~~~~~~

To pull a realm from the node containing the master zone group and
master zone to a node containing a secondary zone group or zone, execute
``realm pull`` on the node that will receive the realm configuration.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

Rename a Realm
~~~~~~~~~~~~~~

A realm is not part of the period. Consequently, renaming the realm is
only applied locally, and will not get pulled with ``realm pull``. When
renaming a realm with multiple zones, run the command on each zone. To
rename a realm, execute the following:

::

    # radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>

.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. That
   changes the internal name only. Specifying ``--rgw-realm`` would
   still use the old realm name.

Zone Groups
-----------

The Ceph Object Gateway supports multi-site deployments and a global
namespace by using the notion of zone groups. Formerly called a region
in Infernalis, a zone group defines the geographic location of one or more Ceph
Object Gateway instances within one or more zones.

Configuring zone groups differs from typical configuration procedures,
because not all of the settings end up in a Ceph configuration file. You
can list zone groups, get a zone group configuration, and set a zone
group configuration.

Create a Zone Group
~~~~~~~~~~~~~~~~~~~

Creating a zone group consists of specifying the zone group name.
Creating a zone group assumes it will live in the default realm unless
``--rgw-realm=<realm-name>`` is specified. If the zonegroup is the
default zonegroup, specify the ``--default`` flag. If the zonegroup is
the master zonegroup, specify the ``--master`` flag. For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>] [--master] [--default]

.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
   an existing zone group's settings.

Make a Zone Group the Default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One zonegroup in the list of zonegroups should be the default zonegroup.
There may be only one default zonegroup. If there is only one zonegroup
and it wasn't specified as the default zonegroup when it was created,
make it the default zonegroup. Alternatively, to change which zonegroup
is the default, execute:

::

    # radosgw-admin zonegroup default --rgw-zonegroup=comedy

.. note:: When the zonegroup is default, the command line assumes
   ``--rgw-zonegroup=<zonegroup-name>`` as an argument.

Then, update the period:

::

    # radosgw-admin period update --commit

Add a Zone to a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~

To add a zone to a zonegroup, execute the following:

::

    # radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Remove a Zone from a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To remove a zone from a zonegroup, execute the following:

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone Group
~~~~~~~~~~~~~~~~~~~

To rename a zonegroup, execute the following:

::

    # radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone Group
~~~~~~~~~~~~~~~~~~~

To delete a zonegroup, execute the following:

::

    # radosgw-admin zonegroup delete --rgw-zonegroup=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

List Zone Groups
~~~~~~~~~~~~~~~~

A Ceph cluster contains a list of zone groups. To list the zone groups,
execute:

::

    # radosgw-admin zonegroup list

``radosgw-admin`` returns a JSON-formatted list of zone groups.

::

    {
        "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "zonegroups": [
            "us"
        ]
    }

Get a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

To list the details of each zone group, execute:

::

    # radosgw-admin zonegroup-map get

.. note:: If you receive a ``failed to read zonegroup map`` error, run
   ``radosgw-admin zonegroup-map update`` as ``root`` first.

Get a Zone Group
~~~~~~~~~~~~~~~~

To view the configuration of a zone group, execute:

::

    # radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]

The zone group configuration looks like this:

::

    {
        "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3website": [],
        "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
        "zones": [
            {
                "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                "name": "us-east",
                "endpoints": [
                    "http:\/\/rgw1"
                ],
                "log_meta": "true",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            },
            {
                "id": "d1024e59-7d28-49d1-8222-af101965a939",
                "name": "us-west",
                "endpoints": [
                    "http:\/\/rgw2:80"
                ],
                "log_meta": "false",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": []
            }
        ],
        "default_placement": "default-placement",
        "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
    }

Set a Zone Group
~~~~~~~~~~~~~~~~

Defining a zone group consists of creating a JSON object, specifying at
least the required settings:

1. ``name``: The name of the zone group. Required.

2. ``api_name``: The API name for the zone group. Optional.

3. ``is_master``: Determines if the zone group is the master zone group.
   Required. **note:** You can only have one master zone group.

4. ``endpoints``: A list of all the endpoints in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Remember to escape the forward slashes (``\/``). You may also
   specify a port (``fqdn:port``) for each endpoint. Optional.

5. ``hostnames``: A list of all the hostnames in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Optional. The ``rgw dns name`` setting will automatically be
   included in this list. You should restart the gateway daemon(s) after
   changing this setting.

6. ``master_zone``: The master zone for the zone group. Optional. Uses
   the default zone if not specified. **note:** You can only have one
   master zone per zone group.

7. ``zones``: A list of all zones within the zone group. Each zone has a
   name (required), a list of endpoints (optional), and whether or not
   the gateway will log metadata and data operations (false by default).

8. ``placement_targets``: A list of placement targets (optional). Each
   placement target contains a name (required) for the placement target
   and a list of tags (optional) so that only users with the tag can use
   the placement target (i.e., the user's ``placement_tags`` field in
   the user info).

9. ``default_placement``: The default placement target for the object
   index and object data. Set to ``default-placement`` by default. You
   may also set a per-user default placement in the user info for each
   user.

To set a zone group, create a JSON object consisting of the required
fields, save the object to a file (e.g., ``zonegroup.json``); then,
execute the following command:

::

    # radosgw-admin zonegroup set --infile zonegroup.json

Where ``zonegroup.json`` is the JSON file you created.

.. important:: The ``default`` zone group ``is_master`` setting is ``true`` by
   default. If you create a new zone group and want to make it the
   master zone group, you must either set the ``default`` zone group
   ``is_master`` setting to ``false``, or delete the ``default`` zone
   group.

Finally, update the period:

::

    # radosgw-admin period update --commit

Set a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

Setting a zone group map consists of creating a JSON object consisting
of one or more zone groups, and setting the ``master_zonegroup`` for the
cluster. Each zone group in the zone group map consists of a key/value
pair, where the ``key`` setting is equivalent to the ``name`` setting
for an individual zone group configuration, and the ``val`` is a JSON
object consisting of an individual zone group configuration.

You may only have one zone group with ``is_master`` equal to ``true``,
and it must be specified as the ``master_zonegroup`` at the end of the
zone group map. The following JSON object is an example of a default
zone group map.

::

    {
        "zonegroups": [
            {
                "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                "val": {
                    "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                    "name": "us",
                    "api_name": "us",
                    "is_master": "true",
                    "endpoints": [
                        "http:\/\/rgw1:80"
                    ],
                    "hostnames": [],
                    "hostnames_s3website": [],
                    "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
                    "zones": [
                        {
                            "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                            "name": "us-east",
                            "endpoints": [
                                "http:\/\/rgw1"
                            ],
                            "log_meta": "true",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        },
                        {
                            "id": "d1024e59-7d28-49d1-8222-af101965a939",
                            "name": "us-west",
                            "endpoints": [
                                "http:\/\/rgw2:80"
                            ],
                            "log_meta": "false",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        }
                    ],
                    "placement_targets": [
                        {
                            "name": "default-placement",
                            "tags": []
                        }
                    ],
                    "default_placement": "default-placement",
                    "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
                }
            }
        ],
        "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    }

To set a zone group map, execute the following:

::

    # radosgw-admin zonegroup-map set --infile zonegroupmap.json

Where ``zonegroupmap.json`` is the JSON file you created. Ensure that
you have zones created for the ones specified in the zone group map.
Finally, update the period.

::

    # radosgw-admin period update --commit

Zones
-----

Ceph Object Gateway supports the notion of zones. A zone defines a
logical group consisting of one or more Ceph Object Gateway instances.

Configuring zones differs from typical configuration procedures, because
not all of the settings end up in a Ceph configuration file. You can
list zones, get a zone configuration and set a zone configuration.

Create a Zone
~~~~~~~~~~~~~

To create a zone, specify a zone name. If it is a master zone, specify
the ``--master`` option. Only one zone in a zone group may be a master
zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup``
option with the zonegroup name.

::

    # radosgw-admin zone create --rgw-zone=<name> \
                    [--rgw-zonegroup=<zonegroup-name>] \
                    [--endpoints=<endpoint>[,<endpoint>]] \
                    [--master] [--default] \
                    --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone
~~~~~~~~~~~~~

To delete a zone, first remove it from the zonegroup.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> \
                                     --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Next, delete the zone. Execute the following:

::

    # radosgw-admin zone rm --rgw-zone=<name>

Finally, update the period:

::

    # radosgw-admin period update --commit

.. important:: Do not delete a zone without removing it from a zone group first.
   Otherwise, updating the period will fail.

If the pools for the deleted zone will not be used anywhere else,
consider deleting the pools. Replace ``<del-zone>`` in the example below
with the deleted zone's name.

.. important:: Only delete the pools with prepended zone names. Deleting the root
   pool, such as ``.rgw.root``, will remove all of the system's
   configuration.

.. important:: Once the pools are deleted, all of the data within them is deleted
   in an unrecoverable manner. Only delete the pools if the pool
   contents are no longer needed.

::

    # ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.data.root <del-zone>.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.gc <del-zone>.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.users.uid <del-zone>.rgw.users.uid --yes-i-really-really-mean-it

Modify a Zone
~~~~~~~~~~~~~

To modify a zone, specify the zone name and the parameters you wish to
modify.

::

    # radosgw-admin zone modify [options]

Where ``[options]``:

- ``--access-key=<key>``
- ``--secret/--secret-key=<key>``
- ``--master``
- ``--default``
- ``--endpoints=<list>``

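For example, to update a zone's endpoints (using the placeholder conventions
from earlier sections):

::

    # radosgw-admin zone modify --rgw-zone={zone-name} --endpoints=http://{fqdn}:80
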
Then, update the period:

::

    # radosgw-admin period update --commit

List Zones
~~~~~~~~~~

As ``root``, to list the zones in a cluster, execute:

::

    # radosgw-admin zone list

Get a Zone
~~~~~~~~~~

As ``root``, to get the configuration of a zone, execute:

::

    # radosgw-admin zone get [--rgw-zone=<zone>]

The ``default`` zone looks like this:

::

    { "domain_root": ".rgw",
      "control_pool": ".rgw.control",
      "gc_pool": ".rgw.gc",
      "log_pool": ".log",
      "intent_log_pool": ".intent-log",
      "usage_log_pool": ".usage",
      "user_keys_pool": ".users",
      "user_email_pool": ".users.email",
      "user_swift_pool": ".users.swift",
      "user_uid_pool": ".users.uid",
      "system_key": { "access_key": "", "secret_key": ""},
      "placement_pools": [
          { "key": "default-placement",
            "val": { "index_pool": ".rgw.buckets.index",
                     "data_pool": ".rgw.buckets"}
          }
      ]
    }

Set a Zone
~~~~~~~~~~

Configuring a zone involves specifying a series of Ceph Object Gateway
pools. For consistency, we recommend using a pool prefix that is the
same as the zone name. See
`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
for details of configuring pools.

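A minimal sketch of such a ``zone.json``, using the same fields as the
``default`` zone shown above under *Get a Zone*, and assuming a zone named
``us-west`` with zone-name-prefixed pools (adjust the field and pool list to
match your deployment):

::

    { "domain_root": "us-west.rgw.data.root",
      "control_pool": "us-west.rgw.control",
      "gc_pool": "us-west.rgw.gc",
      "log_pool": "us-west.rgw.log",
      "user_uid_pool": "us-west.rgw.users.uid",
      "system_key": { "access_key": "{access-key}", "secret_key": "{secret}"},
      "placement_pools": [
          { "key": "default-placement",
            "val": { "index_pool": "us-west.rgw.buckets.index",
                     "data_pool": "us-west.rgw.buckets.data"}
          }
      ]
    }
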
To set a zone, create a JSON object consisting of the pools, save the
object to a file (e.g., ``zone.json``); then, execute the following
command, replacing ``{zone-name}`` with the name of the zone:

::

    # radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json

Where ``zone.json`` is the JSON file you created.

Then, as ``root``, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone
~~~~~~~~~~~~~

To rename a zone, specify the zone name and the new zone name.

::

    # radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Zone Group and Zone Settings
----------------------------

When configuring a default zone group and zone, the pool name includes
the zone name. For example:

- ``default.rgw.control``

To change the defaults, include the following settings in your Ceph
configuration file under each ``[client.radosgw.{instance-name}]``
instance.

+-------------------------------------+-----------------------------------+---------+-----------------------+
| Name                                | Description                       | Type    | Default               |
+=====================================+===================================+=========+=======================+
| ``rgw_zone``                        | The name of the zone for the      | String  | None                  |
|                                     | gateway instance.                 |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup``                   | The name of the zone group for    | String  | None                  |
|                                     | the gateway instance.             |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup_root_pool``         | The root pool for the zone group. | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zone_root_pool``              | The root pool for the zone.       | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_default_zone_group_info_oid`` | The OID for storing the default   | String  | ``default.zonegroup`` |
|                                     | zone group. We do not recommend   |         |                       |
|                                     | changing this setting.            |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+

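For example, a minimal sketch of such an entry for the ``rgw1`` instance used
in this guide (the root pool settings are shown at their defaults and
normally do not need to be changed):

::

    [client.radosgw.rgw1]
    rgw_zone = us-east
    rgw_zonegroup = us
    rgw_zone_root_pool = .rgw.root
    rgw_zonegroup_root_pool = .rgw.root
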
.. _`Pools`: ../pools
.. _`Sync Policy Config`: ../multisite-sync-policy