.. _multisite:

==========
Multi-Site
==========

.. versionadded:: Jewel

A single zone configuration typically consists of one zone group containing one
zone and one or more ``ceph-radosgw`` instances where you may load-balance gateway
client requests between the instances. In a single zone configuration, typically
multiple gateway instances point to a single Ceph storage cluster. However, Kraken
supports several multi-site configuration options for the Ceph Object Gateway:

- **Multi-zone:** A more advanced configuration consists of one zone group and
  multiple zones, each zone with one or more ``ceph-radosgw`` instances. Each zone
  is backed by its own Ceph Storage Cluster. Multiple zones in a zone group
  provide disaster recovery for the zone group should one of the zones experience
  a significant failure. In Kraken, each zone is active and may receive write
  operations. In addition to disaster recovery, multiple active zones may also
  serve as a foundation for content delivery networks.

- **Multi-zone-group:** Formerly called 'regions', the Ceph Object Gateway can also
  support multiple zone groups, each zone group with one or more zones. Objects
  stored to zones in one zone group within the same realm as another zone
  group will share a global object namespace, ensuring unique object IDs across
  zone groups and zones.

- **Multiple Realms:** In Kraken, the Ceph Object Gateway supports the notion
  of realms, which can be a single zone group or multiple zone groups and
  a globally unique namespace for the realm. Multiple realms provide the ability
  to support numerous configurations and namespaces.

Replicating object data between zones within a zone group looks something
like this:

.. image:: ../images/zone-sync2.png
   :align: center

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.

Functional Changes from Infernalis
==================================

In Kraken, you can configure each Ceph Object Gateway to
work in an active-active zone configuration, allowing for writes to
non-master zones.

The multi-site configuration is stored within a container called a
"realm." The realm stores zone groups, zones, and a time "period" with
multiple epochs for tracking changes to the configuration. In Kraken,
the ``ceph-radosgw`` daemons handle the synchronization,
eliminating the need for a separate synchronization agent. Additionally,
the new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of
"active-passive".

Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters,
preferably each with a distinct cluster name, and at least two Ceph
object gateway instances, one for each Ceph storage cluster.

This guide assumes at least two Ceph storage clusters are in geographically
separate locations; however, the configuration can work on the same
site. This guide also assumes two Ceph object gateway servers named
``rgw1`` and ``rgw2``.

.. important:: Running a single Ceph storage cluster is NOT recommended unless you have
   low latency WAN connections.

A multi-site configuration requires a master zone group and a master
zone. Additionally, each zone group requires a master zone. Zone groups
may have one or more secondary or non-master zones.

In this guide, the ``rgw1`` host will serve as the master zone of the
master zone group, and the ``rgw2`` host will serve as the secondary zone
of the master zone group.

See `Pools`_ for instructions on creating and tuning pools for Ceph
Object Storage.

See `Sync Policy Config`_ for instructions on defining fine-grained bucket sync
policy rules.

.. _master-zone-label:

Configuring a Master Zone
=========================

All gateways in a multi-site configuration will retrieve their
configuration from a ``ceph-radosgw`` daemon on a host within the master
zone group and master zone. To configure your gateways in a multi-site
configuration, choose a ``ceph-radosgw`` instance to configure the
master zone group and master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zone groups and zones
and also serves to enforce a globally unique namespace within the realm.

Create a new realm for the multi-site configuration by opening a command
line interface on a host identified to serve in the master zone group
and zone. Then, execute the following:

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

If the cluster will have a single realm, specify the ``--default`` flag.
If ``--default`` is specified, ``radosgw-admin`` will use this realm by
default. If ``--default`` is not specified, adding zone groups and zones
requires specifying either the ``--rgw-realm`` flag or the
``--realm-id`` flag to identify the realm.

After creating the realm, ``radosgw-admin`` will echo back the realm
configuration. For example:

::

    {
        "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
        "name": "movies",
        "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
        "epoch": 1
    }

.. note:: Ceph generates a unique ID for the realm, which allows the renaming
   of a realm if the need arises.

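Because the realm is identified internally by its ID, the name can be changed
later without disrupting the configuration. For example, a rename of the realm
created above would look like this (the new name ``films`` is arbitrary and
only for illustration):

::

    # radosgw-admin realm rename --rgw-realm=movies --realm-new-name=films
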
Create a Master Zone Group
--------------------------

A realm must have at least one zone group, which will serve as the
master zone group for the realm.

Create a new master zone group for the multi-site configuration by
opening a command line interface on a host identified to serve in the
master zone group and zone. Then, execute the following:

::

    # radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

If the realm will only have a single zone group, specify the
``--default`` flag. If ``--default`` is specified, ``radosgw-admin``
will use this zone group by default when adding new zones. If
``--default`` is not specified, adding zones will require either the
``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
zone group when adding or modifying zones.

After creating the master zone group, ``radosgw-admin`` will echo back
the zone group configuration. For example:

::

    {
        "id": "f1a233f5-c354-4107-b36c-df66126475a6",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3webzone": [],
        "master_zone": "",
        "zones": [],
        "placement_targets": [],
        "default_placement": "",
        "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
    }

Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a new master zone for the multi-site configuration by opening a
command line interface on a host identified to serve in the master zone
group and zone. Then, execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --master --default \
                                --endpoints={http://fqdn}[,{http://fqdn}]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                                --master --default \
                                --endpoints=http://rgw1:80

.. note:: The ``--access-key`` and ``--secret`` aren’t specified. These
   settings will be added to the zone once the user is created in the
   next section.

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren’t storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be deleted and unrecoverable.

Delete Default Zone Group and Zone
----------------------------------

Delete the ``default`` zone if it exists. Make sure to remove it from
the default zone group first.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zone rm --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zonegroup delete --rgw-zonegroup=default
    # radosgw-admin period update --commit

Finally, delete the ``default`` pools in your Ceph storage cluster if
they exist.

.. important:: The following step assumes a multi-site configuration using newly
   installed systems that aren’t currently storing data. DO NOT DELETE
   the ``default`` zone group if you are already using it to store
   data.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Create a System User
--------------------

The ``ceph-radosgw`` daemons must authenticate before pulling realm and
period information. In the master zone, create a system user to
facilitate authentication between daemons.

::

    # radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

For example:

::

    # radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

Make a note of the ``access_key`` and ``secret_key``, as the secondary
zones will require them to authenticate with the master zone.

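If you misplace the keys, they can be read back later from the user's
metadata. The ``jq`` filter below is only a convenience for extracting them
from the JSON output and assumes ``jq`` is installed:

::

    # radosgw-admin user info --uid="synchronization-user" > user.json
    # jq -r '.keys[0].access_key, .keys[0].secret_key' user.json
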
Finally, add the system user to the master zone.

::

    # radosgw-admin zone modify --rgw-zone=us-east --access-key={access-key} --secret={secret}
    # radosgw-admin period update --commit

Update the Period
-----------------

After updating the master zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

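To inspect the current period and its epoch, for example to confirm that the
commit took effect, dump the period:

::

    # radosgw-admin period get
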
Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw1]
    host = rgw1
    rgw frontends = "civetweb port=80"
    rgw_zone=us-east

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

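To verify that the gateway came up, check the service status and probe the
endpoint. An anonymous request should return an S3 ``ListAllMyBuckets`` XML
response; adjust the host and port below to match your ``rgw frontends``
setting:

::

    # systemctl status ceph-radosgw@rgw.`hostname -s`
    # curl http://rgw1:80
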
.. _secondary-zone-label:

Configure Secondary Zones
=========================

Zones within a zone group replicate all data to ensure that each zone
has the same data. When creating the secondary zone, execute all of the
following operations on a host identified to serve the secondary zone.

.. note:: To add a third zone, follow the same procedures as for adding the
   secondary zone. Use a different zone name.

.. important:: You must execute metadata operations, such as user creation, on a
   host within the master zone. The master zone and the secondary zone
   can receive bucket operations, but the secondary zone redirects
   bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.

Pull the Realm
--------------

Using the URL path, access key and secret of the master zone in the
master zone group, pull the realm configuration to the host. To pull a
non-default realm, specify the realm using the ``--rgw-realm`` or
``--realm-id`` configuration options.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

.. note:: Pulling the realm also retrieves the remote's current period
   configuration, and makes it the current period on this host as well.

If this realm is the default realm or the only realm, make the realm the
default realm.

::

    # radosgw-admin realm default --rgw-realm={realm-name}

Create a Secondary Zone
-----------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a secondary zone for the multi-site configuration by opening a
command line interface on a host identified to serve the secondary zone.
Specify the zone group ID, the new zone name and an endpoint for the
zone. **DO NOT** use the ``--master`` or ``--default`` flags. In Kraken,
all zones run in an active-active configuration by
default; that is, a gateway client may write data to any zone and the
zone will replicate the data to all other zones within the zone group.
If the secondary zone should not accept write operations, specify the
``--read-only`` flag to create an active-passive configuration between
the master zone and the secondary zone. Additionally, provide the
``access_key`` and ``secret_key`` of the generated system user stored in
the master zone of the master zone group. Execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://{fqdn}:80 \
                                [--read-only]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren’t storing data. **DO NOT DELETE** the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be lost and unrecoverable.

Delete the default zone if needed.

::

    # radosgw-admin zone rm --rgw-zone=default

Finally, delete the default pools in your Ceph storage cluster if
needed.

::

    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on the secondary zone hosts by adding
the ``rgw_zone`` configuration option and the name of the secondary zone
to the instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw2]
    host = rgw2
    rgw frontends = "civetweb port=80"
    rgw_zone=us-west

Update the Period
-----------------

After updating the secondary zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

Check Synchronization Status
----------------------------

Once the secondary zone is up and running, check the synchronization
status. Synchronization copies users and buckets created in the master
zone to the secondary zone.

::

    # radosgw-admin sync status

The output will provide the status of synchronization operations. For
example:

::

              realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
          zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
               zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
      metadata sync syncing
                    full sync: 0/64 shards
                    metadata is caught up with master
                    incremental sync: 64/64 shards
          data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
                            syncing
                            full sync: 0/128 shards
                            incremental sync: 128/128 shards
                            data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize
   with the master zone to receive the result of the bucket operations.
   If the master zone is down, bucket operations executed on the
   secondary zone will fail, but object operations should succeed.


Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with::

    $ radosgw-admin sync status
            realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
        zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
             zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
    metadata sync syncing
                  full sync: 0/64 shards
                  incremental sync: 64/64 shards
                  metadata is behind on 1 shards
                  oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
        data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source
                  source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
                          syncing
                          full sync: 0/128 shards
                          incremental sync: 128/128 shards
                          data is caught up with source

Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing which zone is the metadata
   master. If a zone has not finished syncing metadata from the current master
   zone, it will be unable to serve any remaining entries when promoted to
   master and those changes will be lost. For this reason, waiting for a
   zone's ``radosgw-admin sync status`` to catch up on metadata sync before
   promoting it to master is recommended.

   Similarly, if changes to metadata are being processed by the current master
   zone while another zone is being promoted to master, those changes are
   likely to be lost. To avoid this, shutting down any ``radosgw`` instances
   on the previous master zone is recommended. After promoting another zone,
   its new period can be fetched with ``radosgw-admin period pull`` and the
   gateway(s) can be restarted.

To promote a zone (for example, zone ``us-2`` in zonegroup ``us``) to metadata
master, run the following commands on that zone::

    $ radosgw-admin zone modify --rgw-zone=us-2 --master
    $ radosgw-admin zonegroup modify --rgw-zonegroup=us --master
    $ radosgw-admin period update --commit

This will generate a new period, and the radosgw instance(s) in zone ``us-2``
will send this period to other zones.

Failover and Disaster Recovery
==============================

If the master zone should fail, failover to the secondary zone for
disaster recovery.

1. Make the secondary zone the master and default zone. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

   By default, Ceph Object Gateway will run in an active-active
   configuration. If the cluster was configured to run in an
   active-passive configuration, the secondary zone is a read-only zone.
   Remove the ``--read-only`` status to allow the zone to receive write
   operations. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
                                   --read-only=false

2. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

3. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

If the former master zone recovers, revert the operation.

1. From the recovered zone, pull the latest realm configuration
   from the current master zone.

   ::

       # radosgw-admin realm pull --url={url-to-master-zone-gateway} \
                                  --access-key={access-key} --secret={secret}

2. Make the recovered zone the master and default zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

3. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

4. Then, restart the Ceph Object Gateway in the recovered zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

5. If the secondary zone needs to be a read-only configuration, update
   the secondary zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --read-only

6. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway in the secondary zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

.. _rgw-multisite-migrate-from-single-site:

Migrating a Single Site System to Multi-Site
============================================

To migrate from a single site system with a ``default`` zone group and
zone to a multi-site system, use the following steps:

1. Create a realm. Replace ``<name>`` with the realm name.

   ::

       # radosgw-admin realm create --rgw-realm=<name> --default

2. Rename the default zone and zonegroup. Replace ``<name>`` with the
   zonegroup or zone name.

   ::

       # radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
       # radosgw-admin zone rename --rgw-zone default --zone-new-name=<name> --rgw-zonegroup=<name>

3. Configure the master zonegroup. Replace ``<name>`` with the realm or
   zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
   name(s) in the zonegroup.

   ::

       # radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default

4. Configure the master zone. Replace ``<name>`` with the realm,
   zonegroup or zone name. Replace ``<fqdn>`` with the fully qualified
   domain name(s) in the zonegroup.

   ::

       # radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
                                   --rgw-zone=<name> --endpoints http://<fqdn>:80 \
                                   --access-key=<access-key> --secret=<secret-key> \
                                   --master --default

5. Create a system user. Replace ``<user-id>`` with the username.
   Replace ``<display-name>`` with a display name. It may contain
   spaces.

   ::

       # radosgw-admin user create --uid=<user-id> --display-name="<display-name>" \
                                   --access-key=<access-key> --secret=<secret-key> --system

6. Commit the updated configuration.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

After completing this procedure, proceed to `Configure a Secondary
Zone <#configure-secondary-zones>`__ to create a secondary zone
in the master zone group.


Multi-Site Configuration Reference
==================================

The following sections provide additional details and command-line
usage for realms, periods, zone groups and zones.

Realms
------

A realm represents a globally unique namespace consisting of one or more
zonegroups containing one or more zones, and zones containing buckets,
which in turn contain objects. A realm enables the Ceph Object Gateway
to support multiple namespaces and their configuration on the same
hardware.

A realm contains the notion of periods. Each period represents the state
of the zone group and zone configuration in time. Each time you make a
change to a zonegroup or zone, update the period and commit it.

By default, the Ceph Object Gateway does not create a realm
for backward compatibility with Infernalis and earlier releases.
However, as a best practice, we recommend creating realms for new
clusters.

Create a Realm
~~~~~~~~~~~~~~

To create a realm, execute ``realm create`` and specify the realm name.
If the realm is the default, specify ``--default``.

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

By specifying ``--default``, the realm will be called implicitly with
each ``radosgw-admin`` call unless ``--rgw-realm`` and the realm name
are explicitly provided.

Make a Realm the Default
~~~~~~~~~~~~~~~~~~~~~~~~

One realm in the list of realms should be the default realm. There may
be only one default realm. If there is only one realm and it wasn’t
specified as the default realm when it was created, make it the default
realm. Alternatively, to change which realm is the default, execute:

::

    # radosgw-admin realm default --rgw-realm=movies

.. note:: When the realm is default, the command line assumes
   ``--rgw-realm=<realm-name>`` as an argument.

Delete a Realm
~~~~~~~~~~~~~~

To delete a realm, execute ``realm delete`` and specify the realm name.

::

    # radosgw-admin realm delete --rgw-realm={realm-name}

For example:

::

    # radosgw-admin realm delete --rgw-realm=movies

Get a Realm
~~~~~~~~~~~

To get a realm, execute ``realm get`` and specify the realm name.

::

    # radosgw-admin realm get --rgw-realm=<name>

For example:

::

    # radosgw-admin realm get --rgw-realm=movies [> filename.json]

The CLI will echo a JSON object with the realm properties.

::

    {
        "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
        "name": "movies",
        "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
        "epoch": 1
    }

Use ``>`` and an output file name to output the JSON object to a file.

813 | Set a Realm | |
814 | ~~~~~~~~~~~ | |
815 | ||
816 | To set a realm, execute ``realm set``, specify the realm name, and | |
817 | ``--infile=`` with an input file name. | |
818 | ||
819 | :: | |
820 | ||
821 | #radosgw-admin realm set --rgw-realm=<name> --infile=<infilename> | |
822 | ||
823 | For example: | |
824 | ||
825 | :: | |
826 | ||
827 | # radosgw-admin realm set --rgw-realm=movies --infile=filename.json | |
828 | ||
List Realms
~~~~~~~~~~~

To list realms, execute ``realm list``.

::

    # radosgw-admin realm list

List Realm Periods
~~~~~~~~~~~~~~~~~~

To list realm periods, execute ``realm list-periods``.

::

    # radosgw-admin realm list-periods

Pull a Realm
~~~~~~~~~~~~

To pull a realm from the node containing the master zone group and
master zone to a node containing a secondary zone group or zone, execute
``realm pull`` on the node that will receive the realm configuration.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

Rename a Realm
~~~~~~~~~~~~~~

The realm name is not part of the period. Consequently, renaming a realm
is only applied locally, and the new name will not get pulled with
``realm pull``. When renaming a realm with multiple zones, run the
command on each zone. To rename a realm, execute the following:

::

    # radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>

.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. That
   changes the internal name only. Specifying ``--rgw-realm`` would
   still use the old realm name.

Zone Groups
-----------

The Ceph Object Gateway supports multi-site deployments and a global
namespace by using the notion of zone groups. Formerly called a region
in Infernalis, a zone group defines the geographic location of one or more
Ceph Object Gateway instances within one or more zones.

Configuring zone groups differs from typical configuration procedures,
because not all of the settings end up in a Ceph configuration file. You
can list zone groups, get a zone group configuration, and set a zone
group configuration.

Create a Zone Group
~~~~~~~~~~~~~~~~~~~

Creating a zone group consists of specifying the zone group name. The
zone group is assumed to live in the default realm unless
``--rgw-realm=<realm-name>`` is specified. If the zonegroup is the
default zonegroup, specify the ``--default`` flag. If the zonegroup is
the master zonegroup, specify the ``--master`` flag. For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>] [--master] [--default]


.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
   an existing zone group’s settings.

Make a Zone Group the Default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One zonegroup in the list of zonegroups should be the default zonegroup.
There may be only one default zonegroup. If there is only one zonegroup
and it wasn’t specified as the default zonegroup when it was created,
make it the default zonegroup. Alternatively, to change which zonegroup
is the default, execute:

::

    # radosgw-admin zonegroup default --rgw-zonegroup=comedy

.. note:: When the zonegroup is the default, the command line implicitly
   assumes ``--rgw-zonegroup=<zonegroup-name>`` as an argument.

Then, update the period:

::

    # radosgw-admin period update --commit

Add a Zone to a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~

To add a zone to a zonegroup, execute the following:

::

    # radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Remove a Zone from a Zone Group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To remove a zone from a zonegroup, execute the following:

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone Group
~~~~~~~~~~~~~~~~~~~

To rename a zonegroup, execute the following:

::

    # radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone Group
~~~~~~~~~~~~~~~~~~~

To delete a zonegroup, execute the following:

::

    # radosgw-admin zonegroup delete --rgw-zonegroup=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

List Zone Groups
~~~~~~~~~~~~~~~~

A Ceph cluster contains a list of zone groups. To list the zone groups,
execute:

::

    # radosgw-admin zonegroup list

``radosgw-admin`` returns a JSON-formatted list of zone groups.

::

    {
        "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "zonegroups": [
            "us"
        ]
    }

Get a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

To list the details of each zone group, execute:

::

    # radosgw-admin zonegroup-map get

.. note:: If you receive a ``failed to read zonegroup map`` error, run
   ``radosgw-admin zonegroup-map update`` as ``root`` first.

Get a Zone Group
~~~~~~~~~~~~~~~~

To view the configuration of a zone group, execute:

::

    # radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]

The zone group configuration looks like this:

::

    {
        "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3website": [],
        "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
        "zones": [
            {
                "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                "name": "us-east",
                "endpoints": [
                    "http:\/\/rgw1"
                ],
                "log_meta": "true",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            },
            {
                "id": "d1024e59-7d28-49d1-8222-af101965a939",
                "name": "us-west",
                "endpoints": [
                    "http:\/\/rgw2:80"
                ],
                "log_meta": "false",
                "log_data": "true",
                "bucket_index_max_shards": 0,
                "read_only": "false"
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": []
            }
        ],
        "default_placement": "default-placement",
        "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
    }

Set a Zone Group
~~~~~~~~~~~~~~~~

Defining a zone group consists of creating a JSON object, specifying at
least the required settings:

1. ``name``: The name of the zone group. Required.

2. ``api_name``: The API name for the zone group. Optional.

3. ``is_master``: Determines if the zone group is the master zone group.
   Required. **note:** You can only have one master zone group.

4. ``endpoints``: A list of all the endpoints in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Remember to escape the forward slashes (``\/``). You may also
   specify a port (``fqdn:port``) for each endpoint. Optional.

5. ``hostnames``: A list of all the hostnames in the zone group. For
   example, you may use multiple domain names to refer to the same zone
   group. Optional. The ``rgw dns name`` setting will automatically be
   included in this list. You should restart the gateway daemon(s) after
   changing this setting.

6. ``master_zone``: The master zone for the zone group. Optional. Uses
   the default zone if not specified. **note:** You can only have one
   master zone per zone group.

7. ``zones``: A list of all zones within the zone group. Each zone has a
   name (required), a list of endpoints (optional), and whether or not
   the gateway will log metadata and data operations (false by default).

8. ``placement_targets``: A list of placement targets (optional). Each
   placement target contains a name (required) for the placement target
   and a list of tags (optional) so that only users with the tag can use
   the placement target (i.e., the user’s ``placement_tags`` field in
   the user info).

9. ``default_placement``: The default placement target for the object
   index and object data. Set to ``default-placement`` by default. You
   may also set a per-user default placement in the user info for each
   user.

To set a zone group, create a JSON object consisting of the required
fields, save the object to a file (e.g., ``zonegroup.json``); then,
execute the following command:

::

    # radosgw-admin zonegroup set --infile zonegroup.json

Where ``zonegroup.json`` is the JSON file you created.
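
A minimal ``zonegroup.json`` might look like the following sketch. The
names and endpoints are illustrative; in practice, start from the output
of ``zonegroup get`` shown above and edit only the fields you need:

::

    {
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "zones": [
            {
                "name": "us-east",
                "endpoints": [
                    "http:\/\/rgw1:80"
                ],
                "log_meta": "true",
                "log_data": "true"
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": []
            }
        ],
        "default_placement": "default-placement"
    }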

.. important:: The ``default`` zone group ``is_master`` setting is ``true`` by
   default. If you create a new zone group and want to make it the
   master zone group, you must either set the ``default`` zone group
   ``is_master`` setting to ``false``, or delete the ``default`` zone
   group.

Finally, update the period:

::

    # radosgw-admin period update --commit

Set a Zone Group Map
~~~~~~~~~~~~~~~~~~~~

Setting a zone group map consists of creating a JSON object containing
one or more zone groups, and setting the ``master_zonegroup`` for the
cluster. Each zone group in the zone group map consists of a key/value
pair, where the ``key`` setting is equivalent to the ``name`` setting
for an individual zone group configuration, and the ``val`` is a JSON
object consisting of an individual zone group configuration.

You may only have one zone group with ``is_master`` equal to ``true``,
and it must be specified as the ``master_zonegroup`` at the end of the
zone group map. The following JSON object is an example of a default
zone group map.

::

    {
        "zonegroups": [
            {
                "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                "val": {
                    "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
                    "name": "us",
                    "api_name": "us",
                    "is_master": "true",
                    "endpoints": [
                        "http:\/\/rgw1:80"
                    ],
                    "hostnames": [],
                    "hostnames_s3website": [],
                    "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
                    "zones": [
                        {
                            "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
                            "name": "us-east",
                            "endpoints": [
                                "http:\/\/rgw1"
                            ],
                            "log_meta": "true",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        },
                        {
                            "id": "d1024e59-7d28-49d1-8222-af101965a939",
                            "name": "us-west",
                            "endpoints": [
                                "http:\/\/rgw2:80"
                            ],
                            "log_meta": "false",
                            "log_data": "true",
                            "bucket_index_max_shards": 0,
                            "read_only": "false"
                        }
                    ],
                    "placement_targets": [
                        {
                            "name": "default-placement",
                            "tags": []
                        }
                    ],
                    "default_placement": "default-placement",
                    "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
                }
            }
        ],
        "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    }

To set a zone group map, execute the following:

::

    # radosgw-admin zonegroup-map set --infile zonegroupmap.json

Where ``zonegroupmap.json`` is the JSON file you created. Ensure that
zones have been created for all of the zones specified in the zone group
map. Finally, update the period:

::

    # radosgw-admin period update --commit

Zones
-----

Ceph Object Gateway supports the notion of zones. A zone defines a
logical group consisting of one or more Ceph Object Gateway instances.

Configuring zones differs from typical configuration procedures, because
not all of the settings end up in a Ceph configuration file. You can
list zones, get a zone configuration, and set a zone configuration.

Create a Zone
~~~~~~~~~~~~~

To create a zone, specify a zone name. If it is a master zone, specify
the ``--master`` option. Only one zone in a zone group may be a master
zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup``
option with the zonegroup name.

::

    # radosgw-admin zone create --rgw-zone=<name> \
                [--rgw-zonegroup=<zonegroup-name>] \
                [--endpoints=<endpoint>[,<endpoint>]] \
                [--master] [--default] \
                --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY

Then, update the period:

::

    # radosgw-admin period update --commit

Delete a Zone
~~~~~~~~~~~~~

To delete a zone, first remove it from the zonegroup.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Next, delete the zone. Execute the following:

::

    # radosgw-admin zone rm --rgw-zone=<name>

Finally, update the period:

::

    # radosgw-admin period update --commit

.. important:: Do not delete a zone without removing it from a zone group first.
   Otherwise, updating the period will fail.

If the pools for the deleted zone will not be used anywhere else,
consider deleting the pools. Replace ``<del-zone>`` in the example below
with the deleted zone’s name.

.. important:: Only delete the pools with prepended zone names. Deleting the root
   pool, such as ``.rgw.root``, will remove all of the system’s
   configuration.

.. important:: Once the pools are deleted, all of the data within them is deleted
   in an unrecoverable manner. Only delete the pools if the pool
   contents are no longer needed.

::

    # ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.data.root <del-zone>.rgw.data.root --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.gc <del-zone>.rgw.gc --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm <del-zone>.rgw.users.uid <del-zone>.rgw.users.uid --yes-i-really-really-mean-it

Modify a Zone
~~~~~~~~~~~~~

To modify a zone, specify the zone name and the parameters you wish to
modify.

::

    # radosgw-admin zone modify [options]

Where ``[options]``:

- ``--access-key=<key>``
- ``--secret/--secret-key=<key>``
- ``--master``
- ``--default``
- ``--endpoints=<list>``

Then, update the period:

::

    # radosgw-admin period update --commit

List Zones
~~~~~~~~~~

As ``root``, to list the zones in a cluster, execute:

::

    # radosgw-admin zone list

Get a Zone
~~~~~~~~~~

As ``root``, to get the configuration of a zone, execute:

::

    # radosgw-admin zone get [--rgw-zone=<zone>]

The ``default`` zone looks like this:

::

    { "domain_root": ".rgw",
      "control_pool": ".rgw.control",
      "gc_pool": ".rgw.gc",
      "log_pool": ".log",
      "intent_log_pool": ".intent-log",
      "usage_log_pool": ".usage",
      "user_keys_pool": ".users",
      "user_email_pool": ".users.email",
      "user_swift_pool": ".users.swift",
      "user_uid_pool": ".users.uid",
      "system_key": { "access_key": "", "secret_key": ""},
      "placement_pools": [
          { "key": "default-placement",
            "val": { "index_pool": ".rgw.buckets.index",
                     "data_pool": ".rgw.buckets"}
          }
      ]
    }

Set a Zone
~~~~~~~~~~

Configuring a zone involves specifying a series of Ceph Object Gateway
pools. For consistency, we recommend using a pool prefix that is the
same as the zone name. See
`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
for details of configuring pools.

To set a zone, create a JSON object consisting of the pools, save the
object to a file (e.g., ``zone.json``); then, execute the following
command, replacing ``{zone-name}`` with the name of the zone:

::

    # radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json

Where ``zone.json`` is the JSON file you created.
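
As a sketch, a ``zone.json`` for a zone named ``us-east`` might mirror
the structure returned by ``zone get``, with each pool prefixed by the
zone name. The pool names below are illustrative, not prescriptive:

::

    { "domain_root": "us-east.rgw",
      "control_pool": "us-east.rgw.control",
      "gc_pool": "us-east.rgw.gc",
      "log_pool": "us-east.rgw.log",
      "user_uid_pool": "us-east.rgw.users.uid",
      "system_key": { "access_key": "", "secret_key": ""},
      "placement_pools": [
          { "key": "default-placement",
            "val": { "index_pool": "us-east.rgw.buckets.index",
                     "data_pool": "us-east.rgw.buckets"}
          }
      ]
    }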

Then, as ``root``, update the period:

::

    # radosgw-admin period update --commit

Rename a Zone
~~~~~~~~~~~~~

To rename a zone, specify the zone name and the new zone name.

::

    # radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>

Then, update the period:

::

    # radosgw-admin period update --commit

Zone Group and Zone Settings
----------------------------

When configuring a default zone group and zone, the pool name includes
the zone name. For example:

- ``default.rgw.control``

To change the defaults, include the following settings in your Ceph
configuration file under each ``[client.radosgw.{instance-name}]``
instance.

+-------------------------------------+-----------------------------------+---------+-----------------------+
| Name                                | Description                       | Type    | Default               |
+=====================================+===================================+=========+=======================+
| ``rgw_zone``                        | The name of the zone for the      | String  | None                  |
|                                     | gateway instance.                 |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup``                   | The name of the zone group for    | String  | None                  |
|                                     | the gateway instance.             |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zonegroup_root_pool``         | The root pool for the zone group. | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_zone_root_pool``              | The root pool for the zone.       | String  | ``.rgw.root``         |
+-------------------------------------+-----------------------------------+---------+-----------------------+
| ``rgw_default_zone_group_info_oid`` | The OID for storing the default   | String  | ``default.zonegroup`` |
|                                     | zone group. We do not recommend   |         |                       |
|                                     | changing this setting.            |         |                       |
+-------------------------------------+-----------------------------------+---------+-----------------------+
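
For example, a gateway instance serving zone ``us-east`` in zone group
``us`` might carry the following settings. The instance name
``us-east-1`` is illustrative; use your own instance name:

::

    [client.radosgw.us-east-1]
    rgw_zone = us-east
    rgw_zonegroup = us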


.. _`Pools`: ../pools
.. _`Sync Policy Config`: ../multisite-sync-policy