==========
Multi-Site
==========

.. versionadded:: Jewel

A single zone configuration typically consists of one zone group containing one
zone and one or more ``ceph-radosgw`` instances where you may load-balance gateway
client requests between the instances. In a single zone configuration, typically
multiple gateway instances point to a single Ceph storage cluster. However, Kraken
supports several multi-site configuration options for the Ceph Object Gateway:

- **Multi-zone:** A more advanced configuration consists of one zone group and
  multiple zones, each zone with one or more ``ceph-radosgw`` instances. Each zone
  is backed by its own Ceph Storage Cluster. Multiple zones in a zone group
  provide disaster recovery for the zone group should one of the zones experience
  a significant failure. In Kraken, each zone is active and may receive write
  operations. In addition to disaster recovery, multiple active zones may also
  serve as a foundation for content delivery networks.

- **Multi-zone-group:** Formerly called 'regions', the Ceph Object Gateway can
  also support multiple zone groups, each zone group with one or more zones.
  Zone groups within the same realm share a global object namespace, ensuring
  unique object IDs across zone groups and zones.

- **Multiple Realms:** In Kraken, the Ceph Object Gateway supports the notion
  of realms. A realm can contain a single zone group or multiple zone groups,
  and provides a globally unique namespace for its contents. Multiple realms
  make it possible to support numerous configurations and namespaces.

Replicating object data between zones within a zone group looks something
like this:

.. image:: ../images/zone-sync2.png
   :align: center

For additional details on setting up a cluster, see `Ceph Object Gateway for
Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_object_gateway_for_production/>`__.

Functional Changes from Infernalis
==================================

In Kraken, you can configure each Ceph Object Gateway to
work in an active-active zone configuration, allowing for writes to
non-master zones.

The multi-site configuration is stored within a container called a
"realm." The realm stores zone groups, zones, and a time "period" with
multiple epochs for tracking changes to the configuration. In Kraken,
the ``ceph-radosgw`` daemons handle the synchronization,
eliminating the need for a separate synchronization agent. Additionally,
the new approach to synchronization allows the Ceph Object Gateway to
operate with an "active-active" configuration instead of
"active-passive".

Requirements and Assumptions
============================

A multi-site configuration requires at least two Ceph storage clusters,
preferably each with a distinct cluster name, and at least two Ceph
Object Gateway instances, one for each Ceph storage cluster.

This guide assumes at least two Ceph storage clusters in geographically
separate locations; however, the configuration can work on the same
site. This guide also assumes two Ceph Object Gateway servers named
``rgw1`` and ``rgw2``.

A multi-site configuration requires a master zone group and a master
zone. Additionally, each zone group requires a master zone. Zone groups
may have one or more secondary or non-master zones.

In this guide, the ``rgw1`` host will serve as the master zone of the
master zone group, and the ``rgw2`` host will serve as the secondary zone
of the master zone group.

Pools
=====

We recommend using the `Ceph Placement Groups per Pool
Calculator <http://ceph.com/pgcalc/>`__ to calculate a
suitable number of placement groups for the pools the ``ceph-radosgw``
daemon will create. Set the calculated values as defaults in your Ceph
configuration file. For example:

::

    osd pool default pg num = 50
    osd pool default pgp num = 50

.. note:: Make this change to the Ceph configuration file on your
   storage cluster; then, either restart the gateway or make a runtime
   change to the configuration so that it will use those defaults when
   the gateway instance creates the pools.

Alternatively, create the pools manually. See
`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
for details on creating pools.

Pool names particular to a zone follow the naming convention
``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
have the following pools:

- ``.rgw.root``

- ``us-east.rgw.control``

- ``us-east.rgw.data.root``

- ``us-east.rgw.gc``

- ``us-east.rgw.log``

- ``us-east.rgw.intent-log``

- ``us-east.rgw.usage``

- ``us-east.rgw.users.keys``

- ``us-east.rgw.users.email``

- ``us-east.rgw.users.swift``

- ``us-east.rgw.users.uid``

- ``us-east.rgw.buckets.index``

- ``us-east.rgw.buckets.data``

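If you choose to create a zone's pools by hand, a short shell loop can
generate one ``ceph osd pool create`` command per pool from the list above.
This is only a sketch: the zone name and placement-group count are the
illustrative values used in this guide, and the ``echo`` makes it a dry run;
remove the ``echo`` once the printed commands look right. (``.rgw.root`` is
not zone-prefixed, so create it separately.)

```shell
zone="us-east"   # illustrative zone name from this guide
pg_num=50        # value from the placement-group calculator

# Dry run: print one pool-creation command per zone pool.
# Remove 'echo' to actually create the pools.
for suffix in control data.root gc log intent-log usage \
              users.keys users.email users.swift users.uid \
              buckets.index buckets.data; do
    echo ceph osd pool create "${zone}.rgw.${suffix}" "${pg_num}"
done
```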
Configuring a Master Zone
=========================

All gateways in a multi-site configuration will retrieve their
configuration from a ``ceph-radosgw`` daemon on a host within the master
zone group and master zone. To configure your gateways in a multi-site
configuration, choose a ``ceph-radosgw`` instance to configure the
master zone group and master zone.

Create a Realm
--------------

A realm contains the multi-site configuration of zone groups and zones
and also serves to enforce a globally unique namespace within the realm.

Create a new realm for the multi-site configuration by opening a command
line interface on a host identified to serve in the master zone group
and zone. Then, execute the following:

::

    # radosgw-admin realm create --rgw-realm={realm-name} [--default]

For example:

::

    # radosgw-admin realm create --rgw-realm=movies --default

If the cluster will have a single realm, specify the ``--default`` flag.
If ``--default`` is specified, ``radosgw-admin`` will use this realm by
default. If ``--default`` is not specified, adding zone groups and zones
requires specifying either the ``--rgw-realm`` flag or the
``--realm-id`` flag to identify the realm.

After creating the realm, ``radosgw-admin`` will echo back the realm
configuration. For example:

::

    {
        "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
        "name": "movies",
        "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
        "epoch": 1
    }

.. note:: Ceph generates a unique ID for the realm, which allows the renaming
   of a realm if the need arises.

Create a Master Zone Group
--------------------------

A realm must have at least one zone group, which will serve as the
master zone group for the realm.

Create a new master zone group for the multi-site configuration by
opening a command line interface on a host identified to serve in the
master zone group and zone. Then, execute the following:

::

    # radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default

For example:

::

    # radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default

If the realm will only have a single zone group, specify the
``--default`` flag. If ``--default`` is specified, ``radosgw-admin``
will use this zone group by default when adding new zones. If
``--default`` is not specified, adding zones will require either the
``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
zone group when adding or modifying zones.

After creating the master zone group, ``radosgw-admin`` will echo back
the zone group configuration. For example:

::

    {
        "id": "f1a233f5-c354-4107-b36c-df66126475a6",
        "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
            "http:\/\/rgw1:80"
        ],
        "hostnames": [],
        "hostnames_s3webzone": [],
        "master_zone": "",
        "zones": [],
        "placement_targets": [],
        "default_placement": "",
        "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
    }

Create a Master Zone
--------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a new master zone for the multi-site configuration by opening a
command line interface on a host identified to serve in the master zone
group and zone. Then, execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --master --default \
                                --endpoints={http://fqdn}[,{http://fqdn}]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
                                --master --default \
                                --endpoints=http://rgw1:80

.. note:: The ``--access-key`` and ``--secret`` aren’t specified. These
   settings will be added to the zone once the user is created in the
   next section.

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren’t storing data yet. DO NOT DELETE the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be deleted and unrecoverable.

Delete Default Zone Group and Zone
----------------------------------

Delete the ``default`` zone if it exists. Make sure to remove it from
the default zone group first.

::

    # radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zone delete --rgw-zone=default
    # radosgw-admin period update --commit
    # radosgw-admin zonegroup delete --rgw-zonegroup=default
    # radosgw-admin period update --commit

Finally, delete the ``default`` pools in your Ceph storage cluster if
they exist.

.. important:: The following step assumes a multi-site configuration using newly
   installed systems that aren’t currently storing data. DO NOT DELETE
   the ``default`` zone group if you are already using it to store
   data.

::

    # rados rmpool default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # rados rmpool default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # rados rmpool default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # rados rmpool default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # rados rmpool default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Create a System User
--------------------

The ``ceph-radosgw`` daemons must authenticate before pulling realm and
period information. In the master zone, create a system user to
facilitate authentication between daemons.

::

    # radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system

For example:

::

    # radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

Make a note of the ``access_key`` and ``secret_key``, as the secondary
zones will require them to authenticate with the master zone.

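Rather than copying the keys by hand, they can be captured
programmatically. The sketch below assumes the JSON shape that
``radosgw-admin user create`` prints (a top-level ``keys`` array with
``access_key`` and ``secret_key`` fields) and that ``python3`` is
available; the here-document with placeholder values stands in for the
real command output, which you would pipe in instead.

```shell
# Stand-in for the JSON printed by 'radosgw-admin user create'
# (placeholder key values; the field layout matches the user JSON).
cat > sync-user.json <<'EOF'
{
    "user_id": "synchronization-user",
    "keys": [
        {
            "user": "synchronization-user",
            "access_key": "EXAMPLEACCESSKEY",
            "secret_key": "examplesecretkey"
        }
    ]
}
EOF

# Pull the keys out so they can be reused in the later
# 'zone modify' and 'zone create' commands.
access_key=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["access_key"])' < sync-user.json)
secret_key=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["secret_key"])' < sync-user.json)
echo "access_key=${access_key} secret_key=${secret_key}"
```

The same extraction should also work on the output of
``radosgw-admin user info``, whose ``keys`` array has the same shape.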
Finally, add the system user to the master zone.

::

    # radosgw-admin zone modify --rgw-zone=us-east --access-key={access-key} --secret={secret}
    # radosgw-admin period update --commit

Update the Period
-----------------

After updating the master zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on master zone hosts by adding the
``rgw_zone`` configuration option and the name of the master zone to the
instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw1]
    host = rgw1
    rgw frontends = "civetweb port=80"
    rgw_zone=us-east

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

Configure Secondary Zones
=========================

Zones within a zone group replicate all data to ensure that each zone
has the same data. When creating the secondary zone, execute all of the
following operations on a host identified to serve the secondary zone.

.. note:: To add a third zone, follow the same procedures as for adding the
   secondary zone. Use a different zone name.

.. important:: You must execute metadata operations, such as user creation, on a
   host within the master zone. The master zone and the secondary zone
   can receive bucket operations, but the secondary zone redirects
   bucket operations to the master zone. If the master zone is down,
   bucket operations will fail.

Pull the Realm
--------------

Using the URL path, access key and secret of the master zone in the
master zone group, pull the realm to the host. To pull a non-default
realm, specify the realm using the ``--rgw-realm`` or ``--realm-id``
configuration options.

::

    # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

If this realm is the default realm or the only realm, make the realm the
default realm.

::

    # radosgw-admin realm default --rgw-realm={realm-name}

Pull the Period
---------------

Using the URL path, access key and secret of the master zone in the
master zone group, pull the period to the host. To pull a period from a
non-default realm, specify the realm using the ``--rgw-realm`` or
``--realm-id`` configuration options.

::

    # radosgw-admin period pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}

.. note:: Pulling the period retrieves the latest version of the zone group
   and zone configurations for the realm.

Create a Secondary Zone
-----------------------

.. important:: Zones must be created on a Ceph Object Gateway node that will be
   within the zone.

Create a secondary zone for the multi-site configuration by opening a
command line interface on a host identified to serve the secondary zone.
Specify the zone group ID, the new zone name and an endpoint for the
zone. **DO NOT** use the ``--master`` or ``--default`` flags. In Kraken,
all zones run in an active-active configuration by
default; that is, a gateway client may write data to any zone and the
zone will replicate the data to all other zones within the zone group.
If the secondary zone should not accept write operations, specify the
``--read-only`` flag to create an active-passive configuration between
the master zone and the secondary zone. Additionally, provide the
``access_key`` and ``secret_key`` of the generated system user stored in
the master zone of the master zone group. Execute the following:

::

    # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
                                --rgw-zone={zone-name} \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://{fqdn}:80 \
                                [--read-only]

For example:

::

    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
                                --access-key={system-key} --secret={secret} \
                                --endpoints=http://rgw2:80

.. important:: The following steps assume a multi-site configuration using newly
   installed systems that aren’t storing data. **DO NOT DELETE** the
   ``default`` zone and its pools if you are already using it to store
   data, or the data will be lost and unrecoverable.

Delete the ``default`` zone if needed.

::

    # radosgw-admin zone delete --rgw-zone=default

Finally, delete the ``default`` pools in your Ceph storage cluster if
needed.

::

    # rados rmpool default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # rados rmpool default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
    # rados rmpool default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
    # rados rmpool default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # rados rmpool default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it

Update the Ceph Configuration File
----------------------------------

Update the Ceph configuration file on the secondary zone hosts by adding
the ``rgw_zone`` configuration option and the name of the secondary zone
to the instance entry.

::

    [client.rgw.{instance-name}]
    ...
    rgw_zone={zone-name}

For example:

::

    [client.rgw.rgw2]
    host = rgw2
    rgw frontends = "civetweb port=80"
    rgw_zone=us-west

Update the Period
-----------------

After updating the secondary zone configuration, update the period.

::

    # radosgw-admin period update --commit

.. note:: Updating the period changes the epoch, and ensures that other zones
   will receive the updated configuration.

Start the Gateway
-----------------

On the object gateway host, start and enable the Ceph Object Gateway
service:

::

    # systemctl start ceph-radosgw@rgw.`hostname -s`
    # systemctl enable ceph-radosgw@rgw.`hostname -s`

Check Synchronization Status
----------------------------

Once the secondary zone is up and running, check the synchronization
status. Synchronization copies users and buckets created in the master
zone to the secondary zone.

::

    # radosgw-admin sync status

The output will provide the status of synchronization operations. For
example:

::

    realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
        zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
            zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
    metadata sync syncing
        full sync: 0/64 shards
        metadata is caught up with master
        incremental sync: 64/64 shards
    data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
        syncing
        full sync: 0/128 shards
        incremental sync: 128/128 shards
        data is caught up with source

.. note:: Secondary zones accept bucket operations; however, secondary zones
   redirect bucket operations to the master zone and then synchronize
   with the master zone to receive the result of the bucket operations.
   If the master zone is down, bucket operations executed on the
   secondary zone will fail, but object operations should succeed.

Maintenance
===========

Checking the Sync Status
------------------------

Information about the replication status of a zone can be queried with::

  $ radosgw-admin sync status
    realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
        zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
            zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
    metadata sync syncing
        full sync: 0/64 shards
        incremental sync: 64/64 shards
        metadata is behind on 1 shards
        oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
    data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
        syncing
        full sync: 0/128 shards
        incremental sync: 128/128 shards
        data is caught up with source
    source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
        syncing
        full sync: 0/128 shards
        incremental sync: 128/128 shards
        data is caught up with source

Changing the Metadata Master Zone
---------------------------------

.. important:: Care must be taken when changing which zone is the metadata
   master. If a zone has not finished syncing metadata from the current master
   zone, it will be unable to serve any remaining entries when promoted to
   master and those changes will be lost. For this reason, waiting for a
   zone's ``radosgw-admin sync status`` to catch up on metadata sync before
   promoting it to master is recommended.

   Similarly, if changes to metadata are being processed by the current master
   zone while another zone is being promoted to master, those changes are
   likely to be lost. To avoid this, shutting down any ``radosgw`` instances
   on the previous master zone is recommended. After promoting another zone,
   its new period can be fetched with ``radosgw-admin period pull`` and the
   gateway(s) can be restarted.

To promote a zone (for example, zone ``us-2`` in zonegroup ``us``) to metadata
master, run the following commands on that zone::

  $ radosgw-admin zone modify --rgw-zone=us-2 --master
  $ radosgw-admin zonegroup modify --rgw-zonegroup=us --master
  $ radosgw-admin period update --commit

This will generate a new period, and the radosgw instance(s) in zone ``us-2``
will send this period to other zones.

Failover and Disaster Recovery
==============================

If the master zone should fail, failover to the secondary zone for
disaster recovery.

1. Make the secondary zone the master and default zone. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

   By default, Ceph Object Gateway will run in an active-active
   configuration. If the cluster was configured to run in an
   active-passive configuration, the secondary zone is a read-only zone.
   Remove the ``--read-only`` status to allow the zone to receive write
   operations. For example:

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
                                   --read-only=False

2. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

3. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

If the former master zone recovers, revert the operation.

1. From the recovered zone, pull the period from the current master
   zone.

   ::

       # radosgw-admin period pull --url={url-to-master-zone-gateway} \
                                   --access-key={access-key} --secret={secret}

2. Make the recovered zone the master and default zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --master --default

3. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

4. Then, restart the Ceph Object Gateway in the recovered zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

5. If the secondary zone needs to be a read-only configuration, update
   the secondary zone.

   ::

       # radosgw-admin zone modify --rgw-zone={zone-name} --read-only

6. Update the period to make the changes take effect.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway in the secondary zone.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

Migrating a Single Site System to Multi-Site
============================================

To migrate from a single site system with a ``default`` zone group and
zone to a multi-site system, use the following steps:

1. Create a realm. Replace ``<name>`` with the realm name.

   ::

       # radosgw-admin realm create --rgw-realm=<name> --default

2. Rename the default zone and zonegroup. Replace ``<name>`` with the
   zonegroup or zone name.

   ::

       # radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
       # radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name>

3. Configure the master zonegroup. Replace ``<name>`` with the realm or
   zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
   name(s) in the zonegroup.

   ::

       # radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default

4. Configure the master zone. Replace ``<name>`` with the realm,
   zonegroup or zone name. Replace ``<fqdn>`` with the fully qualified
   domain name(s) in the zonegroup.

   ::

       # radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
                                   --rgw-zone=<name> --endpoints http://<fqdn>:80 \
                                   --access-key=<access-key> --secret=<secret-key> \
                                   --master --default

5. Create a system user. Replace ``<user-id>`` with the username.
   Replace ``<display-name>`` with a display name. It may contain
   spaces.

   ::

       # radosgw-admin user create --uid=<user-id> --display-name="<display-name>" \
                                   --access-key=<access-key> --secret=<secret-key> --system

6. Commit the updated configuration.

   ::

       # radosgw-admin period update --commit

7. Finally, restart the Ceph Object Gateway.

   ::

       # systemctl restart ceph-radosgw@rgw.`hostname -s`

After completing this procedure, proceed to `Configure a Secondary
Zone <#configure-secondary-zones>`__ to create a secondary zone
in the master zone group.


Multi-Site Configuration Reference
==================================

The following sections provide additional details and command-line
usage for realms, periods, zone groups and zones.

Realms
------

A realm represents a globally unique namespace consisting of one or more
zonegroups containing one or more zones, and zones containing buckets,
which in turn contain objects. A realm enables the Ceph Object Gateway
to support multiple namespaces and their configuration on the same
hardware.

A realm contains the notion of periods. Each period represents the state
of the zone group and zone configuration in time. Each time you make a
change to a zonegroup or zone, update the period and commit it.

By default, the Ceph Object Gateway does not create a realm
for backward compatibility with Infernalis and earlier releases.
However, as a best practice, we recommend creating realms for new
clusters.

785 | Create a Realm | |
786 | ~~~~~~~~~~~~~~ | |
787 | ||
788 | To create a realm, execute ``realm create`` and specify the realm name. | |
789 | If the realm is the default, specify ``--default``. | |
790 | ||
791 | :: | |
792 | ||
793 | # radosgw-admin realm create --rgw-realm={realm-name} [--default] | |
794 | ||
795 | For example: | |
796 | ||
797 | :: | |
798 | ||
799 | # radosgw-admin realm create --rgw-realm=movies --default | |
800 | ||
By specifying ``--default``, each ``radosgw-admin`` call will implicitly
use this realm unless ``--rgw-realm`` and the realm name are explicitly
provided.
804 | ||
805 | Make a Realm the Default | |
806 | ~~~~~~~~~~~~~~~~~~~~~~~~ | |
807 | ||
808 | One realm in the list of realms should be the default realm. There may | |
809 | be only one default realm. If there is only one realm and it wasn’t | |
810 | specified as the default realm when it was created, make it the default | |
811 | realm. Alternatively, to change which realm is the default, execute: | |
812 | ||
813 | :: | |
814 | ||
815 | # radosgw-admin realm default --rgw-realm=movies | |
816 | ||
.. note:: When the realm is default, the command line assumes
   ``--rgw-realm=<realm-name>`` as an argument.
819 | ||
820 | Delete a Realm | |
821 | ~~~~~~~~~~~~~~ | |
822 | ||
823 | To delete a realm, execute ``realm delete`` and specify the realm name. | |
824 | ||
825 | :: | |
826 | ||
827 | # radosgw-admin realm delete --rgw-realm={realm-name} | |
828 | ||
829 | For example: | |
830 | ||
831 | :: | |
832 | ||
833 | # radosgw-admin realm delete --rgw-realm=movies | |
834 | ||
835 | Get a Realm | |
836 | ~~~~~~~~~~~ | |
837 | ||
838 | To get a realm, execute ``realm get`` and specify the realm name. | |
839 | ||
840 | :: | |
841 | ||
    # radosgw-admin realm get --rgw-realm=<name>
843 | ||
844 | For example: | |
845 | ||
846 | :: | |
847 | ||
848 | # radosgw-admin realm get --rgw-realm=movies [> filename.json] | |
849 | ||
850 | The CLI will echo a JSON object with the realm properties. | |
851 | ||
852 | :: | |
853 | ||
854 | { | |
855 | "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc", | |
856 | "name": "movies", | |
857 | "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b", | |
858 | "epoch": 1 | |
859 | } | |
860 | ||
861 | Use ``>`` and an output file name to output the JSON object to a file. | |
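The saved JSON can also be inspected programmatically, for example to track
which period a realm is on. A minimal Python sketch, using the sample realm
object above (the values are illustrative):

```python
import json

# JSON as written by `radosgw-admin realm get --rgw-realm=movies > filename.json`.
# Embedded here instead of read from a file so the sketch is self-contained.
realm_text = """
{
    "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
    "name": "movies",
    "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
    "epoch": 1
}
"""
realm = json.loads(realm_text)

# current_period ties the realm to its active zone group/zone configuration;
# each committed configuration change advances the period.
print(realm["name"], realm["current_period"], realm["epoch"])
```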
862 | ||
863 | Set a Realm | |
864 | ~~~~~~~~~~~ | |
865 | ||
866 | To set a realm, execute ``realm set``, specify the realm name, and | |
867 | ``--infile=`` with an input file name. | |
868 | ||
869 | :: | |
870 | ||
    # radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>
872 | ||
873 | For example: | |
874 | ||
875 | :: | |
876 | ||
877 | # radosgw-admin realm set --rgw-realm=movies --infile=filename.json | |
878 | ||
879 | List Realms | |
880 | ~~~~~~~~~~~ | |
881 | ||
882 | To list realms, execute ``realm list``. | |
883 | ||
884 | :: | |
885 | ||
886 | # radosgw-admin realm list | |
887 | ||
888 | List Realm Periods | |
889 | ~~~~~~~~~~~~~~~~~~ | |
890 | ||
891 | To list realm periods, execute ``realm list-periods``. | |
892 | ||
893 | :: | |
894 | ||
895 | # radosgw-admin realm list-periods | |
896 | ||
897 | Pull a Realm | |
898 | ~~~~~~~~~~~~ | |
899 | ||
900 | To pull a realm from the node containing the master zone group and | |
901 | master zone to a node containing a secondary zone group or zone, execute | |
902 | ``realm pull`` on the node that will receive the realm configuration. | |
903 | ||
904 | :: | |
905 | ||
906 | # radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret} | |
907 | ||
908 | Rename a Realm | |
909 | ~~~~~~~~~~~~~~ | |
910 | ||
911 | A realm is not part of the period. Consequently, renaming the realm is | |
912 | only applied locally, and will not get pulled with ``realm pull``. When | |
913 | renaming a realm with multiple zones, run the command on each zone. To | |
914 | rename a realm, execute the following: | |
915 | ||
916 | :: | |
917 | ||
918 | # radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name> | |
919 | ||
920 | .. note:: DO NOT use ``realm set`` to change the ``name`` parameter. That | |
921 | changes the internal name only. Specifying ``--rgw-realm`` would | |
922 | still use the old realm name. | |
923 | ||
924 | Zone Groups | |
925 | ----------- | |
926 | ||
927 | The Ceph Object Gateway supports multi-site deployments and a global | |
928 | namespace by using the notion of zone groups. Formerly called a region | |
929 | in Infernalis, a zone group defines the geographic location of one or more Ceph | |
930 | Object Gateway instances within one or more zones. | |
931 | ||
932 | Configuring zone groups differs from typical configuration procedures, | |
933 | because not all of the settings end up in a Ceph configuration file. You | |
934 | can list zone groups, get a zone group configuration, and set a zone | |
935 | group configuration. | |
936 | ||
937 | Create a Zone Group | |
938 | ~~~~~~~~~~~~~~~~~~~ | |
939 | ||
Creating a zone group consists of specifying the zone group name. The
zone group is created in the default realm unless
``--rgw-realm=<realm-name>`` is specified. If the zonegroup is the
default zonegroup, specify the ``--default`` flag. If the zonegroup is
the master zonegroup, specify the ``--master`` flag. For example:
945 | ||
946 | :: | |
947 | ||
948 | # radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>][--master] [--default] | |
949 | ||
950 | ||
951 | .. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify | |
952 | an existing zone group’s settings. | |
953 | ||
954 | Make a Zone Group the Default | |
955 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
956 | ||
957 | One zonegroup in the list of zonegroups should be the default zonegroup. | |
958 | There may be only one default zonegroup. If there is only one zonegroup | |
959 | and it wasn’t specified as the default zonegroup when it was created, | |
960 | make it the default zonegroup. Alternatively, to change which zonegroup | |
961 | is the default, execute: | |
962 | ||
963 | :: | |
964 | ||
965 | # radosgw-admin zonegroup default --rgw-zonegroup=comedy | |
966 | ||
967 | .. note:: When the zonegroup is default, the command line assumes | |
968 | ``--rgw-zonegroup=<zonegroup-name>`` as an argument. | |
969 | ||
970 | Then, update the period: | |
971 | ||
972 | :: | |
973 | ||
974 | # radosgw-admin period update --commit | |
975 | ||
976 | Add a Zone to a Zone Group | |
977 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
978 | ||
979 | To add a zone to a zonegroup, execute the following: | |
980 | ||
981 | :: | |
982 | ||
983 | # radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name> | |
984 | ||
985 | Then, update the period: | |
986 | ||
987 | :: | |
988 | ||
989 | # radosgw-admin period update --commit | |
990 | ||
991 | Remove a Zone from a Zone Group | |
992 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
993 | ||
994 | To remove a zone from a zonegroup, execute the following: | |
995 | ||
996 | :: | |
997 | ||
998 | # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name> | |
999 | ||
1000 | Then, update the period: | |
1001 | ||
1002 | :: | |
1003 | ||
1004 | # radosgw-admin period update --commit | |
1005 | ||
1006 | Rename a Zone Group | |
1007 | ~~~~~~~~~~~~~~~~~~~ | |
1008 | ||
1009 | To rename a zonegroup, execute the following: | |
1010 | ||
1011 | :: | |
1012 | ||
1013 | # radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name> | |
1014 | ||
1015 | Then, update the period: | |
1016 | ||
1017 | :: | |
1018 | ||
1019 | # radosgw-admin period update --commit | |
1020 | ||
1021 | Delete a Zone Group | |
1022 | ~~~~~~~~~~~~~~~~~~~ | |
1023 | ||
1024 | To delete a zonegroup, execute the following: | |
1025 | ||
1026 | :: | |
1027 | ||
1028 | # radosgw-admin zonegroup delete --rgw-zonegroup=<name> | |
1029 | ||
1030 | Then, update the period: | |
1031 | ||
1032 | :: | |
1033 | ||
1034 | # radosgw-admin period update --commit | |
1035 | ||
1036 | List Zone Groups | |
1037 | ~~~~~~~~~~~~~~~~ | |
1038 | ||
1039 | A Ceph cluster contains a list of zone groups. To list the zone groups, | |
1040 | execute: | |
1041 | ||
1042 | :: | |
1043 | ||
1044 | # radosgw-admin zonegroup list | |
1045 | ||
The ``radosgw-admin`` command returns a JSON-formatted list of zone groups.
1047 | ||
1048 | :: | |
1049 | ||
1050 | { | |
1051 | "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda", | |
1052 | "zonegroups": [ | |
1053 | "us" | |
1054 | ] | |
1055 | } | |
1056 | ||
1057 | Get a Zone Group Map | |
1058 | ~~~~~~~~~~~~~~~~~~~~ | |
1059 | ||
1060 | To list the details of each zone group, execute: | |
1061 | ||
1062 | :: | |
1063 | ||
1064 | # radosgw-admin zonegroup-map get | |
1065 | ||
1066 | .. note:: If you receive a ``failed to read zonegroup map`` error, run | |
1067 | ``radosgw-admin zonegroup-map update`` as ``root`` first. | |
1068 | ||
1069 | Get a Zone Group | |
1070 | ~~~~~~~~~~~~~~~~ | |
1071 | ||
1072 | To view the configuration of a zone group, execute: | |
1073 | ||
1074 | :: | |
1075 | ||
    # radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]
1077 | ||
1078 | The zone group configuration looks like this: | |
1079 | ||
1080 | :: | |
1081 | ||
1082 | { | |
1083 | "id": "90b28698-e7c3-462c-a42d-4aa780d24eda", | |
1084 | "name": "us", | |
1085 | "api_name": "us", | |
1086 | "is_master": "true", | |
1087 | "endpoints": [ | |
1088 | "http:\/\/rgw1:80" | |
1089 | ], | |
1090 | "hostnames": [], | |
1091 | "hostnames_s3website": [], | |
1092 | "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e", | |
1093 | "zones": [ | |
1094 | { | |
1095 | "id": "9248cab2-afe7-43d8-a661-a40bf316665e", | |
1096 | "name": "us-east", | |
1097 | "endpoints": [ | |
1098 | "http:\/\/rgw1" | |
1099 | ], | |
1100 | "log_meta": "true", | |
1101 | "log_data": "true", | |
1102 | "bucket_index_max_shards": 0, | |
1103 | "read_only": "false" | |
1104 | }, | |
1105 | { | |
1106 | "id": "d1024e59-7d28-49d1-8222-af101965a939", | |
1107 | "name": "us-west", | |
1108 | "endpoints": [ | |
1109 | "http:\/\/rgw2:80" | |
1110 | ], | |
1111 | "log_meta": "false", | |
1112 | "log_data": "true", | |
1113 | "bucket_index_max_shards": 0, | |
1114 | "read_only": "false" | |
1115 | } | |
1116 | ], | |
1117 | "placement_targets": [ | |
1118 | { | |
1119 | "name": "default-placement", | |
1120 | "tags": [] | |
1121 | } | |
1122 | ], | |
1123 | "default_placement": "default-placement", | |
1124 | "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe" | |
1125 | } | |
1126 | ||
1127 | Set a Zone Group | |
1128 | ~~~~~~~~~~~~~~~~ | |
1129 | ||
1130 | Defining a zone group consists of creating a JSON object, specifying at | |
1131 | least the required settings: | |
1132 | ||
1133 | 1. ``name``: The name of the zone group. Required. | |
1134 | ||
1135 | 2. ``api_name``: The API name for the zone group. Optional. | |
1136 | ||
1137 | 3. ``is_master``: Determines if the zone group is the master zone group. | |
1138 | Required. **note:** You can only have one master zone group. | |
1139 | ||
1140 | 4. ``endpoints``: A list of all the endpoints in the zone group. For | |
1141 | example, you may use multiple domain names to refer to the same zone | |
1142 | group. Remember to escape the forward slashes (``\/``). You may also | |
1143 | specify a port (``fqdn:port``) for each endpoint. Optional. | |
1144 | ||
1145 | 5. ``hostnames``: A list of all the hostnames in the zone group. For | |
1146 | example, you may use multiple domain names to refer to the same zone | |
1147 | group. Optional. The ``rgw dns name`` setting will automatically be | |
1148 | included in this list. You should restart the gateway daemon(s) after | |
1149 | changing this setting. | |
1150 | ||
1151 | 6. ``master_zone``: The master zone for the zone group. Optional. Uses | |
1152 | the default zone if not specified. **note:** You can only have one | |
1153 | master zone per zone group. | |
1154 | ||
1155 | 7. ``zones``: A list of all zones within the zone group. Each zone has a | |
1156 | name (required), a list of endpoints (optional), and whether or not | |
1157 | the gateway will log metadata and data operations (false by default). | |
1158 | ||
1159 | 8. ``placement_targets``: A list of placement targets (optional). Each | |
1160 | placement target contains a name (required) for the placement target | |
1161 | and a list of tags (optional) so that only users with the tag can use | |
1162 | the placement target (i.e., the user’s ``placement_tags`` field in | |
1163 | the user info). | |
1164 | ||
1165 | 9. ``default_placement``: The default placement target for the object | |
1166 | index and object data. Set to ``default-placement`` by default. You | |
1167 | may also set a per-user default placement in the user info for each | |
1168 | user. | |
1169 | ||
1170 | To set a zone group, create a JSON object consisting of the required | |
1171 | fields, save the object to a file (e.g., ``zonegroup.json``); then, | |
1172 | execute the following command: | |
1173 | ||
1174 | :: | |
1175 | ||
1176 | # radosgw-admin zonegroup set --infile zonegroup.json | |
1177 | ||
1178 | Where ``zonegroup.json`` is the JSON file you created. | |
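As a sketch of the fields described above, the following Python snippet
assembles a minimal zone group object and writes it to ``zonegroup.json``.
The zone group, zone names, and endpoints are illustrative assumptions, not
required values; ``json.dump`` handles quoting, so the slashes do not need
to be escaped by hand:

```python
import json

# Minimal zone group covering the required fields; names are hypothetical.
zonegroup = {
    "name": "us",                        # required
    "api_name": "us",                    # optional
    "is_master": "true",                 # only one master zone group allowed
    "endpoints": ["http://rgw1:80"],     # optional; fqdn:port form
    "master_zone": "us-east",            # only one master zone per zone group
    "zones": [
        {"name": "us-east", "endpoints": ["http://rgw1:80"],
         "log_meta": "true", "log_data": "true"},
        {"name": "us-west", "endpoints": ["http://rgw2:80"],
         "log_meta": "false", "log_data": "true"},
    ],
    "placement_targets": [
        {"name": "default-placement", "tags": []}
    ],
    "default_placement": "default-placement",
}

# Write the file consumed by `radosgw-admin zonegroup set --infile zonegroup.json`.
with open("zonegroup.json", "w") as f:
    json.dump(zonegroup, f, indent=4)
```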
1179 | ||
1180 | .. important:: The ``default`` zone group ``is_master`` setting is ``true`` by | |
1181 | default. If you create a new zone group and want to make it the | |
1182 | master zone group, you must either set the ``default`` zone group | |
1183 | ``is_master`` setting to ``false``, or delete the ``default`` zone | |
1184 | group. | |
1185 | ||
1186 | Finally, update the period: | |
1187 | ||
1188 | :: | |
1189 | ||
1190 | # radosgw-admin period update --commit | |
1191 | ||
1192 | Set a Zone Group Map | |
1193 | ~~~~~~~~~~~~~~~~~~~~ | |
1194 | ||
1195 | Setting a zone group map consists of creating a JSON object consisting | |
1196 | of one or more zone groups, and setting the ``master_zonegroup`` for the | |
1197 | cluster. Each zone group in the zone group map consists of a key/value | |
1198 | pair, where the ``key`` setting is equivalent to the ``name`` setting | |
1199 | for an individual zone group configuration, and the ``val`` is a JSON | |
1200 | object consisting of an individual zone group configuration. | |
1201 | ||
1202 | You may only have one zone group with ``is_master`` equal to ``true``, | |
1203 | and it must be specified as the ``master_zonegroup`` at the end of the | |
1204 | zone group map. The following JSON object is an example of a default | |
1205 | zone group map. | |
1206 | ||
1207 | :: | |
1208 | ||
1209 | { | |
1210 | "zonegroups": [ | |
1211 | { | |
1212 | "key": "90b28698-e7c3-462c-a42d-4aa780d24eda", | |
1213 | "val": { | |
1214 | "id": "90b28698-e7c3-462c-a42d-4aa780d24eda", | |
1215 | "name": "us", | |
1216 | "api_name": "us", | |
1217 | "is_master": "true", | |
1218 | "endpoints": [ | |
1219 | "http:\/\/rgw1:80" | |
1220 | ], | |
1221 | "hostnames": [], | |
1222 | "hostnames_s3website": [], | |
1223 | "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e", | |
1224 | "zones": [ | |
1225 | { | |
1226 | "id": "9248cab2-afe7-43d8-a661-a40bf316665e", | |
1227 | "name": "us-east", | |
1228 | "endpoints": [ | |
1229 | "http:\/\/rgw1" | |
1230 | ], | |
1231 | "log_meta": "true", | |
1232 | "log_data": "true", | |
1233 | "bucket_index_max_shards": 0, | |
1234 | "read_only": "false" | |
1235 | }, | |
1236 | { | |
1237 | "id": "d1024e59-7d28-49d1-8222-af101965a939", | |
1238 | "name": "us-west", | |
1239 | "endpoints": [ | |
1240 | "http:\/\/rgw2:80" | |
1241 | ], | |
1242 | "log_meta": "false", | |
1243 | "log_data": "true", | |
1244 | "bucket_index_max_shards": 0, | |
1245 | "read_only": "false" | |
1246 | } | |
1247 | ], | |
1248 | "placement_targets": [ | |
1249 | { | |
1250 | "name": "default-placement", | |
1251 | "tags": [] | |
1252 | } | |
1253 | ], | |
1254 | "default_placement": "default-placement", | |
1255 | "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe" | |
1256 | } | |
1257 | } | |
1258 | ], | |
1259 | "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda", | |
1260 | "bucket_quota": { | |
1261 | "enabled": false, | |
1262 | "max_size_kb": -1, | |
1263 | "max_objects": -1 | |
1264 | }, | |
1265 | "user_quota": { | |
1266 | "enabled": false, | |
1267 | "max_size_kb": -1, | |
1268 | "max_objects": -1 | |
1269 | } | |
1270 | } | |
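Since the map must satisfy the single-master constraint, it can be worth
sanity-checking the file before applying it (``radosgw-admin`` performs its
own validation; this is only an illustrative pre-check):

```python
def check_single_master(zgmap):
    """Verify exactly one zone group has is_master == "true" and that
    master_zonegroup references that zone group's key."""
    masters = [zg["key"] for zg in zgmap["zonegroups"]
               if zg["val"].get("is_master") == "true"]
    assert len(masters) == 1, "exactly one master zone group is required"
    assert zgmap["master_zonegroup"] == masters[0], \
        "master_zonegroup must reference the master zone group's key"

# Illustrative map with one zone group, mirroring the example above.
sample = {
    "zonegroups": [
        {"key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
         "val": {"name": "us", "is_master": "true"}},
    ],
    "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
}
check_single_master(sample)
```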
1271 | ||
1272 | To set a zone group map, execute the following: | |
1273 | ||
1274 | :: | |
1275 | ||
1276 | # radosgw-admin zonegroup-map set --infile zonegroupmap.json | |
1277 | ||
1278 | Where ``zonegroupmap.json`` is the JSON file you created. Ensure that | |
1279 | you have zones created for the ones specified in the zone group map. | |
1280 | Finally, update the period. | |
1281 | ||
1282 | :: | |
1283 | ||
1284 | # radosgw-admin period update --commit | |
1285 | ||
1286 | Zones | |
1287 | ----- | |
1288 | ||
1289 | Ceph Object Gateway supports the notion of zones. A zone defines a | |
1290 | logical group consisting of one or more Ceph Object Gateway instances. | |
1291 | ||
1292 | Configuring zones differs from typical configuration procedures, because | |
1293 | not all of the settings end up in a Ceph configuration file. You can | |
1294 | list zones, get a zone configuration and set a zone configuration. | |
1295 | ||
1296 | Create a Zone | |
1297 | ~~~~~~~~~~~~~ | |
1298 | ||
1299 | To create a zone, specify a zone name. If it is a master zone, specify | |
1300 | the ``--master`` option. Only one zone in a zone group may be a master | |
1301 | zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup`` | |
1302 | option with the zonegroup name. | |
1303 | ||
1304 | :: | |
1305 | ||
    # radosgw-admin zone create --rgw-zone=<name> \
                [--zonegroup=<zonegroup-name>] \
                [--endpoints=<endpoint>[,<endpoint>]] \
                [--master] [--default] \
                --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY
1311 | ||
1312 | Then, update the period: | |
1313 | ||
1314 | :: | |
1315 | ||
1316 | # radosgw-admin period update --commit | |
1317 | ||
1318 | Delete a Zone | |
1319 | ~~~~~~~~~~~~~ | |
1320 | ||
To delete a zone, first remove it from the zonegroup.
1322 | ||
1323 | :: | |
1324 | ||
    # radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
1327 | ||
1328 | Then, update the period: | |
1329 | ||
1330 | :: | |
1331 | ||
1332 | # radosgw-admin period update --commit | |
1333 | ||
1334 | Next, delete the zone. Execute the following: | |
1335 | ||
1336 | :: | |
1337 | ||
    # radosgw-admin zone delete --rgw-zone=<name>
1339 | ||
1340 | Finally, update the period: | |
1341 | ||
1342 | :: | |
1343 | ||
1344 | # radosgw-admin period update --commit | |
1345 | ||
1346 | .. important:: Do not delete a zone without removing it from a zone group first. | |
1347 | Otherwise, updating the period will fail. | |
1348 | ||
1349 | If the pools for the deleted zone will not be used anywhere else, | |
1350 | consider deleting the pools. Replace ``<del-zone>`` in the example below | |
1351 | with the deleted zone’s name. | |
1352 | ||
.. important:: Only delete the pools with prepended zone names. Deleting the root
   pool, such as ``.rgw.root``, will remove all of the system's
   configuration.
1356 | ||
.. important:: Once the pools are deleted, all of the data within them is deleted
   in an unrecoverable manner. Only delete the pools if the pool
   contents are no longer needed.
1360 | ||
1361 | :: | |
1362 | ||
1363 | # rados rmpool <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it | |
1364 | # rados rmpool <del-zone>.rgw.data.root <del-zone>.rgw.data.root --yes-i-really-really-mean-it | |
1365 | # rados rmpool <del-zone>.rgw.gc <del-zone>.rgw.gc --yes-i-really-really-mean-it | |
1366 | # rados rmpool <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it | |
1367 | # rados rmpool <del-zone>.rgw.users.uid <del-zone>.rgw.users.uid --yes-i-really-really-mean-it | |
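Because the per-zone pool names follow a fixed pattern, the cleanup commands
can be generated rather than typed by hand. A Python sketch (the zone name
``us-west`` is a hypothetical example; review the generated list carefully
before running anything against a live cluster):

```python
# Suffixes of the per-zone pools listed in the commands above.
POOL_SUFFIXES = [
    "rgw.control",
    "rgw.data.root",
    "rgw.gc",
    "rgw.log",
    "rgw.users.uid",
]

def rmpool_commands(zone):
    """Build the `rados rmpool` commands for a deleted zone's pools.
    `rados rmpool` requires the pool name twice plus the safety flag."""
    return [
        "rados rmpool {0}.{1} {0}.{1} --yes-i-really-really-mean-it".format(zone, s)
        for s in POOL_SUFFIXES
    ]

for cmd in rmpool_commands("us-west"):   # hypothetical zone name
    print(cmd)
```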
1368 | ||
1369 | Modify a Zone | |
1370 | ~~~~~~~~~~~~~ | |
1371 | ||
1372 | To modify a zone, specify the zone name and the parameters you wish to | |
1373 | modify. | |
1374 | ||
1375 | :: | |
1376 | ||
1377 | # radosgw-admin zone modify [options] | |
1378 | ||
1379 | Where ``[options]``: | |
1380 | ||
1381 | - ``--access-key=<key>`` | |
1382 | - ``--secret/--secret-key=<key>`` | |
1383 | - ``--master`` | |
1384 | - ``--default`` | |
1385 | - ``--endpoints=<list>`` | |
1386 | ||
1387 | Then, update the period: | |
1388 | ||
1389 | :: | |
1390 | ||
1391 | # radosgw-admin period update --commit | |
1392 | ||
1393 | List Zones | |
1394 | ~~~~~~~~~~ | |
1395 | ||
1396 | As ``root``, to list the zones in a cluster, execute: | |
1397 | ||
1398 | :: | |
1399 | ||
1400 | # radosgw-admin zone list | |
1401 | ||
1402 | Get a Zone | |
1403 | ~~~~~~~~~~ | |
1404 | ||
1405 | As ``root``, to get the configuration of a zone, execute: | |
1406 | ||
1407 | :: | |
1408 | ||
1409 | # radosgw-admin zone get [--rgw-zone=<zone>] | |
1410 | ||
1411 | The ``default`` zone looks like this: | |
1412 | ||
1413 | :: | |
1414 | ||
1415 | { "domain_root": ".rgw", | |
1416 | "control_pool": ".rgw.control", | |
1417 | "gc_pool": ".rgw.gc", | |
1418 | "log_pool": ".log", | |
1419 | "intent_log_pool": ".intent-log", | |
1420 | "usage_log_pool": ".usage", | |
1421 | "user_keys_pool": ".users", | |
1422 | "user_email_pool": ".users.email", | |
1423 | "user_swift_pool": ".users.swift", | |
1424 | "user_uid_pool": ".users.uid", | |
1425 | "system_key": { "access_key": "", "secret_key": ""}, | |
1426 | "placement_pools": [ | |
1427 | { "key": "default-placement", | |
1428 | "val": { "index_pool": ".rgw.buckets.index", | |
1429 | "data_pool": ".rgw.buckets"} | |
1430 | } | |
1431 | ] | |
1432 | } | |
1433 | ||
1434 | Set a Zone | |
1435 | ~~~~~~~~~~ | |
1436 | ||
1437 | Configuring a zone involves specifying a series of Ceph Object Gateway | |
1438 | pools. For consistency, we recommend using a pool prefix that is the | |
1439 | same as the zone name. See | |
1440 | `Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__ | |
1441 | for details of configuring pools. | |
1442 | ||
1443 | To set a zone, create a JSON object consisting of the pools, save the | |
1444 | object to a file (e.g., ``zone.json``); then, execute the following | |
1445 | command, replacing ``{zone-name}`` with the name of the zone: | |
1446 | ||
1447 | :: | |
1448 | ||
1449 | # radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json | |
1450 | ||
1451 | Where ``zone.json`` is the JSON file you created. | |
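Following the recommendation to use a pool prefix matching the zone name, the
zone JSON can be generated from the zone name. A Python sketch that mirrors
the ``default`` zone's pool layout shown above (the zone name ``us-east`` is
an assumption):

```python
import json

def zone_config(zone):
    """Build a zone configuration whose pool names are prefixed with the
    zone name, mirroring the default zone's pool layout."""
    return {
        "domain_root": zone + ".rgw",
        "control_pool": zone + ".rgw.control",
        "gc_pool": zone + ".rgw.gc",
        "log_pool": zone + ".log",
        "intent_log_pool": zone + ".intent-log",
        "usage_log_pool": zone + ".usage",
        "user_keys_pool": zone + ".users",
        "user_email_pool": zone + ".users.email",
        "user_swift_pool": zone + ".users.swift",
        "user_uid_pool": zone + ".users.uid",
        "system_key": {"access_key": "", "secret_key": ""},
        "placement_pools": [
            {"key": "default-placement",
             "val": {"index_pool": zone + ".rgw.buckets.index",
                     "data_pool": zone + ".rgw.buckets"}}
        ],
    }

# Write the file consumed by `radosgw-admin zone set ... --infile zone.json`.
with open("zone.json", "w") as f:
    json.dump(zone_config("us-east"), f, indent=4)
```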
1452 | ||
1453 | Then, as ``root``, update the period: | |
1454 | ||
1455 | :: | |
1456 | ||
1457 | # radosgw-admin period update --commit | |
1458 | ||
1459 | Rename a Zone | |
1460 | ~~~~~~~~~~~~~ | |
1461 | ||
1462 | To rename a zone, specify the zone name and the new zone name. | |
1463 | ||
1464 | :: | |
1465 | ||
1466 | # radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name> | |
1467 | ||
1468 | Then, update the period: | |
1469 | ||
1470 | :: | |
1471 | ||
1472 | # radosgw-admin period update --commit | |
1473 | ||
1474 | Zone Group and Zone Settings | |
1475 | ---------------------------- | |
1476 | ||
1477 | When configuring a default zone group and zone, the pool name includes | |
1478 | the zone name. For example: | |
1479 | ||
1480 | - ``default.rgw.control`` | |
1481 | ||
1482 | To change the defaults, include the following settings in your Ceph | |
1483 | configuration file under each ``[client.radosgw.{instance-name}]`` | |
1484 | instance. | |
1485 | ||
1486 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1487 | | Name | Description | Type | Default | | |
1488 | +=====================================+===================================+=========+=======================+ | |
1489 | | ``rgw_zone`` | The name of the zone for the | String | None | | |
1490 | | | gateway instance. | | | | |
1491 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1492 | | ``rgw_zonegroup`` | The name of the zone group for | String | None | | |
1493 | | | the gateway instance. | | | | |
1494 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1495 | | ``rgw_zonegroup_root_pool`` | The root pool for the zone group. | String | ``.rgw.root`` | | |
1496 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1497 | | ``rgw_zone_root_pool`` | The root pool for the zone. | String | ``.rgw.root`` | | |
1498 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1499 | | ``rgw_default_zone_group_info_oid`` | The OID for storing the default | String | ``default.zonegroup`` | | |
1500 | | | zone group. We do not recommend | | | | |
1501 | | | changing this setting. | | | | |
1502 | +-------------------------------------+-----------------------------------+---------+-----------------------+ | |
1503 | | ``rgw_num_zone_opstate_shards`` | The maximum number of shards for | Integer | ``128`` | | |
1504 | | | keeping inter-zone group | | | | |
1505 | | | synchronization progress. | | | | |
1506 | +-------------------------------------+-----------------------------------+---------+-----------------------+ |
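For example, two gateway instances serving different zones of the ``us``
zone group might carry the following settings in the Ceph configuration
file (the instance and zone names are illustrative):

::

    [client.radosgw.rgw1]
    rgw_zone = us-east
    rgw_zonegroup = us

    [client.radosgw.rgw2]
    rgw_zone = us-west
    rgw_zonegroup = us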