===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

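For example, an option can be set in that section of the monitor
configuration database with ``ceph config``. This is a sketch that assumes a
service named *foo* and uses ``rgw_enable_usage_log`` purely as an
illustrative option:

.. prompt:: bash #

   ceph config set client.rgw.foo rgw_enable_usage_log true
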
To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

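You can confirm the resulting service and daemon placement with the
orchestrator, for example:

.. prompt:: bash #

   ceph orch ls rgw
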
.. _cephadm-rgw-designated_gateways:

Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

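The same deployment can also be expressed as a service specification. This
sketch mirrors the command above:

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   spec:
     rgw_frontend_port: 8000
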
See also: :ref:`cephadm_co_location`.

.. _cephadm-rgw-networks:

Specifying Networks
-------------------

The RGW service can have the network it binds to configured with a yaml service specification.

example spec file:

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   networks:
   - 192.169.142.0/24
   spec:
     rgw_frontend_port: 8080

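As with other service specifications, the file is then applied with the
orchestrator (the filename here is only an example):

.. prompt:: bash #

   ceph orch apply -i rgw-spec.yaml
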
Passing Frontend Extra Arguments
--------------------------------

The RGW service specification can be used to pass extra arguments to the rgw frontend by using
the `rgw_frontend_extra_args` arguments list.

example spec file:

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   spec:
     rgw_realm: myrealm
     rgw_zone: myzone
     rgw_frontend_type: "beast"
     rgw_frontend_port: 5000
     rgw_frontend_extra_args:
     - "tcp_nodelay=1"
     - "max_header_size=65536"

.. note:: cephadm combines the arguments from the `spec` section and the ones from
          the `rgw_frontend_extra_args` list into a single space-separated arguments list
          which is used to set the value of the `rgw_frontends` configuration parameter.

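For the spec above, the resulting combined value would look something like
the following (shown only to illustrate how the pieces are joined)::

   beast port=5000 tcp_nodelay=1 max_header_size=65536
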
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zonegroup=us-east-zg-1 --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does not create
or update the realm or zone configurations. To create new realms, zones, and zonegroups,
you can use :ref:`mgr-rgw-module` or create them manually with commands like the following:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name>

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

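Putting the placeholders together for the *myorg* example above, the manual
sequence would look like this (names taken from the deploy command earlier
in this section):

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=myorg
   radosgw-admin zonegroup create --rgw-zonegroup=us-east-zg-1 --master
   radosgw-admin zone create --rgw-zonegroup=us-east-zg-1 --rgw-zone=us-east-1 --master
   radosgw-admin period update --rgw-realm=myorg --commit
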
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information on setting up multisite RGW.

Setting up HTTPS
----------------

In order to enable HTTPS for RGW services, apply a spec file following this scheme:

.. code-block:: yaml

   service_type: rgw
   service_id: myrgw
   spec:
     rgw_frontend_ssl_certificate: |
       -----BEGIN PRIVATE KEY-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END PRIVATE KEY-----
       -----BEGIN CERTIFICATE-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END CERTIFICATE-----
     ssl: true

Then apply this yaml document:

.. prompt:: bash #

   ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a literal string, as
indicated by the ``|`` character, which preserves newline characters.

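To sanity-check the endpoint after deployment, you can issue a request
against one of the gateway hosts. This sketch assumes a host named
``gwhost1`` and the default SSL port of 443; ``-k`` skips certificate
verification for a self-signed certificate:

.. prompt:: bash #

   curl -k https://gwhost1:443
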
Service specification
---------------------

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: RGWSpec
   :members:

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If the RGW service is configured with SSL enabled, then the ingress service
will use the `ssl` and `verify none` options in the backend configuration.
Trust verification is disabled because the backends are accessed by IP
address instead of FQDN.

.. image:: ../../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts like a load balancer, distributing all RGW requests
between all the RGW daemons available.

Prerequisites
-------------

* An existing RGW service.

Deploying
---------

Use the command::

   ceph orch apply -i <ingress_spec_file>

Service specification
---------------------

It is a yaml format file with the following properties:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something    # adjust to match your existing RGW service
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.something      # adjust to match your existing RGW service
     virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
     frontend_port: <integer>            # ex: 8080
     monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
     virtual_interface_networks: [ ... ] # optional: list of CIDR networks
     use_keepalived_multicast: <bool>    # optional: Default is False.
     vrrp_interface_network: <string>/<string> # optional: ex: 192.168.20.0/24
     ssl_cert: |                         # optional: SSL certificate and key
       -----BEGIN CERTIFICATE-----
       ...
       -----END CERTIFICATE-----
       -----BEGIN PRIVATE KEY-----
       ...
       -----END PRIVATE KEY-----

Alternatively, multiple virtual IPs can be specified with ``virtual_ips_list``:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something    # adjust to match your existing RGW service
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.something      # adjust to match your existing RGW service
     virtual_ips_list:
     - <string>/<string>                 # ex: 192.168.20.1/24
     - <string>/<string>                 # ex: 192.168.20.2/24
     - <string>/<string>                 # ex: 192.168.20.3/24
     frontend_port: <integer>            # ex: 8080
     monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
     virtual_interface_networks: [ ... ] # optional: list of CIDR networks
     first_virtual_router_id: <integer>  # optional: default 50
     ssl_cert: |                         # optional: SSL certificate and key
       -----BEGIN CERTIFICATE-----
       ...
       -----END CERTIFICATE-----
       -----BEGIN PRIVATE KEY-----
       ...
       -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress"
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not need
    to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_ips_list``
    The virtual IP address in CIDR format where the ingress service will be available.
    Each virtual IP address will be primary on one node running the ingress service. The number
    of virtual IP addresses must be less than or equal to the number of ingress nodes.
* ``virtual_interface_networks``
    A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``ssl_cert``:
    SSL certificate, if SSL is to be enabled. This must contain both the certificate and
    private key blocks in .pem format.
* ``use_keepalived_multicast``
    Default is False. By default, cephadm will deploy keepalived config to use unicast IPs,
    using the IPs of the hosts. The IPs chosen will be the same IPs cephadm uses to connect
    to the machines. But if multicast is preferred, we can set ``use_keepalived_multicast``
    to ``True`` and Keepalived will use the multicast IP (224.0.0.18) to communicate between
    instances, using the same interfaces where the VIPs are configured.
* ``vrrp_interface_network``
    By default, cephadm will configure keepalived to use the same interface where the VIPs are
    for VRRP communication. If another interface is needed, it can be set via ``vrrp_interface_network``
    with a network to identify which ethernet interface to use.
* ``first_virtual_router_id``
    Default is 50. When deploying more than one ingress, this parameter can be used to ensure each
    keepalived will have a different ``virtual_router_id``. In the case of using ``virtual_ips_list``,
    each IP will create its own virtual router. So the first one will have ``first_virtual_router_id``,
    the second one will have ``first_virtual_router_id`` + 1, etc. Valid values go from 1 to 255.

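For reference, a minimal concrete spec combining the properties above might
look like this (all values are illustrative):

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.foo
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.foo
     virtual_ip: 192.168.20.1/24
     frontend_port: 8080
     monitor_port: 1967
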
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface to match. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

   service_type: ingress
   service_id: rgw.something
   spec:
     virtual_ip: 192.168.0.80/24
     virtual_interface_networks:
     - 10.10.0.0/16
     ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address in an unroutable network on the correct interface
and referencing that dummy network in the networks list (see above).

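For example, a sketch of adding such a dummy address by hand, assuming the
desired interface is ``eth2`` and using the reserved documentation network
192.0.2.0/24 (in practice, a persistent network configuration is
preferable):

.. prompt:: bash #

   ip addr add 192.0.2.10/24 dev eth2

The 192.0.2.0/24 network would then be listed in
``virtual_interface_networks`` in the ingress spec.
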
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.

Further Reading
===============

* :ref:`object-gateway`
* :ref:`mgr-rgw-module`