===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` file or the command
line. If that configuration is not already in place (usually in the
``client.rgw.<something>`` section), then the radosgw daemons will start up
with default settings (e.g., binding to port 80).

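For example, a configuration option for a service named *foo* can be stored in
the monitor configuration database before (or after) deployment. The option
used here, ``rgw_enable_usage_log``, is only an illustrative choice:

.. prompt:: bash #

   ceph config set client.rgw.foo rgw_enable_usage_log true
   ceph config get client.rgw.foo rgw_enable_usage_log
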
To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW
deployment under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

.. _cephadm-rgw-designated_gateways:

Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

See also: :ref:`cephadm_co_location`.

.. _cephadm-rgw-networks:

Specifying Networks
-------------------

The RGW service can have the network it binds to configured with a YAML
service specification.

An example spec file:

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   networks:
   - 192.169.142.0/24
   spec:
     rgw_frontend_port: 8080

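The spec can then be applied with ``ceph orch apply``; the file name below is
just an example:

.. prompt:: bash #

   ceph orch apply -i rgw-foo.yaml
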
Passing Frontend Extra Arguments
--------------------------------

The RGW service specification can be used to pass extra arguments to the RGW
frontend by using the ``rgw_frontend_extra_args`` list.

An example spec file:

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   spec:
     rgw_realm: myrealm
     rgw_zone: myzone
     rgw_frontend_type: "beast"
     rgw_frontend_port: 5000
     rgw_frontend_extra_args:
     - "tcp_nodelay=1"
     - "max_header_size=65536"

.. note:: cephadm combines the arguments from the ``spec`` section with those
   from the ``rgw_frontend_extra_args`` list into a single space-separated
   arguments list, which is used to set the value of the ``rgw_frontends``
   configuration parameter.

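For the example spec above, the combined value would look roughly like the
following (the exact composition is an implementation detail of cephadm)::

    beast port=5000 tcp_nodelay=1 max_header_size=65536
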
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zonegroup=us-east-zg-1 --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does
not create or update the realm or zone configurations. To create new realms,
zones, and zonegroups, you can use the :ref:`mgr-rgw-module` or create them
manually with commands like the following:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name>
   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master
   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master
   radosgw-admin period update --rgw-realm=<realm-name> --commit

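For example, the entities used in the *myorg* / *us-east-1* deployment above
could be created as follows (a sketch; endpoints, access keys, and the other
multisite details are covered in :ref:`multisite`):

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=myorg
   radosgw-admin zonegroup create --rgw-zonegroup=us-east-zg-1 --master
   radosgw-admin zone create --rgw-zonegroup=us-east-zg-1 --rgw-zone=us-east-1 --master
   radosgw-admin period update --rgw-realm=myorg --commit
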
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information about setting up
multisite RGW.

Setting up HTTPS
----------------

In order to enable HTTPS for RGW services, apply a spec file following this scheme:

.. code-block:: yaml

   service_type: rgw
   service_id: myrgw
   spec:
     rgw_frontend_ssl_certificate: |
       -----BEGIN PRIVATE KEY-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END PRIVATE KEY-----
       -----BEGIN CERTIFICATE-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END CERTIFICATE-----
     ssl: true

Then apply this YAML document:

.. prompt:: bash #

   ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a literal string,
as indicated by the ``|`` character, which preserves newline characters.

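After the spec has been applied, the resulting daemons can be inspected with:

.. prompt:: bash #

   ceph orch ps --daemon-type rgw
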
Service specification
---------------------

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: RGWSpec
   :members:

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If the RGW service is configured with SSL enabled, then the ingress service
will use the ``ssl`` and ``verify none`` options in the backend configuration.
Trust verification is disabled because the backends are accessed by IP
address instead of FQDN.

.. image:: ../../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

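To see which host currently holds a virtual IP, one can look for the address
on the ingress hosts (192.168.20.1 here stands in for whatever VIP was
configured):

.. prompt:: bash #

   ip -brief address show | grep 192.168.20.1
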
Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived also checks that the
master keepalived daemon is running without problems. If the "master"
keepalived daemon or the active haproxy is not responding, one of the
remaining keepalived daemons running in backup mode will be elected
master, and the virtual IP will be moved to that node.

The active haproxy acts like a load balancer, distributing all RGW requests
among the available RGW daemons.

Prerequisites
-------------

* An existing RGW service.

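Whether a suitable RGW service is already deployed can be checked with:

.. prompt:: bash #

   ceph orch ls rgw
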
Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

Service specification
---------------------

The ingress service specification is a YAML file with the following
properties:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something             # adjust to match your existing RGW service
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.something      # adjust to match your existing RGW service
     virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
     frontend_port: <integer>            # ex: 8080
     monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
     virtual_interface_networks: [ ... ] # optional: list of CIDR networks
     use_keepalived_multicast: <bool>    # optional: default is False
     vrrp_interface_network: <string>/<string> # optional: ex: 192.168.20.0/24
     ssl_cert: |                         # optional: SSL certificate and key
       -----BEGIN CERTIFICATE-----
       ...
       -----END CERTIFICATE-----
       -----BEGIN PRIVATE KEY-----
       ...
       -----END PRIVATE KEY-----

Alternatively, a list of virtual IPs can be provided with ``virtual_ips_list``:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something             # adjust to match your existing RGW service
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.something      # adjust to match your existing RGW service
     virtual_ips_list:
     - <string>/<string>                 # ex: 192.168.20.1/24
     - <string>/<string>                 # ex: 192.168.20.2/24
     - <string>/<string>                 # ex: 192.168.20.3/24
     frontend_port: <integer>            # ex: 8080
     monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
     virtual_interface_networks: [ ... ] # optional: list of CIDR networks
     first_virtual_router_id: <integer>  # optional: default 50
     ssl_cert: |                         # optional: SSL certificate and key
       -----BEGIN CERTIFICATE-----
       ...
       -----END CERTIFICATE-----
       -----BEGIN PRIVATE KEY-----
       ...
       -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress".
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not
    need to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will
    be available.
* ``virtual_ips_list``
    The list of virtual IP addresses in CIDR format where the ingress service
    will be available. Each virtual IP address will be primary on one node
    running the ingress service. The number of virtual IP addresses must be
    less than or equal to the number of ingress nodes.
* ``virtual_interface_networks``
    A list of networks used to identify which ethernet interface to use for
    the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled. This must contain both the
    certificate and the private key blocks in .pem format.
* ``use_keepalived_multicast``
    Default is False. By default, cephadm deploys a keepalived configuration
    that uses unicast IPs, namely the same IPs that cephadm uses to connect
    to the hosts. If multicast is preferred, set ``use_keepalived_multicast``
    to ``True`` and keepalived will use the multicast IP (224.0.0.18) to
    communicate between instances, on the same interfaces that carry the VIPs.
* ``vrrp_interface_network``
    By default, cephadm configures keepalived to use the interface that
    carries the VIPs for VRRP communication. If another interface is needed,
    it can be selected by setting ``vrrp_interface_network`` to a network that
    identifies the desired ethernet interface.
* ``first_virtual_router_id``
    Default is 50. When deploying more than one ingress service, this
    parameter can be used to ensure that each keepalived instance gets a
    different ``virtual_router_id``. When ``virtual_ips_list`` is used, each
    IP creates its own virtual router, so the first one gets
    ``first_virtual_router_id``, the second gets ``first_virtual_router_id``
    + 1, and so on (see the sketch below). Valid values range from 1 to 255.

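As a sketch of how the router IDs are assigned (the service names and
addresses here are hypothetical), a second ingress service can be kept out of
the default ID range like this:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.other
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.other
     virtual_ips_list:
     - 192.168.30.1/24    # gets virtual_router_id 100
     - 192.168.30.2/24    # gets virtual_router_id 101
     - 192.168.30.3/24    # gets virtual_router_id 102
     first_virtual_router_id: 100
     frontend_port: 8080
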
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing
static IP. In such cases, you can provide a list of subnets to match against
existing IPs, and cephadm will put the virtual IP on the first network
interface to match. For example, if the virtual IP is 192.168.0.80/24 and we
want it on the same interface as the machine's static IP in 10.10.0.0/16, you
can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
      virtual_ip: 192.168.0.80/24
      virtual_interface_networks:
      - 10.10.0.0/16
      ...

A consequence of this strategy is that you cannot currently configure the
virtual IP on an interface that has no existing IP address. In this
situation, we suggest configuring a "dummy" IP address in an unroutable
network on the correct interface and referencing that dummy network in the
networks list (see above).

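A minimal sketch of that workaround, assuming the virtual IP should live on
``eth2`` and using hypothetical addresses (how to make the dummy address
persist across reboots depends on the host's network configuration)::

    # on each ingress host, put a dummy IP from an unroutable network on eth2
    ip address add 192.168.122.10/24 dev eth2

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something
   spec:
     virtual_ip: 192.168.0.80/24
     virtual_interface_networks:
     - 192.168.122.0/24
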
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.

Further Reading
===============

* :ref:`object-gateway`
* :ref:`mgr-rgw-module`