===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

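As a sketch of how such settings land in the monitor configuration database, an
option can be set (and read back) for all radosgw daemons under the
``client.rgw`` section; ``rgw_enable_usage_log`` below is only an illustrative
option:

.. prompt:: bash #

    ceph config set client.rgw rgw_enable_usage_log true
    ceph config get client.rgw rgw_enable_usage_log
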
To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

    ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
under the arbitrary service id *foo*:

.. prompt:: bash #

    ceph orch apply rgw foo

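After the service is applied, the orchestrator can list the RGW service and its
daemons, for example:

.. prompt:: bash #

    ceph orch ls rgw
    ceph orch ps --daemon-type rgw
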
.. _cephadm-rgw-designated_gateways:

Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

    ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
    ceph orch host label add gwhost2 rgw
    ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

See also: :ref:`cephadm_co_location`.

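The same layout can also be expressed declaratively; this is only a sketch of a
service specification equivalent to the commands above:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    spec:
      rgw_frontend_port: 8000
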
.. _cephadm-rgw-networks:

Specifying Networks
-------------------

The network that RGW daemons bind to can be configured with a YAML service specification.

An example spec file:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    networks:
    - 192.169.142.0/24
    spec:
      rgw_frontend_port: 8080

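The spec can then be applied with ``ceph orch apply``; the filename
``rgw-foo.yaml`` is only a placeholder:

.. prompt:: bash #

    ceph orch apply -i rgw-foo.yaml
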
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

    ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does not create
or update the realm or zone configurations. To create a new realm and zone, you need to do
something like:

.. prompt:: bash #

    radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

    radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

    radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

    radosgw-admin period update --rgw-realm=<realm-name> --commit

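For example, to match the *myorg* realm and *us-east-1* zone used above, with an
assumed zonegroup name *us-east* (the zonegroup name is only illustrative):

.. prompt:: bash #

    radosgw-admin realm create --rgw-realm=myorg --default
    radosgw-admin zonegroup create --rgw-zonegroup=us-east --master --default
    radosgw-admin zone create --rgw-zonegroup=us-east --rgw-zone=us-east-1 --master --default
    radosgw-admin period update --rgw-realm=myorg --commit
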
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information about setting up
multisite RGW.

Setting up HTTPS
----------------

To enable HTTPS for RGW services, apply a spec file following this scheme:

.. code-block:: yaml

    service_type: rgw
    service_id: myrgw
    spec:
      rgw_frontend_ssl_certificate: |
        -----BEGIN PRIVATE KEY-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END CERTIFICATE-----
      ssl: true

Then apply this YAML document:

.. prompt:: bash #

    ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a literal string, as
indicated by the ``|`` character, which preserves newline characters.

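For testing, a self-signed certificate and key can be generated with
``openssl``; the filenames and subject below are only placeholders:

.. prompt:: bash #

    openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout rgw.key -out rgw.crt -subj "/CN=rgw.example.com"

The contents of the key and certificate files are then pasted into the
``rgw_frontend_ssl_certificate`` value as shown above.
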
Service specification
---------------------

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: RGWSpec
   :members:

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If SSL is used, then SSL must be configured and terminated by the ingress service,
not by RGW itself.

.. image:: ../../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts as a load balancer, distributing all RGW requests
among the available RGW daemons.

Prerequisites
-------------

* An existing RGW service, without SSL. (If you want SSL service, the certificate
  should be configured on the ingress service, not the RGW service.)

Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

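Once the ingress service is up, the haproxy and keepalived daemons it manages
can be listed, for example:

.. prompt:: bash #

    ceph orch ps --daemon-type haproxy
    ceph orch ps --daemon-type keepalived
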
Service specification
---------------------

The ingress service specification is a YAML file with the following properties:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something    # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

To use several virtual IP addresses spread across the ingress hosts, specify
``virtual_ips_list`` instead of ``virtual_ip``:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something    # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ips_list:
        - <string>/<string>               # ex: 192.168.20.1/24
        - <string>/<string>               # ex: 192.168.20.2/24
        - <string>/<string>               # ex: 192.168.20.3/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress".
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where the HA daemons should run. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not need
    to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_ips_list``
    The list of virtual IP addresses (in CIDR format) where the ingress service will be
    available. Each virtual IP address will be primary on one node running the ingress
    service. The number of virtual IP addresses must be less than or equal to the number
    of ingress nodes.
* ``virtual_interface_networks``
    A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``monitor_port``
    The port used by haproxy to expose its load balancer status page.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled. This must contain both the certificate and
    the private key blocks in .pem format.

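Putting the pieces together, a minimal concrete spec, using only values from the
examples above and an assumed backend RGW service named *rgw.foo*, might look
like:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.foo
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.foo
      virtual_ip: 192.168.20.1/24
      frontend_port: 8080
      monitor_port: 1967
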
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface to match. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
        virtual_ip: 192.168.0.80/24
        virtual_interface_networks:
          - 10.10.0.0/16
    ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address in an unroutable network on the correct interface
and referencing that dummy network in the networks list (see above).

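As a sketch, such a dummy address could be added manually; the interface name
``eth1`` and the network ``192.168.252.0/24`` are only placeholders, and the
address will not persist across reboots unless it is added to the host's
persistent network configuration:

.. prompt:: bash #

    ip address add 192.168.252.1/24 dev eth1

The ingress spec would then include ``192.168.252.0/24`` in
``virtual_interface_networks``.
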
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.

Further Reading
===============

* :ref:`object-gateway`
* :ref:`mgr-rgw-module`