===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

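Such settings can be placed in the monitor configuration database ahead of
time with ``ceph config set``. For illustration only (the option shown is
just an example; any RGW option can be set this way):

.. prompt:: bash #

   ceph config set client.rgw rgw_enable_usage_log true
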
To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

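Once applied, you can confirm that the orchestrator has scheduled the service
and started its daemons:

.. prompt:: bash #

   ceph orch ls rgw
   ceph orch ps --daemon_type rgw
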
.. _cephadm-rgw-designated_gateways:

Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

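The same deployment can also be expressed as a service specification. The
following sketch should be equivalent to the command above (the label,
service id, and port are the ones chosen there):

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    spec:
      rgw_frontend_port: 8000
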
See also: :ref:`cephadm_co_location`.

.. _cephadm-rgw-networks:

Specifying Networks
-------------------

The RGW service can have the network it binds to configured with a YAML service specification.

An example spec file:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    networks:
    - 192.169.142.0/24
    spec:
      rgw_frontend_port: 8080

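Like any other service specification, this file is applied with ``ceph orch
apply`` (the filename is arbitrary):

.. prompt:: bash #

   ceph orch apply -i rgw-spec.yaml
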
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does not create
or update the realm or zone configurations. To create a new realm and zone, you need to do
something like:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

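A working multisite setup also needs a system user whose keys the zones use
for replication; see :ref:`multisite` for the full procedure. A sketch, with
placeholder names:

.. prompt:: bash #

   radosgw-admin user create --uid=<sync-user> --display-name="<display-name>" --system
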
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information on setting up multisite RGW.

Setting up HTTPS
----------------

In order to enable HTTPS for RGW services, apply a spec file following this scheme:

.. code-block:: yaml

    service_type: rgw
    service_id: myrgw
    spec:
      rgw_frontend_ssl_certificate: |
        -----BEGIN PRIVATE KEY-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END CERTIFICATE-----
      ssl: true

Then apply this YAML document:

.. prompt:: bash #

   ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a literal string, as
indicated by the ``|`` character, which preserves newline characters.

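For testing, a self-signed certificate and key can be generated with
``openssl`` and pasted into ``rgw_frontend_ssl_certificate``; the subject name
below is only an example, and self-signed certificates should not be used in
production:

.. prompt:: bash #

   openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
       -keyout key.pem -out cert.pem -subj "/CN=rgw.example.com"
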
Service specification
---------------------

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: RGWSpec
   :members:

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If the RGW service is configured with SSL enabled, then the ingress service
will use the ``ssl`` and ``verify none`` options in the backend configuration.
Trust verification is disabled because the backends are accessed by IP
address instead of FQDN.

.. image:: ../../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts like a load balancer, distributing all RGW requests
among all available RGW daemons.

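To see which ingress host currently holds a virtual IP, you can inspect the
addresses configured on each ingress host; the address below is only an
example:

.. prompt:: bash #

   ip -brief address | grep 192.168.20.1
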
Prerequisites
-------------

* An existing RGW service.

Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

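After applying the specification, the ingress service and its
haproxy/keepalived daemons should appear in the orchestrator:

.. prompt:: bash #

   ceph orch ls ingress
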
Service specification
---------------------

It is a YAML format file with the following properties:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

or, if you want to specify a list of virtual IP addresses:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ips_list:
      - <string>/<string>                 # ex: 192.168.20.1/24
      - <string>/<string>                 # ex: 192.168.20.2/24
      - <string>/<string>                 # ex: 192.168.20.3/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
  Mandatory and set to "ingress"
* ``service_id``
  The name of the service. We suggest naming this after the service you are
  controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
  The hosts where the HA daemons should run. An haproxy and a
  keepalived container will be deployed on these hosts. These hosts do not need
  to match the nodes where RGW is deployed.
* ``virtual_ip``
  The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_ips_list``
  The list of virtual IP addresses, in CIDR format, where the ingress service will be
  available. Each virtual IP address will be primary on one node running the ingress
  service. The number of virtual IP addresses must be less than or equal to the number
  of ingress nodes.
* ``virtual_interface_networks``
  A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
  The port used to access the ingress service.
* ``ssl_cert``
  SSL certificate, if SSL is to be enabled. This must contain both the certificate and
  private key blocks in .pem format.

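Putting these properties together, here is a minimal concrete sketch, assuming
an existing RGW service named ``rgw.foo`` and reusing the example values from
above:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.foo
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.foo
      virtual_ip: 192.168.20.1/24
      frontend_port: 8080
      monitor_port: 1967
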
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface that matches. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
      virtual_ip: 192.168.0.80/24
      virtual_interface_networks:
      - 10.10.0.0/16
    ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address on an unroutable network on the correct interface
and referencing that dummy network in the networks list (see above).

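A sketch of that workaround, assuming the correct interface is ``eth1`` and
using the reserved 192.0.2.0/24 (TEST-NET-1) range as the unroutable dummy
network; make the address persistent across reboots via your distribution's
network configuration, and list 192.0.2.0/24 in ``virtual_interface_networks``:

.. prompt:: bash #

   ip address add 192.0.2.1/24 dev eth1
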
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.

Further Reading
===============

* :ref:`object-gateway`
* :ref:`mgr-rgw-module`