===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

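To check that the service was created and that its daemons have started, you
can list orchestrator services and daemons with the generic cephadm commands
``ceph orch ls`` and ``ceph orch ps``:

.. prompt:: bash #

   ceph orch ls
   ceph orch ps
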
Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

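This deployment can also be expressed as a service specification file, in the
same way as the HTTPS and ingress examples below. The following is a rough
sketch of an equivalent spec for the command above (the file name
``rgw-foo.yaml`` is arbitrary; adjust the values to your environment):

.. code-block:: yaml

   service_type: rgw
   service_id: foo
   placement:
     label: rgw
     count_per_host: 2
   spec:
     rgw_frontend_port: 8000

.. prompt:: bash #

   ceph orch apply -i rgw-foo.yaml
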
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does not create
or update the realm or zone configurations. To create a new realm and zone, you need to do
something like:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

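For example, for the *myorg* realm and *us-east-1* zone used above (the
zonegroup name *us* is only illustrative), these commands would look like:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=myorg --default
   radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --master --default
   radosgw-admin period update --rgw-realm=myorg --commit
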
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information on setting up multisite RGW.

Setting up HTTPS
----------------

In order to enable HTTPS for RGW services, apply a spec file following this scheme:

.. code-block:: yaml

   service_type: rgw
   service_id: myrgw
   spec:
     rgw_frontend_ssl_certificate: |
       -----BEGIN PRIVATE KEY-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END PRIVATE KEY-----
       -----BEGIN CERTIFICATE-----
       V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
       ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
       IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
       YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
       ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
       -----END CERTIFICATE-----
     ssl: true

Then apply this yaml document:

.. prompt:: bash #

   ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a literal string, as
indicated by the ``|`` character, which preserves newline characters.
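
For testing, you could generate a self-signed certificate and key with
``openssl`` and paste them (key first, then certificate, as in the example
above) into the spec file. The host name and file names here are only
illustrative:

.. prompt:: bash #

   openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=rgw.example.com" -keyout rgw.key -out rgw.crt
   cat rgw.key rgw.crt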

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If SSL is used, then SSL must be configured and terminated by the ingress service
and not RGW itself.

.. image:: ../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts like a load balancer, distributing all RGW requests
between all the RGW daemons available.

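Once an ingress service has been deployed (see the sections below), each host
in its placement runs one haproxy and one keepalived daemon. These appear
alongside the RGW daemons in the output of ``ceph orch ps``, which is a simple
way to verify that the expected daemons are running:

.. prompt:: bash #

   ceph orch ps
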
Prerequisites
-------------

* An existing RGW service, without SSL. (If you want SSL service, the certificate
  should be configured on the ingress service, not the RGW service.)

Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

Service specification
---------------------

It is a yaml format file with the following properties:

.. code-block:: yaml

   service_type: ingress
   service_id: rgw.something             # adjust to match your existing RGW service
   placement:
     hosts:
       - host1
       - host2
       - host3
   spec:
     backend_service: rgw.something      # adjust to match your existing RGW service
     virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
     frontend_port: <integer>            # ex: 8080
     monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
     virtual_interface_networks: [ ... ] # optional: list of CIDR networks
     ssl_cert: |                         # optional: SSL certificate and key
       -----BEGIN CERTIFICATE-----
       ...
       -----END CERTIFICATE-----
       -----BEGIN PRIVATE KEY-----
       ...
       -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress"
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not need
    to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_interface_networks``
    A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled. This must contain both the certificate and
    private key blocks in .pem format.

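Once the ingress service is deployed, clients reach RGW through the virtual IP
and frontend port rather than through the individual RGW daemons. For example,
with the illustrative values above (virtual IP ``192.168.20.1`` and frontend
port ``8080``), a quick reachability check could be as simple as:

.. prompt:: bash #

   curl http://192.168.20.1:8080
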
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface to match. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
      virtual_ip: 192.168.0.80/24
      virtual_interface_networks:
        - 10.10.0.0/16
    ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address on an unroutable network on the correct interface
and referencing that dummy network in the networks list (see above).

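For example, you could manually add a dummy address to the intended interface
(the interface name and addresses below are purely illustrative, and the
address would also need to be made persistent via the host's normal network
configuration):

.. prompt:: bash #

   ip addr add 192.168.252.1/24 dev eth2

and then list that dummy network in ``virtual_interface_networks``::

    virtual_interface_networks:
      - 192.168.252.0/24
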
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.