===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw daemons will start up
with default settings (e.g., binding to port 80).

To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW
deployment under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

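
The same deployment can also be expressed as a service specification file and
passed to ``ceph orch apply -i``. This is a minimal sketch; the explicit
``count: 2`` simply spells out the default shown above:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      count: 2
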
Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000
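
The equivalent service specification is sketched below; ``rgw_frontend_port``
corresponds to ``--port``, and with ``count_per_host: 2`` the second daemon on
each host binds to the next port (8001), as described above:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    spec:
      rgw_frontend_port: 8000
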

Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does
not create or update the realm or zone configurations. To create a new realm
and zone, run commands like the following:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

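For the *myorg* / *us-east-1* example above, the full sequence might look like
this (the zonegroup name ``us-east`` is an arbitrary, illustrative choice):

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=myorg --default
   radosgw-admin zonegroup create --rgw-zonegroup=us-east --master --default
   radosgw-admin zone create --rgw-zonegroup=us-east --rgw-zone=us-east-1 --master --default
   radosgw-admin period update --rgw-realm=myorg --commit
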
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information on setting up
multisite RGW.

Setting up HTTPS
----------------

In order to enable HTTPS for RGW services, apply a spec file following this
scheme:

.. code-block:: yaml

    service_type: rgw
    service_id: myrgw
    spec:
      rgw_frontend_ssl_certificate: |
        -----BEGIN PRIVATE KEY-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
        ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
        IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
        YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
        ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
        -----END CERTIFICATE-----
      ssl: true

Then apply this yaml document:

.. prompt:: bash #

   ceph orch apply -i myrgw.yaml

Note that the value of ``rgw_frontend_ssl_certificate`` is a YAML literal
block scalar: the ``|`` indicator preserves the newline characters in the
PEM data.
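
If you already have (or generate) a key and certificate in PEM files, building
such a spec is mostly a matter of indenting the PEM data to match the literal
block. A sketch using a self-signed certificate and hypothetical file names
(``rgw.key``, ``rgw.crt``, ``myrgw.yaml``):

.. code-block:: bash

    # Self-signed key and certificate -- for testing only.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout rgw.key -out rgw.crt -subj "/CN=rgw.example.com"

    # Assemble the spec; the PEM lines are indented four spaces so that
    # they fall inside the literal block scalar.
    {
      echo "service_type: rgw"
      echo "service_id: myrgw"
      echo "spec:"
      echo "  rgw_frontend_ssl_certificate: |"
      sed 's/^/    /' rgw.key rgw.crt
      echo "  ssl: true"
    } > myrgw.yaml
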

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If SSL is used, then SSL must be configured and terminated by the ingress
service and not by RGW itself.

.. image:: ../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts like a load balancer, distributing all RGW requests
among all of the available RGW daemons.

Prerequisites
-------------

* An existing RGW service, without SSL. (If you want SSL service, the
  certificate should be configured on the ingress service, not the RGW
  service.)

Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

Service specification
---------------------

The service specification is a YAML file with the following properties:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress".
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where the HA daemons should run. An haproxy and a keepalived
    container will be deployed on each of these hosts. These hosts do not need
    to match the hosts where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format on which the ingress service
    will be available.
* ``virtual_interface_networks``
    A list of networks used to identify which ethernet interface to use for
    the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``monitor_port``
    The port used by haproxy to report load balancer status.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled. This must contain both the
    certificate and the private key blocks, in .pem format.
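
As a concrete illustration, a filled-in spec using the example values from the
comments above (the host names and the ``rgw.foo`` backend id are
illustrative):

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.foo
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.foo
      virtual_ip: 192.168.20.1/24
      frontend_port: 8080
      monitor_port: 1967
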

.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing
static IP. In such cases, you can provide a list of subnets to match against
existing IPs, and cephadm will put the virtual IP on the first network
interface to match. For example, if the virtual IP is 192.168.0.80/24 and we
want it on the same interface as the machine's static IP in 10.10.0.0/16, you
can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
      virtual_ip: 192.168.0.80/24
      virtual_interface_networks:
        - 10.10.0.0/16
      ...

A consequence of this strategy is that you cannot currently configure the
virtual IP on an interface that has no existing IP address. In this situation,
we suggest configuring a "dummy" IP address in an unroutable network on the
correct interface and referencing that dummy network in the networks list
(see above).
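
As a sketch, such a dummy address could be added with ``ip``; the interface
name ``eth2`` is illustrative, and 192.0.2.0/24 (the reserved TEST-NET-1
documentation range) serves as the unroutable network. The address would also
need to be made persistent through your distribution's normal network
configuration:

.. prompt:: bash #

   ip addr add 192.0.2.10/24 dev eth2

The ``virtual_interface_networks`` list would then include ``192.0.2.0/24``.
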

Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.