===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).
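
For example, one way to set such options in the monitor configuration database
is with ``ceph config set``; the section name ``client.rgw.foo`` and the port
below are only illustrative:

.. prompt:: bash #

   ceph config set client.rgw.foo rgw_frontends "beast port=8080"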

To deploy a set of radosgw daemons, with an arbitrary service name
*name*, run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

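To verify that the daemons have started, you can list the service and its
daemons (a sketch; the exact output will vary):

.. prompt:: bash #

   ceph orch ls rgw
   ceph orch ps --daemon_type rgw
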
Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act
as gateways, with multiple instances of radosgw running on consecutive
ports 8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

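The same deployment can also be written as a service specification and applied
with ``ceph orch apply -i <file>``. This is only a sketch, reusing the service
id, label, and port from the example above:

.. code-block:: yaml

    service_type: rgw
    service_id: foo
    placement:
      label: rgw
      count_per_host: 2
    spec:
      rgw_frontend_port: 8000
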
Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
*myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons. It does not create
or update the realm or zone configurations. To create a new realm and zone, you need to do
something like:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

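For instance, for the *myorg* realm and *us-east-1* zone used in the example
above, this might look like the following (the ``us`` zonegroup name is only
illustrative):

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=myorg --default
   radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
   radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --master --default
   radosgw-admin period update --rgw-realm=myorg --commit
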
See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification. See :ref:`multisite` for more information about setting up multisite RGW.


.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If SSL is used, then SSL must be configured and terminated by the ingress service
and not by RGW itself.

.. image:: ../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

The active haproxy acts as a load balancer, distributing all RGW requests
among all the available RGW daemons.

Prerequisites
-------------

* An existing RGW service, without SSL. (If you want SSL service, the certificate
  should be configured on the ingress service, not the RGW service.)

Deploying
---------

Use the command::

    ceph orch apply -i <ingress_spec_file>

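Once the spec is applied, the haproxy and keepalived daemons are managed like
any other cephadm daemons. A sketch of how to confirm they are running:

.. prompt:: bash #

   ceph orch ls ingress
   ceph orch ps --daemon_type haproxy
   ceph orch ps --daemon_type keepalived
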
Service specification
---------------------

It is a yaml format file with the following properties:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress".
* ``service_id``
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not need
    to match the nodes where RGW is deployed.
* ``backend_service``
    The name of the existing RGW service for which this ingress endpoint will
    balance traffic.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_interface_networks``
    A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
    The port used to access the ingress service.
* ``monitor_port``
    The port used by haproxy to report load balancer status.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled. This must contain both the certificate and
    the private key blocks in .pem format.

.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface to match. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
        virtual_ip: 192.168.0.80/24
        virtual_interface_networks:
          - 10.10.0.0/16
        ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address on an unroutable network on the correct interface
and referencing that dummy network in the networks list (see above).

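For example, assuming the correct interface is ``eth1`` and using the reserved
test network 192.0.2.0/24 as the unroutable "dummy" network (both of these are
only illustrative), you could add an address on each ingress host with::

    ip address add 192.0.2.1/24 dev eth1

and then list ``192.0.2.0/24`` in ``virtual_interface_networks`` in the ingress
spec, as shown above. Choose a distinct address on each host, and note that an
address added this way is not persistent across reboots; use your
distribution's network configuration to make it permanent.
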
Useful hints for ingress
------------------------

* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.