===========================
Install Ceph Object Gateway
===========================

As of `firefly` (v0.80), the Ceph Object Gateway runs on Civetweb (embedded
into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb
simplifies Ceph Object Gateway installation and configuration.

.. note:: To run the Ceph Object Gateway service, you should have a running
          Ceph storage cluster, and the gateway host should have access to the
          public network.

.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You
          may set up a reverse proxy server with SSL to dispatch HTTPS requests
          as HTTP requests to CivetWeb.

Execute the Pre-Installation Procedure
--------------------------------------

See Preflight_ and execute the pre-installation procedures on your Ceph Object
Gateway node. Specifically, you should disable ``requiretty`` for your Ceph
Deploy user, set SELinux to ``Permissive``, and set up a Ceph Deploy user with
password-less ``sudo``. For Ceph Object Gateways, you will need to open the
port that Civetweb will use in production.

.. note:: Civetweb runs on port ``7480`` by default.

Install Ceph Object Gateway
---------------------------

From the working directory of your administration server, install the Ceph
Object Gateway package on the Ceph Object Gateway node. For example::

    ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]

The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
it too. The ``ceph`` CLI tools are intended for administrators. To make your
Ceph Object Gateway node an administrator node, execute the following from the
working directory of your administration server::

    ceph-deploy admin <node-name>

Create a Gateway Instance
-------------------------

From the working directory of your administration server, create an instance of
the Ceph Object Gateway on the Ceph Object Gateway node. For example::

    ceph-deploy rgw create <gateway-node1>

Once the gateway is running, you should be able to access it on port ``7480``
with an unauthenticated request like this::

    http://client-node:7480

If the gateway instance is working properly, you should receive a response like
this::

    <?xml version="1.0" encoding="UTF-8"?>
    <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
      </Owner>
      <Buckets>
      </Buckets>
    </ListAllMyBucketsResult>
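
If you are scripting this check, the anonymous response can be validated
programmatically. A minimal sketch using Python's standard library, applied to
the sample response above (the helper name is illustrative, not part of Ceph):

```python
import xml.etree.ElementTree as ET

# Sample anonymous response from a freshly created gateway (see above).
RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets>
  </Buckets>
</ListAllMyBucketsResult>"""

NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def check_anonymous_response(body):
    """Return (owner_id, bucket_count) parsed from a ListAllMyBucketsResult."""
    # Encode to bytes: ElementTree rejects str input that carries an
    # XML encoding declaration.
    root = ET.fromstring(body.encode("utf-8"))
    owner_id = root.find("s3:Owner/s3:ID", NS).text
    buckets = root.findall("s3:Buckets/s3:Bucket", NS)
    return owner_id, len(buckets)

print(check_anonymous_response(RESPONSE))  # -> ('anonymous', 0)
```

An unauthenticated request should always report the ``anonymous`` owner and an
empty bucket list, as the sketch asserts.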

If at any point you run into trouble and you want to start over, execute the
following to purge the configuration::

    ceph-deploy purge <gateway-node1> [<gateway-node2>]
    ceph-deploy purgedata <gateway-node1> [<gateway-node2>]

If you execute ``purge``, you must re-install Ceph.

Change the Default Port
-----------------------

Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
port ``80``), modify your Ceph configuration file in the working directory of
your administration server. Add a section entitled
``[client.rgw.<gateway-node>]``, replacing ``<gateway-node>`` with the short
node name of your Ceph Object Gateway node (i.e., the output of ``hostname -s``).

.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
          See `Using SSL with Civetweb`_ for information on how to set that up.

For example, if your node name is ``gateway-node1``, add a section like this
after the ``[global]`` section::

    [client.rgw.gateway-node1]
    rgw_frontends = "civetweb port=80"

.. note:: Ensure that there is no whitespace within ``port=<port-number>`` in
          the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
          heading identifies this portion of the Ceph configuration file as
          configuring a Ceph Storage Cluster client whose type is a Ceph
          Object Gateway (i.e., ``rgw``) and whose instance name is
          ``gateway-node1``.
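
A configuration fragment like the one above can be read back with any
INI-style parser. A minimal sketch using Python's ``configparser`` (the
fragment, the ``fsid`` value, and the helper name are illustrative):

```python
import configparser

# Illustrative ceph.conf fragment, matching the example above.
CONF = """
[global]
fsid = 00000000-0000-0000-0000-000000000000

[client.rgw.gateway-node1]
rgw_frontends = "civetweb port=80"
"""

def rgw_port(conf_text, instance):
    """Return the Civetweb port configured for a given rgw instance."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    # Strip the surrounding quotes, then scan the space-separated options.
    frontends = cp.get("client.rgw.%s" % instance, "rgw_frontends").strip('"')
    for token in frontends.split():
        if token.startswith("port="):
            return int(token[len("port="):])
    return 7480  # Civetweb default when no port option is present

print(rgw_port(CONF, "gateway-node1"))  # -> 80
```

The ``port=<port-number>`` token must contain no whitespace, which is why a
simple ``split()`` on the frontends string is enough to locate it.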

Push the updated configuration file to your Ceph Object Gateway node
(and other Ceph nodes)::

    ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]

To make the new port setting take effect, restart the Ceph Object
Gateway::

    sudo systemctl restart ceph-radosgw.service

Finally, check to ensure that the port you selected is open on the node's
firewall (e.g., port ``80``). If it is not open, add the port and reload the
firewall configuration. If you use the ``firewalld`` daemon, execute::

    sudo firewall-cmd --list-all
    sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
    sudo firewall-cmd --reload

If you use ``iptables``, execute::

    sudo iptables --list
    sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT

Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
your Ceph Object Gateway node.

Once you have finished configuring ``iptables``, ensure that you make the
change persistent so that it will remain in effect when your Ceph Object
Gateway node reboots. Execute::

    sudo apt-get install iptables-persistent

A terminal UI will open. Select ``yes`` at the prompts to save the current
``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and the current ``IPv6``
iptables rules to ``/etc/iptables/rules.v6``.

The ``IPv4`` iptables rule that you set in the earlier step will be saved to
``/etc/iptables/rules.v4`` and will persist across reboots.

If you add a new ``IPv4`` iptables rule after installing
``iptables-persistent``, you will have to add it to the rule file. In that
case, execute the following as the ``root`` user::

    iptables-save > /etc/iptables/rules.v4

.. _Using SSL with Civetweb:

Using SSL with Civetweb
-----------------------

Before using SSL with Civetweb, you will need a certificate that matches
the host name that will be used to access the Ceph Object Gateway.
You may wish to obtain one that has `subject alternate name` fields for
more flexibility. If you intend to use S3-style subdomains
(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.

Civetweb requires that the server key, server certificate, and any other
CA or intermediate certificates be supplied in one file. Each of these
items must be in ``pem`` form. Because the combined file contains the
secret key, it should be protected from unauthorized access.

To configure SSL operation, append ``s`` to the port number. Currently it is
not possible to configure the radosgw to listen on both HTTP and HTTPS; you
must pick only one. For example::

    [client.rgw.gateway-node1]
    rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
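
The port specification is thus a number with an optional trailing ``s``
selecting SSL. A small illustrative parser (the function name is an
assumption, not part of Ceph):

```python
def parse_civetweb_port(spec):
    """Split a Civetweb port spec into (port, ssl):
    '80' -> (80, False), '443s' -> (443, True)."""
    ssl = spec.endswith("s")
    port = int(spec[:-1] if ssl else spec)
    return port, ssl

print(parse_civetweb_port("443s"))  # -> (443, True)
print(parse_civetweb_port("80"))    # -> (80, False)
```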

Migrating from Apache to Civetweb
---------------------------------

If you are running the Ceph Object Gateway on Apache and FastCGI with Ceph
Storage v0.80 or above, you are already running Civetweb: it starts with the
``ceph-radosgw`` daemon and runs on port 7480 by default so that it
doesn't conflict with your Apache and FastCGI installation and other commonly
used web service ports. Migrating to Civetweb basically involves removing
your Apache installation. Then, you must remove Apache and FastCGI settings
from your Ceph configuration file and reset ``rgw_frontends`` to Civetweb.

Referring back to the description for installing a Ceph Object Gateway with
``ceph-deploy``, notice that the configuration file only has one setting,
``rgw_frontends`` (and that's assuming you elected to change the default port).
The ``ceph-deploy`` utility generates the data directory and the keyring for
you, placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
looks in default locations, whereas you may have specified different settings
in your Ceph configuration file. Since you already have keys and a data
directory, you will want to maintain those paths in your Ceph configuration
file if you used something other than the default paths.

A typical Ceph Object Gateway configuration file for an Apache-based deployment
looks something like the following:

On Red Hat Enterprise Linux::

    [client.radosgw.gateway-node1]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = ""
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
    rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
    rgw print continue = false

On Ubuntu::

    [client.radosgw.gateway-node]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log

To modify it for use with Civetweb, simply remove the Apache-specific settings
such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
front end and specify the port number you intend to use. For example::

    [client.radosgw.gateway-node1]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
    rgw_frontends = civetweb port=80
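
The edit above amounts to dropping the Apache-specific keys and replacing the
front end line. A minimal sketch of that transformation on a section
represented as a dict (the helper name and dict representation are
illustrative, not a Ceph API):

```python
# Settings that only make sense for the Apache/FastCGI front end.
APACHE_ONLY = {"rgw socket path", "rgw print continue"}

def to_civetweb(section, port=80):
    """Return a copy of an rgw client section rewritten for Civetweb."""
    migrated = {k: v for k, v in section.items() if k not in APACHE_ONLY}
    migrated.pop("rgw frontends", None)  # drop the old fastcgi front end
    migrated["rgw_frontends"] = "civetweb port=%d" % port
    return migrated

apache_section = {
    "host": "{hostname}",
    "keyring": "/etc/ceph/ceph.client.radosgw.keyring",
    "rgw socket path": '""',
    "log file": "/var/log/radosgw/client.radosgw.gateway-node1.log",
    "rgw frontends": "fastcgi socket_port=9000 socket_host=0.0.0.0",
    "rgw print continue": "false",
}

print(to_civetweb(apache_section)["rgw_frontends"])  # -> civetweb port=80
```

The ``host``, ``keyring``, and ``log file`` settings pass through unchanged,
matching the Civetweb example above.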

Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::

    sudo systemctl restart ceph-radosgw.service

On Ubuntu execute::

    sudo service radosgw restart id=rgw.<short-hostname>

If you used a port number that is not open, you will also need to open that
port on your firewall.

Configure Bucket Sharding
-------------------------

A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
(hundreds of thousands to millions of objects) in a single bucket. If you do
not use the gateway administration interface to set quotas for the maximum
number of objects per bucket, the bucket index can suffer significant
performance degradation when users place large numbers of objects into a
bucket.

In Ceph 0.94, you may shard bucket indices to help prevent performance
bottlenecks when you allow a high number of objects per bucket. The
``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum
number of shards per bucket. The default value is ``0``, which means bucket
index sharding is off by default.

To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards``
to a value greater than ``0``.
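
Conceptually, sharding spreads index entries across multiple index objects by
hashing each object name to a shard, so no single index object absorbs every
write. The simplified sketch below illustrates the idea only; it uses CRC32 as
a stand-in and is not the hash function RGW actually uses:

```python
import zlib

def shard_for(object_name, max_shards):
    """Conceptual shard selection: hash the object name into one of
    max_shards bucket index shards."""
    if max_shards <= 0:
        return 0  # sharding disabled: a single bucket index object
    # zlib.crc32 stands in for RGW's internal hash; it is deterministic
    # across runs, unlike Python's built-in hash().
    return zlib.crc32(object_name.encode()) % max_shards

# With 16 shards, a large set of object names spreads across all shards.
counts = {}
for i in range(10000):
    shard = shard_for("object-%d" % i, 16)
    counts[shard] = counts.get(shard, 0) + 1
print(len(counts))
```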

For simple configurations, you may add ``rgw_override_bucket_index_max_shards``
to your Ceph configuration file. Add it under ``[global]`` to create a
system-wide value. You can also set it for each instance in your Ceph
configuration file.

Once you have changed your bucket sharding configuration in your Ceph
configuration file, restart your gateway. On Red Hat Enterprise Linux execute::

    sudo systemctl restart ceph-radosgw.service

On Ubuntu execute::

    sudo service radosgw restart id=rgw.<short-hostname>

For federated configurations, each zone may have a different ``index_pool``
setting for failover. To make the value consistent for a zonegroup's zones, you
may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup
configuration. For example::

    radosgw-admin zonegroup get > zonegroup.json

Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting
for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup.
For example::

    radosgw-admin zonegroup set < zonegroup.json

Once you have updated your zonegroup, update and commit the period.
For example::

    radosgw-admin period update --commit

.. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
          rule of SSD-based OSDs may also help with bucket index performance.

.. _Add Wildcard to DNS:

Add Wildcard to DNS
-------------------

To use Ceph with S3-style subdomains (e.g., ``bucket-name.domain-name.com``),
you need to add a wildcard to the DNS record of the DNS server you use with the
``ceph-radosgw`` daemon.

This host name must also be specified in the Ceph configuration file with the
``rgw dns name = {hostname}`` setting.

For ``dnsmasq``, add the following address setting with a dot (.) prepended to
the host name::

    address=/.{hostname-or-fqdn}/{host-ip-address}

For example::

    address=/.gateway-node1/192.168.122.75

For ``bind``, add a wildcard to the DNS record. For example::

    $TTL    604800
    @       IN      SOA     gateway-node1. root.gateway-node1. (
                                  2         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
    ;
    @       IN      NS      gateway-node1.
    @       IN      A       192.168.122.113
    *       IN      CNAME   @

Restart your DNS server and ping your server with a subdomain to ensure that
your DNS configuration works as expected::

    ping mybucket.{hostname}

For example::

    ping mybucket.gateway-node1
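
With wildcard DNS in place, an S3-style request's bucket name is derived from
the ``Host`` header by stripping the configured ``rgw dns name`` suffix. The
simplified illustration below shows the idea; it is not Ceph's actual
implementation:

```python
def bucket_from_host(host, rgw_dns_name):
    """Return the bucket implied by an S3-style virtual-host request,
    or None for a path-style request to the bare gateway name."""
    host = host.split(":")[0]  # drop any :port suffix
    if host == rgw_dns_name:
        return None  # path-style request; bucket is in the URL path
    suffix = "." + rgw_dns_name
    if host.endswith(suffix):
        return host[:-len(suffix)]
    return None  # host does not belong to this gateway's DNS name

print(bucket_from_host("mybucket.gateway-node1:7480", "gateway-node1"))  # -> mybucket
print(bucket_from_host("gateway-node1:7480", "gateway-node1"))           # -> None
```

This is why the wildcard record is required: every ``<bucket>.<rgw-dns-name>``
name must resolve to the gateway for virtual-host-style requests to arrive at
all.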

Add Debugging (if needed)
-------------------------

Once you finish the setup procedure, if you encounter issues with your
configuration, you can add debugging to the ``[global]`` section of your Ceph
configuration file and restart the gateway(s) to help troubleshoot any
configuration issues. For example::

    [global]
    # append the following in the [global] section.
    debug ms = 1
    debug rgw = 20

Using the Gateway
-----------------

To use the REST interfaces, first create an initial Ceph Object Gateway user
for the S3 interface. Then, create a subuser for the Swift interface. You then
need to verify that the created users are able to access the gateway.

Create a RADOSGW User for S3 Access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A ``radosgw`` user needs to be created and granted access. The command ``man
radosgw-admin`` will provide information on additional command options.

To create the user, execute the following on the ``gateway host``::

    sudo radosgw-admin user create --uid="testuser" --display-name="First User"

The output of the command will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [],
        "keys": [{
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }

.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
          needed for access validation.

.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
               JSON escape character ``\`` in ``access_key`` or ``secret_key``,
               and some clients do not know how to handle JSON escape
               characters. Remedies include removing the JSON escape character
               ``\``, encapsulating the string in quotes, or regenerating the
               key and ensuring that it does not have a JSON escape character,
               or specifying the key and secret manually. Also, if
               ``radosgw-admin`` generates a JSON escape character ``\`` and a
               forward slash ``/`` together in a key, like ``\/``, only remove
               the JSON escape character ``\``. Do not remove the forward
               slash ``/``, as it is a valid character in the key.
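
The ``\/`` caveat is easy to see with a JSON-aware parser, which resolves the
escape automatically (the sample secret is the illustrative one from the
output shown later in this document):

```python
import json

# A secret as it may appear in radosgw-admin's raw JSON output,
# containing the escaped forward slash "\/".
raw = r'{"user": "testuser:swift", "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"}'

decoded = json.loads(raw)
# json.loads resolves "\/" to a plain "/": the backslash is only a JSON
# escape and was never part of the secret itself.
print(decoded["secret_key"])  # -> 244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA
```

Problems arise only with clients that take the raw JSON text verbatim instead
of decoding it, which is why the backslash must then be removed by hand.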

Create a Swift User
^^^^^^^^^^^^^^^^^^^

A Swift subuser needs to be created if this kind of access is needed. Creating
a Swift user is a two-step process. The first step is to create the user. The
second is to create the secret key.

Execute the following steps on the ``gateway host``:

Create the Swift user::

    sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full

The output will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [{
            "id": "testuser:swift",
            "permissions": "full-control"
        }],
        "keys": [{
            "user": "testuser:swift",
            "access_key": "3Y1LNW4Q6X0Y53A52DET",
            "secret_key": ""
        }, {
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }

Create the secret key::

    sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

The output will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [{
            "id": "testuser:swift",
            "permissions": "full-control"
        }],
        "keys": [{
            "user": "testuser:swift",
            "access_key": "3Y1LNW4Q6X0Y53A52DET",
            "secret_key": ""
        }, {
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [{
            "user": "testuser:swift",
            "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"
        }],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }

Access Verification
^^^^^^^^^^^^^^^^^^^

Test S3 Access
""""""""""""""

You need to write and run a Python test script to verify S3 access. The S3
access test script will connect to the ``radosgw``, create a new bucket, and
list all buckets. The values for ``aws_access_key_id`` and
``aws_secret_access_key`` are taken from the values of ``access_key`` and
``secret_key`` returned by the ``radosgw-admin`` command.

Execute the following steps:

#. Install the ``python-boto`` package::

      sudo yum install python-boto

#. Create the Python script::

      vi s3test.py

#. Add the following contents to the file::

      import boto.s3.connection

      access_key = 'I0PJDPCIYZ665MW88W9R'
      secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'
      conn = boto.connect_s3(
          aws_access_key_id=access_key,
          aws_secret_access_key=secret_key,
          host='{hostname}', port={port},
          is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
          )

      bucket = conn.create_bucket('my-new-bucket')
      for bucket in conn.get_all_buckets():
          print("{name} {created}".format(
              name=bucket.name,
              created=bucket.creation_date,
          ))

   Replace ``{hostname}`` with the hostname of the host where you have
   configured the gateway service, i.e., the ``gateway host``. Replace
   ``{port}`` with the port number you are using with Civetweb.

#. Run the script::

      python s3test.py

   The output will be something like the following::

      my-new-bucket 2015-02-16T17:09:10.000Z

Test Swift Access
"""""""""""""""""

Swift access can be verified via the ``swift`` command line client. The command
``man swift`` will provide more information on available command line options.

To install the ``swift`` client, execute the following commands. On Red Hat
Enterprise Linux::

    sudo yum install python-setuptools
    sudo easy_install pip
    sudo pip install --upgrade setuptools
    sudo pip install --upgrade python-swiftclient

On Debian-based distributions::

    sudo apt-get install python-setuptools
    sudo easy_install pip
    sudo pip install --upgrade setuptools
    sudo pip install --upgrade python-swiftclient

To test Swift access, execute the following::

    swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list

Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
``{swift_secret_key}`` with its value from the output of the ``radosgw-admin
key create`` command executed for the ``swift`` user. Replace ``{port}`` with
the port number you are using with Civetweb (e.g., ``7480`` is the default).
If you don't specify the port, it will default to port ``80``.

For example::

    swift -A http://10.19.143.116:7480/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list

The output should be::

    my-new-bucket

.. _Preflight: ../../start/quick-start-preflight