==========================
 Introduction to librados
==========================

The :term:`Ceph Storage Cluster` provides the basic storage service that allows
:term:`Ceph` to uniquely deliver **object, block, and file storage** in one
unified system. However, you are not limited to using the RESTful, block, or
POSIX interfaces. Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object
Store)`, the ``librados`` API enables you to create your own interface to the
Ceph Storage Cluster.

The ``librados`` API enables you to interact with the two types of daemons in
the Ceph Storage Cluster:

- The :term:`Ceph Monitor`, which maintains a master copy of the cluster map.
- The :term:`Ceph OSD Daemon` (OSD), which stores data as objects on a storage node.

.. ditaa::

   +---------------------------------+
   |  Ceph Storage Cluster Protocol  |
   |           (librados)            |
   +---------------------------------+
   +---------------+ +---------------+
   |      OSDs     | |    Monitors   |
   +---------------+ +---------------+

This guide provides a high-level introduction to using ``librados``.
Refer to :doc:`../../architecture` for additional details of the Ceph
Storage Cluster. To use the API, you need a running Ceph Storage Cluster.
See `Installation (Quick)`_ for details.


Step 1: Getting librados
========================

Your client application must bind with ``librados`` to connect to the Ceph
Storage Cluster. You must install ``librados`` and any required packages
before writing applications that use it. The ``librados`` API is written in
C++, with additional bindings for C, Python, Java, and PHP.


Getting librados for C/C++
--------------------------

To install ``librados`` development support files for C/C++ on Debian/Ubuntu
distributions, execute the following::

    sudo apt-get install librados-dev

To install ``librados`` development support files for C/C++ on RHEL/CentOS
distributions, execute the following::

    sudo yum install librados2-devel

Once you install ``librados`` for developers, you can find the required
headers for C/C++ under ``/usr/include/rados``. ::

    ls /usr/include/rados


Getting librados for Python
---------------------------

The ``rados`` module provides ``librados`` support to Python
applications. The ``librados-dev`` package for Debian/Ubuntu
and the ``librados2-devel`` package for RHEL/CentOS will install the
``python-rados`` package for you. You may also install ``python-rados``
directly.

To install ``librados`` development support files for Python on Debian/Ubuntu
distributions, execute the following::

    sudo apt-get install python-rados

To install ``librados`` development support files for Python on RHEL/CentOS
distributions, execute the following::

    sudo yum install python-rados

You can find the module under ``/usr/share/pyshared`` on Debian systems,
or under ``/usr/lib/python*/site-packages`` on CentOS/RHEL systems.

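Before running the later Python examples, it can help to confirm that the
binding is importable. A small sketch (the helper name is ours, not part of the
API; it only verifies that a module named ``rados`` is on the Python path, not
that a cluster is reachable):

```python
import importlib.util

def have_rados() -> bool:
    """Return True if the python-rados binding can be imported."""
    return importlib.util.find_spec("rados") is not None

if __name__ == "__main__":
    if have_rados():
        print("rados binding found")
    else:
        print("install python-rados first")
```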

Getting librados for Java
-------------------------

To install ``librados`` for Java, execute the following procedure:

#. Install ``jna.jar``. For Debian/Ubuntu, execute::

       sudo apt-get install libjna-java

   For CentOS/RHEL, execute::

       sudo yum install jna

   The JAR files are located in ``/usr/share/java``.

#. Clone the ``rados-java`` repository::

       git clone --recursive https://github.com/ceph/rados-java.git

#. Build the ``rados-java`` repository::

       cd rados-java
       ant

   The JAR file is located under ``rados-java/target``.

#. Copy the JAR for RADOS to a common location (e.g., ``/usr/share/java``) and
   ensure that it and the JNA JAR are in your JVM's classpath. For example::

       sudo cp target/rados-0.1.3.jar /usr/share/java/rados-0.1.3.jar
       sudo ln -s /usr/share/java/jna-3.2.7.jar /usr/lib/jvm/default-java/jre/lib/ext/jna-3.2.7.jar
       sudo ln -s /usr/share/java/rados-0.1.3.jar /usr/lib/jvm/default-java/jre/lib/ext/rados-0.1.3.jar

To build the documentation, execute the following::

    ant docs


Getting librados for PHP
------------------------

To install the ``librados`` extension for PHP, execute the following procedure:

#. Install ``php-dev``. For Debian/Ubuntu, execute::

       sudo apt-get install php5-dev build-essential

   For CentOS/RHEL, execute::

       sudo yum install php-devel

#. Clone the ``phprados`` repository::

       git clone https://github.com/ceph/phprados.git

#. Build ``phprados``::

       cd phprados
       phpize
       ./configure
       make
       sudo make install

#. Enable ``phprados`` by adding the following line to ``php.ini``::

       extension=rados.so


Step 2: Configuring a Cluster Handle
====================================

A :term:`Ceph Client`, via ``librados``, interacts directly with OSDs to store
and retrieve data. To interact with OSDs, the client app must invoke
``librados`` and connect to a Ceph Monitor. Once connected, ``librados``
retrieves the :term:`Cluster Map` from the Ceph Monitor. When the client app
wants to read or write data, it creates an I/O context and binds to a
:term:`Pool`. The pool has an associated :term:`CRUSH rule` that defines how it
will place data in the storage cluster. Via the I/O context, the client
provides the object name to ``librados``, which takes the object name
and the cluster map (i.e., the topology of the cluster) and `computes`_ the
placement group and `OSD`_ for locating the data. Then the client application
can read or write data. The client app doesn't need to learn about the topology
of the cluster directly.

.. ditaa::

   +--------+  Retrieves  +---------------+
   | Client |------------>|  Cluster Map  |
   +--------+             +---------------+
        |
        v  Writes
     /-----\
     | obj |
     \-----/
        |  To
        v
   +--------+            +---------------+
   |  Pool  |----------->|  CRUSH Rule   |
   +--------+  Selects   +---------------+

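The object-to-placement-group step described above can be sketched in plain
Python. This is purely illustrative: real Ceph uses the rjenkins hash and a
"stable mod" of the PG count (see `Calculating PG IDs`_), so the IDs produced
below will not match a live cluster.

```python
import zlib

def placement_group_id(object_name: str, pool_id: int, pg_num: int) -> str:
    """Illustrative sketch of the object -> PG step: hash the object
    name, take it modulo the pool's PG count, and render the familiar
    <pool>.<pg-hex> form. Real Ceph uses the rjenkins hash and a
    "stable mod", so these values will not match an actual cluster."""
    h = zlib.crc32(object_name.encode("utf-8"))
    return "{}.{:x}".format(pool_id, h % pg_num)

print(placement_group_id("hw", 1, 128))
```

CRUSH then maps the placement group onto OSDs, which is why the client never
needs to track the cluster topology itself.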

The Ceph Storage Cluster handle encapsulates the client configuration, including:

- The `user ID`_ for ``rados_create()`` or user name for ``rados_create2()``
  (preferred).
- The :term:`cephx` authentication key
- The monitor ID and IP address
- Logging levels
- Debugging levels

Thus, the first steps in using the cluster from your app are to 1) create
a cluster handle that your app will use to connect to the storage cluster,
and then 2) use that handle to connect. To connect to the cluster, the
app must supply a monitor address, a username, and an authentication key
(cephx is enabled by default).

.. tip:: Talking to different Ceph Storage Clusters – or to the same cluster
   with different users – requires different cluster handles.

RADOS provides a number of ways for you to set the required values. For
the monitor and encryption key settings, an easy way to handle them is to ensure
that your Ceph configuration file contains a ``keyring`` path to a keyring file
and at least one monitor address (e.g., ``mon host``). For example::

    [global]
    mon host = 192.168.1.1
    keyring = /etc/ceph/ceph.client.admin.keyring

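The same two settings can be read back programmatically. A minimal sketch using
Python's standard ``configparser`` (an illustration only: Ceph's own parser
additionally normalizes ``mon_host`` vs. ``mon host``, handles comments and
includes, and more):

```python
import configparser

def read_ceph_conf(path):
    """Extract 'mon host' and 'keyring' from the [global] section of a
    ceph.conf-style file. Simplified: Ceph's real parser also accepts
    underscore-separated option names, comments, includes, etc."""
    cp = configparser.ConfigParser()
    with open(path) as f:
        cp.read_file(f)
    g = cp["global"]
    return {"mon host": g.get("mon host"), "keyring": g.get("keyring")}
```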
Once you create the handle, you can read a Ceph configuration file to configure
the handle. You can also pass arguments to your app and parse them with the
function for parsing command line arguments (e.g., ``rados_conf_parse_argv()``),
or parse Ceph environment variables (e.g., ``rados_conf_parse_env()``). Some
wrappers may not implement convenience methods, so you may need to implement
these capabilities. The following diagram provides a high-level flow for the
initial connection.

.. ditaa::

   +---------+     +---------+
   | Client  |     | Monitor |
   +---------+     +---------+
        |               |
        |-----+ create  |
        |     | cluster |
        |<----+ handle  |
        |               |
        |-----+ read    |
        |     | config  |
        |<----+ file    |
        |               |
        |    connect    |
        |-------------->|
        |               |
        |<--------------|
        |   connected   |
        |               |


Once connected, your app can invoke functions that affect the whole cluster
with only the cluster handle. For example, once you have a cluster
handle, you can:

- Get cluster statistics
- Use pool operations (exists, create, list, delete)
- Get and set the configuration

One of the powerful features of Ceph is the ability to bind to different pools.
Each pool may have a different number of placement groups, object replicas and
replication strategies. For example, a pool could be set up as a "hot" pool that
uses SSDs for frequently used objects, or as a "cold" pool that uses erasure coding.

The main difference among the various ``librados`` bindings is between C and
the object-oriented bindings for C++, Java and Python. The object-oriented
bindings use objects to represent cluster handles, I/O contexts, iterators,
exceptions, etc.


C Example
---------

For C, creating a simple cluster handle using the ``admin`` user, configuring
it, and connecting to the cluster might look something like this:

.. code-block:: c

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rados/librados.h>

    int main (int argc, const char **argv)
    {

            /* Declare the cluster handle and required arguments. */
            rados_t cluster;
            char cluster_name[] = "ceph";
            char user_name[] = "client.admin";
            uint64_t flags = 0;

            /* Initialize the cluster handle with the "ceph" cluster name and the "client.admin" user */
            int err;
            err = rados_create2(&cluster, cluster_name, user_name, flags);

            if (err < 0) {
                    fprintf(stderr, "%s: Couldn't create the cluster handle! %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nCreated a cluster handle.\n");
            }

            /* Read a Ceph configuration file to configure the cluster handle. */
            err = rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
            if (err < 0) {
                    fprintf(stderr, "%s: cannot read config file: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRead the config file.\n");
            }

            /* Read command line arguments */
            err = rados_conf_parse_argv(cluster, argc, argv);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot parse command line arguments: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRead the command line arguments.\n");
            }

            /* Connect to the cluster */
            err = rados_connect(cluster);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot connect to cluster: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nConnected to the cluster.\n");
            }

            return 0;
    }

Compile your client and link to ``librados`` using ``-lrados``. For example::

    gcc ceph-client.c -lrados -o ceph-client

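The example checks ``err < 0`` and passes ``-err`` to ``strerror()`` because
librados C calls return 0 (or a non-negative count) on success and a negated
``errno`` value on failure. That convention can be illustrated without a
cluster (a sketch; the helper name here is ours, not part of the API):

```python
import errno
import os

def describe_rados_result(err: int) -> str:
    """librados C convention: >= 0 means success; a negated errno
    value (e.g. -2 for ENOENT) describes the failure."""
    if err >= 0:
        return "success"
    return os.strerror(-err)

print(describe_rados_result(-errno.ENOENT))
```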

C++ Example
-----------

The Ceph project provides a C++ example in the ``ceph/examples/librados``
directory. For C++, a simple cluster handle using the ``admin`` user requires
you to initialize a ``librados::Rados`` cluster handle object:

.. code-block:: c++

    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include <rados/librados.hpp>

    int main(int argc, const char **argv)
    {

      int ret = 0;

      /* Declare the cluster handle and required variables. */
      librados::Rados cluster;
      char cluster_name[] = "ceph";
      char user_name[] = "client.admin";
      uint64_t flags = 0;

      /* Initialize the cluster handle with the "ceph" cluster name and "client.admin" user */
      {
        ret = cluster.init2(user_name, cluster_name, flags);
        if (ret < 0) {
          std::cerr << "Couldn't initialize the cluster handle! error " << ret << std::endl;
          return EXIT_FAILURE;
        } else {
          std::cout << "Created a cluster handle." << std::endl;
        }
      }

      /* Read a Ceph configuration file to configure the cluster handle. */
      {
        ret = cluster.conf_read_file("/etc/ceph/ceph.conf");
        if (ret < 0) {
          std::cerr << "Couldn't read the Ceph configuration file! error " << ret << std::endl;
          return EXIT_FAILURE;
        } else {
          std::cout << "Read the Ceph configuration file." << std::endl;
        }
      }

      /* Read command line arguments */
      {
        ret = cluster.conf_parse_argv(argc, argv);
        if (ret < 0) {
          std::cerr << "Couldn't parse command line options! error " << ret << std::endl;
          return EXIT_FAILURE;
        } else {
          std::cout << "Parsed command line options." << std::endl;
        }
      }

      /* Connect to the cluster */
      {
        ret = cluster.connect();
        if (ret < 0) {
          std::cerr << "Couldn't connect to cluster! error " << ret << std::endl;
          return EXIT_FAILURE;
        } else {
          std::cout << "Connected to the cluster." << std::endl;
        }
      }

      return 0;
    }


Compile the source; then link to ``librados`` using ``-lrados``.
For example::

    g++ -g -c ceph-client.cc -o ceph-client.o
    g++ -g ceph-client.o -lrados -o ceph-client


Python Example
--------------

Python uses the ``admin`` id and the ``ceph`` cluster name by default, and
will read the standard ``ceph.conf`` file if the ``conffile`` parameter is
set to the empty string. The Python binding converts C++ errors
into exceptions.

.. code-block:: python

    import rados

    try:
        cluster = rados.Rados(conffile='')
    except TypeError as e:
        print('Argument validation error: {}'.format(e))
        raise e

    print("Created cluster handle.")

    try:
        cluster.connect()
    except Exception as e:
        print("connection error: {}".format(e))
        raise e
    else:
        print("Connected to the cluster.")


Execute the example to verify that it connects to your cluster. ::

    python ceph-client.py


Java Example
------------

Java requires you to specify the user ID (``admin``) or user name
(``client.admin``), and uses the ``ceph`` cluster name by default. The Java
binding converts C++-based errors into exceptions.

.. code-block:: java

    import com.ceph.rados.Rados;
    import com.ceph.rados.RadosException;

    import java.io.File;

    public class CephClient {
        public static void main (String args[]){

            try {
                Rados cluster = new Rados("admin");
                System.out.println("Created cluster handle.");

                File f = new File("/etc/ceph/ceph.conf");
                cluster.confReadFile(f);
                System.out.println("Read the configuration file.");

                cluster.connect();
                System.out.println("Connected to the cluster.");

            } catch (RadosException e) {
                System.out.println(e.getMessage() + ": " + e.getReturnValue());
            }
        }
    }


Compile the source; then run it. If you have copied the JAR to
``/usr/share/java`` and symlinked it from your ``ext`` directory, you won't need
to specify the classpath. For example::

    javac CephClient.java
    java CephClient


PHP Example
-----------

With the RADOS extension enabled in PHP, you can create a new cluster handle very easily:

.. code-block:: php

    <?php

    $r = rados_create();
    rados_conf_read_file($r, '/etc/ceph/ceph.conf');
    if (!rados_connect($r)) {
        echo "Failed to connect to Ceph cluster";
    } else {
        echo "Successfully connected to Ceph cluster";
    }


Save this as ``rados.php`` and run the code::

    php rados.php


Step 3: Creating an I/O Context
===============================

Once your app has a cluster handle and a connection to a Ceph Storage Cluster,
you may create an I/O context and begin reading and writing data. An I/O context
binds the connection to a specific pool. The user must have appropriate
`CAPS`_ permissions to access the specified pool. For example, a user with read
access but not write access will only be able to read data. I/O context
functionality includes:

- Write/read data and extended attributes
- List and iterate over objects and extended attributes
- Snapshot pools, list snapshots, etc.

.. ditaa::

   +---------+     +---------+     +---------+
   | Client  |     | Monitor |     |   OSD   |
   +---------+     +---------+     +---------+
        |               |               |
        |-----+ create  |               |
        |     | I/O     |               |
        |<----+ context |               |
        |               |               |
        |  write data   |               |
        |---------------+-------------->|
        |               |               |
        |  write ack    |               |
        |<--------------+---------------|
        |               |               |
        |  write xattr  |               |
        |---------------+-------------->|
        |               |               |
        |  xattr ack    |               |
        |<--------------+---------------|
        |               |               |
        |   read data   |               |
        |---------------+-------------->|
        |               |               |
        |   read ack    |               |
        |<--------------+---------------|
        |               |               |
        |  remove data  |               |
        |---------------+-------------->|
        |               |               |
        |  remove ack   |               |
        |<--------------+---------------|


RADOS enables you to interact both synchronously and asynchronously. Once your
app has an I/O context, read/write operations only require you to know the
object/xattr name. The CRUSH algorithm encapsulated in ``librados`` uses the
cluster map to identify the appropriate OSD. OSD daemons handle the replication,
as described in `Smart Daemons Enable Hyperscale`_. The ``librados`` library also
maps objects to placement groups, as described in `Calculating PG IDs`_.

The following examples use the default ``data`` pool. However, you may also
use the API to list pools, ensure they exist, or create and delete pools. For
the write operations, the examples illustrate how to use synchronous mode. For
the read operations, the examples illustrate how to use asynchronous mode.

.. important:: Use caution when deleting pools with this API. If you delete
   a pool, the pool and ALL DATA in the pool will be lost.

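The asynchronous read pattern used below (start an operation, receive a
completion handle, keep working, then wait on the completion) has the same
shape as a future. A language-neutral sketch in Python, where
``fake_osd_read`` is our stand-in for the read an OSD would service, not a
librados API:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_osd_read(object_name: str) -> bytes:
    """Stand-in for the read an OSD would service."""
    store = {"hw": b"Hello World!"}
    return store.get(object_name, b"")

with ThreadPoolExecutor() as pool:
    # cf. rados_aio_read(): returns immediately with a completion.
    completion = pool.submit(fake_osd_read, "hw")
    # ... the client is free to do other work here ...
    # cf. rados_aio_wait_for_complete() followed by reading the buffer.
    data = completion.result()

print(data.decode())
```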

C Example
---------


.. code-block:: c

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rados/librados.h>

    int main (int argc, const char **argv)
    {
            /*
             * Continued from the previous C example, where the cluster handle
             * and connection are established. First declare an I/O context.
             */

            rados_ioctx_t io;
            const char *poolname = "data";
            int err;

            err = rados_ioctx_create(cluster, poolname, &io);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot open rados pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nCreated I/O context.\n");
            }

            /* Write data to the cluster synchronously. */
            err = rados_write(io, "hw", "Hello World!", 12, 0);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot write object \"hw\" to pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nWrote \"Hello World\" to object \"hw\".\n");
            }

            char xattr[] = "en_US";
            err = rados_setxattr(io, "hw", "lang", xattr, 5);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot write xattr to pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nWrote \"en_US\" to xattr \"lang\" for object \"hw\".\n");
            }

            /*
             * Read data from the cluster asynchronously.
             * First, set up asynchronous I/O completion.
             */
            rados_completion_t comp;
            err = rados_aio_create_completion(NULL, NULL, NULL, &comp);
            if (err < 0) {
                    fprintf(stderr, "%s: Could not create aio completion: %s\n", argv[0], strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nCreated AIO completion.\n");
            }

            /* Next, read data using rados_aio_read. */
            char read_res[100] = {0};
            err = rados_aio_read(io, "hw", comp, read_res, 12, 0);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot read object. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nStarted reading object \"hw\".\n");
            }

            /* Wait for the operation to complete; only then is the buffer valid. */
            rados_aio_wait_for_complete(comp);
            printf("\nRead object \"hw\". The contents are:\n %s \n", read_res);

            /* Release the asynchronous I/O completion to avoid memory leaks. */
            rados_aio_release(comp);

            char xattr_res[100] = {0};
            err = rados_getxattr(io, "hw", "lang", xattr_res, 5);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot read xattr. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRead xattr \"lang\" for object \"hw\". The contents are:\n %s \n", xattr_res);
            }

            err = rados_rmxattr(io, "hw", "lang");
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot remove xattr. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRemoved xattr \"lang\" for object \"hw\".\n");
            }

            err = rados_remove(io, "hw");
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot remove object. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRemoved object \"hw\".\n");
            }

    }


C++ Example
-----------


.. code-block:: c++

    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include <rados/librados.hpp>

    int main(int argc, const char **argv)
    {

      /* Continued from the previous C++ example, where the cluster handle
       * and connection are established. First declare an I/O context.
       */

      librados::IoCtx io_ctx;
      const char *pool_name = "data";

      {
        ret = cluster.ioctx_create(pool_name, io_ctx);
        if (ret < 0) {
          std::cerr << "Couldn't set up ioctx! error " << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Created an ioctx for the pool." << std::endl;
        }
      }


      /* Write an object synchronously. */
      {
        librados::bufferlist bl;
        bl.append("Hello World!");
        ret = io_ctx.write_full("hw", bl);
        if (ret < 0) {
          std::cerr << "Couldn't write object! error " << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Wrote new object 'hw' " << std::endl;
        }
      }


      /*
       * Add an xattr to the object.
       */
      {
        librados::bufferlist lang_bl;
        lang_bl.append("en_US");
        ret = io_ctx.setxattr("hw", "lang", lang_bl);
        if (ret < 0) {
          std::cerr << "Failed to set xattr! error "
                    << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Set the xattr 'lang' on our object!" << std::endl;
        }
      }


      /*
       * Read the object back asynchronously.
       */
      {
        librados::bufferlist read_buf;
        int read_len = 4194304;

        // Create I/O Completion.
        librados::AioCompletion *read_completion = librados::Rados::aio_create_completion();

        // Send read request.
        ret = io_ctx.aio_read("hw", read_completion, &read_buf, read_len, 0);
        if (ret < 0) {
          std::cerr << "Couldn't start read object! error " << ret << std::endl;
          exit(EXIT_FAILURE);
        }

        // Wait for the request to complete, and check that it succeeded.
        read_completion->wait_for_complete();
        ret = read_completion->get_return_value();
        if (ret < 0) {
          std::cerr << "Couldn't read object! error " << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Read object hw asynchronously with contents.\n"
                    << read_buf.c_str() << std::endl;
        }
      }


      /*
       * Read the xattr.
       */
      {
        librados::bufferlist lang_res;
        ret = io_ctx.getxattr("hw", "lang", lang_res);
        if (ret < 0) {
          std::cerr << "Failed to get xattr! error "
                    << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Got the xattr 'lang' from object hw!"
                    << lang_res.c_str() << std::endl;
        }
      }


      /*
       * Remove the xattr.
       */
      {
        ret = io_ctx.rmxattr("hw", "lang");
        if (ret < 0) {
          std::cerr << "Failed to remove xattr! error "
                    << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Removed the xattr 'lang' from our object!" << std::endl;
        }
      }

      /*
       * Remove the object.
       */
      {
        ret = io_ctx.remove("hw");
        if (ret < 0) {
          std::cerr << "Couldn't remove object! error " << ret << std::endl;
          exit(EXIT_FAILURE);
        } else {
          std::cout << "Removed object 'hw'." << std::endl;
        }
      }
    }


Python Example
--------------

.. code-block:: python

    print("\n\nI/O Context and Object Operations")
    print("=================================")

    print("\nCreating a context for the 'data' pool")
    if not cluster.pool_exists('data'):
        raise RuntimeError('No data pool exists')
    ioctx = cluster.open_ioctx('data')

    print("\nWriting object 'hw' with contents 'Hello World!' to pool 'data'.")
    ioctx.write("hw", b"Hello World!")
    print("Writing XATTR 'lang' with value 'en_US' to object 'hw'")
    ioctx.set_xattr("hw", "lang", b"en_US")

    print("\nWriting object 'bm' with contents 'Bonjour tout le monde!' to pool 'data'.")
    ioctx.write("bm", b"Bonjour tout le monde!")
    print("Writing XATTR 'lang' with value 'fr_FR' to object 'bm'")
    ioctx.set_xattr("bm", "lang", b"fr_FR")

    print("\nContents of object 'hw'\n------------------------")
    print(ioctx.read("hw"))

    print("\n\nGetting XATTR 'lang' from object 'hw'")
    print(ioctx.get_xattr("hw", "lang"))

    print("\nContents of object 'bm'\n------------------------")
    print(ioctx.read("bm"))

    print("\n\nGetting XATTR 'lang' from object 'bm'")
    print(ioctx.get_xattr("bm", "lang"))

    print("\nRemoving object 'hw'")
    ioctx.remove_object("hw")

    print("Removing object 'bm'")
    ioctx.remove_object("bm")


Java Example
------------

.. code-block:: java

    import com.ceph.rados.Rados;
    import com.ceph.rados.RadosException;

    import java.io.File;
    import com.ceph.rados.IoCTX;

    public class CephClient {
        public static void main (String args[]){

            try {
                Rados cluster = new Rados("admin");
                System.out.println("Created cluster handle.");

                File f = new File("/etc/ceph/ceph.conf");
                cluster.confReadFile(f);
                System.out.println("Read the configuration file.");

                cluster.connect();
                System.out.println("Connected to the cluster.");

                IoCTX io = cluster.ioCtxCreate("data");

                String oidone = "hw";
                String contentone = "Hello World!";
                io.write(oidone, contentone);

                String oidtwo = "bm";
                String contenttwo = "Bonjour tout le monde!";
                io.write(oidtwo, contenttwo);

                String[] objects = io.listObjects();
                for (String object: objects)
                    System.out.println(object);

                io.remove(oidone);
                io.remove(oidtwo);

                cluster.ioCtxDestroy(io);

            } catch (RadosException e) {
                System.out.println(e.getMessage() + ": " + e.getReturnValue());
            }
        }
    }


PHP Example
-----------

.. code-block:: php

    <?php

    $io = rados_ioctx_create($r, "mypool");
    rados_write_full($io, "oidOne", "mycontents");
    rados_remove($io, "oidOne");
    rados_ioctx_destroy($io);


Step 4: Closing Sessions
========================

Once your app finishes with the I/O context and cluster handle, the app should
close the connection and shut down the handle. For asynchronous I/O, the app
should also ensure that pending asynchronous operations have completed.


C Example
---------

.. code-block:: c

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);


C++ Example
-----------

.. code-block:: c++

    io_ctx.close();
    cluster.shutdown();


Java Example
------------

.. code-block:: java

    cluster.ioCtxDestroy(io);
    cluster.shutDown();


Python Example
--------------

.. code-block:: python

    print("\nClosing the connection.")
    ioctx.close()

    print("Shutting down the handle.")
    cluster.shutdown()

PHP Example
-----------

.. code-block:: php

    rados_shutdown($r);



.. _user ID: ../../operations/user-management#command-line-usage
.. _CAPS: ../../operations/user-management#authorization-capabilities
.. _Installation (Quick): ../../../start
.. _Smart Daemons Enable Hyperscale: ../../../architecture#smart-daemons-enable-hyperscale
.. _Calculating PG IDs: ../../../architecture#calculating-pg-ids
.. _computes: ../../../architecture#calculating-pg-ids
.. _OSD: ../../../architecture#mapping-pgs-to-osds