1 ==========================
2 Introduction to librados
3 ==========================
4
5 The :term:`Ceph Storage Cluster` provides the basic storage service that allows
6 :term:`Ceph` to uniquely deliver **object, block, and file storage** in one
7 unified system. However, you are not limited to using the RESTful, block, or
8 POSIX interfaces. Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object
9 Store)`, the ``librados`` API enables you to create your own interface to the
10 Ceph Storage Cluster.
11
12 The ``librados`` API enables you to interact with the two types of daemons in
13 the Ceph Storage Cluster:
14
15 - The :term:`Ceph Monitor`, which maintains a master copy of the cluster map.
16 - The :term:`Ceph OSD Daemon` (OSD), which stores data as objects on a storage node.
17
18 .. ditaa::
19 +---------------------------------+
20 | Ceph Storage Cluster Protocol |
21 | (librados) |
22 +---------------------------------+
23 +---------------+ +---------------+
24 | OSDs | | Monitors |
25 +---------------+ +---------------+
26
27 This guide provides a high-level introduction to using ``librados``.
28 Refer to :doc:`../../architecture` for additional details of the Ceph
29 Storage Cluster. To use the API, you need a running Ceph Storage Cluster.
30 See `Installation (Quick)`_ for details.
31
32
33 Step 1: Getting librados
34 ========================
35
36 Your client application must bind with ``librados`` to connect to the Ceph
37 Storage Cluster. You must install ``librados`` and any required packages to
38 write applications that use ``librados``. The ``librados`` API is written in
39 C++, with additional bindings for C, Python, Java and PHP.
40
41
42 Getting librados for C/C++
43 --------------------------
44
45 To install ``librados`` development support files for C/C++ on Debian/Ubuntu
46 distributions, execute the following::
47
48 sudo apt-get install librados-dev
49
50 To install ``librados`` development support files for C/C++ on RHEL/CentOS
51 distributions, execute the following::
52
53 sudo yum install librados2-devel
54
55 Once you have installed the development packages, you can find the required
56 headers for C/C++ under ``/usr/include/rados``. ::
57
58 ls /usr/include/rados
59
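To confirm that the headers and library are usable, you can compile a trivial
program that prints the installed ``librados`` version. This is a minimal
sketch, not part of the official examples:

.. code-block:: c

    #include <stdio.h>
    #include <rados/librados.h>

    int main(void)
    {
        int major, minor, extra;

        /* Query the version of the librados library that is installed. */
        rados_version(&major, &minor, &extra);
        printf("librados version: %d.%d.%d\n", major, minor, extra);
        return 0;
    }

Compile and link it with ``-lrados`` (for example, ``gcc version-check.c -lrados -o version-check``).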
60
61 Getting librados for Python
62 ---------------------------
63
64 The ``rados`` module provides ``librados`` support to Python
65 applications. The ``librados-dev`` package for Debian/Ubuntu
66 and the ``librados2-devel`` package for RHEL/CentOS will install the
67 ``python-rados`` package for you. You may also install ``python-rados``
68 directly.
69
70 To install ``librados`` development support files for Python on Debian/Ubuntu
71 distributions, execute the following::
72
73 sudo apt-get install python-rados
74
75 To install ``librados`` development support files for Python on RHEL/CentOS
76 distributions, execute the following::
77
78 sudo yum install python-rados
79
80 You can find the module under ``/usr/share/pyshared`` on Debian systems,
81 or under ``/usr/lib/python*/site-packages`` on CentOS/RHEL systems.
82
83
84 Getting librados for Java
85 -------------------------
86
87 To install ``librados`` for Java, you need to execute the following procedure:
88
89 #. Install ``jna.jar``. For Debian/Ubuntu, execute::
90
91 sudo apt-get install libjna-java
92
93 For CentOS/RHEL, execute::
94
95 sudo yum install jna
96
97 The JAR files are located in ``/usr/share/java``.
98
99 #. Clone the ``rados-java`` repository::
100
101 git clone --recursive https://github.com/ceph/rados-java.git
102
103 #. Build the ``rados-java`` repository::
104
105 cd rados-java
106 ant
107
108 The JAR file is located under ``rados-java/target``.
109
110 #. Copy the JAR for RADOS to a common location (e.g., ``/usr/share/java``) and
111 ensure that it and the JNA JAR are in your JVM's classpath. For example::
112
113 sudo cp target/rados-0.1.3.jar /usr/share/java/rados-0.1.3.jar
114 sudo ln -s /usr/share/java/jna-3.2.7.jar /usr/lib/jvm/default-java/jre/lib/ext/jna-3.2.7.jar
115 sudo ln -s /usr/share/java/rados-0.1.3.jar /usr/lib/jvm/default-java/jre/lib/ext/rados-0.1.3.jar
116
117 To build the documentation, execute the following::
118
119 ant docs
120
121
122 Getting librados for PHP
123 -------------------------
124
125 To install the ``librados`` extension for PHP, you need to execute the following procedure:
126
127 #. Install php-dev. For Debian/Ubuntu, execute::
128
129 sudo apt-get install php5-dev build-essential
130
131 For CentOS/RHEL, execute::
132
133 sudo yum install php-devel
134
135 #. Clone the ``phprados`` repository::
136
137 git clone https://github.com/ceph/phprados.git
138
139 #. Build ``phprados``::
140
141 cd phprados
142 phpize
143 ./configure
144 make
145 sudo make install
146
147 #. Enable ``phprados`` in php.ini by adding::
148
149 extension=rados.so
150
151
152 Step 2: Configuring a Cluster Handle
153 ====================================
154
155 A :term:`Ceph Client`, via ``librados``, interacts directly with OSDs to store
156 and retrieve data. To interact with OSDs, the client app must invoke
157 ``librados`` and connect to a Ceph Monitor. Once connected, ``librados``
158 retrieves the :term:`Cluster Map` from the Ceph Monitor. When the client app
159 wants to read or write data, it creates an I/O context and binds to a
160 :term:`pool`. The pool has an associated :term:`CRUSH Rule` that defines how it
161 will place data in the storage cluster. Via the I/O context, the client
162 provides the object name to ``librados``, which takes the object name
163 and the cluster map (i.e., the topology of the cluster) and `computes`_ the
164 placement group and `OSD`_ for locating the data. Then the client application
165 can read or write data. The client app doesn't need to learn about the topology
166 of the cluster directly.
167
168 .. ditaa::
169 +--------+ Retrieves +---------------+
170 | Client |------------>| Cluster Map |
171 +--------+ +---------------+
172 |
173 v Writes
174 /-----\
175 | obj |
176 \-----/
177 | To
178 v
179 +--------+ +---------------+
180 | Pool |---------->| CRUSH Rule |
181 +--------+ Selects +---------------+
182
183
184 The Ceph Storage Cluster handle encapsulates the client configuration, including:
185
186 - The `user ID`_ for ``rados_create()`` or user name for ``rados_create2()``
187   (preferred); a sketch of the two calls follows this list.
188 - The :term:`cephx` authentication key
189 - The monitor ID and IP address
190 - Logging levels
191 - Debugging levels
192
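The two creation calls differ only in how the user is identified. A minimal C
sketch follows (use one call or the other, not both; error checking omitted):

.. code-block:: c

    rados_t cluster;

    /* rados_create() takes the user ID, i.e. the part after "client.". */
    rados_create(&cluster, "admin");

    /* rados_create2() (preferred) takes the cluster name, the full user
     * name, and a flags argument. */
    rados_create2(&cluster, "ceph", "client.admin", 0);
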
193 Thus, the first steps in using the cluster from your app are to 1) create
194 a cluster handle that your app will use to connect to the storage cluster,
195 and then 2) use that handle to connect. To connect to the cluster, the
196 app must supply a monitor address, a user name, and an authentication key
197 (cephx authentication is enabled by default).
198
199 .. tip:: Talking to different Ceph Storage Clusters – or to the same cluster
200 with different users – requires different cluster handles.
201
202 RADOS provides a number of ways for you to set the required values. For
203 the monitor address and authentication key settings, an easy way to handle them
204 is to ensure that your Ceph configuration file contains a ``keyring`` path to a
205 keyring file and at least one monitor address (e.g., ``mon host``). For example::
206
207 [global]
208 mon host = 192.168.1.1
209 keyring = /etc/ceph/ceph.client.admin.keyring
210
211 Once you create the handle, you can read a Ceph configuration file to configure
212 the handle. You can also pass command line arguments to your app and parse them
213 with ``rados_conf_parse_argv()``, or parse Ceph environment variables with
214 ``rados_conf_parse_env()``. Some wrappers may not implement these convenience
215 methods, so you may need to implement them yourself. The following diagram
216 provides a high-level flow for the
217 initial connection.
218
219
220 .. ditaa:: +---------+ +---------+
221 | Client | | Monitor |
222 +---------+ +---------+
223 | |
224 |-----+ create |
225 | | cluster |
226 |<----+ handle |
227 | |
228 |-----+ read |
229 | | config |
230 |<----+ file |
231 | |
232 | connect |
233 |-------------->|
234 | |
235 |<--------------|
236 | connected |
237 | |
238
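You can also set individual options on the handle programmatically before
connecting, instead of (or in addition to) reading a configuration file. The
following is a minimal C sketch with placeholder values and error checking
omitted; it assumes a handle created as in the examples below:

.. code-block:: c

    /* Point the handle at a monitor and a keyring without a ceph.conf file. */
    rados_conf_set(cluster, "mon_host", "192.168.1.1");
    rados_conf_set(cluster, "keyring", "/etc/ceph/ceph.client.admin.keyring");

    /* Optionally apply any options from the CEPH_ARGS environment variable. */
    rados_conf_parse_env(cluster, NULL);

    rados_connect(cluster);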
239
240 Once connected, your app can invoke functions that affect the whole cluster
241 with only the cluster handle. For example, once you have a cluster
242 handle, you can (a brief C sketch follows this list):
243
244 - Get cluster statistics
245 - Perform pool operations (check existence, create, list, and delete pools)
246 - Get and set the configuration
247
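For example, assuming a connected cluster handle named ``cluster`` (as in the
C examples below), a brief sketch of these cluster-wide calls, with error
checking omitted:

.. code-block:: c

    /* Get cluster statistics. */
    struct rados_cluster_stat_t stats;
    rados_cluster_stat(cluster, &stats);
    printf("%llu KB used of %llu KB\n",
           (unsigned long long)stats.kb_used, (unsigned long long)stats.kb);

    /* Check whether a pool exists, and create it if it does not. */
    if (rados_pool_lookup(cluster, "data") < 0) {
        rados_pool_create(cluster, "data");
    }

    /* List pool names; they are returned as NUL-separated strings in buf. */
    char buf[1024];
    rados_pool_list(cluster, buf, sizeof(buf));

    /* Read a configuration value back from the handle. */
    char mon_host[256];
    rados_conf_get(cluster, "mon_host", mon_host, sizeof(mon_host));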
248
249 One of the powerful features of Ceph is the ability to bind to different pools.
250 Each pool may have a different number of placement groups, object replicas and
251 replication strategies. For example, a pool could be set up as a "hot" pool that
252 uses SSDs for frequently used objects or a "cold" pool that uses erasure coding.
253
254 The main difference in the various ``librados`` bindings is between C and
255 the object-oriented bindings for C++, Java and Python. The object-oriented
256 bindings use objects to represent cluster handles, IO Contexts, iterators,
257 exceptions, etc.
258
259
260 C Example
261 ---------
262
263 For C, creating a simple cluster handle using the ``admin`` user, configuring
264 it and connecting to the cluster might look something like this:
265
266 .. code-block:: c
267
268 #include <stdio.h>
269 #include <stdlib.h>
270 #include <string.h>
271 #include <rados/librados.h>
272
273 int main (int argc, const char **argv)
274 {
275
276 /* Declare the cluster handle and required arguments. */
277 rados_t cluster;
278 char cluster_name[] = "ceph";
279 char user_name[] = "client.admin";
280     uint64_t flags = 0;
281
282 /* Initialize the cluster handle with the "ceph" cluster name and the "client.admin" user */
283 int err;
284 err = rados_create2(&cluster, cluster_name, user_name, flags);
285
286 if (err < 0) {
287 fprintf(stderr, "%s: Couldn't create the cluster handle! %s\n", argv[0], strerror(-err));
288 exit(EXIT_FAILURE);
289 } else {
290 printf("\nCreated a cluster handle.\n");
291 }
292
293
294 /* Read a Ceph configuration file to configure the cluster handle. */
295 err = rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
296 if (err < 0) {
297 fprintf(stderr, "%s: cannot read config file: %s\n", argv[0], strerror(-err));
298 exit(EXIT_FAILURE);
299 } else {
300 printf("\nRead the config file.\n");
301 }
302
303 /* Read command line arguments */
304 err = rados_conf_parse_argv(cluster, argc, argv);
305 if (err < 0) {
306 fprintf(stderr, "%s: cannot parse command line arguments: %s\n", argv[0], strerror(-err));
307 exit(EXIT_FAILURE);
308 } else {
309 printf("\nRead the command line arguments.\n");
310 }
311
312 /* Connect to the cluster */
313 err = rados_connect(cluster);
314 if (err < 0) {
315 fprintf(stderr, "%s: cannot connect to cluster: %s\n", argv[0], strerror(-err));
316 exit(EXIT_FAILURE);
317 } else {
318 printf("\nConnected to the cluster.\n");
319 }
320
321 }
322
323 Compile your client and link to ``librados`` using ``-lrados``. For example::
324
325 gcc ceph-client.c -lrados -o ceph-client
326
327
328 C++ Example
329 -----------
330
331 The Ceph project provides a C++ example in the ``ceph/examples/librados``
332 directory. For C++, a simple cluster handle using the ``admin`` user requires
333 you to initialize a ``librados::Rados`` cluster handle object:
334
335 .. code-block:: c++
336
337 #include <iostream>
338 #include <string>
339 #include <rados/librados.hpp>
340
341 int main(int argc, const char **argv)
342 {
343
344 int ret = 0;
345
346 /* Declare the cluster handle and required variables. */
347 librados::Rados cluster;
348 char cluster_name[] = "ceph";
349 char user_name[] = "client.admin";
350 uint64_t flags = 0;
351
352 /* Initialize the cluster handle with the "ceph" cluster name and "client.admin" user */
353 {
354 ret = cluster.init2(user_name, cluster_name, flags);
355 if (ret < 0) {
356 std::cerr << "Couldn't initialize the cluster handle! error " << ret << std::endl;
357 return EXIT_FAILURE;
358 } else {
359 std::cout << "Created a cluster handle." << std::endl;
360 }
361 }
362
363 /* Read a Ceph configuration file to configure the cluster handle. */
364 {
365 ret = cluster.conf_read_file("/etc/ceph/ceph.conf");
366 if (ret < 0) {
367 std::cerr << "Couldn't read the Ceph configuration file! error " << ret << std::endl;
368 return EXIT_FAILURE;
369 } else {
370 std::cout << "Read the Ceph configuration file." << std::endl;
371 }
372 }
373
374 /* Read command line arguments */
375 {
376 ret = cluster.conf_parse_argv(argc, argv);
377 if (ret < 0) {
378 std::cerr << "Couldn't parse command line options! error " << ret << std::endl;
379 return EXIT_FAILURE;
380 } else {
381 std::cout << "Parsed command line options." << std::endl;
382 }
383 }
384
385 /* Connect to the cluster */
386 {
387 ret = cluster.connect();
388 if (ret < 0) {
389 std::cerr << "Couldn't connect to cluster! error " << ret << std::endl;
390 return EXIT_FAILURE;
391 } else {
392 std::cout << "Connected to the cluster." << std::endl;
393 }
394 }
395
396 return 0;
397 }
398
399
400 Compile the source; then, link ``librados`` using ``-lrados``.
401 For example::
402
403 g++ -g -c ceph-client.cc -o ceph-client.o
404 g++ -g ceph-client.o -lrados -o ceph-client
405
406
407
408 Python Example
409 --------------
410
411 Python uses the ``admin`` id and the ``ceph`` cluster name by default, and
412 will read the standard ``ceph.conf`` file if the ``conffile`` parameter is
413 set to the empty string. The Python binding converts C++ errors
414 into exceptions.
415
416
417 .. code-block:: python
418
419 import rados
420
421 try:
422 cluster = rados.Rados(conffile='')
423 except TypeError as e:
424 print 'Argument validation error: ', e
425 raise e
426
427 print "Created cluster handle."
428
429 try:
430 cluster.connect()
431 except Exception as e:
432 print "connection error: ", e
433 raise e
434         else:
435 print "Connected to the cluster."
436
437
438 Execute the example to verify that it connects to your cluster. ::
439
440 python ceph-client.py
441
442
443 Java Example
444 ------------
445
446 Java requires you to specify the user ID (``admin``) or user name
447 (``client.admin``), and uses the ``ceph`` cluster name by default. The Java
448 binding converts C++-based errors into exceptions.
449
450 .. code-block:: java
451
452 import com.ceph.rados.Rados;
453 import com.ceph.rados.RadosException;
454
455 import java.io.File;
456
457 public class CephClient {
458 public static void main (String args[]){
459
460 try {
461 Rados cluster = new Rados("admin");
462 System.out.println("Created cluster handle.");
463
464 File f = new File("/etc/ceph/ceph.conf");
465 cluster.confReadFile(f);
466 System.out.println("Read the configuration file.");
467
468 cluster.connect();
469 System.out.println("Connected to the cluster.");
470
471 } catch (RadosException e) {
472 System.out.println(e.getMessage() + ": " + e.getReturnValue());
473 }
474 }
475 }
476
477
478 Compile the source; then, run it. If you have copied the JAR to
479 ``/usr/share/java`` and sym linked from your ``ext`` directory, you won't need
480 to specify the classpath. For example::
481
482 javac CephClient.java
483 java CephClient
484
485
486 PHP Example
487 ------------
488
489 With the ``phprados`` extension enabled in PHP, you can create a cluster handle and connect as follows:
490
491 .. code-block:: php
492
493 <?php
494
495 $r = rados_create();
496 rados_conf_read_file($r, '/etc/ceph/ceph.conf');
497 if (!rados_connect($r)) {
498 echo "Failed to connect to Ceph cluster";
499 } else {
500 echo "Successfully connected to Ceph cluster";
501 }
502
503
504 Save this as ``rados.php`` and run the code::
505
506 php rados.php
507
508
509 Step 3: Creating an I/O Context
510 ===============================
511
512 Once your app has a cluster handle and a connection to a Ceph Storage Cluster,
513 you may create an I/O Context and begin reading and writing data. An I/O Context
514 binds the connection to a specific pool. The user must have appropriate
515 `CAPS`_ permissions to access the specified pool. For example, a user with read
516 access but not write access will only be able to read data. I/O Context
517 functionality includes:
518
519 - Write/read data and extended attributes
520 - List and iterate over objects and extended attributes (an object-listing sketch appears at the end of this step)
521 - Snapshot pools, list snapshots, etc.
522
523
524 .. ditaa:: +---------+ +---------+ +---------+
525 | Client | | Monitor | | OSD |
526 +---------+ +---------+ +---------+
527 | | |
528 |-----+ create | |
529 | | I/O | |
530 |<----+ context | |
531 | | |
532 | write data | |
533 |---------------+-------------->|
534 | | |
535 | write ack | |
536 |<--------------+---------------|
537 | | |
538 | write xattr | |
539 |---------------+-------------->|
540 | | |
541 | xattr ack | |
542 |<--------------+---------------|
543 | | |
544 | read data | |
545 |---------------+-------------->|
546 | | |
547 | read ack | |
548 |<--------------+---------------|
549 | | |
550 | remove data | |
551 |---------------+-------------->|
552 | | |
553 | remove ack | |
554 |<--------------+---------------|
555
556
557
558 RADOS enables you to interact both synchronously and asynchronously. Once your
559 app has an I/O Context, read/write operations only require you to know the
560 object/xattr name. The CRUSH algorithm encapsulated in ``librados`` uses the
561 cluster map to identify the appropriate OSD. OSD daemons handle the replication,
562 as described in `Smart Daemons Enable Hyperscale`_. The ``librados`` library also
563 maps objects to placement groups, as described in `Calculating PG IDs`_.
564
565 The following examples use a pool named ``data``. Recent Ceph releases do not
566 create this pool by default, so create it first or substitute an existing pool. You
567 may also use the API to list pools, check that they exist, or create and delete
568 pools. The write examples below use synchronous mode; the read examples use asynchronous mode.
569
570 .. important:: Use caution when deleting pools with this API. If you delete
571 a pool, the pool and ALL DATA in the pool will be lost.
572
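The write examples below are synchronous. Asynchronous writes follow the same
completion pattern as the asynchronous reads; a brief C sketch, assuming an
open I/O context ``io`` (error handling omitted):

.. code-block:: c

    /* Create a completion to track the asynchronous write. */
    rados_completion_t write_comp;
    rados_aio_create_completion(NULL, NULL, NULL, &write_comp);

    /* Queue the write; control returns immediately. */
    rados_aio_write(io, "hw", write_comp, "Hello World!", 12, 0);

    /* Later: block until the write completes, then check the result. */
    rados_aio_wait_for_complete(write_comp);
    if (rados_aio_get_return_value(write_comp) < 0) {
        fprintf(stderr, "asynchronous write of \"hw\" failed\n");
    }
    rados_aio_release(write_comp);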
573
574 C Example
575 ---------
576
577
578 .. code-block:: c
579
580 #include <stdio.h>
581 #include <stdlib.h>
582 #include <string.h>
583 #include <rados/librados.h>
584
585 int main (int argc, const char **argv)
586 {
587 /*
588 * Continued from previous C example, where cluster handle and
589 * connection are established. First declare an I/O Context.
590 */
591
592 rados_ioctx_t io;
593 char *poolname = "data";
594
595 err = rados_ioctx_create(cluster, poolname, &io);
596 if (err < 0) {
597 fprintf(stderr, "%s: cannot open rados pool %s: %s\n", argv[0], poolname, strerror(-err));
598 rados_shutdown(cluster);
599 exit(EXIT_FAILURE);
600 } else {
601 printf("\nCreated I/O context.\n");
602 }
603
604 /* Write data to the cluster synchronously. */
605 err = rados_write(io, "hw", "Hello World!", 12, 0);
606 if (err < 0) {
607 fprintf(stderr, "%s: Cannot write object \"hw\" to pool %s: %s\n", argv[0], poolname, strerror(-err));
608 rados_ioctx_destroy(io);
609 rados_shutdown(cluster);
610 exit(1);
611 } else {
612 printf("\nWrote \"Hello World\" to object \"hw\".\n");
613 }
614
615 char xattr[] = "en_US";
616 err = rados_setxattr(io, "hw", "lang", xattr, 5);
617 if (err < 0) {
618 fprintf(stderr, "%s: Cannot write xattr to pool %s: %s\n", argv[0], poolname, strerror(-err));
619 rados_ioctx_destroy(io);
620 rados_shutdown(cluster);
621 exit(1);
622 } else {
623 printf("\nWrote \"en_US\" to xattr \"lang\" for object \"hw\".\n");
624 }
625
626 /*
627 * Read data from the cluster asynchronously.
628 * First, set up asynchronous I/O completion.
629 */
630 rados_completion_t comp;
631 err = rados_aio_create_completion(NULL, NULL, NULL, &comp);
632 if (err < 0) {
633 fprintf(stderr, "%s: Could not create aio completion: %s\n", argv[0], strerror(-err));
634 rados_ioctx_destroy(io);
635 rados_shutdown(cluster);
636 exit(1);
637 } else {
638 printf("\nCreated AIO completion.\n");
639 }
640
641 /* Next, read data using rados_aio_read. */
642 char read_res[100];
643 err = rados_aio_read(io, "hw", comp, read_res, 12, 0);
644 if (err < 0) {
645 fprintf(stderr, "%s: Cannot read object. %s %s\n", argv[0], poolname, strerror(-err));
646 rados_ioctx_destroy(io);
647 rados_shutdown(cluster);
648 exit(1);
649 } else {
650 printf("\nRead object \"hw\". The contents are:\n %s \n", read_res);
651 }
652
653 /* Wait for the operation to complete */
654 rados_aio_wait_for_complete(comp);
655
656 /* Release the asynchronous I/O complete handle to avoid memory leaks. */
657 rados_aio_release(comp);
658
659
660 char xattr_res[100];
661 err = rados_getxattr(io, "hw", "lang", xattr_res, 5);
662 if (err < 0) {
663 fprintf(stderr, "%s: Cannot read xattr. %s %s\n", argv[0], poolname, strerror(-err));
664 rados_ioctx_destroy(io);
665 rados_shutdown(cluster);
666 exit(1);
667 } else {
668 printf("\nRead xattr \"lang\" for object \"hw\". The contents are:\n %s \n", xattr_res);
669 }
670
671 err = rados_rmxattr(io, "hw", "lang");
672 if (err < 0) {
673 fprintf(stderr, "%s: Cannot remove xattr. %s %s\n", argv[0], poolname, strerror(-err));
674 rados_ioctx_destroy(io);
675 rados_shutdown(cluster);
676 exit(1);
677 } else {
678 printf("\nRemoved xattr \"lang\" for object \"hw\".\n");
679 }
680
681 err = rados_remove(io, "hw");
682 if (err < 0) {
683 fprintf(stderr, "%s: Cannot remove object. %s %s\n", argv[0], poolname, strerror(-err));
684 rados_ioctx_destroy(io);
685 rados_shutdown(cluster);
686 exit(1);
687 } else {
688 printf("\nRemoved object \"hw\".\n");
689 }
690
691 }
692
693
694
695 C++ Example
696 -----------
697
698
699 .. code-block:: c++
700
701 #include <iostream>
702 #include <string>
703 #include <rados/librados.hpp>
704
705 int main(int argc, const char **argv)
706 {
707
708 /* Continued from previous C++ example, where cluster handle and
709 * connection are established. First declare an I/O Context.
710 */
711
712 librados::IoCtx io_ctx;
713 const char *pool_name = "data";
714
715 {
716 ret = cluster.ioctx_create(pool_name, io_ctx);
717 if (ret < 0) {
718 std::cerr << "Couldn't set up ioctx! error " << ret << std::endl;
719 exit(EXIT_FAILURE);
720 } else {
721 std::cout << "Created an ioctx for the pool." << std::endl;
722 }
723 }
724
725
726 /* Write an object synchronously. */
727 {
728 librados::bufferlist bl;
729 bl.append("Hello World!");
730 ret = io_ctx.write_full("hw", bl);
731 if (ret < 0) {
732 std::cerr << "Couldn't write object! error " << ret << std::endl;
733 exit(EXIT_FAILURE);
734 } else {
735 std::cout << "Wrote new object 'hw' " << std::endl;
736 }
737 }
738
739
740 /*
741 * Add an xattr to the object.
742 */
743 {
744 librados::bufferlist lang_bl;
745 lang_bl.append("en_US");
746 ret = io_ctx.setxattr("hw", "lang", lang_bl);
747 if (ret < 0) {
748 std::cerr << "failed to set xattr version entry! error "
749 << ret << std::endl;
750 exit(EXIT_FAILURE);
751 } else {
752 std::cout << "Set the xattr 'lang' on our object!" << std::endl;
753 }
754 }
755
756
757 /*
758 * Read the object back asynchronously.
759 */
760 {
761 librados::bufferlist read_buf;
762 int read_len = 4194304;
763
764 //Create I/O Completion.
765 librados::AioCompletion *read_completion = librados::Rados::aio_create_completion();
766
767 //Send read request.
768 ret = io_ctx.aio_read("hw", read_completion, &read_buf, read_len, 0);
769 if (ret < 0) {
770 std::cerr << "Couldn't start read object! error " << ret << std::endl;
771 exit(EXIT_FAILURE);
772 }
773
774 // Wait for the request to complete, and check that it succeeded.
775 read_completion->wait_for_complete();
776 ret = read_completion->get_return_value();
777 if (ret < 0) {
778 std::cerr << "Couldn't read object! error " << ret << std::endl;
779 exit(EXIT_FAILURE);
780 } else {
781 std::cout << "Read object hw asynchronously with contents.\n"
782 << read_buf.c_str() << std::endl;
783 }
784 }
785
786
787 /*
788 * Read the xattr.
789 */
790 {
791 librados::bufferlist lang_res;
792 ret = io_ctx.getxattr("hw", "lang", lang_res);
793 if (ret < 0) {
794 std::cerr << "failed to get xattr version entry! error "
795 << ret << std::endl;
796 exit(EXIT_FAILURE);
797 } else {
798 std::cout << "Got the xattr 'lang' from object hw!"
799 << lang_res.c_str() << std::endl;
800 }
801 }
802
803
804 /*
805 * Remove the xattr.
806 */
807 {
808 ret = io_ctx.rmxattr("hw", "lang");
809 if (ret < 0) {
810 std::cerr << "Failed to remove xattr! error "
811 << ret << std::endl;
812 exit(EXIT_FAILURE);
813 } else {
814 std::cout << "Removed the xattr 'lang' from our object!" << std::endl;
815 }
816 }
817
818 /*
819 * Remove the object.
820 */
821 {
822 ret = io_ctx.remove("hw");
823 if (ret < 0) {
824 std::cerr << "Couldn't remove object! error " << ret << std::endl;
825 exit(EXIT_FAILURE);
826 } else {
827 std::cout << "Removed object 'hw'." << std::endl;
828 }
829 }
830 }
831
832
833
834 Python Example
835 --------------
836
837 .. code-block:: python
838
839 print "\n\nI/O Context and Object Operations"
840 print "================================="
841
842 print "\nCreating a context for the 'data' pool"
843 if not cluster.pool_exists('data'):
844 raise RuntimeError('No data pool exists')
845 ioctx = cluster.open_ioctx('data')
846
847 print "\nWriting object 'hw' with contents 'Hello World!' to pool 'data'."
848 ioctx.write("hw", "Hello World!")
849 print "Writing XATTR 'lang' with value 'en_US' to object 'hw'"
850 ioctx.set_xattr("hw", "lang", "en_US")
851
852
853 print "\nWriting object 'bm' with contents 'Bonjour tout le monde!' to pool 'data'."
854 ioctx.write("bm", "Bonjour tout le monde!")
855 print "Writing XATTR 'lang' with value 'fr_FR' to object 'bm'"
856 ioctx.set_xattr("bm", "lang", "fr_FR")
857
858 print "\nContents of object 'hw'\n------------------------"
859 print ioctx.read("hw")
860
861 print "\n\nGetting XATTR 'lang' from object 'hw'"
862 print ioctx.get_xattr("hw", "lang")
863
864 print "\nContents of object 'bm'\n------------------------"
865 print ioctx.read("bm")
866
867 print "Getting XATTR 'lang' from object 'bm'"
868 print ioctx.get_xattr("bm", "lang")
869
870
871 print "\nRemoving object 'hw'"
872 ioctx.remove_object("hw")
873
874 print "Removing object 'bm'"
875 ioctx.remove_object("bm")
876
877
878 Java Example
879 ------------
880
881 .. code-block:: java
882
883 import com.ceph.rados.Rados;
884 import com.ceph.rados.RadosException;
885
886 import java.io.File;
887 import com.ceph.rados.IoCTX;
888
889 public class CephClient {
890 public static void main (String args[]){
891
892 try {
893 Rados cluster = new Rados("admin");
894 System.out.println("Created cluster handle.");
895
896 File f = new File("/etc/ceph/ceph.conf");
897 cluster.confReadFile(f);
898 System.out.println("Read the configuration file.");
899
900 cluster.connect();
901 System.out.println("Connected to the cluster.");
902
903 IoCTX io = cluster.ioCtxCreate("data");
904
905 String oidone = "hw";
906 String contentone = "Hello World!";
907 io.write(oidone, contentone);
908
909 String oidtwo = "bm";
910 String contenttwo = "Bonjour tout le monde!";
911 io.write(oidtwo, contenttwo);
912
913 String[] objects = io.listObjects();
914 for (String object: objects)
915 System.out.println(object);
916
917 io.remove(oidone);
918 io.remove(oidtwo);
919
920 cluster.ioCtxDestroy(io);
921
922 } catch (RadosException e) {
923 System.out.println(e.getMessage() + ": " + e.getReturnValue());
924 }
925 }
926 }
927
928
929 PHP Example
930 -----------
931
932 .. code-block:: php
933
934 <?php
935
936 $io = rados_ioctx_create($r, "mypool");
937 rados_write_full($io, "oidOne", "mycontents");
938         rados_remove($io, "oidOne");
939 rados_ioctx_destroy($io);
940
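None of the examples above enumerate the objects in a pool, which the I/O
Context also supports (see the functionality list at the start of this step).
A brief C sketch, assuming an open I/O context ``io`` (error handling omitted):

.. code-block:: c

    /* Open an object listing context on the pool. */
    rados_list_ctx_t list_ctx;
    rados_nobjects_list_open(io, &list_ctx);

    /* Iterate until rados_nobjects_list_next() reports the end of the list. */
    const char *entry;
    while (rados_nobjects_list_next(list_ctx, &entry, NULL, NULL) == 0) {
        printf("object: %s\n", entry);
    }

    rados_nobjects_list_close(list_ctx);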
941
942 Step 4: Closing Sessions
943 ========================
944
945 Once your app finishes with the I/O Context and cluster handle, the app should
946 close the connection and shut down the handle. For asynchronous I/O, the app
947 should also ensure that pending asynchronous operations have completed.
948
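In C, one way to wait for outstanding asynchronous writes before tearing down
the I/O context is ``rados_aio_flush()``; a minimal sketch:

.. code-block:: c

    /* Block until all pending asynchronous writes on this I/O context are
     * stable, then proceed with the teardown shown below. */
    rados_aio_flush(io);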
949
950 C Example
951 ---------
952
953 .. code-block:: c
954
955 rados_ioctx_destroy(io);
956 rados_shutdown(cluster);
957
958
959 C++ Example
960 -----------
961
962 .. code-block:: c++
963
964 io_ctx.close();
965 cluster.shutdown();
966
967
968 Java Example
969 --------------
970
971 .. code-block:: java
972
973 cluster.ioCtxDestroy(io);
974 cluster.shutDown();
975
976
977 Python Example
978 --------------
979
980 .. code-block:: python
981
982 print "\nClosing the connection."
983 ioctx.close()
984
985 print "Shutting down the handle."
986 cluster.shutdown()
987
988 PHP Example
989 -----------
990
991 .. code-block:: php
992
993 rados_shutdown($r);
994
995
996
997 .. _user ID: ../../operations/user-management#command-line-usage
998 .. _CAPS: ../../operations/user-management#authorization-capabilities
999 .. _Installation (Quick): ../../../start
1000 .. _Smart Daemons Enable Hyperscale: ../../../architecture#smart-daemons-enable-hyperscale
1001 .. _Calculating PG IDs: ../../../architecture#calculating-pg-ids
1002 .. _computes: ../../../architecture#calculating-pg-ids
1003 .. _OSD: ../../../architecture#mapping-pgs-to-osds