.. _user-management:

=================
 User Management
=================

This document describes :term:`Ceph Client` users, and describes the process by
which they perform authentication and authorization so that they can access the
:term:`Ceph Storage Cluster`. Users are either individuals or system actors
(for example, applications) that use Ceph clients to interact with the Ceph
Storage Cluster daemons.

.. ditaa::

           +-----+
           | {o} |
           |     |
           +--+--+       /---------\               /---------\
              |          |  Ceph   |               |  Ceph   |
           ---+---*----->|         |<------------->|         |
              |     uses | Clients |               | Servers |
              |          \---------/               \---------/
           /--+--\
           |     |
           |     |
         actor


When Ceph runs with authentication and authorization enabled (both are enabled
by default), you must specify a user name and a keyring that contains the
secret key of the specified user (usually these are specified via the command
line). If you do not specify a user name, Ceph will use ``client.admin`` as the
default user name. If you do not specify a keyring, Ceph will look for a
keyring via the ``keyring`` setting in the Ceph configuration. For example, if
you execute the ``ceph health`` command without specifying a user or a keyring,
Ceph will assume that the keyring is in ``/etc/ceph/ceph.client.admin.keyring``
and will attempt to use that keyring. The following illustrates this behavior:

.. prompt:: bash $

   ceph health

Ceph will interpret the command like this:

.. prompt:: bash $

   ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health

Alternatively, you may use the ``CEPH_ARGS`` environment variable to avoid
re-entry of the user name and secret.
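
For example, ``CEPH_ARGS`` can hold the options that would otherwise have to be
typed with every command. A minimal sketch, assuming a hypothetical
``client.foo`` user whose keyring is stored at ``/path/to/keyring``:

.. prompt:: bash $

   export CEPH_ARGS="--id foo --keyring /path/to/keyring"
   ceph health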

For details on configuring the Ceph Storage Cluster to use authentication, see
`Cephx Config Reference`_. For details on the architecture of Cephx, see
`Architecture - High Availability Authentication`_.

Background
==========

No matter what type of Ceph client is used (for example: Block Device, Object
Storage, Filesystem, native API), Ceph stores all data as RADOS objects within
`pools`_. Ceph users must have access to a given pool in order to read and
write data, and Ceph users must have execute permissions in order to use Ceph's
administrative commands. The following concepts will help you understand
Ceph's user management.

.. _rados-ops-user:

User
----

A user is either an individual or a system actor (for example, an application).
Creating users allows you to control who (or what) can access your Ceph Storage
Cluster, its pools, and the data within those pools.

Ceph has the concept of a ``type`` of user. For purposes of user management,
the type will always be ``client``. Ceph identifies users in a
"period-delimited form" that consists of the user type and the user ID: for
example, ``TYPE.ID``, ``client.admin``, or ``client.user1``. The reason for
user typing is that the Cephx protocol is used not only by clients but also by
non-clients, such as Ceph Monitors, OSDs, and Metadata Servers. Distinguishing
the user type helps to distinguish between client users and other users. This
distinction streamlines access control, user monitoring, and traceability.

Sometimes Ceph's user type might seem confusing, because the Ceph command line
allows you to specify a user with or without the type, depending upon your
command-line usage. If you specify ``--user`` or ``--id``, you can omit the
type. For example, ``client.user1`` can be entered simply as ``user1``. On the
other hand, if you specify ``--name`` or ``-n``, you must supply the type and
name: for example, ``client.user1``. We recommend using the type and name as a
best practice wherever possible.
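
For example, both of the following commands run ``ceph health`` as the same
user, assuming that a ``client.user1`` user exists and that its keyring can be
found in one of the default keyring locations (see `Keyring Management`_); the
full option reference is in `Command Line Usage`_ below:

.. prompt:: bash $

   ceph --id user1 health
   ceph -n client.user1 health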

.. note:: A Ceph Storage Cluster user is not the same as a Ceph Object Storage
   user or a Ceph File System user. The Ceph Object Gateway uses a Ceph Storage
   Cluster user to communicate between the gateway daemon and the storage
   cluster, but the Ceph Object Gateway has its own user-management
   functionality for end users. The Ceph File System uses POSIX semantics, and
   the user space associated with the Ceph File System is not the same as the
   user space associated with a Ceph Storage Cluster user.

Authorization (Capabilities)
----------------------------

Ceph uses the term "capabilities" (caps) to describe the permissions granted to
an authenticated user to exercise the functionality of the monitors, OSDs, and
metadata servers. Capabilities can also restrict access to data within a pool,
a namespace within a pool, or a set of pools based on their application tags.
A Ceph administrative user specifies the capabilities of a user when creating
or updating that user.

Capability syntax follows this form::

    {daemon-type} '{cap-spec}[, {cap-spec} ...]'

- **Monitor Caps:** Monitor capabilities include ``r``, ``w``, ``x`` access
  settings, and can be applied in aggregate from pre-defined profiles with
  ``profile {name}``. For example::

      mon 'allow {access-spec} [network {network/prefix}]'

      mon 'profile {name}'

  The ``{access-spec}`` syntax is as follows: ::

      * | all | [r][w][x]

  The optional ``{network/prefix}`` is a standard network name and prefix
  length in CIDR notation (for example, ``10.3.0.0/16``). If
  ``{network/prefix}`` is present, the monitor capability can be used only by
  clients that connect from the specified network.

- **OSD Caps:** OSD capabilities include ``r``, ``w``, ``x``, and
  ``class-read`` and ``class-write`` access settings. OSD capabilities can be
  applied in aggregate from pre-defined profiles with ``profile {name}``. In
  addition, OSD capabilities allow for pool and namespace settings (a combined
  example follows this list). ::

      osd 'allow {access-spec} [{match-spec}] [network {network/prefix}]'

      osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]] [network {network/prefix}]'

  There are two alternative forms of the ``{access-spec}`` syntax: ::

      * | all | [r][w][x] [class-read] [class-write]

      class {class name} [{method name}]

  There are two alternative forms of the optional ``{match-spec}`` syntax::

      pool={pool-name} [namespace={namespace-name}] [object_prefix {prefix}]

      [namespace={namespace-name}] tag {application} {key}={value}

  The optional ``{network/prefix}`` is a standard network name and prefix
  length in CIDR notation (for example, ``10.3.0.0/16``). If
  ``{network/prefix}`` is present, the OSD capability can be used only by
  clients that connect from the specified network.

- **Manager Caps:** Manager (``ceph-mgr``) capabilities include ``r``, ``w``,
  ``x`` access settings, and can be applied in aggregate from pre-defined
  profiles with ``profile {name}``. For example::

      mgr 'allow {access-spec} [network {network/prefix}]'

      mgr 'profile {name} [{key1} {match-type} {value1} ...] [network {network/prefix}]'

  Manager capabilities can also be specified for specific commands, for all
  commands exported by a built-in manager service, or for all commands exported
  by a specific add-on module. For example::

      mgr 'allow command "{command-prefix}" [with {key1} {match-type} {value1} ...] [network {network/prefix}]'

      mgr 'allow service {service-name} {access-spec} [network {network/prefix}]'

      mgr 'allow module {module-name} [with {key1} {match-type} {value1} ...] {access-spec} [network {network/prefix}]'

  The ``{access-spec}`` syntax is as follows: ::

      * | all | [r][w][x]

  The ``{service-name}`` is one of the following: ::

      mgr | osd | pg | py

  The ``{match-type}`` is one of the following: ::

      = | prefix | regex

- **Metadata Server Caps:** For administrators, use ``allow *``. For all other
  users (for example, CephFS clients), consult :doc:`/cephfs/client-auth`.

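As a combined illustration of the syntax above, the following sketch creates a
user whose monitor access is read-only and restricted to one client network,
and whose OSD access is limited to a single namespace within a single pool (the
user name, pool, namespace, and network below are hypothetical):

.. prompt:: bash $

   ceph auth get-or-create client.appdata \
       mon 'allow r network 10.3.0.0/16' \
       osd 'allow rw pool=app-pool namespace=app-ns network 10.3.0.0/16'
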
.. note:: The Ceph Object Gateway daemon (``radosgw``) is a client of the
   Ceph Storage Cluster. For this reason, it is not represented as
   a Ceph Storage Cluster daemon type.

The following entries describe access capabilities.

``allow``

:Description: Precedes access settings for a daemon. Implies ``rw``
              for MDS only.


``r``

:Description: Gives the user read access. Required with monitors to retrieve
              the CRUSH map.


``w``

:Description: Gives the user write access to objects.


``x``

:Description: Gives the user the capability to call class methods
              (that is, both read and write) and to conduct ``auth``
              operations on monitors.


``class-read``

:Description: Gives the user the capability to call class read methods.
              Subset of ``x``.


``class-write``

:Description: Gives the user the capability to call class write methods.
              Subset of ``x``.


``*``, ``all``

:Description: Gives the user read, write, and execute permissions for a
              particular daemon/pool, as well as the ability to execute
              admin commands.


The following entries describe valid capability profiles (a usage example
follows the list):

``profile osd`` (Monitor only)

:Description: Gives a user permissions to connect as an OSD to other OSDs or
              monitors. Conferred on OSDs in order to enable OSDs to handle
              replication heartbeat traffic and status reporting.


``profile mds`` (Monitor only)

:Description: Gives a user permissions to connect as an MDS to other MDSs or
              monitors.


``profile bootstrap-osd`` (Monitor only)

:Description: Gives a user permissions to bootstrap an OSD. Conferred on
              deployment tools such as ``ceph-volume`` and ``cephadm``
              so that they have permissions to add keys when
              bootstrapping an OSD.


``profile bootstrap-mds`` (Monitor only)

:Description: Gives a user permissions to bootstrap a metadata server.
              Conferred on deployment tools such as ``cephadm``
              so that they have permissions to add keys when bootstrapping
              a metadata server.

``profile bootstrap-rbd`` (Monitor only)

:Description: Gives a user permissions to bootstrap an RBD user.
              Conferred on deployment tools such as ``cephadm``
              so that they have permissions to add keys when bootstrapping
              an RBD user.

``profile bootstrap-rbd-mirror`` (Monitor only)

:Description: Gives a user permissions to bootstrap an ``rbd-mirror`` daemon
              user. Conferred on deployment tools such as ``cephadm`` so that
              they have permissions to add keys when bootstrapping an
              ``rbd-mirror`` daemon.

``profile rbd`` (Manager, Monitor, and OSD)

:Description: Gives a user permissions to manipulate RBD images. When used as a
              Monitor cap, it provides the user with the minimal privileges
              required by an RBD client application; such privileges include
              the ability to blocklist other client users. When used as an OSD
              cap, it provides an RBD client application with read-write access
              to the specified pool. The Manager cap supports optional ``pool``
              and ``namespace`` keyword arguments.

``profile rbd-mirror`` (Monitor only)

:Description: Gives a user permissions to manipulate RBD images and retrieve
              RBD mirroring config-key secrets. It provides the minimal
              privileges required for the user to manipulate the ``rbd-mirror``
              daemon.

``profile rbd-read-only`` (Manager and OSD)

:Description: Gives a user read-only permissions to RBD images. The Manager cap
              supports optional ``pool`` and ``namespace`` keyword arguments.

``profile simple-rados-client`` (Monitor only)

:Description: Gives a user read-only permissions for monitor, OSD, and PG data.
              Intended for use by direct librados client applications.

``profile simple-rados-client-with-blocklist`` (Monitor only)

:Description: Gives a user read-only permissions for monitor, OSD, and PG data.
              Intended for use by direct librados client applications. Also
              includes permissions to add blocklist entries to build
              high-availability (HA) applications.

``profile fs-client`` (Monitor only)

:Description: Gives a user read-only permissions for monitor, OSD, PG, and MDS
              data. Intended for CephFS clients.

``profile role-definer`` (Monitor and Auth)

:Description: Gives a user **all** permissions for the auth subsystem,
              read-only access to monitors, and nothing else. Useful for
              automation tools. Do not assign this unless you really,
              **really** know what you're doing, as the security ramifications
              are substantial and pervasive.

``profile crash`` (Monitor and MGR)

:Description: Gives a user read-only access to monitors. Used in conjunction
              with the manager ``crash`` module to upload daemon crash
              dumps into monitor storage for later analysis.
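
As a usage example, the following sketch creates a user for an RBD client by
combining the ``rbd`` profile across the Monitor, OSD, and Manager daemons (the
user and pool names are hypothetical; consult the RBD documentation for the
authoritative capability recommendations for RBD clients):

.. prompt:: bash $

   ceph auth get-or-create client.rbd-vms \
       mon 'profile rbd' \
       osd 'profile rbd pool=vms' \
       mgr 'profile rbd pool=vms'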

Pool
----

A pool is a logical partition where users store data.
In Ceph deployments, it is common to create a pool as a logical partition for
similar types of data. For example, when deploying Ceph as a back end for
OpenStack, a typical deployment would have pools for volumes, images, backups
and virtual machines, and such users as ``client.glance`` and ``client.cinder``.
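
For example, the following sketch creates a pool for OpenStack Glance images
and a matching user (the capabilities shown are illustrative; consult the
OpenStack integration documentation for the exact capabilities that Glance
requires):

.. prompt:: bash $

   ceph osd pool create images
   ceph osd pool application enable images rbd
   ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images'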

Application Tags
----------------

Access may be restricted to specific pools as defined by their application
metadata. The ``*`` wildcard may be used for the ``key`` argument, the
``value`` argument, or both. The ``all`` tag is a synonym for ``*``.
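
For example, CephFS tags its data pools with the ``cephfs`` application and a
``data`` key whose value is the name of the file system. Assuming a
(hypothetical) file system named ``cephfs_a``, a user can be restricted to that
file system's data pools as follows:

.. prompt:: bash $

   ceph auth get-or-create client.fsdata mon 'allow r' osd 'allow rw tag cephfs data=cephfs_a'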

Namespace
---------

Objects within a pool can be associated to a namespace: that is, to a logical
group of objects within the pool. A user's access to a pool can be associated
with a namespace so that reads and writes by the user can take place only
within the namespace. Objects written to a namespace within the pool can be
accessed only by users who have access to the namespace.

.. note:: Namespaces are primarily useful for applications written on top of
   ``librados``. In such situations, the logical grouping provided by
   namespaces can obviate the need to create different pools. In Luminous and
   later releases, Ceph Object Gateway uses namespaces for various metadata
   objects.

The rationale for namespaces is this: namespaces are less computationally
expensive than pools, which can be a computationally expensive method of
segregating data sets between different authorized users.

For example, a pool ought to host approximately 100 placement-group replicas
per OSD. This means that a cluster with 1000 OSDs and three 3R replicated pools
would have (in a single pool) 100,000 placement-group replicas, and that means
that it has 33,333 Placement Groups.

By contrast, writing an object to a namespace simply associates the namespace
with the object name without incurring the computational overhead of a separate
pool. Instead of creating a separate pool for a user or set of users, you can
use a namespace.

.. note::

   Namespaces are available only when using ``librados``.


Access may be restricted to specific RADOS namespaces by use of the
``namespace`` capability. Limited globbing of namespaces (that is, use of the
``*`` wildcard) is supported: if the last character of the specified namespace
is ``*``, then access is granted to any namespace starting with the provided
argument.
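
For example, the following sketch creates a user that can read and write only
within namespaces beginning with ``app-`` in a single pool, and then writes an
object into one such namespace with the ``rados`` tool (the user, pool,
namespace, object, and file names are hypothetical):

.. prompt:: bash $

   ceph auth get-or-create client.appwriter \
       mon 'allow r' \
       osd 'allow rw pool=mydata namespace=app-*' \
       -o /etc/ceph/ceph.client.appwriter.keyring
   rados --id appwriter -p mydata --namespace app-config put settings ./settings.json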

Managing Users
==============

User management functionality provides Ceph Storage Cluster administrators with
the ability to create, update, and delete users directly in the Ceph Storage
Cluster.

When you create or delete users in the Ceph Storage Cluster, you might need to
distribute keys to clients so that they can be added to keyrings. For details,
see `Keyring Management`_.

Listing Users
-------------

To list the users in your cluster, run the following command:

.. prompt:: bash $

   ceph auth ls

Ceph will list all users in your cluster. For example, in a two-node
cluster, ``ceph auth ls`` will provide an output that resembles the following::

    installed auth entries:

    osd.0
            key: AQCvCbtToC6MDhAATtuT70Sl+DymPCfDSsyV4w==
            caps: [mon] allow profile osd
            caps: [osd] allow *
    osd.1
            key: AQC4CbtTCFJBChAAVq5spj0ff4eHZICxIOVZeA==
            caps: [mon] allow profile osd
            caps: [osd] allow *
    client.admin
            key: AQBHCbtT6APDHhAA5W00cBchwkQjh3dkKsyPjw==
            caps: [mds] allow
            caps: [mon] allow *
            caps: [osd] allow *
    client.bootstrap-mds
            key: AQBICbtTOK9uGBAAdbe5zcIGHZL3T/u2g6EBww==
            caps: [mon] allow profile bootstrap-mds
    client.bootstrap-osd
            key: AQBHCbtT4GxqORAADE5u7RkpCN/oo4e5W0uBtw==
            caps: [mon] allow profile bootstrap-osd

Note that, according to the ``TYPE.ID`` notation for users, ``osd.0`` is a user
of type ``osd`` with an ID of ``0``, and ``client.admin`` is a user of type
``client`` with an ID of ``admin`` (that is, the default ``client.admin``
user). Note too that each entry has a ``key: <value>`` entry, and also has one
or more ``caps:`` entries.

To save the output of ``ceph auth ls`` to a file, use the ``-o {filename}``
option.
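
For example, to write the full listing to a (hypothetical) file for later
reference:

.. prompt:: bash $

   ceph auth ls -o /root/ceph-auth-entries.txt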

Getting a User
--------------

To retrieve a specific user, key, and capabilities, run the following command:

.. prompt:: bash $

   ceph auth get {TYPE.ID}

For example:

.. prompt:: bash $

   ceph auth get client.admin

To save the output of ``ceph auth get`` to a file, use the ``-o {filename}``
option. Developers may also run the following command:

.. prompt:: bash $

   ceph auth export {TYPE.ID}

The ``auth export`` command is identical to ``auth get``.

.. _rados_ops_adding_a_user:

Adding a User
-------------

Adding a user creates a user name (that is, ``TYPE.ID``), a secret key, and
any capabilities specified in the command that creates the user.

A user's key allows the user to authenticate with the Ceph Storage Cluster.
The user's capabilities authorize the user to read, write, or execute on Ceph
monitors (``mon``), Ceph OSDs (``osd``), or Ceph Metadata Servers (``mds``).

There are a few ways to add a user:

- ``ceph auth add``: This command is the canonical way to add a user. It
  will create the user, generate a key, and add any specified capabilities.

- ``ceph auth get-or-create``: This command is often the most convenient way
  to create a user, because it returns a keyfile format with the user name
  (in brackets) and the key (see the example following this list). If the user
  already exists, this command simply returns the user name and key in the
  keyfile format. To save the output to a file, use the ``-o {filename}``
  option.

- ``ceph auth get-or-create-key``: This command is a convenient way to create
  a user and return the user's key and nothing else. This is useful for
  clients that need only the key (for example, libvirt). If the user already
  exists, this command simply returns the key. To save the output to a file,
  use the ``-o {filename}`` option.
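
For example, the keyfile format returned by ``ceph auth get-or-create``
resembles the following (the user name is hypothetical, and the key shown is a
placeholder for the generated secret)::

    [client.alice]
            key = <generated base64 key>
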
It is possible, when creating client users, to create a user with no
capabilities. A user with no capabilities is useless beyond mere
authentication, because the client cannot retrieve the cluster map from the
monitor. However, you might want to create a user with no capabilities and wait
until later to add capabilities to the user by using the ``ceph auth caps``
command.

A typical user has at least read capabilities on the Ceph monitor and
read and write capabilities on Ceph OSDs. A user's OSD permissions
are often restricted so that the user can access only one particular pool.
In the following example, the commands (1) add a client named ``john`` that has
read capabilities on the Ceph monitor and read and write capabilities on the
pool named ``liverpool``, (2) authorize a client named ``paul`` to have read
capabilities on the Ceph monitor and read and write capabilities on the pool
named ``liverpool``, (3) authorize a client named ``george`` to have read
capabilities on the Ceph monitor and read and write capabilities on the pool
named ``liverpool`` and use the keyring named ``george.keyring`` to make this
authorization, and (4) authorize a client named ``ringo`` to have read
capabilities on the Ceph monitor and read and write capabilities on the pool
named ``liverpool`` and use the key named ``ringo.key`` to make this
authorization:

.. prompt:: bash $

   ceph auth add client.john mon 'allow r' osd 'allow rw pool=liverpool'
   ceph auth get-or-create client.paul mon 'allow r' osd 'allow rw pool=liverpool'
   ceph auth get-or-create client.george mon 'allow r' osd 'allow rw pool=liverpool' -o george.keyring
   ceph auth get-or-create-key client.ringo mon 'allow r' osd 'allow rw pool=liverpool' -o ringo.key

.. important:: Any user that has capabilities on OSDs will have access to ALL
   pools in the cluster unless that user's access has been restricted to a
   proper subset of the pools in the cluster.


.. _modify-user-capabilities:

Modifying User Capabilities
---------------------------

The ``ceph auth caps`` command allows you to specify a user and change that
user's capabilities. Setting new capabilities will overwrite current
capabilities. To view current capabilities, run ``ceph auth get USERTYPE.USERID``.
To add capabilities, run a command of the following form (and be sure to
specify the existing capabilities):

.. prompt:: bash $

   ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']

For example:

.. prompt:: bash $

   ceph auth get client.john
   ceph auth caps client.john mon 'allow r' osd 'allow rw pool=liverpool'
   ceph auth caps client.paul mon 'allow rw' osd 'allow rwx pool=liverpool'
   ceph auth caps client.brian-manager mon 'allow *' osd 'allow *'

For additional details on capabilities, see `Authorization (Capabilities)`_.
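
Because ``ceph auth caps`` replaces the entire capability set, include every
grant that you want to keep. For example, the following sketch extends
``client.john`` to a second (hypothetical) pool while preserving the existing
``liverpool`` grant:

.. prompt:: bash $

   ceph auth caps client.john mon 'allow r' osd 'allow rw pool=liverpool, allow rw pool=abbeyroad'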

Deleting a User
---------------

To delete a user, use ``ceph auth del``:

.. prompt:: bash $

   ceph auth del {TYPE}.{ID}

Here ``{TYPE}`` is either ``client``, ``osd``, ``mon``, or ``mds``,
and ``{ID}`` is the user name or the ID of the daemon.
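
For example, to delete the ``client.paul`` user created earlier:

.. prompt:: bash $

   ceph auth del client.paul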

Printing a User's Key
---------------------

To print a user's authentication key to standard output, run the following
command:

.. prompt:: bash $

   ceph auth print-key {TYPE}.{ID}

Here ``{TYPE}`` is either ``client``, ``osd``, ``mon``, or ``mds``,
and ``{ID}`` is the user name or the ID of the daemon.
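
For example, to print the key of the ``client.john`` user:

.. prompt:: bash $

   ceph auth print-key client.john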

When it is necessary to populate client software with a user's key (as in the
case of libvirt), you can print the user's key by running the following
command:

.. prompt:: bash $

   mount -t ceph serverhost:/ mountpoint -o name=client.user,secret=`ceph auth print-key client.user`

Importing a User
----------------

To import one or more users, use ``ceph auth import`` and
specify a keyring as follows:

.. prompt:: bash $

   ceph auth import -i /path/to/keyring

For example:

.. prompt:: bash $

   sudo ceph auth import -i /etc/ceph/ceph.keyring

.. note:: The Ceph storage cluster will add new users, their keys, and their
   capabilities and will update existing users, their keys, and their
   capabilities.

Keyring Management
==================

When you access Ceph via a Ceph client, the Ceph client will look for a local
keyring. Ceph presets the ``keyring`` setting with four keyring names by
default. For this reason, you do not have to set the keyring names in your Ceph
configuration file unless you want to override these defaults (which is not
recommended). The four default keyring names are as follows:

- ``/etc/ceph/$cluster.$name.keyring``
- ``/etc/ceph/$cluster.keyring``
- ``/etc/ceph/keyring``
- ``/etc/ceph/keyring.bin``

The ``$cluster`` metavariable found in the first two default keyring names
above is your Ceph cluster name as defined by the name of the Ceph
configuration file: for example, if the Ceph configuration file is named
``ceph.conf``, then your Ceph cluster name is ``ceph`` and the second name
above would be ``ceph.keyring``. The ``$name`` metavariable is the user type
and user ID: for example, given the user ``client.admin``, the first name above
would be ``ceph.client.admin.keyring``.

.. note:: When running commands that read or write to ``/etc/ceph``, you might
   need to use ``sudo`` to run the command as ``root``.

After you create a user (for example, ``client.ringo``), you must get the key
and add it to a keyring on a Ceph client so that the user can access the Ceph
Storage Cluster.

The `User Management`_ section details how to list, get, add, modify, and
delete users directly in the Ceph Storage Cluster. In addition, Ceph provides
the ``ceph-authtool`` utility to allow you to manage keyrings from a Ceph
client.

Creating a Keyring
------------------

When you use the procedures in the `Managing Users`_ section to create users,
you must provide user keys to the Ceph client(s). This is required so that the
Ceph client(s) can retrieve the key for the specified user and authenticate
that user against the Ceph Storage Cluster. Ceph clients access keyrings in
order to look up a user name and retrieve the user's key.

The ``ceph-authtool`` utility allows you to create a keyring. To create an
empty keyring, use ``--create-keyring`` or ``-C``. For example:

.. prompt:: bash $

   ceph-authtool --create-keyring /path/to/keyring

When creating a keyring with multiple users, we recommend using the cluster
name (of the form ``$cluster.keyring``) for the keyring filename and saving the
keyring in the ``/etc/ceph`` directory. By doing this, you ensure that the
``keyring`` configuration default setting will pick up the filename without
requiring you to specify the filename in the local copy of your Ceph
configuration file. For example, you can create ``ceph.keyring`` by running the
following command:

.. prompt:: bash $

   sudo ceph-authtool -C /etc/ceph/ceph.keyring

When creating a keyring with a single user, we recommend using the cluster
name, the user type, and the user name, and saving the keyring in the
``/etc/ceph`` directory. For example, we recommend ``ceph.client.admin.keyring``
for the ``client.admin`` user.

To create a keyring in ``/etc/ceph``, you must do so as ``root``. This means
that the file will have ``rw`` permissions for the ``root`` user only, which is
appropriate when the keyring contains administrator keys. However, if you
intend to use the keyring for a particular user or group of users, be sure to
use ``chown`` or ``chmod`` to establish appropriate keyring ownership and
access.

Adding a User to a Keyring
--------------------------

When you :ref:`Add a user<rados_ops_adding_a_user>` to the Ceph Storage
Cluster, you can use the `Getting a User`_ procedure to retrieve a user, key,
and capabilities and then save the user to a keyring.

If you want to use only one user per keyring, the `Getting a User`_ procedure
with the ``-o`` option will save the output in the keyring file format. For
example, to create a keyring for the ``client.admin`` user, run the following
command:

.. prompt:: bash $

   sudo ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring

Notice that this is the file format conventionally used when manipulating the
keyrings of individual users.

If you want to import users to a keyring, you can use ``ceph-authtool``
to specify the destination keyring and the source keyring.
For example:

.. prompt:: bash $

   sudo ceph-authtool /etc/ceph/ceph.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

Creating a User
---------------

Ceph provides the `Adding a User`_ function to create a user directly in the
Ceph Storage Cluster. However, you can also create a user, keys, and
capabilities directly on a Ceph client keyring, and then import the user to the
Ceph Storage Cluster. For example:

.. prompt:: bash $

   sudo ceph-authtool -n client.ringo --gen-key --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.keyring

For additional details on capabilities, see `Authorization (Capabilities)`_.

You can also create a keyring and add a new user to the keyring simultaneously.
For example:

.. prompt:: bash $

   sudo ceph-authtool -C /etc/ceph/ceph.keyring -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx' --gen-key

In the above examples, the new user ``client.ringo`` has been added only to the
keyring. The new user has not been added to the Ceph Storage Cluster.

To add the new user ``client.ringo`` to the Ceph Storage Cluster, run the
following command:

.. prompt:: bash $

   sudo ceph auth add client.ringo -i /etc/ceph/ceph.keyring

Modifying a User
----------------

To modify the capabilities of a user record in a keyring, specify the keyring
and the user, followed by the capabilities. For example:

.. prompt:: bash $

   sudo ceph-authtool /etc/ceph/ceph.keyring -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx'

To update the user in the Ceph Storage Cluster, you must import the updated
keyring entry into the Ceph Storage Cluster. To do so, run the following
command:

.. prompt:: bash $

   sudo ceph auth import -i /etc/ceph/ceph.keyring

For details on updating a Ceph Storage Cluster user from a
keyring, see `Importing a User`_.

You may also :ref:`Modify user capabilities<modify-user-capabilities>` directly
in the cluster, store the results to a keyring file, and then import the
keyring into your main ``ceph.keyring`` file.

Command Line Usage
==================

Ceph supports the following usage for user name and secret:

``--id`` | ``--user``

:Description: Ceph identifies users with a type and an ID: the form of this
              user identification is ``TYPE.ID``, and examples of the type and
              ID are ``client.admin`` and ``client.user1``. The ``--id`` and
              ``--user`` options allow you to specify only the ID portion of
              the user name (for example, ``admin``, ``user1``, or ``foo``),
              omitting the type. For example, to specify the user
              ``client.foo``, run the following commands:

              .. prompt:: bash $

                 ceph --id foo --keyring /path/to/keyring health
                 ceph --user foo --keyring /path/to/keyring health


``--name`` | ``-n``

:Description: Ceph identifies users with a type and an ID: the form of this
              user identification is ``TYPE.ID``, and examples of the type and
              ID are ``client.admin`` and ``client.user1``. The ``--name`` and
              ``-n`` options allow you to specify the fully qualified user
              name. You are required to specify the user type (typically
              ``client``) with the user ID. For example:

              .. prompt:: bash $

                 ceph --name client.foo --keyring /path/to/keyring health
                 ceph -n client.foo --keyring /path/to/keyring health


``--keyring``

:Description: The path to the keyring that contains one or more user names and
              secrets. The ``--secret`` option provides the same functionality,
              but it does not work with Ceph RADOS Gateway, which uses
              ``--secret`` for another purpose. You may retrieve a keyring with
              ``ceph auth get-or-create`` and store it locally. This is a
              preferred approach, because you can switch user names without
              switching the keyring path. For example:

              .. prompt:: bash $

                 sudo rbd map --id foo --keyring /path/to/keyring mypool/myimage


.. _pools: ../pools

Limitations
===========

The ``cephx`` protocol authenticates Ceph clients and servers to each other. It
is not intended to handle authentication of human users or application programs
that are run on their behalf. If your access control needs require that kind of
authentication, you will need some other mechanism, which is likely to be
specific to the front end that is used to access the Ceph object store. This
other mechanism would ensure that only acceptable users and programs are able
to run on the machine that Ceph permits to access its object store.

The keys used to authenticate Ceph clients and servers are typically stored in
a plain text file on a trusted host. Appropriate permissions must be set on the
plain text file.

.. important:: Storing keys in plaintext files has security shortcomings, but
   they are difficult to avoid, given the basic authentication methods Ceph
   uses in the background. Anyone setting up Ceph systems should be aware of
   these shortcomings.

   In particular, user machines, especially portable machines, should not
   be configured to interact directly with Ceph, since that mode of use would
   require the storage of a plaintext authentication key on an insecure
   machine. Anyone who stole that machine or obtained access to it could
   obtain a key that allows them to authenticate their own machines to Ceph.

   Instead of permitting potentially insecure machines to access a Ceph object
   store directly, you should require users to sign in to a trusted machine in
   your environment, using a method that provides sufficient security for your
   purposes. That trusted machine will store the plaintext Ceph keys for the
   human users. A future version of Ceph might address these particular
   authentication issues more fully.

At present, none of the Ceph authentication protocols provide secrecy for
messages in transit. As a result, an eavesdropper on the wire can hear and
understand all data sent between clients and servers in Ceph, even if the
eavesdropper cannot create or alter the data. Similarly, Ceph does not include
options to encrypt user data in the object store. Users can, of course,
hand-encrypt and store their own data in the Ceph object store, but Ceph itself
provides no features to perform object encryption. Anyone storing sensitive
data in Ceph should consider encrypting their data before providing it to the
Ceph system.


.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
.. _Cephx Config Reference: ../../configuration/auth-config-ref