configuration, in spite of the limitations as described herein.
If you are trying to create a cluster on a single node, you must change the
default of the ``osd_crush_chooseleaf_type`` setting from ``1`` (meaning
``host`` or ``node``) to ``0`` (meaning ``osd``) in your Ceph configuration
file before you create your monitors and OSDs. This tells Ceph that an OSD
can peer with another OSD on the same host. If you are trying to set up a
1-node cluster and ``osd_crush_chooseleaf_type`` is greater than ``0``,
Ceph will try to peer the PGs of one OSD with the PGs of another OSD on
another node, chassis, rack, row, or even datacenter depending on the setting.
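For example, a minimal ``ceph.conf`` fragment for a single-node cluster might
look like the following (a sketch only; place it in your configuration file
before creating your monitors and OSDs):

.. code-block:: ini

    [global]
    # Allow PGs to peer between OSDs on the same host
    osd_crush_chooseleaf_type = 0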
If you have brought up two OSDs to an ``up`` and ``in`` state, but you still
don't see ``active + clean`` placement groups, you may have an
``osd_pool_default_size`` set to greater than ``2``.
There are a few ways to address this situation. If you want to operate your
cluster in an ``active + degraded`` state with two replicas, you can set the
``osd_pool_default_min_size`` to ``2`` so that you can write objects in
an ``active + degraded`` state. You may also set the ``osd_pool_default_size``
setting to ``2`` so that you only have two stored replicas (the original and
setting to ``2`` so that you only have two stored replicas (the original and
one replica), in which case the cluster should achieve an ``active + clean``
state.
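For a pool that already exists, the equivalent per-pool values can be changed
at runtime with ``ceph osd pool set`` (``mypool`` below is a placeholder pool
name; substitute your own):

.. code-block:: console

    # Option 1: permit writes while only two replicas are available
    ceph osd pool set mypool min_size 2
    # Option 2: store only two replicas so the pool can reach active + clean
    ceph osd pool set mypool size 2

Note that the ``osd_pool_default_*`` settings apply only to pools created
after the setting takes effect; existing pools keep their current values until
changed with ``ceph osd pool set``.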
Pool Size = 1
-------------
If you have the ``osd_pool_default_size`` set to ``1``, you will only have
one copy of the object. OSDs rely on other OSDs to tell them which objects
they should have. If a first OSD has a copy of an object and there is no
second copy, then no second OSD can tell the first OSD that it should have
that copy. In this situation, check to ensure that you have the minimum
number of OSDs running for the
placement group. If you don't have the minimum number of OSDs running,
Ceph will not allow you to write data because there is no guarantee
that Ceph can replicate your data. See ``osd_pool_default_min_size``
in the `Pool, PG and CRUSH Config Reference`_ for details.
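To see the minimum currently in effect for a given pool, you can query it
directly (``mypool`` is again a placeholder pool name):

.. code-block:: console

    # Show the minimum number of replicas required for I/O on this pool
    ceph osd pool get mypool min_size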