X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=ceph%2Fdoc%2Finstall%2Fmanual-deployment.rst;h=2e8bb86729cd6dbda0d744ac24c42f8278cb269c;hb=d2e6a577eb19928d58b31d1b6e096ca0f03c4052;hp=cf14d4b838a617595ae36e5fe378209949d8153a;hpb=7c673caec407dd16107e56e4b51a6d00f021315c;p=ceph.git

diff --git a/ceph/doc/install/manual-deployment.rst b/ceph/doc/install/manual-deployment.rst
index cf14d4b83..2e8bb8672 100644
--- a/ceph/doc/install/manual-deployment.rst
+++ b/ceph/doc/install/manual-deployment.rst
@@ -58,7 +58,7 @@ a number of things:
    For example, when you run multiple clusters in a `federated architecture`_,
    the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
    the current CLI session. **Note:** To identify the cluster name on the
-   command line interface, specify the a Ceph configuration file with the
+   command line interface, specify the Ceph configuration file with the
    cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
    Also see CLI usage (``ceph --cluster {cluster-name}``).
 
@@ -162,7 +162,7 @@ The procedure is as follows:
 #. Generate an administrator keyring, generate a ``client.admin`` user and add
    the user to the keyring. ::
 
-     sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
+     sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
 
 #. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::
 
@@ -290,6 +290,12 @@ The procedure is as follows:
    **Note:** Once you add OSDs and start them, the placement group health errors
    should disappear. See the next section for details.
 
+Manager daemon configuration
+============================
+
+On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.
+
+See :doc:`../mgr/administrator`
 
 Adding OSDs
 ===========
@@ -317,7 +323,7 @@ on ``node2`` and ``node3``:
 #. Prepare the OSD. ::
 
      ssh {node-name}
-     sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} --fs-type {ext4|xfs|btrfs} {data-path} [{journal-path}]
+     sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {data-path} [{journal-path}]
 
    For example::
 
@@ -342,116 +348,71 @@ Long Form
 Without the benefit of any helper utilities, create an OSD and add it to the
 cluster and CRUSH map with the following procedure. To create the first two
-OSDs with the long form procedure, execute the following on ``node2`` and
-``node3``:
-
-#. Connect to the OSD host. ::
+OSDs with the long form procedure, execute the following steps for each OSD.
 
-     ssh {node-name}
-
-#. Generate a UUID for the OSD. ::
-
-     uuidgen
-
-
-#. Create the OSD. If no UUID is given, it will be set automatically when the
-   OSD starts up. The following command will output the OSD number, which you
-   will need for subsequent steps. ::
-
-     ceph osd create [{uuid} [{id}]]
+.. note:: This procedure does not describe deployment on top of dm-crypt
+          making use of the dm-crypt 'lockbox'.
 
+#. Connect to the OSD host and become root. ::
 
-#. Create the default directory on your new OSD. ::
+     ssh {node-name}
+     sudo bash
 
-     ssh {new-osd-host}
-     sudo mkdir /var/lib/ceph/osd/{cluster-name}-{osd-number}
-
+#. Generate a UUID for the OSD. ::
 
-#. If the OSD is for a drive other than the OS drive, prepare it
-   for use with Ceph, and mount it to the directory you just created::
+     UUID=$(uuidgen)
 
-     ssh {new-osd-host}
-     sudo mkfs -t {fstype} /dev/{hdd}
-     sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/{cluster-name}-{osd-number}
+#. Generate a cephx key for the OSD. ::
 
-
-#. Initialize the OSD data directory. ::
+     OSD_SECRET=$(ceph-authtool --gen-print-key)
 
-     ssh {new-osd-host}
-     sudo ceph-osd -i {osd-num} --mkfs --mkkey --osd-uuid [{uuid}]
+#. Create the OSD. Note that an OSD ID can be provided as an
+   additional argument to ``ceph osd new`` if you need to reuse a
+   previously-destroyed OSD id. We assume that the
+   ``client.bootstrap-osd`` key is present on the machine. You may
+   alternatively execute this command as ``client.admin`` on a
+   different host where that key is present.::
 
-   The directory must be empty before you can run ``ceph-osd`` with the
-   ``--mkkey`` option. In addition, the ceph-osd tool requires specification
-   of custom cluster names with the ``--cluster`` option.
-
+     ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
+        ceph osd new $UUID -i - \
+        -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
 
-#. Register the OSD authentication key. The value of ``ceph`` for
-   ``ceph-{osd-num}`` in the path is the ``$cluster-$id``. If your
-   cluster name differs from ``ceph``, use your cluster name instead.::
+#. Create the default directory on your new OSD. ::
 
-     sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/{cluster-name}-{osd-num}/keyring
-
-
-#. Add your Ceph Node to the CRUSH map. ::
-
-     ceph [--cluster {cluster-name}] osd crush add-bucket {hostname} host
-
-   For example::
-
-     ceph osd crush add-bucket node1 host
+     mkdir /var/lib/ceph/osd/ceph-$ID
 
+#. If the OSD is for a drive other than the OS drive, prepare it
+   for use with Ceph, and mount it to the directory you just created. ::
 
-#. Place the Ceph Node under the root ``default``. ::
+     mkfs.xfs /dev/{DEV}
+     mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID
 
-     ceph osd crush move node1 root=default
+#. Write the secret to the OSD keyring file. ::
 
+     ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
+          --name osd.$ID --add-key $OSD_SECRET
 
-#. Add the OSD to the CRUSH map so that it can begin receiving data. You may
-   also decompile the CRUSH map, add the OSD to the device list, add the host as a
-   bucket (if it's not already in the CRUSH map), add the device as an item in the
-   host, assign it a weight, recompile it and set it. ::
+#. Initialize the OSD data directory. ::
 
-     ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
+     ceph-osd -i $ID --mkfs --osd-uuid $UUID
 
-   For example::
-
-     ceph osd crush add osd.0 1.0 host=node1
+#. Fix ownership. ::
 
+     chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
 
 #. After you add an OSD to Ceph, the OSD is in your configuration. However,
-   it is not yet running. The OSD is ``down`` and ``in``. You must start
+   it is not yet running. You must start
    your new OSD before it can begin receiving data.
 
-   For Ubuntu, use Upstart::
-
-     sudo start ceph-osd id={osd-num} [cluster={cluster-name}]
-
-   For example::
-
-     sudo start ceph-osd id=0
-     sudo start ceph-osd id=1
-
-   For Debian/CentOS/RHEL, use sysvinit::
-
-     sudo /etc/init.d/ceph start osd.{osd-num} [--cluster {cluster-name}]
-
-   For example::
-
-     sudo /etc/init.d/ceph start osd.0
-     sudo /etc/init.d/ceph start osd.1
-
-   In this case, to allow the start of the daemon at each reboot you
-   must create an empty file like this::
-
-     sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit
+   For modern systemd distributions::
 
-   For example::
+     systemctl enable ceph-osd@$ID
+     systemctl start ceph-osd@$ID
 
-     sudo touch /var/lib/ceph/osd/ceph-0/sysvinit
-     sudo touch /var/lib/ceph/osd/ceph-1/sysvinit
-
-   Once you start your OSD, it is ``up`` and ``in``.
-
+   For example::
 
+     systemctl enable ceph-osd@12
+     systemctl start ceph-osd@12
 
 Adding MDS
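Taken together, the long-form steps added by this diff amount to one short root shell session per OSD. The sketch below simply strings together the commands the patch introduces; it assumes the ``client.bootstrap-osd`` keyring is already in place, and the device name ``/dev/sdb`` (and the separate-data-disk step as a whole) is an illustrative assumption rather than part of the patch. ::

    # Hedged sketch: the commands are those introduced by this diff, run as
    # root on the OSD host (e.g. after "sudo bash"). /dev/sdb is a placeholder.
    UUID=$(uuidgen)                                  # fsid for the new OSD
    OSD_SECRET=$(ceph-authtool --gen-print-key)      # cephx key for osd.$ID

    # Allocate an OSD id and register the key, using the bootstrap-osd identity.
    ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
         ceph osd new $UUID -i - \
         -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)

    mkdir /var/lib/ceph/osd/ceph-$ID

    # Only if the OSD lives on its own data disk (placeholder device, assumption):
    mkfs.xfs /dev/sdb
    mount /dev/sdb /var/lib/ceph/osd/ceph-$ID

    # Write the keyring, initialize the data directory, and fix ownership.
    ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
         --name osd.$ID --add-key $OSD_SECRET
    ceph-osd -i $ID --mkfs --osd-uuid $UUID
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID

    # Start the OSD now and at boot.
    systemctl enable ceph-osd@$ID
    systemctl start ceph-osd@$ID

Repeating the block once per OSD, with a fresh UUID and secret each time, reproduces the per-step procedure in the patched document.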
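The new "Manager daemon configuration" section only points at :doc:`../mgr/administrator`. As a rough, hedged illustration of what that guide describes for a Luminous-era cluster (the capability set, keyring path, and unit name below come from that guide rather than from this patch, so treat them as assumptions), a ceph-mgr instance named after the host can be brought up approximately like this. ::

    # Hedged sketch, not part of the patch: one ceph-mgr per monitor host.
    name=$(hostname -s)
    mkdir -p /var/lib/ceph/mgr/ceph-$name

    # Create a cephx identity for the daemon and drop it into its data directory.
    ceph auth get-or-create mgr.$name \
         mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
         > /var/lib/ceph/mgr/ceph-$name/keyring
    chown -R ceph:ceph /var/lib/ceph/mgr/ceph-$name

    # Start the manager now and at boot.
    systemctl enable ceph-mgr@$name
    systemctl start ceph-mgr@$name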