.. _ceph-volume-lvm-prepare:

``prepare``
===========
This subcommand allows a :term:`filestore` setup (:term:`bluestore` support is
planned) and currently consumes only logical volumes for both the data and
journal. It will not create or modify the logical volumes except for adding
extra metadata.

.. note:: This is part of a two step process to deploy an OSD. If looking for
          a single-call way, please see :ref:`ceph-volume-lvm-create`

As part of preparing a volume (or volumes) to work with Ceph, the tool will
assign a few pieces of metadata using :term:`LVM tags`, to help identify the
volumes later.

:term:`LVM tags` make volumes easy to discover later, and help identify them
as part of a Ceph system and what role they have (journal, filestore,
bluestore, etc.)
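
Since these are plain LVM tags, they can be inspected with standard LVM
tooling once applied; a minimal check (using ``lvs`` directly, not
``ceph-volume``) could look like::

    # list logical volumes along with any tags attached to them
    lvs -o lv_name,vg_name,lv_tags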

Although initially only :term:`filestore` is supported (and it is the default)
the backend can be specified with:

* :ref:`--filestore <ceph-volume-lvm-prepare_filestore>`
* ``--bluestore``

.. when available, this will need to be updated to:
.. * :ref:`--bluestore <ceph-volume-prepare_bluestore>`

.. _ceph-volume-lvm-prepare_filestore:

``filestore``
-------------
This is the default OSD backend and allows preparation of logical volumes for
a :term:`filestore` OSD.

The process is *very* strict: it requires two logical volumes that are ready
to be used. No special preparation is needed for these volumes other than
following the minimum size requirements for data and journal.
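
As a sketch, the two volumes could be created beforehand with plain LVM
commands; the device, volume group, and logical volume names here are
hypothetical, and the sizes are only an example::

    # create a volume group on an unused device, then carve out the
    # two logical volumes that prepare expects
    vgcreate ceph-vg /dev/sdb
    lvcreate --name osd-data --size 50G ceph-vg
    lvcreate --name osd-journal --size 5G ceph-vg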

The API call looks like::

    ceph-volume lvm prepare --filestore --data data --journal journal

The journal *must* be a logical volume, just like the data volume, and that
argument is always required even if both live under the same volume group.
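
Continuing with the hypothetical volume names from the sketch above, an
invocation with both volumes in the same volume group could look like::

    ceph-volume lvm prepare --filestore \
        --data ceph-vg/osd-data \
        --journal ceph-vg/osd-journal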

A generated UUID is used to ask the cluster for a new OSD ID. These two
pieces of information are crucial for identifying an OSD and will later be
used throughout the :ref:`ceph-volume-lvm-activate` process.

The OSD data directory is created using the following convention::

    /var/lib/ceph/osd/<cluster name>-<osd id>

At this point the data volume is mounted at this location, and the journal
volume is linked::

    ln -s /path/to/journal /var/lib/ceph/osd/<cluster name>-<osd id>/journal
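
For example, for the default cluster name (``ceph``) and an OSD ID of ``0``,
the tool ends up doing the equivalent of the following (the device paths
reuse the hypothetical names from above)::

    mount /dev/ceph-vg/osd-data /var/lib/ceph/osd/ceph-0
    ln -s /dev/ceph-vg/osd-journal /var/lib/ceph/osd/ceph-0/journal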

The monmap is fetched using the bootstrap key from the OSD::

    /usr/bin/ceph --cluster ceph --name client.bootstrap-osd \
        --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
        mon getmap -o /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
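
If desired, the fetched monmap can be sanity-checked with ``monmaptool``;
this is only a verification step, not something ``prepare`` itself does::

    monmaptool --print /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap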

``ceph-osd`` will be called to populate the OSD directory (which is already
mounted), re-using all the pieces of information from the initial steps::

    ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> \
        --monmap /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap --osd-data \
        /var/lib/ceph/osd/<cluster name>-<osd id> --osd-journal /var/lib/ceph/osd/<cluster name>-<osd id>/journal \
        --osd-uuid <osd uuid> --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \
        --setuser ceph --setgroup ceph
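
Once ``--mkfs`` completes, the populated data directory can be inspected; a
minimal check, assuming the default cluster name and an OSD ID of ``0``::

    # among other files, expect fsid, keyring, and the journal symlink
    ls /var/lib/ceph/osd/ceph-0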

.. _ceph-volume-lvm-existing-osds:

Existing OSDs
-------------
For existing clusters that want to use this new system and have OSDs that are
already running, there are a few things to take into account:

.. warning:: this process will forcefully format the data device, destroying
             existing data, if any.

* OSD paths should follow this convention::

    /var/lib/ceph/osd/<cluster name>-<osd id>

* Preferably, no other mechanisms to mount the volume should exist; any that
  do (like ``fstab`` mount points) should be removed
* There is currently no support for encrypted volumes

The one-time process for an existing OSD, with an ID of 0 and using a
``"ceph"`` cluster name, would look like::

    ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
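
The value for ``--osd-fsid`` is the existing OSD's own fsid; for an OSD that
has run before, it can typically be read from its data directory (this path
assumes OSD 0 and the default cluster name)::

    cat /var/lib/ceph/osd/ceph-0/fsid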

The command line tool will not contact the monitor to generate an OSD ID and
will format the LVM device in addition to storing the metadata on it so that
it can later be started (for a detailed metadata description see
:ref:`ceph-volume-lvm-tags`).


.. _ceph-volume-lvm-prepare_bluestore:

``bluestore``
-------------
This subcommand is planned but not currently implemented.


Storing metadata
----------------
The following tags will get applied as part of the preparation process
regardless of the type of volume (journal or data) and also regardless of the
OSD backend:

* ``cluster_fsid``
* ``data_device``
* ``journal_device``
* ``encrypted``
* ``osd_fsid``
* ``osd_id``
* ``block``
* ``db``
* ``wal``
* ``lockbox_device``

.. note:: For the complete lvm tag conventions see :ref:`ceph-volume-lvm-tag-api`
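
These are attached through LVM's standard tagging mechanism; conceptually,
applying one is equivalent to an ``lvchange`` call like the following (the
``ceph.`` prefix and the values shown are illustrative)::

    lvchange --addtag ceph.osd_id=0 ceph-vg/osd-data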


Summary
-------
To recap the ``prepare`` process:

#. Accept only logical volumes for data and journal (both required)
#. Generate a UUID for the OSD
#. Ask the monitor for an OSD ID, reusing the generated UUID
#. OSD data directory is created and the data volume mounted
#. Journal is symlinked from the data volume to the journal location
#. monmap is fetched for activation
#. Devices are mounted and the data directory is populated by ``ceph-osd``
#. Data and journal volumes are assigned all the Ceph metadata using LVM tags