.. _ceph-volume-lvm-list:

``list``
========
This subcommand will list any devices (logical and physical) that may be
associated with a Ceph cluster, as long as they contain enough metadata to
allow for that discovery.

Output is grouped by the OSD ID associated with the devices, and unlike
``ceph-disk`` it does not provide any information for devices that aren't
associated with Ceph clusters.


* ``--format`` Allows a ``json`` or ``pretty`` value. Defaults to ``pretty``,
  which will group the device information in a human-readable format.


Full Reporting
--------------
When no positional arguments are used, a full report will be presented. This
means that all devices and logical volumes found in the system will be
displayed.

Full ``pretty`` reporting for two OSDs, one with an lv as a journal and
another one with a physical device, may look similar to::

    # ceph-volume lvm list


    ====== osd.1 =======

      [journal]    /dev/journals/journal1

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      journal
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

    ====== osd.0 =======

      [data]    /dev/test_group/data-lv1

          journal uuid              cd72bd28-002a-48da-bdf6-d5b993e84f3f
          osd id                    0
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  943949f0-ce37-47ca-a33c-3413d46ee9ec
          data uuid                 TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00
          journal device            /dev/sdd1
          data device               /dev/test_group/data-lv1

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f

For logical volumes the ``devices`` key is populated with the physical devices
associated with the logical volume. Since LVM allows multiple physical devices
to be part of a logical volume, the value will be comma-separated when using
``pretty``, but an array when using ``json``.

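As a rough illustration of that difference, the snippet below parses a
trimmed-down ``json`` report (the sample dictionary here is invented for this
sketch, not real output) and joins the ``devices`` array into the
comma-separated form that ``pretty`` displays:

```python
import json

# Hypothetical, trimmed-down `ceph-volume lvm list --format=json` output,
# invented here for illustration only.
report = json.loads("""
{
    "0": [
        {
            "devices": ["/dev/sda", "/dev/sdb"],
            "lv_path": "/dev/test_group/data-lv1",
            "type": "data"
        }
    ]
}
""")

for osd_id, entries in report.items():
    for entry in entries:
        # `devices` arrives as a JSON array; join it to mimic the
        # comma-separated rendering that the `pretty` format uses.
        pretty_devices = ",".join(entry["devices"])
        print(osd_id, entry["lv_path"], pretty_devices)
```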
.. note:: Tags are displayed in a readable format. The ``osd id`` key is
   stored as a ``ceph.osd_id`` tag. For more information on lvm tag
   conventions see :ref:`ceph-volume-lvm-tag-api`.

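The readable keys follow mechanically from the tag names: drop the ``ceph.``
prefix and replace underscores with spaces. A small sketch of that convention
(not ceph-volume's actual implementation):

```python
def readable_tag(tag_name):
    """Turn a tag name like 'ceph.osd_id' into the 'osd id' key shown by
    the `pretty` format.

    A sketch of the convention described above, not ceph-volume's
    actual code.
    """
    if tag_name.startswith("ceph."):
        tag_name = tag_name[len("ceph."):]
    return tag_name.replace("_", " ")

print(readable_tag("ceph.osd_id"))        # osd id
print(readable_tag("ceph.journal_uuid"))  # journal uuid
```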

Single Reporting
----------------
Single reporting can consume both devices and logical volumes as input
(positional parameters). For logical volumes, it is required to use the group
name as well as the logical volume name.

For example, the ``data-lv2`` logical volume in the ``test_group`` volume
group can be listed in the following way::

    # ceph-volume lvm list test_group/data-lv2


    ====== osd.1 =======

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

For plain disks, the full path to the device is required. For example, for
a device like ``/dev/sdd1``, it can look like::

    # ceph-volume lvm list /dev/sdd1


    ====== osd.0 =======

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f


``json`` output
---------------
All output using ``--format=json`` will show everything the system has stored
as metadata for the devices, including tags.

No changes for readability are done with ``json`` reporting, and all
information is presented as-is. Full output as well as single devices can be
reported with this format.

For brevity, this is how a single logical volume would look with ``json``
output (note how tags aren't modified)::

    # ceph-volume lvm list --format=json test_group/data-lv1
    {
        "0": [
            {
                "devices": ["/dev/sda"],
                "lv_name": "data-lv1",
                "lv_path": "/dev/test_group/data-lv1",
                "lv_tags": "ceph.cluster_fsid=ce454d91-d748-4751-a318-ff7f7aa18ffd,ceph.data_device=/dev/test_group/data-lv1,ceph.data_uuid=TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00,ceph.journal_device=/dev/sdd1,ceph.journal_uuid=cd72bd28-002a-48da-bdf6-d5b993e84f3f,ceph.osd_fsid=943949f0-ce37-47ca-a33c-3413d46ee9ec,ceph.osd_id=0,ceph.type=data",
                "lv_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                "name": "data-lv1",
                "path": "/dev/test_group/data-lv1",
                "tags": {
                    "ceph.cluster_fsid": "ce454d91-d748-4751-a318-ff7f7aa18ffd",
                    "ceph.data_device": "/dev/test_group/data-lv1",
                    "ceph.data_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                    "ceph.journal_device": "/dev/sdd1",
                    "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
                    "ceph.osd_fsid": "943949f0-ce37-47ca-a33c-3413d46ee9ec",
                    "ceph.osd_id": "0",
                    "ceph.type": "data"
                },
                "type": "data",
                "vg_name": "test_group"
            }
        ]
    }


Synchronized information
------------------------
Before any listing type, the lvm API is queried to ensure that physical
devices that may be in use haven't changed naming. It is possible for
non-persistent device names like ``/dev/sda1`` to change to ``/dev/sdb1``.

The detection is possible because the ``PARTUUID`` is stored as part of the
metadata in the logical volume for the data lv. Even in the case of a journal
that is a physical device, this information is still stored on the data
logical volume associated with it.

If the name is no longer the same (as reported by ``blkid`` when using the
``PARTUUID``), the tag will get updated and the report will use the newly
refreshed information.
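That refresh step can be sketched as follows, with the ``blkid`` lookup
stubbed out. The function and parameter names here are invented for
illustration; they are not ceph-volume's actual API:

```python
def refresh_journal_device(tags, lookup_device_by_partuuid):
    """Update the journal_device tag if the device was renamed.

    `tags` is a dict shaped like the `tags` key in the json report;
    `lookup_device_by_partuuid` stands in for a blkid query such as
    `blkid -t PARTUUID=<uuid> -o device`. Both names are hypothetical.
    """
    current = lookup_device_by_partuuid(tags["ceph.journal_uuid"])
    if current and current != tags["ceph.journal_device"]:
        # In ceph-volume the stored tag would be rewritten at this point,
        # so the report shows the refreshed name.
        tags["ceph.journal_device"] = current
    return tags

# Simulate a journal partition whose name moved from /dev/sdd1 to /dev/sde1.
tags = {
    "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
    "ceph.journal_device": "/dev/sdd1",
}
refreshed = refresh_journal_device(tags, lambda uuid: "/dev/sde1")
print(refreshed["ceph.journal_device"])  # /dev/sde1
```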