:orphan:

======================================================
ceph-bluestore-tool -- bluestore administrative tool
======================================================

.. program:: ceph-bluestore-tool

Synopsis
========

| **ceph-bluestore-tool** *command*
  [ --dev *device* ... ]
  [ --path *osd path* ]
  [ --out-dir *dir* ]
  [ --log-file | -l *filename* ]
  [ --deep ]
| **ceph-bluestore-tool** fsck|repair --path *osd path* [ --deep ]
| **ceph-bluestore-tool** show-label --dev *device* ...
| **ceph-bluestore-tool** prime-osd-dir --dev *device* --path *osd path*
| **ceph-bluestore-tool** bluefs-export --path *osd path* --out-dir *dir*
| **ceph-bluestore-tool** bluefs-bdev-new-wal --path *osd path* --dev-target *new-device*
| **ceph-bluestore-tool** bluefs-bdev-new-db --path *osd path* --dev-target *new-device*
| **ceph-bluestore-tool** bluefs-bdev-migrate --path *osd path* --dev-target *new-device* --devs-source *device1* [--devs-source *device2*]
| **ceph-bluestore-tool** free-dump|free-score --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]


Description
===========

**ceph-bluestore-tool** is a utility to perform low-level administrative
operations on a BlueStore instance.

Commands
========

:command:`help`

    show help
:command:`fsck` [ --deep ]

    Run a consistency check on BlueStore metadata. If *--deep* is specified, also read all object data and verify checksums.

:command:`repair`

    Run a consistency check *and* repair any errors that can be fixed.

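For example, to run a deep consistency check on a stopped OSD (the OSD path below is illustrative; substitute your own OSD id)::

    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep

If errors are reported, the same invocation with ``repair`` in place of ``fsck`` will attempt to fix them.
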
:command:`bluefs-export`

    Export the contents of BlueFS (i.e., RocksDB files) to an output directory.

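For example, to export the RocksDB files of a stopped OSD into a scratch directory (both paths are illustrative)::

    ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-0 --out-dir /tmp/osd-0-bluefs
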
:command:`bluefs-bdev-sizes` --path *osd path*

    Print the device sizes, as understood by BlueFS, to stdout.

:command:`bluefs-bdev-expand` --path *osd path*

    Instruct BlueFS to check the size of its block devices and, if they have
    expanded, make use of the additional space. Note that only new files
    created by BlueFS are allocated on the preferred block device (provided it
    has enough free space), while existing files that have spilled over to the
    slow device are gradually removed as RocksDB performs compaction. In other
    words, any data that has spilled over to the slow device is moved back to
    the fast device over time.

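For example, after enlarging the partition or logical volume backing a stopped OSD, BlueFS can be told to pick up the new size (the OSD path is illustrative)::

    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
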
:command:`bluefs-bdev-new-wal` --path *osd path* --dev-target *new-device*

    Add a new WAL device to BlueFS. Fails if a WAL device already exists.

:command:`bluefs-bdev-new-db` --path *osd path* --dev-target *new-device*

    Add a new DB device to BlueFS. Fails if a DB device already exists.

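For example, to add a new DB device to a stopped OSD (the device and OSD paths are illustrative)::

    ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme0n1p1
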
:command:`bluefs-bdev-migrate` --dev-target *new-device* --devs-source *device1* [--devs-source *device2*]

    Move BlueFS data from the source device(s) to the target one; the source
    devices (except the main one) are removed on success. The target may be an
    already-attached device or a new one; in the latter case it is added to the
    OSD, replacing one of the source devices. The following replacement rules
    apply (in order of precedence; stop at the first match):

    - if the source list contains a DB volume, the target device replaces it.
    - if the source list contains a WAL volume, the target device replaces it.
    - if the source list contains only the slow volume, the operation is not permitted; it requires explicit allocation via the new-db/new-wal command.

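For example, to move BlueFS data from an existing DB volume to a new, larger device (all paths are illustrative; per the rules above, the target replaces the DB volume)::

    ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
        --devs-source /var/lib/ceph/osd/ceph-0/block.db --dev-target /dev/nvme1n1p1
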
:command:`show-label` --dev *device* [...]

    Show device label(s).

:command:`free-dump` --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]

    Dump all free regions in the allocator.

:command:`free-score` --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]

    Print a number in [0, 1] that represents the fragmentation of the
    allocator's free space. 0 means all free space is in a single chunk; 1
    represents the worst possible fragmentation.

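For example, to check the fragmentation of the main block allocator of a stopped OSD (the OSD path is illustrative)::

    ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-0 --allocator block
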
Options
=======

.. option:: --dev *device*

   Add *device* to the list of devices to consider.

.. option:: --devs-source *device*

   Add *device* to the list of devices to consider as sources for the migrate operation.

.. option:: --dev-target *device*

   Specify the target *device* for the migrate operation, or the device to add when creating a new DB/WAL.

.. option:: --path *osd path*

   Specify an osd path. In most cases, the device list is inferred from the symlinks present in *osd path*. This is usually simpler than explicitly specifying the device(s) with --dev.

.. option:: --out-dir *dir*

   Output directory for bluefs-export.

.. option:: -l, --log-file *log file*

   File to log to.

.. option:: --log-level *num*

   Debug log level. Default is 30 (extremely verbose), 20 is very
   verbose, 10 is verbose, and 1 is not very verbose.

.. option:: --deep

   Deep scrub/repair (read and validate object data, not just metadata).

.. option:: --allocator *name*

   Select the allocator(s) to operate on; used by the *free-dump* and *free-score* commands.

Device labels
=============

Every BlueStore block device has a single block label at the beginning of the
device. You can dump the contents of the label with::

    ceph-bluestore-tool show-label --dev *device*

The main device will have a lot of metadata, including information
that used to be stored in small files in the OSD data directory. The
auxiliary devices (db and wal) will only have the minimum required
fields (OSD UUID, size, device type, birth time).
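
An abridged, illustrative JSON label for a main device might look like the
following (all values below are made up)::

    {
        "/var/lib/ceph/osd/ceph-0/block": {
            "osd_uuid": "11111111-2222-3333-4444-555555555555",
            "size": 107374182400,
            "btime": "2020-01-01 12:00:00.000000",
            "description": "main"
        }
    }
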
152
153 OSD directory priming
154 =====================
155
156 You can generate the content for an OSD data directory that can start up a
157 BlueStore OSD with the *prime-osd-dir* command::
158
159 ceph-bluestore-tool prime-osd-dir --dev *main device* --path /var/lib/ceph/osd/ceph-*id*
160
161 BlueFS log rescue
162 =====================
163
164 Some versions of BlueStore were susceptible to BlueFS log growing extremaly large -
165 beyond the point of making booting OSD impossible. This state is indicated by
166 booting that takes very long and fails in _replay function.
167
168 This can be fixed by::
169 ceph-bluestore-tool fsck --path *osd path* --bluefs_replay_recovery=true
170
171 It is advised to first check if rescue process would be successfull::
172 ceph-bluestore-tool fsck --path *osd path* \
173 --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
174
175 If above fsck is successfull fix procedure can be applied.
176
177 Availability
178 ============
179
180 **ceph-bluestore-tool** is part of Ceph, a massively scalable,
181 open-source, distributed storage system. Please refer to the Ceph
182 documentation at http://ceph.com/docs for more information.
183
184
185 See also
186 ========
187
188 :doc:`ceph-osd <ceph-osd>`\(8)