'\" te
.\"
.\" Copyright 2013 Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\"
.TH SPL-MODULE-PARAMETERS 5 "Oct 28, 2017"
.SH NAME
spl\-module\-parameters \- SPL module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the SPL module.
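Parameters such as those documented below are typically set either at module load time or at runtime through sysfs. A minimal sketch, assuming root privileges and a loaded spl module (the parameter name used here is one documented below):

```shell
# Build the sysfs path for a given SPL parameter (illustrative helper).
param=spl_kmem_cache_kmem_threads
node="/sys/module/spl/parameters/$param"
echo "$node"
# On a live system (as root) the value could then be read or changed:
#   cat "$node"
#   echo 8 > "$node"
# Load-time alternative, e.g. in /etc/modprobe.d/spl.conf:
#   options spl spl_kmem_cache_kmem_threads=8
```

Not every parameter may be writable at runtime; read-only parameters must be set via modprobe options.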
12 | ||
13 | .SS "Module parameters" | |
14 | .sp | |
15 | .LP | |
16 | ||
17 | .sp | |
18 | .ne 2 | |
19 | .na | |
\fBspl_kmem_cache_expire\fR (uint)
.ad
.RS 12n
Cache expiration is part of default Illumos cache behavior. The idea is
that objects in magazines which have not been recently accessed should be
returned to the slabs periodically. This is known as cache aging and
when enabled objects will typically be returned after 15 seconds.
.sp
On the other hand Linux slabs are designed to never move objects back to
the slabs unless there is memory pressure. This is possible because under
Linux the cache will be notified when memory is low and objects can be
released.
.sp
By default only the Linux method is enabled. It has been shown to improve
responsiveness on low memory systems without negatively impacting the
performance of systems with more memory. This policy may be changed by
setting the \fBspl_kmem_cache_expire\fR bit mask as follows; both policies
may be enabled concurrently.
.sp
0x01 - Aging (Illumos), 0x02 - Low memory (Linux)
.sp
Default value: \fB0x02\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_kmem_threads\fR (uint)
.ad
.RS 12n
The number of threads created for the spl_kmem_cache task queue. This task
queue is responsible for allocating new slabs for use by the kmem caches.
For the majority of systems and workloads only a small number of threads are
required.
.sp
Default value: \fB4\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_reclaim\fR (uint)
.ad
.RS 12n
When this is set it prevents Linux from being able to rapidly reclaim all the
memory held by the kmem caches. This may be useful in circumstances where
it's preferable that Linux reclaim memory from some other subsystem first.
Setting this will increase the likelihood of out of memory events on a
memory constrained system.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_obj_per_slab\fR (uint)
.ad
.RS 12n
The preferred number of objects per slab in the cache. In general, a larger
value will increase the cache's memory footprint while decreasing the time
required to perform an allocation. Conversely, a smaller value will minimize
the footprint and improve cache reclaim time but individual allocations may
take longer.
.sp
Default value: \fB8\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_obj_per_slab_min\fR (uint)
.ad
.RS 12n
The minimum number of objects allowed per slab. Normally slabs will contain
\fBspl_kmem_cache_obj_per_slab\fR objects but for caches that contain very
large objects it's desirable to only have a few, or even just one, object per
slab.
.sp
Default value: \fB1\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_max_size\fR (uint)
.ad
.RS 12n
The maximum size of a kmem cache slab in MiB. This effectively limits
the maximum cache object size to \fBspl_kmem_cache_max_size\fR /
\fBspl_kmem_cache_obj_per_slab\fR. Caches may not be created with
objects larger than this limit.
.sp
Default value: \fB32 (64-bit) or 4 (32-bit)\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_slab_limit\fR (uint)
.ad
.RS 12n
For small objects the Linux slab allocator should be used to make the most
efficient use of the memory. However, large objects are not supported by
the Linux slab and therefore the SPL implementation is preferred. This
value is used to determine the cutoff between a small and large object.
.sp
Objects of \fBspl_kmem_cache_slab_limit\fR or smaller will be allocated
using the Linux slab allocator; larger objects use the SPL allocator. A
cutoff of 16K was determined to be optimal for architectures using 4K pages.
.sp
Default value: \fB16,384\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_kmem_limit\fR (uint)
.ad
.RS 12n
Depending on the size of a cache object it may be backed by kmalloc()'d
or vmalloc()'d memory. This is because the size of the required allocation
greatly impacts the best way to allocate the memory.
.sp
When objects are small and only a small number of memory pages need to be
allocated, ideally just one, then kmalloc() is very efficient. However,
when allocating multiple pages with kmalloc() it gets increasingly expensive
because the pages must be physically contiguous.
.sp
For this reason we shift to vmalloc() for slabs of large objects, which
removes the need for contiguous pages. We cannot use vmalloc() in
all cases because there is significant locking overhead involved. This
function takes a single global lock over the entire virtual address range
which serializes all allocations. Using slightly different allocation
functions for small and large objects allows us to handle a wide range of
object sizes.
.sp
The \fBspl_kmem_cache_kmem_limit\fR value is used to determine this cutoff
size. One quarter of PAGE_SIZE is used as the default value because
\fBspl_kmem_cache_obj_per_slab\fR defaults to 8. This means that at
most we will need to allocate two contiguous pages.
.sp
Default value: \fBPAGE_SIZE/4\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_alloc_warn\fR (uint)
.ad
.RS 12n
As a general rule kmem_alloc() allocations should be small, preferably
just a few pages, since they must be physically contiguous. Therefore, a
rate limited warning will be printed to the console for any kmem_alloc()
which exceeds a reasonable threshold.
.sp
The default warning threshold is set to eight pages but capped at 32K to
accommodate systems using large pages. This value was selected to be small
enough to ensure the largest allocations are quickly noticed and fixed,
but large enough to avoid logging any warnings when an allocation size is
larger than optimal but not a serious concern. Since this value is tunable,
developers are encouraged to set it lower when testing so any new largish
allocations are quickly caught. These warnings may be disabled by setting
the threshold to zero.
.sp
Default value: \fB32,768\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_alloc_max\fR (uint)
.ad
.RS 12n
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE.
Allocations which are marginally smaller than this limit may succeed but
should still be avoided due to the expense of locating a contiguous range
of free pages. Therefore, a maximum kmem size with a reasonable safety
margin of 4x is set. kmem_alloc() allocations larger than this maximum
will quickly fail. vmem_alloc() allocations less than or equal to this
value will use kmalloc(), but shift to vmalloc() when exceeding this value.
.sp
Default value: \fBKMALLOC_MAX_SIZE/4\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_magazine_size\fR (uint)
.ad
.RS 12n
Cache magazines are an optimization designed to minimize the cost of
allocating memory. They do this by keeping a per-cpu cache of recently
freed objects, which can then be reallocated without taking a lock. This
can improve performance on highly contended caches. However, because
objects in magazines will prevent otherwise empty slabs from being
immediately released this may not be ideal for low memory machines.
.sp
For this reason \fBspl_kmem_cache_magazine_size\fR can be used to set a
maximum magazine size. When this value is set to 0 the magazine size will
be automatically determined based on the object size. Otherwise magazines
will be limited to 2-256 objects per magazine (i.e. per cpu). Magazines
may never be entirely disabled in this implementation.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_hostid\fR (ulong)
.ad
.RS 12n
The system hostid; when set, this can be used to uniquely identify a system.
By default this value is set to zero which indicates the hostid is disabled.
It can be explicitly enabled by placing a unique non-zero value in
\fB/etc/hostid\fR.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_hostid_path\fR (charp)
.ad
.RS 12n
The expected path to locate the system hostid when specified. This value
may be overridden for non-standard configurations.
.sp
Default value: \fB/etc/hostid\fR
.RE
.sp
.ne 2
.na
\fBspl_panic_halt\fR (uint)
.ad
.RS 12n
Cause a kernel panic on assertion failures. When not enabled, the thread is
halted to facilitate further debugging.
.sp
Set to a non-zero value to enable.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_kick\fR (uint)
.ad
.RS 12n
Kick stuck taskqs to spawn threads. When a non-zero value is written to this
parameter, all taskqs are scanned. Any taskq with a pending task more than 5
seconds old is kicked to spawn more threads. This can be used if you find a
rare deadlock occurs because one or more taskqs didn't spawn a thread when
they should have.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_bind\fR (int)
.ad
.RS 12n
Bind taskq threads to specific CPUs. When enabled all taskq threads will
be distributed evenly over the available CPUs. By default, this behavior
is disabled to allow the Linux scheduler the maximum flexibility to determine
where a thread should run.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_dynamic\fR (int)
.ad
.RS 12n
Allow dynamic taskqs. When enabled taskqs which set the TASKQ_DYNAMIC flag
will by default create only a single thread. New threads will be created on
demand up to a maximum allowed number to facilitate the completion of
outstanding tasks. Threads which are no longer needed will be promptly
destroyed. By default this behavior is enabled but it can be disabled to
aid performance analysis or troubleshooting.
.sp
Default value: \fB1\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_priority\fR (int)
.ad
.RS 12n
Allow newly created taskq threads to set a non-default scheduler priority.
When enabled the priority specified when a taskq is created will be applied
to all threads created by that taskq. When disabled all threads will use
the default Linux kernel thread priority. By default, this behavior is
enabled.
.sp
Default value: \fB1\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_sequential\fR (int)
.ad
.RS 12n
The number of items a taskq worker thread must handle without interruption
before requesting a new worker thread be spawned. This is used to control
how quickly taskqs ramp up the number of threads processing the queue.
Because Linux thread creation and destruction are relatively inexpensive,
a small default value has been selected. This means that normally threads
will be created aggressively, which is desirable. Increasing this value will
result in a slower thread creation rate which may be preferable for some
configurations.
.sp
Default value: \fB4\fR
.RE
.sp
.ne 2
.na
\fBspl_max_show_tasks\fR (uint)
.ad
.RS 12n
The maximum number of tasks per pending list in each taskq shown in
/proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file
walks the lists with a lock held, so reading it could cause a lock up if a
list grows too large without the output being limited. "(truncated)" will
be shown if a list is larger than the limit.
.sp
Default value: \fB512\fR
.RE