'\" te
.\"
.\" Copyright 2013 Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\"
.TH SPL-MODULE-PARAMETERS 5 "Nov 18, 2013"
.SH NAME
spl\-module\-parameters \- SPL module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the SPL module.

.SS "Module parameters"
.sp
.LP

.sp
.ne 2
.na
\fBspl_kmem_cache_expire\fR (uint)
.ad
.RS 12n
Cache expiration is part of the default Illumos cache behavior. The idea is
that objects in magazines which have not been recently accessed should be
returned to the slabs periodically. This is known as cache aging and,
when enabled, objects will typically be returned after 15 seconds.
.sp
On the other hand, Linux slabs are designed never to move objects back to
the slabs unless there is memory pressure. This is possible because under
Linux the cache will be notified when memory is low and objects can be
released.
.sp
By default only the Linux method is enabled. It has been shown to improve
responsiveness on low memory systems without negatively impacting the
performance of systems with more memory. This policy may be changed by
setting the \fBspl_kmem_cache_expire\fR bit mask as follows; both policies
may be enabled concurrently.
.sp
0x01 - Aging (Illumos), 0x02 - Low memory (Linux)
.sp
Default value: \fB0x02\fR
.RE
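As an illustration, the two policy bits are simply OR'd into a single mask. The sketch below only computes the value; the sysfs path in the comment is the conventional location for SPL parameters and exists only while the module is loaded.

```shell
# Combine the aging (0x01, Illumos) and low-memory (0x02, Linux) policies.
mask=$(( 0x01 | 0x02 ))
printf '0x%02x\n' "$mask"
# At runtime the mask would typically be applied with (as root):
#   echo 3 > /sys/module/spl/parameters/spl_kmem_cache_expire
```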

.sp
.ne 2
.na
\fBspl_kmem_cache_reclaim\fR (uint)
.ad
.RS 12n
When set, this prevents Linux from being able to rapidly reclaim all the
memory held by the kmem caches. This may be useful in circumstances where
it's preferable that Linux reclaim memory from some other subsystem first.
Setting this will increase the likelihood of out of memory events on a
memory constrained system.
.sp
Default value: \fB0\fR
.RE

.sp
.ne 2
.na
\fBspl_kmem_cache_obj_per_slab\fR (uint)
.ad
.RS 12n
The preferred number of objects per slab in the cache. In general, a larger
value will increase the cache's memory footprint while decreasing the time
required to perform an allocation. Conversely, a smaller value will minimize
the footprint and improve cache reclaim time, but individual allocations may
take longer.
.sp
Default value: \fB8\fR
.RE

.sp
.ne 2
.na
\fBspl_kmem_cache_obj_per_slab_min\fR (uint)
.ad
.RS 12n
The minimum number of objects allowed per slab. Normally slabs will contain
\fBspl_kmem_cache_obj_per_slab\fR objects but for caches that contain very
large objects it's desirable to only have a few, or even just one, object per
slab.
.sp
Default value: \fB1\fR
.RE

.sp
.ne 2
.na
\fBspl_kmem_cache_max_size\fR (uint)
.ad
.RS 12n
The maximum size of a kmem cache slab in MiB. This effectively limits
the maximum cache object size to \fBspl_kmem_cache_max_size\fR /
\fBspl_kmem_cache_obj_per_slab\fR. Caches may not be created with
objects larger than this limit.
.sp
Default value: \fB32 (64-bit) or 4 (32-bit)\fR
.RE
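The object-size ceiling described above follows directly from the defaults; a small sketch using the 64-bit default of 32 MiB and the \fBspl_kmem_cache_obj_per_slab\fR default of 8:

```shell
# Maximum object size = spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab
max_size_mib=32      # 64-bit default, in MiB
obj_per_slab=8       # spl_kmem_cache_obj_per_slab default
max_obj=$(( max_size_mib * 1024 * 1024 / obj_per_slab ))
echo "$max_obj"      # largest permitted object, in bytes (4 MiB)
```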

.sp
.ne 2
.na
\fBspl_kmem_cache_slab_limit\fR (uint)
.ad
.RS 12n
For small objects the Linux slab allocator should be used to make the most
efficient use of the memory. However, large objects are not supported by
the Linux slab and therefore the SPL implementation is preferred. This
value is used to determine the cutoff between a small and large object.
.sp
Objects of \fBspl_kmem_cache_slab_limit\fR or smaller will be allocated
using the Linux slab allocator; larger objects will use the SPL allocator.
A cutoff of 16K was determined to be optimal for architectures using 4K
pages.
.sp
Default value: \fB16,384\fR
.RE
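The cutoff logic amounts to a simple comparison against the 16K default, sketched here for a few illustrative object sizes:

```shell
# Decide which allocator backs an object of a given size (default cutoff 16K).
slab_limit=16384
for size in 4096 16384 65536; do
    if [ "$size" -le "$slab_limit" ]; then
        echo "$size: Linux slab allocator"
    else
        echo "$size: SPL allocator"
    fi
done
```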

.sp
.ne 2
.na
\fBspl_kmem_cache_kmem_limit\fR (uint)
.ad
.RS 12n
Depending on the size of a cache object it may be backed by kmalloc()'d
or vmalloc()'d memory. This is because the size of the required allocation
greatly impacts the best way to allocate the memory.
.sp
When objects are small and only a small number of memory pages need to be
allocated, ideally just one, then kmalloc() is very efficient. However,
when allocating multiple pages with kmalloc() it gets increasingly expensive
because the pages must be physically contiguous.
.sp
For this reason we shift to vmalloc() for slabs of large objects, which
removes the need for contiguous pages. We cannot use vmalloc() in
all cases because there is significant locking overhead involved. This
function takes a single global lock over the entire virtual address range
which serializes all allocations. Using slightly different allocation
functions for small and large objects allows us to handle a wide range of
object sizes.
.sp
The \fBspl_kmem_cache_kmem_limit\fR value is used to determine this cutoff
size. One quarter of the PAGE_SIZE is used as the default value because
\fBspl_kmem_cache_obj_per_slab\fR defaults to 8. This means that at
most we will need to allocate two contiguous pages.
.sp
Default value: \fBPAGE_SIZE/4\fR
.RE
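Assuming 4K pages and the \fBspl_kmem_cache_obj_per_slab\fR default of 8 given earlier, the worst-case contiguous allocation implied by this cutoff can be computed as a sketch:

```shell
page_size=4096                      # assumed 4K pages
obj_per_slab=8                      # spl_kmem_cache_obj_per_slab default
kmem_limit=$(( page_size / 4 ))     # default cutoff: 1024 bytes
# A full slab of maximum-size kmalloc'd objects spans this many contiguous pages:
pages=$(( kmem_limit * obj_per_slab / page_size ))
echo "$pages"
```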

.sp
.ne 2
.na
\fBspl_kmem_alloc_warn\fR (uint)
.ad
.RS 12n
As a general rule kmem_alloc() allocations should be small, preferably
just a few pages, since they must be physically contiguous. Therefore, a
rate limited warning will be printed to the console for any kmem_alloc()
which exceeds a reasonable threshold.
.sp
The default warning threshold is set to eight pages but capped at 32K to
accommodate systems using large pages. This value was selected to be small
enough to ensure the largest allocations are quickly noticed and fixed,
but large enough to avoid logging any warnings when an allocation size is
larger than optimal but not a serious concern. Since this value is tunable,
developers are encouraged to set it lower when testing so any new largish
allocations are quickly caught. These warnings may be disabled by setting
the threshold to zero.
.sp
Default value: \fB32,768\fR
.RE
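The threshold arithmetic above is straightforward; a sketch assuming 4K pages:

```shell
page_size=4096                  # assumed page size
threshold=$(( 8 * page_size ))  # eight pages
cap=32768                       # 32K cap for systems using large pages
[ "$threshold" -gt "$cap" ] && threshold=$cap
echo "$threshold"               # with 4K pages, eight pages is exactly the cap
```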

.sp
.ne 2
.na
\fBspl_kmem_alloc_max\fR (uint)
.ad
.RS 12n
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE.
Allocations which are marginally smaller than this limit may succeed but
should still be avoided due to the expense of locating a contiguous range
of free pages. Therefore, a maximum kmem size with a reasonable safety
margin of 4x is set. kmem_alloc() allocations larger than this maximum
will quickly fail. vmem_alloc() allocations less than or equal to this
value will use kmalloc(), but shift to vmalloc() when exceeding this value.
.sp
Default value: \fBKMALLOC_MAX_SIZE/4\fR
.RE

.sp
.ne 2
.na
\fBspl_kmem_cache_magazine_size\fR (uint)
.ad
.RS 12n
Cache magazines are an optimization designed to minimize the cost of
allocating memory. They do this by keeping a per-CPU cache of recently
freed objects, which can then be reallocated without taking a lock. This
can improve performance on highly contended caches. However, because
objects in magazines will prevent otherwise empty slabs from being
immediately released, this may not be ideal for low memory machines.
.sp
For this reason \fBspl_kmem_cache_magazine_size\fR can be used to set a
maximum magazine size. When this value is set to 0 the magazine size will
be automatically determined based on the object size. Otherwise magazines
will be limited to 2-256 objects per magazine (i.e. per CPU). Magazines
may never be entirely disabled in this implementation.
.sp
Default value: \fB0\fR
.RE

.sp
.ne 2
.na
\fBspl_hostid\fR (ulong)
.ad
.RS 12n
The system hostid; when set, this can be used to uniquely identify a system.
By default this value is set to zero, which indicates the hostid is disabled.
It can be explicitly enabled by placing a unique non-zero value in
\fB/etc/hostid\fR.
.sp
Default value: \fB0\fR
.RE

.sp
.ne 2
.na
\fBspl_hostid_path\fR (charp)
.ad
.RS 12n
The expected path to locate the system hostid when specified. This value
may be overridden for non-standard configurations.
.sp
Default value: \fB/etc/hostid\fR
.RE
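On many distributions the hostid file can be generated with genhostid(1). The sketch below writes the same kind of 4-byte binary file to a temporary path; the little-endian byte order and the hex value are assumptions for illustration, and the real file belongs at the \fBspl_hostid_path\fR location.

```shell
hostid_hex=00bab10c                 # example value only; use a unique non-zero id
tmp=$(mktemp)
# Write the four bytes in little-endian order (assumed on-disk format).
printf "\x${hostid_hex:6:2}\x${hostid_hex:4:2}\x${hostid_hex:2:2}\x${hostid_hex:0:2}" > "$tmp"
size=$(wc -c < "$tmp")
echo "$size"                        # the file must be exactly 4 bytes
rm -f "$tmp"
```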

.sp
.ne 2
.na
\fBspl_taskq_thread_bind\fR (int)
.ad
.RS 12n
Bind taskq threads to specific CPUs. When enabled all taskq threads will
be distributed evenly over the available CPUs. By default, this behavior
is disabled to allow the Linux scheduler the maximum flexibility to determine
where a thread should run.
.sp
Default value: \fB0\fR
.RE
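Parameters such as this one are usually set persistently through a modprobe configuration fragment, for example (the filename is illustrative):

```
# /etc/modprobe.d/spl.conf (illustrative filename)
options spl spl_taskq_thread_bind=1
```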

.sp
.ne 2
.na
\fBspl_taskq_thread_dynamic\fR (int)
.ad
.RS 12n
Allow dynamic taskqs. When enabled taskqs which set the TASKQ_DYNAMIC flag
will by default create only a single thread. New threads will be created on
demand up to a maximum allowed number to facilitate the completion of
outstanding tasks. Threads which are no longer needed will be promptly
destroyed. By default this behavior is enabled but it can be disabled to
aid performance analysis or troubleshooting.
.sp
Default value: \fB1\fR
.RE

.sp
.ne 2
.na
\fBspl_taskq_thread_priority\fR (int)
.ad
.RS 12n
Allow newly created taskq threads to set a non-default scheduler priority.
When enabled the priority specified when a taskq is created will be applied
to all threads created by that taskq. When disabled all threads will use
the default Linux kernel thread priority. By default, this behavior is
enabled.
.sp
Default value: \fB1\fR
.RE

.sp
.ne 2
.na
\fBspl_taskq_thread_sequential\fR (int)
.ad
.RS 12n
The number of items a taskq worker thread must handle without interruption
before requesting a new worker thread be spawned. This is used to control
how quickly taskqs ramp up the number of threads processing the queue.
Because Linux thread creation and destruction are relatively inexpensive, a
small default value has been selected. This means that normally threads will
be created aggressively, which is desirable. Increasing this value will
result in a slower thread creation rate, which may be preferable for some
configurations.
.sp
Default value: \fB4\fR
.RE