.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 16, 2022
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools, see the
.Xr zpoolconcepts 7
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
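.Pp
For example, initialization of all eligible devices in a pool named
.Ar tank
.Pq a placeholder name
can be started with:
.Dl # Nm zpool Cm initialize Ar tank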
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
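.Pp
For example, label information can be cleared from a device that is no longer
part of any pool
.Pq the device path is a placeholder :
.Dl # Nm zpool Cm labelclear Pa /dev/sda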
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Converts a non-redundant disk into a mirror, or increases
the redundancy level of an existing mirror
.Cm ( attach Ns ), or performs the inverse operation (
.Cm detach Ns ).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which
decreases its redundancy).
.El
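.Pp
For example, a pool of two-way mirrors named
.Ar tank
can be split into a new, independent pool named
.Ar newtank
.Pq both names are placeholders :
.Dl # Nm zpool Cm split Ar tank newtank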
.
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
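.Pp
For example, all properties of a pool named
.Ar tank
.Pq a placeholder name
can be retrieved, and a single property set, with:
.Dl # Nm zpool Cm get Sy all Ar tank
.Dl # Nm zpool Cm set Sy autoexpand Ns = Ns Sy on Ar tank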
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool
is specified.
.El
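.Pp
For example, the health and the command history of a pool named
.Ar tank
.Pq a placeholder name
can be inspected with:
.Dl # Nm zpool Cm status Ar tank
.Dl # Nm zpool Cm history Ar tank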
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
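.Pp
For example, a checkpoint can be taken before a risky administrative change
and, if necessary, rolled back to by exporting and re-importing the pool
.Pq Ar tank No is a placeholder name :
.Dl # Nm zpool Cm checkpoint Ar tank
.Dl # Nm zpool Cm export Ar tank
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar tank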
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
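.Pp
For example, a suspect device can be taken offline, brought back online, and
have its error counters cleared
.Pq pool and device names are placeholders :
.Dl # Nm zpool Cm offline Ar tank Pa sda
.Dl # Nm zpool Cm online Ar tank Pa sda
.Dl # Nm zpool Cm clear Ar tank Pa sda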
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
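.Pp
For example, a pool cloned at the block level from another pool can be given a
fresh unique identifier
.Pq the pool name is a placeholder :
.Dl # Nm zpool Cm reguid Ar tank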
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
.\" Examples 6, 14 are shared with zpool-add.8.
.\" Examples 7, 16 are shared with zpool-list.8.
.\" Examples 8 are shared with zpool-destroy.8.
.\" Examples 9 are shared with zpool-export.8.
.\" Examples 10 are shared with zpool-import.8.
.\" Examples 11 are shared with zpool-upgrade.8.
.\" Examples 15 are shared with zpool-remove.8.
.\" Examples 17 are shared with zpool-status.8.
.\" Examples 14, 17 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Making a Non-Mirrored ZFS Storage Pool Mirrored
The following command converts an existing single device
.Ar sda
into a mirror by attaching a second device to it,
.Ar sdb .
.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
.
.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 7 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 8 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 9 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 10 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank    ONLINE
          mirror  ONLINE
            sda   ONLINE
            sdb   ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 12 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 15 : No Removing a Mirrored Top-Level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 16 : No Displaying Expanded Space on a Device
The following command displays the detailed information for the pool
.Ar data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.
.Ss Example 17 : No Adding Output Columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
NAME        STATE     READ WRITE CKSUM vendor  model        size
tank        ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    U1      ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T
    U10     ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T
    U11     ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T
    U12     ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T
    U13     ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T
    U14     ONLINE       0     0     0 SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output.
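For example, color can be enabled for a single invocation
.Pq assuming a shell that supports per-command environment assignments :
.Dl # ZFS_COLOR=1 zpool status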
.It Sy ZPOOL_AUTO_POWER_ON_SLOT
Automatically attempt to turn on a drive's enclosure slot power when running
the
.Nm zpool Cm online
or
.Nm zpool Cm clear
commands.
This has the same effect as passing the
.Fl -power
option to those commands.
.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
The maximum time in milliseconds to wait for a slot power sysfs value
to return the correct value after writing it.
For example, after writing "on" to the sysfs enclosure slot power_control file,
it can take some time for the enclosure to power up the slot and report "on"
when the power_control value is read back.
Defaults to 30 seconds (30000 ms) if not set.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool Cm import .
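For example, to restrict the device search to stable by-id names during import
.Pq the directory shown is illustrative :
.Dl # ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import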
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool Cm import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config, and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import, or prevented from adding them on
.Nm zpool Cm create
or
.Nm zpool Cm add ,
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8