.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
.\" Copyright (c) 2014 Integros [integros.com]
.\" Copyright 2019 Richard Laager. All rights reserved.
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dt ZFSCONCEPTS 8
.Os
.
.Sh NAME
.Nm zfsconcepts
.Nd overview of ZFS concepts
.
.Sh DESCRIPTION
.Ss ZFS File System Hierarchy
A ZFS storage pool is a logical collection of devices that provide space for
datasets.
A storage pool is also the root of the ZFS file system hierarchy.
.Pp
The root of the pool can be accessed as a file system, supporting operations
such as mounting and unmounting, taking snapshots, and setting properties.
The physical storage characteristics, however, are managed by the
.Xr zpool 8
command.
.Pp
See
.Xr zpool 8
for more information on creating and administering pools.
.Ss Snapshots
A snapshot is a read-only copy of a file system or volume.
Snapshots can be created extremely quickly, and initially consume no additional
space within the pool.
As data within the active dataset changes, the snapshot consumes more space by
continuing to reference the old data that is no longer shared with the active
dataset.
.Pp
Snapshots can have arbitrary names.
Snapshots of volumes can be cloned or rolled back; their visibility is
determined by the
.Sy snapdev
property of the parent volume.
.Pp
File system snapshots can be accessed under the
.Pa .zfs/snapshot
directory in the root of the file system.
Snapshots are automatically mounted on demand and may be unmounted at regular
intervals.
The visibility of the
.Pa .zfs
directory can be controlled by the
.Sy snapdir
property.
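.Pp
As a sketch, assuming a hypothetical file system named
.Em pool/home
mounted at
.Pa /pool/home ,
a snapshot could be created and its contents browsed as follows:
.Bd -literal -offset indent
# zfs snapshot pool/home@monday
# ls /pool/home/.zfs/snapshot/monday
.Ed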
.Ss Bookmarks
A bookmark is like a snapshot: a read-only copy of a file system or volume.
Bookmarks can be created extremely quickly, compared to snapshots, and they
consume no additional space within the pool.
Bookmarks can also have arbitrary names, much like snapshots.
.Pp
Unlike snapshots, bookmarks cannot be accessed through the filesystem in any
way.
From a storage standpoint, a bookmark just provides a way to reference, as a
distinct object, the point in time when a snapshot was created.
Bookmarks are initially tied to a snapshot, not the filesystem or volume,
and they will survive if the snapshot itself is destroyed.
Since they are very lightweight, there is little incentive to destroy them.
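.Pp
For example, assuming a hypothetical snapshot
.Em pool/home@monday ,
a bookmark can be created from it; the snapshot may then be destroyed, while
the bookmark remains usable as the source of an incremental
.Nm zfs Cm send :
.Bd -literal -offset indent
# zfs bookmark pool/home@monday pool/home#monday
# zfs destroy pool/home@monday
# zfs send -i pool/home#monday pool/home@tuesday
.Ed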
.Ss Clones
A clone is a writable volume or file system whose initial contents are the same
as those of another dataset.
As with snapshots, creating a clone is nearly instantaneous, and initially
consumes no additional space.
.Pp
Clones can only be created from a snapshot.
When a snapshot is cloned, it creates an implicit dependency between the parent
and child.
Even though the clone is created somewhere else in the dataset hierarchy, the
original snapshot cannot be destroyed as long as a clone exists.
The
.Sy origin
property exposes this dependency, and the
.Cm destroy
command lists any such dependencies, if they exist.
.Pp
The clone parent-child dependency relationship can be reversed by using the
.Cm promote
subcommand.
This causes the
.Qq origin
file system to become a clone of the specified file system, which makes it
possible to destroy the file system that the clone was created from.
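.Pp
A minimal sketch, assuming a hypothetical snapshot
.Em pool/home@monday :
.Bd -literal -offset indent
# zfs clone pool/home@monday pool/testhome
# zfs get origin pool/testhome
# zfs promote pool/testhome
.Ed
.Pp
After the
.Cm promote ,
the dependency is reversed:
.Em pool/home
becomes a clone of a snapshot of
.Em pool/testhome ,
and may then be destroyed if desired.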
.Ss "Mount Points"
Creating a ZFS file system is a simple operation, so the number of file
systems per system is likely to be large.
To cope with this, ZFS automatically manages mounting and unmounting file
systems without the need to edit the
.Pa /etc/fstab
file.
All automatically managed file systems are mounted by ZFS at boot time.
.Pp
By default, file systems are mounted under
.Pa /path ,
where
.Ar path
is the name of the file system in the ZFS namespace.
Directories are created and destroyed as needed.
.Pp
A file system can also have a mount point set in the
.Sy mountpoint
property.
This directory is created as needed, and ZFS automatically mounts the file
system when the
.Nm zfs Cm mount Fl a
command is invoked
.Po without editing
.Pa /etc/fstab
.Pc .
The
.Sy mountpoint
property can be inherited, so if
.Em pool/home
has a mount point of
.Pa /export/stuff ,
then
.Em pool/home/user
automatically inherits a mount point of
.Pa /export/stuff/user .
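.Pp
Continuing that example, the
.Sy mountpoint
property could be set on the parent and its inheritance verified as follows:
.Bd -literal -offset indent
# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home
.Ed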
.Pp
A file system
.Sy mountpoint
property of
.Sy none
prevents the file system from being mounted.
.Pp
If needed, ZFS file systems can also be managed with traditional tools
.Po
.Nm mount ,
.Nm umount ,
.Pa /etc/fstab
.Pc .
If a file system's mount point is set to
.Sy legacy ,
ZFS makes no attempt to manage the file system, and the administrator is
responsible for mounting and unmounting the file system.
Because pools must be imported before a legacy mount can succeed,
administrators should ensure that legacy mounts are only attempted after the
zpool import process finishes at boot time.
For example, on machines using systemd, the mount option
.Pp
.Nm x-systemd.requires=zfs-import.target
.Pp
will ensure that
.Sy zfs-import.target
is reached before systemd attempts to mount the filesystem.
See
.Xr systemd.mount 5
for details.
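.Pp
As a sketch, a hypothetical file system
.Em pool/data
could be switched to legacy mounting and described in
.Pa /etc/fstab
like this:
.Bd -literal -offset indent
# zfs set mountpoint=legacy pool/data

# /etc/fstab
pool/data  /mnt/data  zfs  defaults,x-systemd.requires=zfs-import.target  0 0
.Ed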
.Ss Deduplication
Deduplication is the process of removing redundant data at the block level,
reducing the total amount of data stored.
If a file system has the
.Sy dedup
property enabled, duplicate data blocks are removed synchronously.
The result is that only unique data is stored and common components are shared
among files.
.Pp
Deduplicating data is a very resource-intensive operation.
It is generally recommended that you have at least 1.25 GiB of RAM
per 1 TiB of storage when you enable deduplication; for example, a 16 TiB
pool would call for roughly 20 GiB of RAM.
Calculating the exact requirement depends heavily
on the type of data stored in the pool.
.Pp
Enabling deduplication on an improperly-designed system can result in
performance issues (slow I/O and slow administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory, as
well as generate additional disk I/O.
.Pp
Before creating a pool with deduplication enabled, ensure that you have planned
your hardware requirements appropriately and implemented appropriate recovery
practices, such as regular backups.
Consider using the
.Sy compression
property as a less resource-intensive alternative.
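.Pp
For example, instead of enabling deduplication, compression could be enabled
on a hypothetical file system and the resulting space savings inspected via
the read-only
.Sy compressratio
property:
.Bd -literal -offset indent
# zfs set compression=on pool/data
# zfs get compressratio pool/data
.Ed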