File layouts
============

The layout of a file controls how its contents are mapped to Ceph RADOS objects. You can
read and write a file's layout using *virtual extended attributes* or xattrs.

The name of the layout xattrs depends on whether a file is a regular file or a directory. Regular
files' layout xattrs are called ``ceph.file.layout``, whereas directories' layout xattrs are called
``ceph.dir.layout``. Where subsequent examples refer to ``ceph.file.layout``, substitute ``dir`` as
appropriate when dealing with directories.

.. tip::

   Your Linux distribution may not ship with commands for manipulating xattrs by default;
   the required package is usually called ``attr``.

Layout fields
-------------

pool
   String. The ID or name of the RADOS pool in which a file's data objects
   are stored.

pool_namespace
   String. Within the data pool, the RADOS namespace the objects will be
   written to. Empty by default (i.e. the default namespace).

stripe_unit
   Integer in bytes. The size (in bytes) of a block of data used in the RAID 0
   distribution of a file. All stripe units for a file have equal size. The
   last stripe unit is typically incomplete, i.e. it represents the data at
   the end of the file as well as unused "space" beyond it up to the end of
   the fixed stripe unit size.

stripe_count
   Integer. The number of consecutive stripe units that constitute a RAID 0
   "stripe" of file data.

object_size
   Integer in bytes. File data is chunked into RADOS objects of this size.

.. tip::

   RADOS enforces a configurable limit on object sizes: if you increase CephFS
   object sizes beyond that limit then writes may not succeed. The OSD
   setting is ``osd_max_object_size``, which is 128MB by default.
   Very large RADOS objects may prevent smooth operation of the cluster,
   so increasing the object size limit past the default is not recommended.

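The striping rule implied by these fields can be sketched with shell
arithmetic. This is a simplified model (stripe units laid out round-robin
across ``stripe_count`` objects, with ``object_size / stripe_unit`` stripe
units per object); the layout values below are examples, not defaults:

.. code-block:: bash

   stripe_unit=1048576    # 1 MiB per stripe unit
   stripe_count=8         # stripe units per RAID 0 stripe
   object_size=10485760   # 10 MiB => 10 stripe units per object

   offset=20971520        # byte offset 20 MiB into the file

   su_per_object=$(( object_size / stripe_unit ))
   stripeno=$(( offset / stripe_unit ))         # global stripe-unit index
   stripepos=$(( stripeno % stripe_count ))     # which object within the set
   objectsetno=$(( stripeno / (stripe_count * su_per_object) ))
   objectno=$(( objectsetno * stripe_count + stripepos ))

   echo "object index: $objectno"   # prints "object index: 4"

With these values, byte 20 MiB falls in stripe unit 20, which is column 4 of
the first object set, so it lands in object index 4.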
Reading layouts with ``getfattr``
---------------------------------

Read the layout information as a single string:

.. code-block:: bash

   $ touch file
   $ getfattr -n ceph.file.layout file
   # file: file
   ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"

Read individual layout fields:

.. code-block:: bash

   $ getfattr -n ceph.file.layout.pool file
   # file: file
   ceph.file.layout.pool="cephfs_data"
   $ getfattr -n ceph.file.layout.stripe_unit file
   # file: file
   ceph.file.layout.stripe_unit="4194304"
   $ getfattr -n ceph.file.layout.stripe_count file
   # file: file
   ceph.file.layout.stripe_count="1"
   $ getfattr -n ceph.file.layout.object_size file
   # file: file
   ceph.file.layout.object_size="4194304"

.. note::

   When reading layouts, the pool will usually be indicated by name. However, in
   rare cases when pools have only just been created, the ID may be output instead.

Directories do not have an explicit layout until one is set. Attempts to read
the layout will fail if it has never been modified: this indicates that the
layout of the next ancestor directory with an explicit layout will be used.

.. code-block:: bash

   $ mkdir dir
   $ getfattr -n ceph.dir.layout dir
   dir: ceph.dir.layout: No such attribute
   $ setfattr -n ceph.dir.layout.stripe_count -v 2 dir
   $ getfattr -n ceph.dir.layout dir
   # file: dir
   ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"


Writing layouts with ``setfattr``
---------------------------------

Layout fields are modified using ``setfattr``:

.. code-block:: bash

   $ ceph osd lspools
   0 rbd
   1 cephfs_data
   2 cephfs_metadata

   $ setfattr -n ceph.file.layout.stripe_unit -v 1048576 file2
   $ setfattr -n ceph.file.layout.stripe_count -v 8 file2
   $ setfattr -n ceph.file.layout.object_size -v 10485760 file2
   $ setfattr -n ceph.file.layout.pool -v 1 file2           # Setting pool by ID
   $ setfattr -n ceph.file.layout.pool -v cephfs_data file2 # Setting pool by name

.. note::

   When the layout fields of a file are modified using ``setfattr``, the file
   must be empty; otherwise an error will occur.

.. code-block:: bash

   # touch an empty file
   $ touch file1
   # modify layout field successfully
   $ setfattr -n ceph.file.layout.stripe_count -v 3 file1

   # write something to file1
   $ echo "hello world" > file1
   $ setfattr -n ceph.file.layout.stripe_count -v 4 file1
   setfattr: file1: Directory not empty

Clearing layouts
----------------

If you wish to remove an explicit layout from a directory, to revert to
inheriting the layout of its ancestor, you can do so:

.. code-block:: bash

   setfattr -x ceph.dir.layout mydir

Similarly, if you have set the ``pool_namespace`` attribute and wish
to modify the layout to use the default namespace instead:

.. code-block:: bash

   # Create a dir and set a namespace on it
   mkdir mydir
   setfattr -n ceph.dir.layout.pool_namespace -v foons mydir
   getfattr -n ceph.dir.layout mydir
   ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a pool_namespace=foons"

   # Clear the namespace from the directory's layout
   setfattr -x ceph.dir.layout.pool_namespace mydir
   getfattr -n ceph.dir.layout mydir
   ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a"


Inheritance of layouts
----------------------

Files inherit the layout of their parent directory at creation time. However,
subsequent changes to the parent directory's layout do not affect children.

.. code-block:: bash

   $ getfattr -n ceph.dir.layout dir
   # file: dir
   ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

   # Demonstrate file1 inheriting its parent's layout
   $ touch dir/file1
   $ getfattr -n ceph.file.layout dir/file1
   # file: dir/file1
   ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

   # Now update the layout of the directory before creating a second file
   $ setfattr -n ceph.dir.layout.stripe_count -v 4 dir
   $ touch dir/file2

   # Demonstrate that file1's layout is unchanged
   $ getfattr -n ceph.file.layout dir/file1
   # file: dir/file1
   ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

   # ...while file2 has the parent directory's new layout
   $ getfattr -n ceph.file.layout dir/file2
   # file: dir/file2
   ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"


Files created as descendants of the directory also inherit the layout, if the
intermediate directories do not have layouts set:

.. code-block:: bash

   $ getfattr -n ceph.dir.layout dir
   # file: dir
   ceph.dir.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"
   $ mkdir dir/childdir
   $ getfattr -n ceph.dir.layout dir/childdir
   dir/childdir: ceph.dir.layout: No such attribute
   $ touch dir/childdir/grandchild
   $ getfattr -n ceph.file.layout dir/childdir/grandchild
   # file: dir/childdir/grandchild
   ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"


Adding a data pool to the MDS
-----------------------------

Before you can use a pool with CephFS, you must first add it to the Metadata
Servers:

.. code-block:: bash

   $ ceph fs add_data_pool cephfs cephfs_data_ssd
   $ ceph fs ls  # Pool should now show up
   .... data pools: [cephfs_data cephfs_data_ssd ]

Make sure that your cephx keys allow the client to access this new pool.

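One way to grant that access is ``ceph fs authorize``, which creates client
caps covering the filesystem's data pools; ``client.foo`` below is a
placeholder client name, not one defined elsewhere in this document:

.. code-block:: bash

   # Grant client.foo read/write access to the filesystem "cephfs",
   # rooted at "/", including OSD caps for its data pools.
   $ ceph fs authorize cephfs client.foo / rw
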
You can then update the layout on a directory in CephFS to use the pool you added:

.. code-block:: bash

   $ mkdir /mnt/cephfs/myssddir
   $ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/myssddir

All new files created within that directory will now inherit its layout and
place their data in your newly added pool.

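To confirm where the data went, you can look a file's objects up by name:
CephFS names data objects ``<inode-in-hex>.<object-index>``. A sketch,
assuming the mount point and pool from above and a new file ``newfile``:

.. code-block:: bash

   $ echo data > /mnt/cephfs/myssddir/newfile
   $ sync
   # Object names start with the file's inode number in hex
   $ prefix=$(printf '%x' "$(stat -c %i /mnt/cephfs/myssddir/newfile)")
   $ rados -p cephfs_data_ssd ls | grep "^${prefix}\."
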
You may notice that object counts in your primary data pool (the one passed to
``fs new``) continue to increase, even if files are being created in the pool
you added. This is normal: the file data is stored in the pool specified by the
layout, but a small amount of metadata is kept in the primary data pool for all
files.
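That per-file metadata is the file's *backtrace* (essentially its path),
stored as a ``parent`` xattr on the file's first object in the primary data
pool, even when the file data itself lives elsewhere. A sketch, reusing the
hex inode prefix computed in the previous example:

.. code-block:: bash

   # A (typically empty) first object exists in the primary data pool
   # and carries the backtrace in its "parent" xattr.
   $ rados -p cephfs_data listxattr "${prefix}.00000000"
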