.. _file-layouts:

File layouts
============

The layout of a file controls how its contents are mapped to Ceph RADOS objects. You can
read and write a file's layout using *virtual extended attributes* or xattrs.

The name of the layout xattrs depends on whether a file is a regular file or a directory. Regular
files' layout xattrs are called ``ceph.file.layout``, whereas directories' layout xattrs are called
``ceph.dir.layout``. Where subsequent examples refer to ``ceph.file.layout``, substitute ``dir`` as appropriate
when dealing with directories.

.. tip::

    Your Linux distribution may not ship with commands for manipulating xattrs by default;
    the required package is usually called ``attr``.
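
    For example, on a Debian or Ubuntu system (assuming ``apt-get`` is available), the
    tools can be installed with:

    .. code-block:: bash

        $ sudo apt-get install attr
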

Layout fields
-------------

pool
    String, giving ID or name. The string may contain only characters in the set
    [a-zA-Z0-9\_-.]. This determines which RADOS pool a file's data objects will be
    stored in.

pool_namespace
    String with only characters in the set [a-zA-Z0-9\_-.]. Within the data pool, which
    RADOS namespace the objects will be written to. Empty by default (i.e. the default
    namespace).

stripe_unit
    Integer in bytes. The size (in bytes) of a block of data used in the RAID 0
    distribution of a file. All stripe units for a file have equal size. The last
    stripe unit is typically incomplete; i.e. it represents the data at the end of the
    file as well as unused "space" beyond it, up to the end of the fixed stripe unit
    size.

stripe_count
    Integer. The number of consecutive stripe units that constitute a RAID 0 "stripe"
    of file data.

object_size
    Integer in bytes. File data is chunked into RADOS objects of this size.

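For a concrete sense of how file contents end up in RADOS objects, the following sketch
(all names and numbers are illustrative; it assumes the default layout, the default
``cephfs_data`` pool, and a client allowed to run ``rados``) lists the objects behind a
10 MB file. With ``object_size=4194304`` the file needs three objects, named after the
file's inode number in hexadecimal plus an object index.

.. code-block:: bash

    # Write 10 MB of data, which needs three 4 MB objects (indices 0, 1 and 2)
    $ dd if=/dev/zero of=bigfile bs=1M count=10

    # Find the file's inode number and convert it to hexadecimal
    $ ls -i bigfile
    1099511627776 bigfile
    $ printf '%x\n' 1099511627776
    10000000000

    # List the RADOS objects that back the file
    $ rados -p cephfs_data ls | grep '^10000000000\.'
    10000000000.00000000
    10000000000.00000001
    10000000000.00000002
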
.. tip::

    RADOS enforces a configurable limit on object sizes: if you increase CephFS
    object sizes beyond that limit then writes may not succeed. The OSD
    setting is ``osd_max_object_size``, which is 128MB by default.
    Very large RADOS objects may prevent smooth operation of the cluster,
    so increasing the object size limit past the default is not recommended.
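
    To check the limit configured on your cluster, you can query the setting (a sketch;
    ``ceph config get`` requires the centralized configuration database available in
    recent releases):

    .. code-block:: bash

        $ ceph config get osd osd_max_object_size
        134217728
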

Reading layouts with ``getfattr``
---------------------------------

Read the layout information as a single string:

.. code-block:: bash

    $ touch file
    $ getfattr -n ceph.file.layout file
    # file: file
    ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"

Read individual layout fields:

.. code-block:: bash

    $ getfattr -n ceph.file.layout.pool file
    # file: file
    ceph.file.layout.pool="cephfs_data"
    $ getfattr -n ceph.file.layout.stripe_unit file
    # file: file
    ceph.file.layout.stripe_unit="4194304"
    $ getfattr -n ceph.file.layout.stripe_count file
    # file: file
    ceph.file.layout.stripe_count="1"
    $ getfattr -n ceph.file.layout.object_size file
    # file: file
    ceph.file.layout.object_size="4194304"

.. note::

    When reading layouts, the pool will usually be indicated by name. However, in
    rare cases when pools have only just been created, the ID may be output instead.

Directories do not have an explicit layout until one is set. Attempts to read
the layout will fail if it has never been modified: this indicates that the layout of
the closest ancestor directory with an explicit layout will be used.

.. code-block:: bash

    $ mkdir dir
    $ getfattr -n ceph.dir.layout dir
    dir: ceph.dir.layout: No such attribute
    $ setfattr -n ceph.dir.layout.stripe_count -v 2 dir
    $ getfattr -n ceph.dir.layout dir
    # file: dir
    ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"


Writing layouts with ``setfattr``
---------------------------------

Layout fields are modified using ``setfattr``:

.. code-block:: bash

    $ ceph osd lspools
    0 rbd
    1 cephfs_data
    2 cephfs_metadata

    $ setfattr -n ceph.file.layout.stripe_unit -v 1048576 file2
    $ setfattr -n ceph.file.layout.stripe_count -v 8 file2
    $ setfattr -n ceph.file.layout.object_size -v 10485760 file2
    $ setfattr -n ceph.file.layout.pool -v 1 file2            # Setting pool by ID
    $ setfattr -n ceph.file.layout.pool -v cephfs_data file2  # Setting pool by name

.. note::

    When the layout fields of a file are modified using ``setfattr``, the file must be
    empty; otherwise an error will occur.

.. code-block:: bash

    # Create an empty file
    $ touch file1
    # Modifying the layout fields succeeds while the file is empty
    $ setfattr -n ceph.file.layout.stripe_count -v 3 file1

    # Write something to file1
    $ echo "hello world" > file1
    $ setfattr -n ceph.file.layout.stripe_count -v 4 file1
    setfattr: file1: Directory not empty

Clearing layouts
----------------

If you wish to remove an explicit layout from a directory, to revert to
inheriting the layout of its ancestor, you can do so:

.. code-block:: bash

    setfattr -x ceph.dir.layout mydir

Similarly, if you have set the ``pool_namespace`` attribute and wish
to modify the layout to use the default namespace instead:

.. code-block:: bash

    # Create a directory and set a namespace on it
    mkdir mydir
    setfattr -n ceph.dir.layout.pool_namespace -v foons mydir
    getfattr -n ceph.dir.layout mydir
    ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a pool_namespace=foons"

    # Clear the namespace from the directory's layout
    setfattr -x ceph.dir.layout.pool_namespace mydir
    getfattr -n ceph.dir.layout mydir
    ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data_a"


Inheritance of layouts
----------------------

Files inherit the layout of their parent directory at creation time. However, subsequent
changes to the parent directory's layout do not affect children.

.. code-block:: bash

    $ getfattr -n ceph.dir.layout dir
    # file: dir
    ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

    # Demonstrate file1 inheriting its parent's layout
    $ touch dir/file1
    $ getfattr -n ceph.file.layout dir/file1
    # file: dir/file1
    ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

    # Now update the layout of the directory before creating a second file
    $ setfattr -n ceph.dir.layout.stripe_count -v 4 dir
    $ touch dir/file2

    # Demonstrate that file1's layout is unchanged
    $ getfattr -n ceph.file.layout dir/file1
    # file: dir/file1
    ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

    # ...while file2 has the parent directory's new layout
    $ getfattr -n ceph.file.layout dir/file2
    # file: dir/file2
    ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"


Files created as descendants of the directory also inherit the layout, if the intermediate
directories do not have layouts set:

.. code-block:: bash

    $ getfattr -n ceph.dir.layout dir
    # file: dir
    ceph.dir.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"
    $ mkdir dir/childdir
    $ getfattr -n ceph.dir.layout dir/childdir
    dir/childdir: ceph.dir.layout: No such attribute
    $ touch dir/childdir/grandchild
    $ getfattr -n ceph.file.layout dir/childdir/grandchild
    # file: dir/childdir/grandchild
    ceph.file.layout="stripe_unit=4194304 stripe_count=4 object_size=4194304 pool=cephfs_data"


.. _adding-data-pool-to-file-system:

Adding a data pool to the File System
-------------------------------------

Before you can use a pool with CephFS, you have to add it to the Metadata Servers.

.. code-block:: bash

    $ ceph fs add_data_pool cephfs cephfs_data_ssd
    $ ceph fs ls  # Pool should now show up
    .... data pools: [cephfs_data cephfs_data_ssd ]

Make sure that your cephx keys allow the client to access this new pool.

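How to do this depends on how the client's key was created. The sketch below uses a
placeholder ``client.foo`` and placeholder capability strings; keys created with
``ceph fs authorize`` use application-tag based OSD caps and usually need no change,
whereas keys whose OSD caps name pools explicitly must be updated:

.. code-block:: bash

    # Inspect the client's current capabilities
    $ ceph auth get client.foo

    # Re-issue the caps including the new pool. Note that ``ceph auth caps``
    # replaces the existing caps, so repeat everything the client already has.
    $ ceph auth caps client.foo \
          mds 'allow rw' \
          mon 'allow r' \
          osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_data_ssd'
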
You can then update the layout on a directory in CephFS to use the pool you added:

.. code-block:: bash

    $ mkdir /mnt/cephfs/myssddir
    $ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/myssddir

All new files created within that directory will now inherit its layout and place their
data in your newly added pool.

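A quick way to confirm this (a sketch reusing the example mount point above; the file
name is arbitrary) is to check the inherited layout of a newly created file:

.. code-block:: bash

    $ cd /mnt/cephfs/myssddir
    $ touch newfile
    $ getfattr -n ceph.file.layout.pool newfile
    # file: newfile
    ceph.file.layout.pool="cephfs_data_ssd"
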
You may notice that object counts in your primary data pool (the one passed to ``fs new``)
continue to increase, even if files are being created in the pool you added. This is
normal: the file data is stored in the pool specified by the layout, but a small amount
of metadata is kept in the primary data pool for all files.
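
One way to observe this is to compare per-pool object counts, for example with
``ceph df`` (a sketch; the exact columns and figures vary by release and cluster):

.. code-block:: bash

    $ ceph df
    ...
    --- POOLS ---
    POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
    cephfs_data       1   32   12 KiB       40   36 KiB      0     50 GiB
    cephfs_metadata   2   32  1.5 MiB       25  4.6 MiB      0     50 GiB
    cephfs_data_ssd   3   32   40 MiB       10  120 MiB      0     20 GiB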