=========================
Create a Ceph file system
=========================

Creating pools
==============

A Ceph file system requires at least two RADOS pools, one for data and one for metadata.
When configuring these pools, you might consider:

- Using a higher replication level for the metadata pool, as any data loss in
  this pool can render the whole file system inaccessible.
- Using lower-latency storage such as SSDs for the metadata pool, as this will
  directly affect the observed latency of file system operations on clients
  (see the sketch after this list).
- The data pool used to create the file system is the "default" data pool and
  the location for storing all inode backtrace information, used for hard link
  management and disaster recovery. For this reason, all inodes created in
  CephFS have at least one object in the default data pool. If erasure-coded
  pools are planned for the file system, it is usually better to use a
  replicated pool for the default data pool to improve small-object write and
  read performance for updating backtraces. Separately, another erasure-coded
  data pool can be added (see also :ref:`ecpool`) that can be used on an entire
  hierarchy of directories and files (see also :ref:`file-layouts`).
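
As a rough sketch of the first two points: once the metadata pool exists (for
example the ``cephfs_metadata`` pool created below), its replication level and
placement could be adjusted as shown here. The CRUSH rule name
``replicated_ssd`` is an assumption for this example; substitute a rule that
targets your SSD-backed OSDs.

.. code:: bash

    # Keep an extra replica of the metadata, e.g. four copies instead of the
    # usual three (the pool must already exist).
    $ ceph osd pool set cephfs_metadata size 4

    # Place the metadata pool on SSDs via a pre-existing CRUSH rule
    # named "replicated_ssd" (assumed here for illustration).
    $ ceph osd pool set cephfs_metadata crush_rule replicated_ssd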

Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
example, to create two pools with default settings for use with a file system, you
might run the following commands:

.. code:: bash

    $ ceph osd pool create cephfs_data
    $ ceph osd pool create cephfs_metadata

Generally, the metadata pool will have at most a few gigabytes of data. For
this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
used in practice for large clusters.
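
For instance, if you prefer to set the PG count explicitly when creating the
metadata pool rather than relying on the cluster defaults, the creation command
above could instead be run as follows (64 is just one of the commonly used
values mentioned above):

.. code:: bash

    # Create the metadata pool with an explicit PG count of 64.
    $ ceph osd pool create cephfs_metadata 64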

.. note:: The names of the file systems, metadata pools, and data pools can
          only have characters in the set [a-zA-Z0-9\_-.].

Creating a file system
======================

Once the pools are created, you may enable the file system using the ``fs new`` command:

.. code:: bash

    $ ceph fs new <fs_name> <metadata> <data>

For example:

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data
    $ ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Once a file system has been created, your MDS(s) will be able to enter
an *active* state. For example, in a single MDS system:

.. code:: bash

    $ ceph mds stat
    cephfs-1/1/1 up {0=a=up:active}

Once the file system is created and the MDS is active, you are ready to mount
the file system. If you have created more than one file system, you will
choose which to use when mounting (a minimal mount sketch follows the links
below).

  - `Mount CephFS`_
  - `Mount CephFS as FUSE`_

.. _Mount CephFS: ../../cephfs/kernel
.. _Mount CephFS as FUSE: ../../cephfs/fuse
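
As a minimal sketch only (the guides linked above cover authentication and
mount options in detail), and assuming the client host already has a valid
``/etc/ceph/ceph.conf`` and an authorized keyring, with ``/mnt/cephfs`` used
here as a hypothetical mount point:

.. code:: bash

    # Kernel client; the mount helper resolves the monitors from ceph.conf.
    $ sudo mkdir -p /mnt/cephfs
    $ sudo mount -t ceph :/ /mnt/cephfs -o name=admin

    # Or, alternatively, the FUSE client.
    $ sudo ceph-fuse /mnt/cephfs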

If you have created more than one file system, and a client does not
specify a file system when mounting, you can control which file system
they will see by using the ``ceph fs set-default`` command.
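
For example, to make the ``cephfs`` file system created above the default for
clients that do not specify one:

.. code:: bash

    $ ceph fs set-default cephfs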

Using Erasure Coded pools with CephFS
=====================================

You may use Erasure Coded pools as CephFS data pools as long as they have overwrites enabled, which is done as follows:

.. code:: bash

    ceph osd pool set my_ec_pool allow_ec_overwrites true

Note that EC overwrites are only supported when using OSDs with the BlueStore backend.
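
Putting this together, a sketch of adding an erasure-coded data pool to the
``cephfs`` file system created earlier might look as follows. The pool name
``my_ec_pool``, the PG counts, and the use of the default erasure-code profile
are assumptions for this example; adjust them for your cluster.

.. code:: bash

    # Create an erasure-coded pool (name and PG counts are examples only).
    ceph osd pool create my_ec_pool 64 64 erasure

    # EC pools must have overwrites enabled before CephFS can use them.
    ceph osd pool set my_ec_pool allow_ec_overwrites true

    # Attach the pool as an additional data pool of the file system.
    ceph fs add_data_pool cephfs my_ec_pool

Directories can then be mapped to the new pool with a file layout (see also
:ref:`file-layouts`).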

You may not use Erasure Coded pools as CephFS metadata pools, because CephFS metadata is stored using RADOS *OMAP* data structures, which EC pools cannot store.