========================
Create a Ceph filesystem
========================

Creating pools
==============

A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata.
When configuring these pools, you might consider:

- Using a higher replication level for the metadata pool, as any data loss in
  this pool can render the whole filesystem inaccessible.
- Using lower-latency storage such as SSDs for the metadata pool, as this will
  directly affect the observed latency of filesystem operations on clients.
- The data pool used to create the file system is the "default" data pool and
  the location for storing all inode backtrace information, used for hard link
  management and disaster recovery. For this reason, all inodes created in
  CephFS have at least one object in the default data pool. If erasure-coded
  pools are planned for the file system, it is usually better to use a
  replicated pool for the default data pool to improve small-object write and
  read performance for updating backtraces. Separately, another erasure-coded
  data pool can be added (see also :ref:`ecpool`) that can be used on an entire
  hierarchy of directories and files (see also :ref:`file-layouts`).

Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
example, to create two pools with default settings for use with a filesystem, you
might run the following commands:

.. code:: bash

    $ ceph osd pool create cephfs_data <pg_num>
    $ ceph osd pool create cephfs_metadata <pg_num>

Generally, the metadata pool will have at most a few gigabytes of data. For
this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
used in practice for large clusters.
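
For example, a starting point along these lines (the PG counts shown are
illustrative only, not a sizing recommendation for any particular cluster)
might be:

.. code:: bash

    $ ceph osd pool create cephfs_data 128
    $ ceph osd pool create cephfs_metadata 64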

Creating a filesystem
=====================

Once the pools are created, you may enable the filesystem using the ``fs new`` command:

.. code:: bash

    $ ceph fs new <fs_name> <metadata> <data>

For example:

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data
    $ ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Once a filesystem has been created, your MDS(s) will be able to enter
an *active* state. For example, in a single MDS system:

.. code:: bash

    $ ceph mds stat
    cephfs-1/1/1 up {0=a=up:active}

Once the filesystem is created and the MDS is active, you are ready to mount
the filesystem. If you have created more than one filesystem, you will
choose which to use when mounting.

  - `Mount CephFS`_
  - `Mount CephFS as FUSE`_

.. _Mount CephFS: ../../cephfs/kernel
.. _Mount CephFS as FUSE: ../../cephfs/fuse
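
As a minimal sketch, a FUSE mount might look like the following, assuming the
client host already has a usable ``ceph.conf`` and keyring under ``/etc/ceph``
(the mount point is arbitrary):

.. code:: bash

    $ sudo mkdir -p /mnt/mycephfs
    $ sudo ceph-fuse /mnt/mycephfs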

If you have created more than one filesystem, and a client does not
specify a filesystem when mounting, you can control which filesystem
they will see by using the ``ceph fs set-default`` command.
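
For example, assuming a second filesystem named ``cephfs2`` exists (the name
here is purely illustrative), the following would make it the default for
clients that do not name a filesystem explicitly:

.. code:: bash

    $ ceph fs set-default cephfs2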

Using Erasure Coded pools with CephFS
=====================================

You may use Erasure Coded pools as CephFS data pools as long as they have
overwrites enabled, which is done as follows:

.. code:: bash

    ceph osd pool set my_ec_pool allow_ec_overwrites true

Note that EC overwrites are only supported when using OSDs with the BlueStore
backend.
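
As a sketch of how such a pool is then attached to a filesystem (the names
``cephfs`` and ``my_ec_pool`` are carried over from the examples above and are
assumptions, not required names), the erasure-coded pool can be added as an
additional data pool:

.. code:: bash

    $ ceph fs add_data_pool cephfs my_ec_pool

Directories and files can then be placed on it via file layouts (see
:ref:`file-layouts`).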

You may not use Erasure Coded pools as CephFS metadata pools, because CephFS
metadata is stored using RADOS *OMAP* data structures, which EC pools cannot
store.