========================
Create a Ceph filesystem
========================

Creating pools
==============

A Ceph filesystem requires at least two RADOS pools, one for data and one for
metadata. When configuring these pools, you might consider:

- Using a higher replication level for the metadata pool, as any data
  loss in this pool can render the whole filesystem inaccessible.
- Using lower-latency storage such as SSDs for the metadata pool, as this
  will directly affect the observed latency of filesystem operations
  on clients (both adjustments are sketched below).

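A minimal sketch of both adjustments, assuming the pool names used later on
this page and OSDs that report an ``ssd`` device class (the rule name
``cephfs-metadata-ssd`` is arbitrary):

.. code:: bash

    # Keep three copies of metadata (the pool must already exist).
    $ ceph osd pool set cephfs_metadata size 3

    # Create a replicated CRUSH rule restricted to SSD-class OSDs, then
    # point the metadata pool at it.
    $ ceph osd crush rule create-replicated cephfs-metadata-ssd default host ssd
    $ ceph osd pool set cephfs_metadata crush_rule cephfs-metadata-ssd
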
Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
example, to create two pools with default settings for use with a filesystem, you
might run the following commands:

.. code:: bash

    $ ceph osd pool create cephfs_data <pg_num>
    $ ceph osd pool create cephfs_metadata <pg_num>

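The ``<pg_num>`` values depend on your cluster; purely as an illustration, a
small cluster might use values like these (the metadata pool typically holds
far less data than the data pool, so it can use fewer placement groups):

.. code:: bash

    # Illustrative PG counts only; size these for your own cluster.
    $ ceph osd pool create cephfs_data 128
    $ ceph osd pool create cephfs_metadata 32
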
Creating a filesystem
=====================

Once the pools are created, you may enable the filesystem using the ``fs new`` command:

.. code:: bash

    $ ceph fs new <fs_name> <metadata> <data>

For example:

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data
    $ ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Once a filesystem has been created, your MDS(s) will be able to enter
an *active* state. For example, in a single MDS system:

.. code:: bash

    $ ceph mds stat
    cephfs-1/1/1 up {0=a=up:active}

Once the filesystem is created and the MDS is active, you are ready to mount
the filesystem. If you have created more than one filesystem, you will
choose which to use when mounting; a kernel-mount sketch follows the links
below.

- `Mount CephFS`_
- `Mount CephFS as FUSE`_

.. _Mount CephFS: ../../cephfs/kernel
.. _Mount CephFS as FUSE: ../../cephfs/fuse
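
As a quick illustration only (the pages above cover mounting in full), a
kernel mount of a named filesystem might look like the following. The monitor
address, mount point, and secret file are placeholders, and the
``mds_namespace`` option selects a filesystem by name on Nautilus-era kernel
clients:

.. code:: bash

    # Placeholder monitor address, credentials, and mount point; adjust
    # these for your own cluster.
    $ sudo mkdir -p /mnt/cephfs
    $ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=cephfs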

If you have created more than one filesystem, and a client does not
specify a filesystem when mounting, you can control which filesystem
they will see by using the ``ceph fs set-default`` command.
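
For example, to make the ``cephfs`` filesystem created above the default for
such clients:

.. code:: bash

    $ ceph fs set-default cephfs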

Using Erasure Coded pools with CephFS
=====================================

You may use Erasure Coded pools as CephFS data pools as long as they have overwrites enabled, which is done as follows:

.. code:: bash

    $ ceph osd pool set my_ec_pool allow_ec_overwrites true

Note that EC overwrites are only supported when using OSDs with the BlueStore backend.
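
As a sketch of the full workflow, assuming the ``cephfs`` filesystem created
above and an erasure-coded pool named ``my_ec_pool`` (the PG count of 64 is
illustrative), you might create the pool, enable overwrites, and attach it as
an additional data pool:

.. code:: bash

    # Create an EC pool with the default erasure-code profile, enable
    # overwrites, and attach it to the filesystem as a data pool.
    $ ceph osd pool create my_ec_pool 64 erasure
    $ ceph osd pool set my_ec_pool allow_ec_overwrites true
    $ ceph fs add_data_pool cephfs my_ec_pool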

You may not use Erasure Coded pools as CephFS metadata pools, because CephFS metadata is stored using RADOS *OMAP* data structures, which EC pools cannot store.