Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability. No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre. Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons. File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs. When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures. The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads. In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation. The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes or
go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to create
a snapshot on any subdirectory (and its nested contents) in the
system. Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.

Ceph also provides some recursive accounting on directories for nested
files and bytes. That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes. This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.
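
As an illustration, assuming a mount at /mnt/ceph and a directory
/mnt/ceph/mydir (both names made up here), the recursive statistics can
also be queried individually; the attribute names below are the ones
used by recent clients and may vary between versions:

 getfattr -n ceph.dir.rfiles /mnt/ceph/mydir
 getfattr -n ceph.dir.rbytes /mnt/ceph/mydir

Here ceph.dir.rfiles reports the number of regular files nested beneath
the directory, and ceph.dir.rbytes the summation of their sizes.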

Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy. Quotas can be set using
the extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes', e.g.:

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

A limitation of the current quotas implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached. A modified or adversarial client cannot be prevented
from writing as much data as it needs.
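
A file-count limit is set the same way; the count below is an arbitrary
example, and in current Ceph releases setting the attribute back to 0 is
treated as removing that limit:

 setfattr -n ceph.quota.max_files -v 10000 /some/dir
 setfattr -n ceph.quota.max_files -v 0 /some/dir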

Mount Syntax
============

The basic mount syntax is:

 # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt

You only need to specify a single monitor, as the client will get the
full list when it connects. (However, if the monitor you specify
happens to be down, the mount won't succeed.) The port can be left
off if the monitor is using the default. So if the monitor is at
1.2.3.4,

 # mount -t ceph 1.2.3.4:/ /mnt/ceph

is sufficient. If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address.
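
For instance, with the helper installed, a mount that names the monitor
by hostname and attaches only a subdirectory of the file system might
look like this (the hostname and path are made up here):

 # mount -t ceph mon1.example.com:/some/dir /mnt/ceph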


Mount Options
=============

  ip=A.B.C.D[:N]
    Specify the IP and/or port the client should bind to locally.
    There is normally not much reason to do this. If the IP is not
    specified, the client's IP address is determined by looking at the
    address its connection to the monitor originates from.

  wsize=X
    Specify the maximum write size in bytes. Default: 16 MB.

  rsize=X
    Specify the maximum read size in bytes. Default: 16 MB.

  rasize=X
    Specify the maximum readahead size in bytes. Default: 8 MB.

  mount_timeout=X
    Specify the timeout value for mount (in seconds), in the case
    of a non-responsive Ceph file system. The default is 30
    seconds.

  caps_max=X
    Specify the maximum number of caps to hold. Unused caps are released
    when the number of caps exceeds the limit. The default is 0 (no limit).

  rbytes
    When stat() is called on a directory, set st_size to 'rbytes',
    the summation of file sizes over all files nested beneath that
    directory. This is the default.

  norbytes
    When stat() is called on a directory, set st_size to the
    number of entries in that directory.

  nocrc
    Disable CRC32C calculation for data writes. If set, the storage node
    must rely on TCP's error correction to detect data corruption
    in the data payload.

  dcache
    Use the dcache contents to perform negative lookups and
    readdir when the client has the entire directory contents in
    its cache. (This does not change correctness; the client uses
    cached metadata only when a lease or capability ensures it is
    valid.)

  nodcache
    Do not use the dcache as above. This avoids a significant amount of
    complex code, sacrificing performance without affecting correctness,
    and is useful for tracking down bugs.

  noasyncreaddir
    Do not use the dcache as above for readdir.

  noquotadf
    Report overall filesystem usage in statfs instead of using the root
    directory quota.

  nocopyfrom
    Don't use the RADOS 'copy-from' operation to perform remote object
    copies. Currently, it's only used in copy_file_range, which will revert
    to the default VFS implementation if this option is used.

  recover_session=<no|clean>
    Set the auto reconnect mode in the case where the client is
    blacklisted. The available modes are "no" and "clean". The default
    is "no".

    * no: never attempt to reconnect when the client detects that it
      has been blacklisted. Operations will generally fail after being
      blacklisted.

    * clean: the client reconnects to the ceph cluster automatically when
      it detects that it has been blacklisted. During reconnect, the
      client drops dirty data/metadata, invalidates page caches and
      writable file handles. After reconnect, file locks become stale
      because the MDS loses track of them. If an inode contains any stale
      file locks, read/write on the inode is not allowed until
      applications release all stale file locks.
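
As an illustration, the options above are passed to mount with -o and
can be combined; the values here are arbitrary examples:

 # mount -t ceph 1.2.3.4:/ /mnt/ceph -o rasize=8388608,wsize=16777216,recover_session=clean,noquotadf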

More Information
================

For more information on Ceph, see the home page at
  https://ceph.com/

The Linux kernel client source tree is available at
  https://github.com/ceph/ceph-client.git
  git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
  https://github.com/ceph/ceph.git