.. SPDX-License-Identifier: GPL-2.0

============================
Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,

 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)
In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes or
go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot on any subdirectory (and its nested contents) in the
system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.
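
For example, snapshots can be created, listed, and removed with plain
directory operations on the special .snap directory (the paths below
are illustrative)::

 mkdir /mnt/ceph/some/dir/.snap/my_snapshot   # snapshot 'dir' and its contents
 ls /mnt/ceph/some/dir/.snap                  # list existing snapshots
 rmdir /mnt/ceph/some/dir/.snap/my_snapshot   # remove the snapshot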

Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.
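
For example, assuming the standard ceph.dir.* virtual extended
attribute names from the Ceph documentation, the individual counters
can also be queried directly::

 getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir    # nested regular files
 getfattr -n ceph.dir.rsubdirs /mnt/ceph/some/dir  # nested subdirectories
 getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir    # total nested bytes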

Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy.  Quotas can be set using
the extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes',
e.g.::

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

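A file-count quota works the same way, and setting an attribute to 0
removes that limit (the path and values below are illustrative)::

 setfattr -n ceph.quota.max_files -v 10000 /some/dir  # limit nested file count
 setfattr -n ceph.quota.max_bytes -v 0 /some/dir      # remove the byte quota
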
A limitation of the current quota implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is::

 # mount -t ceph user@fsid.fs_name=/[subdir] mnt -o mon_addr=monip1[:port][/monip2[:port]]

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4::

 # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address and the cluster FSID can be left out
(as the mount helper will fill it in by reading the ceph configuration
file)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=mon-addr

Multiple monitor addresses can be passed by separating each address with a slash (`/`)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=192.168.1.100/192.168.1.101

When using the mount helper, the monitor address can be read from the
ceph configuration file if it is available.  Note that the cluster FSID
(passed as part of the device string) is validated by comparing it with
the FSID reported by the monitor.
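
If needed, the cluster FSID for the device string can be obtained by
running the `ceph fsid` command on a cluster node, e.g.::

 # ceph fsid
 07fe3187-00d9-42a3-814b-72a4d5e7d5be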

Mount Options
=============

  mon_addr=ip_address[:port][/ip_address[:port]]
	Monitor address of the cluster.  This is used to bootstrap the
	connection to the cluster.  Once a connection is established, the
	monitor addresses in the monitor map are followed.

  fsid=cluster-id
	FSID of the cluster (from the `ceph fsid` command).

  ip=A.B.C.D[:N]
	Specify the IP and/or port the client should bind to locally.
	There is normally not much reason to do this.  If the IP is not
	specified, the client's IP address is determined by looking at the
	address its connection to the monitor originates from.

  wsize=X
	Specify the maximum write size in bytes.  Default: 64 MB.

  rsize=X
	Specify the maximum read size in bytes.  Default: 64 MB.

  rasize=X
	Specify the maximum readahead size in bytes.  Default: 8 MB.

  mount_timeout=X
	Specify the timeout value for mount (in seconds), in the case
	of a non-responsive Ceph file system.  The default is 60
	seconds.

  caps_max=X
	Specify the maximum number of caps to hold.  Unused caps are
	released when the number of caps exceeds the limit.  The default
	is 0 (no limit).

  rbytes
	When stat() is called on a directory, set st_size to 'rbytes',
	the summation of file sizes over all files nested beneath that
	directory.  This is the default.

  norbytes
	When stat() is called on a directory, set st_size to the
	number of entries in that directory.

  nocrc
	Disable CRC32C calculation for data writes.  If set, the storage
	node must rely on TCP's checksum to detect data corruption in the
	data payload.

  dcache
	Use the dcache contents to perform negative lookups and
	readdir when the client has the entire directory contents in
	its cache.  (This does not change correctness; the client uses
	cached metadata only when a lease or capability ensures it is
	valid.)

  nodcache
	Do not use the dcache as above.  This avoids a significant amount
	of complex code, sacrificing performance without affecting
	correctness, and is useful for tracking down bugs.

  noasyncreaddir
	Do not use the dcache as above for readdir.

  noquotadf
	Report overall filesystem usage in statfs instead of using the
	root directory quota.

  nocopyfrom
	Don't use the RADOS 'copy-from' operation to perform remote object
	copies.  Currently, it's only used in copy_file_range, which will
	revert to the default VFS implementation if this option is used.

  recover_session=<no|clean>
	Set the auto-reconnect mode for the case where the client is
	blocklisted.  The available modes are "no" and "clean".  The
	default is "no".

	* no: never attempt to reconnect when the client detects that it
	  has been blocklisted.  Operations will generally fail after
	  being blocklisted.

	* clean: the client reconnects to the Ceph cluster automatically
	  when it detects that it has been blocklisted.  During the
	  reconnect, the client drops dirty data/metadata and invalidates
	  page caches and writable file handles.  After the reconnect,
	  file locks become stale because the MDS loses track of them.
	  If an inode contains any stale file locks, read/write on the
	  inode is not allowed until applications release all stale file
	  locks.
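
As an illustration, several of the options above can be combined in a
single mount invocation (the addresses and values are examples only)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph \
     -o mon_addr=192.168.1.100/192.168.1.101,rasize=16777216,recover_session=clean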

More Information
================

For more information on Ceph, see the home page at
	https://ceph.com/

The Linux kernel client source tree is available at
	https://github.com/ceph/ceph-client.git

and the source for the full system is at
	https://github.com/ceph/ceph.git