                                    inotify
             a powerful yet simple file change notification system


Document started 15 Mar 2005 by Robert Love <rml@novell.com>
Document updated 4 Jan 2015 by Zhang Zhen <zhenzhang.zhang@huawei.com>
    --Deleted obsolete interface; refer to the manpages for the user interface.

(i) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?

A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.  Watching a file should not require that it be open.

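   As a minimal sketch of how this looks through the current user-space
   interface (see inotify(7) for the authoritative description): the watch
   names its target by path alone, and the target file is never opened, so
   neither the file nor its mount is pinned.  The watched path below is
   hypothetical.

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/inotify.h>
      #include <unistd.h>

      int main(void)
      {
          /* Watches hang off this instance fd, not off the file. */
          int fd = inotify_init1(IN_CLOEXEC);
          if (fd < 0) {
              perror("inotify_init1");
              return EXIT_FAILURE;
          }

          /* Watch by path: the target is never open()ed, so the removable
           * media it lives on can still be unmounted; the kernel then
           * sends IN_UNMOUNT and drops the watch.  (Path hypothetical.) */
          int wd = inotify_add_watch(fd, "/mnt/usb/data.log",
                                     IN_MODIFY | IN_DELETE_SELF);
          if (wd < 0) {
              perror("inotify_add_watch");
              return EXIT_FAILURE;
          }

          /* ... consume events via read(2) on fd ... */

          inotify_rm_watch(fd, wd);
          close(fd);
          return EXIT_SUCCESS;
      }
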
Q: What is the design decision behind using an-fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   more fd's than are feasible to manage, and more fd's than are optimally
   select()-able.  Yes, root can bump the per-process fd limit and yes, users
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file; separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.

   There are other good arguments.  With a single fd, there is a single
   item to block on, which is mapped to a single queue of events.  The single
   fd returns all watch events and also any potential out-of-band data.  If
   every fd were a separate watch,

   - There would be no way to get event ordering.  Events on file foo and
     file bar would pop poll() on both fd's, but there would be no way to tell
     which happened first.  A single queue trivially gives you ordering.  Such
     ordering is crucial to existing applications such as Beagle.  Imagine
     "mv a b ; mv b a" events without ordering.

   - We'd have to maintain n fd's and n internal queues with state,
     versus just one.  It is a lot messier in the kernel.  A single, linear
     queue is the data structure that makes sense.

   - User-space developers prefer the current API.  The Beagle guys, for
     example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - There would be no way to get out-of-band data.

   - 1024 is still too low.  ;-)

   When you talk about designing a file change notification system that
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.

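   To make the single-queue model concrete, here is a minimal sketch (the
   watched directory paths are hypothetical): one instance fd carries the
   watches for two directories, and a single read(2) returns the events from
   both in the order the kernel queued them.

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/inotify.h>
      #include <unistd.h>

      int main(void)
      {
          /* Align the event buffer as inotify(7) recommends. */
          char buf[4096]
              __attribute__((aligned(__alignof__(struct inotify_event))));
          int fd = inotify_init1(0);
          if (fd < 0) {
              perror("inotify_init1");
              return EXIT_FAILURE;
          }

          /* n watches, one fd, one queue.  (Paths hypothetical.) */
          if (inotify_add_watch(fd, "/tmp/a", IN_MOVED_FROM | IN_MOVED_TO) < 0 ||
              inotify_add_watch(fd, "/tmp/b", IN_MOVED_FROM | IN_MOVED_TO) < 0) {
              perror("inotify_add_watch");
              return EXIT_FAILURE;
          }

          /* One blocking read drains the queue in arrival order, so
           * "mv a b ; mv b a" comes back unambiguously sequenced. */
          ssize_t len = read(fd, buf, sizeof(buf));
          if (len <= 0) {
              perror("read");
              return EXIT_FAILURE;
          }
          for (char *p = buf; p < buf + len; ) {
              struct inotify_event *ev = (struct inotify_event *)p;
              printf("wd=%d mask=%#x name=%s\n", ev->wd,
                     (unsigned)ev->mask, ev->len ? ev->name : "");
              p += sizeof(*ev) + ev->len;
          }

          close(fd);
          return EXIT_SUCCESS;
      }
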
   Additionally, it _is_ possible to have more than one instance and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

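   As a sketch of that multi-queue pattern (the paths and the bulk/urgent
   split are hypothetical), each instance gets its own fd and its own
   independently ordered queue:

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/inotify.h>
      #include <unistd.h>

      int main(void)
      {
          /* Two independent queues in one process. */
          int bulk = inotify_init1(IN_NONBLOCK);  /* drained lazily */
          int urgent = inotify_init1(0);          /* blocked on     */
          if (bulk < 0 || urgent < 0) {
              perror("inotify_init1");
              return EXIT_FAILURE;
          }

          inotify_add_watch(bulk, "/home/user/docs",
                            IN_CREATE | IN_MODIFY);
          inotify_add_watch(urgent, "/etc/myapp.conf",
                            IN_CLOSE_WRITE);

          /* ... block on urgent, poll bulk when convenient ... */

          close(bulk);
          close(urgent);
          return EXIT_SUCCESS;
      }
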
Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
   Signals are a terrible, terrible interface for file notification.  Or for
   anything, for that matter.  The ideal solution, from all perspectives, is a
   file descriptor-based one that allows basic file I/O and poll/select.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preferred approach for new kernel
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.
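
   Since the result is an ordinary file descriptor, it drops straight into
   the usual multiplexing primitives with no extra machinery.  A minimal
   sketch, watching a hypothetical directory and waiting via poll(2):

      #include <poll.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/inotify.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = inotify_init1(IN_NONBLOCK);
          if (fd < 0 || inotify_add_watch(fd, "/tmp", IN_CREATE) < 0) {
              perror("inotify");
              return EXIT_FAILURE;
          }

          /* The instance fd sits in the same poll set as sockets,
           * pipes, or anything else the application waits on. */
          struct pollfd pfd = { .fd = fd, .events = POLLIN };
          int n = poll(&pfd, 1, 5000 /* ms */);
          if (n > 0 && (pfd.revents & POLLIN))
              puts("events pending; read(2) them off the fd");
          else if (n == 0)
              puts("no events within five seconds");

          close(fd);
          return EXIT_SUCCESS;
      }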