JFFS2 LOCKING DOCUMENTATION
---------------------------

This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.

alloc_sem
---------

The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.

When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until this happens we ensure that any data in the write-buffer at the
time this happens are part of the new node, not just something that
was written afterwards. Hence, we can ensure the newly-obsoleted nodes
don't actually get erased until the write-buffer has been flushed to
the medium.
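
A rough sketch of that lifecycle is given below: alloc_sem is taken inside
jffs2_reserve_space() and only dropped by jffs2_complete_reservation(), after
the new node has been linked into its inode. Only those two function names
come from the text above; the argument lists are simplified, error handling
is minimal, jffs2_do_link_new_node() is a hypothetical stand-in for the real
linking code, and the usual JFFS2-internal headers are assumed.

    static int example_write_new_node(struct jffs2_sb_info *c,
                                      struct jffs2_inode_info *f)
    {
        uint32_t len;
        int ret;

        /* Takes c->alloc_sem and reserves contiguous space on the medium. */
        ret = jffs2_reserve_space(c, 128, &len, ALLOC_NORMAL, 0);
        if (ret)
            return ret;

        /* Write the new node to flash; on NAND it may sit in the wbuf. */
        /* ... */

        /* Link the node into the inode's data structures while alloc_sem
         * is still held, so anything it obsoletes cannot be erased before
         * the write-buffer carrying the new node reaches the medium. */
        jffs2_do_link_new_node(c, f);   /* hypothetical helper */

        /* Drops c->alloc_sem; other writers and the GC may now proceed. */
        jffs2_complete_reservation(c);
        return 0;
    }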

With the introduction of NAND flash support and the write-buffer,
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.

Ordering constraints: See f->sem.

File Mutex f->sem
-----------------

This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.

The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage-collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock, unless we played games with unlocking the i_sem
before calling the space allocation functions.

Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.

Ordering constraints:
    1. Never attempt to allocate space or lock alloc_sem with
       any f->sem held.
    2. Never attempt to lock two file mutexes in one thread.
       No ordering rules have been made for doing so.
    3. Never lock a page cache page with f->sem held.
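
A minimal sketch of an order that satisfies all three rules; the function is
hypothetical and error handling is omitted:

    static void example_ordering(struct jffs2_sb_info *c,
                                 struct jffs2_inode_info *f,
                                 struct page *pg)
    {
        uint32_t len;

        jffs2_reserve_space(c, 128, &len, ALLOC_NORMAL, 0); /* rule 1 */
        lock_page(pg);              /* rule 3: page lock before f->sem */
        mutex_lock(&f->sem);        /* rule 2: only this one f->sem    */

        /* ... write nodes, update the jffs2_inode_info ... */

        mutex_unlock(&f->sem);
        unlock_page(pg);
        jffs2_complete_reservation(c);  /* drops alloc_sem */
    }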

erase_completion_lock spinlock
------------------------------

This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.

As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().

Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.
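
As a sketch of that walking rule, assuming for illustration a NULL-terminated
list linked by next_in_ino (the real termination convention differs) and a
caller which already holds f->sem:

    static void example_walk_nodes(struct jffs2_sb_info *c,
                                   struct jffs2_raw_node_ref *list)
    {
        struct jffs2_raw_node_ref *ref;

        spin_lock(&c->erase_completion_lock);
        for (ref = list; ref; ref = ref->next_in_ino) {
            if (ref->flash_offset & 1)
                continue;   /* obsolete: may vanish once the lock is dropped */

            /* 'ref' is valid, so the caller's f->sem keeps it alive and the
             * spinlock may be dropped temporarily, e.g. around a sleep. */
            spin_unlock(&c->erase_completion_lock);
            /* ... do something that may sleep ... */
            spin_lock(&c->erase_completion_lock);
        }
        spin_unlock(&c->erase_completion_lock);
    }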

The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
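
The two sides of that protocol look roughly like this (a simplified sketch;
the surrounding bookkeeping in the real code is omitted):

    /* Killing side: */
    static void example_kill_gc_thread(struct jffs2_sb_info *c)
    {
        spin_lock(&c->erase_completion_lock);
        if (c->gc_task)
            send_sig(SIGKILL, c->gc_task, 1);
        spin_unlock(&c->erase_completion_lock);
    }

    /* GC thread exit path: */
    static void example_gc_thread_exit(struct jffs2_sb_info *c)
    {
        spin_lock(&c->erase_completion_lock);
        c->gc_task = NULL;
        spin_unlock(&c->erase_completion_lock);
    }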

inocache_lock spinlock
----------------------

This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has a
corresponding jffs2_inode_cache object). So, the inocache_lock
has to be locked while walking the c->inocache_list hash buckets.

This spinlock also covers allocation of new inode numbers, which is
currently just '++c->highest_ino', but might one day get more complicated
if we need to deal with wrapping after 4 billion inode numbers are used.

Note, the f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed. So, it is allowed to access it without locking
the inocache_lock spinlock.
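
The sketch below illustrates both uses of the inocache_lock. The hash
expression and the ->next linkage are simplifying assumptions, and both
helpers are hypothetical:

    static uint32_t example_new_ino(struct jffs2_sb_info *c)
    {
        uint32_t ino;

        spin_lock(&c->inocache_lock);
        ino = ++c->highest_ino;     /* allocate a new inode number */
        spin_unlock(&c->inocache_lock);
        return ino;
    }

    static struct jffs2_inode_cache *example_find_ic(struct jffs2_sb_info *c,
                                                     uint32_t ino)
    {
        struct jffs2_inode_cache *ic;

        spin_lock(&c->inocache_lock);
        for (ic = c->inocache_list[ino % c->inocache_hashsize]; ic; ic = ic->next)
            if (ic->ino == ino)
                break;
        spin_unlock(&c->inocache_lock);

        /* Using 'ic' after the unlock is safe only because the caller holds
         * f->sem for this inode, per the note above. */
        return ic;
    }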

Ordering constraints:
    If both erase_completion_lock and inocache_lock are needed, the
    c->erase_completion_lock has to be acquired first.
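
Expressed as a sketch, the required nesting is simply:

    static void example_nested(struct jffs2_sb_info *c)
    {
        spin_lock(&c->erase_completion_lock);
        spin_lock(&c->inocache_lock);
        /* ... */
        spin_unlock(&c->inocache_lock);
        spin_unlock(&c->erase_completion_lock);
    }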

erase_free_sem
--------------

This mutex is only used by the erase code which frees obsolete node
references and the jffs2_garbage_collect_deletion_dirent() function.
The latter function on NAND flash must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.
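
The pattern this buys looks roughly as follows. example_read_obsolete_node()
is a hypothetical helper, and the checks that the ref still refers to the
node of interest are omitted:

    static int example_read_obsolete_node(struct jffs2_sb_info *c,
                                          struct jffs2_raw_node_ref *raw,
                                          unsigned char *buf, uint32_t len)
    {
        size_t retlen;
        int ret;

        mutex_lock(&c->erase_free_sem);
        /* The erase code cannot free 'raw' while erase_free_sem is held,
         * and this read may sleep, which erase_completion_lock forbids. */
        ret = jffs2_flash_read(c, ref_offset(raw), len, &retlen, buf);
        mutex_unlock(&c->erase_free_sem);

        return ret;
    }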

Suggestions for alternative solutions to this problem would be welcomed.

wbuf_sem
--------

This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by
the buffer.

Ordering constraints:
    Lock wbuf_sem last, after the alloc_sem and/or f->sem.
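
As a sketch, assuming wbuf_sem is the rw_semaphore in jffs2_sb_info and that
the buffer contents are being modified (a read-only peek could use
down_read() instead):

    static void example_touch_wbuf(struct jffs2_sb_info *c,
                                   struct jffs2_inode_info *f)
    {
        mutex_lock(&f->sem);        /* alloc_sem and/or f->sem first */
        down_write(&c->wbuf_sem);   /* wbuf_sem strictly last        */

        /* ... fill or flush the write-behind buffer ... */

        up_write(&c->wbuf_sem);
        mutex_unlock(&f->sem);
    }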

c->xattr_sem
------------

This read/write semaphore protects against concurrent access to the
xattr-related objects, which include objects in the superblock and ic->xref.
On read-only paths, the read semaphore is sufficient and the write semaphore
would be excessive exclusion. However, you must hold the write semaphore
when creating, updating or deleting any xattr-related object.

Once xattr_sem is released, there is no guarantee that those objects still
exist. Thus, a sequence of operations often has to be retried when updating
such an object turns out to be necessary while holding only the read
semaphore. For example, do_jffs2_getxattr() first holds the read semaphore
to scan the xref and xdatum. But if it needs to load the name/value pair
from the medium, it releases the read semaphore and retries the whole
process while holding the write semaphore.
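
The shape of that retry pattern, as a simplified sketch with a hypothetical
helper and the re-validation details omitted:

    static void example_get_xattr(struct jffs2_sb_info *c)
    {
        int need_load = 0;  /* set if the value must be read from the medium */

        down_read(&c->xattr_sem);
        /* ... scan the xref and xdatum, possibly setting need_load ... */
        if (need_load) {
            up_read(&c->xattr_sem);

            /* The objects may have vanished while no lock was held, so the
             * whole scan is redone under the write semaphore. */
            down_write(&c->xattr_sem);
            /* ... rescan, load the name/value pair, update the xdatum ... */
            up_write(&c->xattr_sem);
            return;
        }
        up_read(&c->xattr_sem);
    }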

Ordering constraints:
    Lock xattr_sem last, after the alloc_sem.