Hardware Spinlock Framework

1. Introduction

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on a simple data structure that is shared between
the remote processors; access to it is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

2. User API

  struct hwspinlock *hwspin_lock_request(void);
   - dynamically assign an hwspinlock and return its address, or NULL
     in case an unused hwspinlock isn't available. Users of this
     API will usually want to communicate the lock's id to the remote core
     before it can be used to achieve synchronization.
     Should be called from a process context (might sleep).

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);
   - assign a specific hwspinlock id and return its address, or NULL
     if that hwspinlock is already in use. Usually board code will
     be calling this function in order to reserve specific hwspinlock
     ids for predefined purposes.
     Should be called from a process context (might sleep).

  int of_hwspin_lock_get_id(struct device_node *np, int index);
   - retrieve the global lock id for an OF phandle-based specific lock.
     This function provides a means for DT users of a hwspinlock module
     to get the global lock id of a specific hwspinlock, so that it can
     be requested using the normal hwspin_lock_request_specific() API.
     The function returns a lock id number on success, -EPROBE_DEFER if
     the hwspinlock device is not yet registered with the core, or other
     error values.
     Should be called from a process context (might sleep).

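A minimal sketch (an editor's illustration, not part of the original API list)
of how a device tree user might combine of_hwspin_lock_get_id() with
hwspin_lock_request_specific(); the helper name and the use of index 0 are
assumptions, and <linux/hwspinlock.h> plus <linux/of.h> are assumed to be
included:

/* hypothetical probe fragment for a DT consumer of a hwspinlock */
static struct hwspinlock *example_request_dt_lock(struct device_node *np)
{
	struct hwspinlock *hwlock;
	int id;

	/* resolve the global id of the first hwspinlock phandle of this node */
	id = of_hwspin_lock_get_id(np, 0);
	if (id < 0)
		return NULL;	/* may be -EPROBE_DEFER; a real driver would propagate it */

	/* reserve that specific lock; NULL means it is already in use */
	hwlock = hwspin_lock_request_specific(id);
	return hwlock;
}

The returned lock is then used exactly like one obtained dynamically with
hwspin_lock_request().
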
  int hwspin_lock_free(struct hwspinlock *hwlock);
   - free a previously-assigned hwspinlock; returns 0 on success, or an
     appropriate error code on failure (e.g. -EINVAL if the hwspinlock
     is already free).
     Should be called from a process context (might sleep).

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled, so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
				  unsigned long *flags);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled,
     local interrupts are disabled and their previous state is saved at the
     given flags placeholder. The caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

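A minimal sketch (an editor's illustration, not from the original text) that
pairs hwspin_lock_timeout_irqsave() with hwspin_unlock_irqrestore(), which is
documented further below; the function name, the shared pointer and the
100 msec timeout are assumptions:

/* hypothetical helper: update a value shared with a remote core */
static int example_update_shared(struct hwspinlock *hwlock, u32 *shared, u32 val)
{
	unsigned long flags;
	int ret;

	/* busy-wait for up to 100 msecs, saving the local interrupt state */
	ret = hwspin_lock_timeout_irqsave(hwlock, 100, &flags);
	if (ret)
		return ret;	/* e.g. -ETIMEDOUT */

	/* critical section: keep it short and do NOT sleep */
	*shared = val;

	/* release the lock and restore the saved interrupt state */
	hwspin_unlock_irqrestore(hwlock, &flags);

	return 0;
}
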
  int hwspin_trylock(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irq(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled,
     the local interrupts are disabled and their previous state is saved
     at the given flags placeholder. The caller must not sleep, and is advised
     to release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

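A similar sketch (again an editor's illustration, not from the original text)
for the non-spinning variant, which gives up immediately if a remote core
holds the lock; the function and parameter names are assumptions:

/* hypothetical helper: post a message only if the lock is free right now */
static int example_try_post(struct hwspinlock *hwlock, u32 *mbox, u32 msg)
{
	unsigned long flags;
	int ret;

	/* single attempt; returns -EBUSY if the lock is already taken */
	ret = hwspin_trylock_irqsave(hwlock, &flags);
	if (ret)
		return ret;

	*mbox = msg;	/* short critical section, no sleeping */

	hwspin_unlock_irqrestore(hwlock, &flags);

	return 0;
}
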
  void hwspin_unlock(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock. Always succeeds, and can be called
     from any context (the function never sleeps). Note: code should _never_
     unlock an hwspinlock which is already unlocked (there is no protection
     against this).

  void hwspin_unlock_irq(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock and enable local interrupts.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption and local
     interrupts are enabled. This function will never sleep.

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);
   - unlock a previously-locked hwspinlock.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption is reenabled,
     and the state of the local interrupts is restored to the state saved at
     the given flags. This function will never sleep.

  int hwspin_lock_get_id(struct hwspinlock *hwlock);
   - retrieve id number of a given hwspinlock. This is needed when an
     hwspinlock is dynamically assigned: before it can be used to achieve
     mutual exclusion with a remote cpu, the id number should be communicated
     to the remote task with which we want to synchronize.
     Returns the hwspinlock id number, or -EINVAL if hwlock is null.

3. Typical usage

#include <linux/hwspinlock.h>
#include <linux/err.h>

int hwspinlock_example1(void)
{
	struct hwspinlock *hwlock;
	int ret;
	int id;

	/* dynamically assign a hwspinlock */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		...

	id = hwspin_lock_get_id(hwlock);
	/* probably need to communicate id to a remote processor now */

	/* take the lock, spin for 1 sec if it's already taken */
	ret = hwspin_lock_timeout(hwlock, 1000);
	if (ret)
		...

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
}

int hwspinlock_example2(void)
{
	struct hwspinlock *hwlock;
	int ret;

	/*
	 * assign a specific hwspinlock id - this should be called early
	 * by board init code.
	 */
	hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
	if (!hwlock)
		...

	/* try to take it, but don't spin on it */
	ret = hwspin_trylock(hwlock);
	if (ret) {
		/* hwspin_trylock() returns 0 on success, -EBUSY if taken */
		pr_info("lock is already taken\n");
		return -EBUSY;
	}

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
}


4. API for implementors

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
		const struct hwspinlock_ops *ops, int base_id, int num_locks);
   - to be called from the underlying platform-specific implementation, in
     order to register a new hwspinlock device (which is usually a bank of
     numerous locks). Should be called from a process context (this function
     might sleep).
     Returns 0 on success, or appropriate error code on failure.

  int hwspin_lock_unregister(struct hwspinlock_device *bank);
   - to be called from the underlying vendor-specific implementation, in order
     to unregister an hwspinlock device (which is usually a bank of numerous
     locks).
     Should be called from a process context (this function might sleep).
     Returns 0 on success, or an appropriate error code on failure (e.g.
     -EBUSY if the hwspinlock is still in use).

5. Important structs

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

/**
 * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
 * @dev: underlying device, will be used to invoke runtime PM api
 * @ops: platform-specific hwspinlock handlers
 * @base_id: id index of the first lock in this device
 * @num_locks: number of locks in this device
 * @lock: dynamically allocated array of 'struct hwspinlock'
 */
struct hwspinlock_device {
	struct device *dev;
	const struct hwspinlock_ops *ops;
	int base_id;
	int num_locks;
	struct hwspinlock lock[0];
};

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock:

/**
 * struct hwspinlock - this struct represents a single hwspinlock instance
 * @bank: the hwspinlock_device structure which owns this lock
 * @lock: initialized and used by hwspinlock core
 * @priv: private data, owned by the underlying platform-specific hwspinlock drv
 */
struct hwspinlock {
	struct hwspinlock_device *bank;
	spinlock_t lock;
	void *priv;
};

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

6. Implementation callbacks

There are three possible callbacks defined in 'struct hwspinlock_ops':

struct hwspinlock_ops {
	int (*trylock)(struct hwspinlock *lock);
	void (*unlock)(struct hwspinlock *lock);
	void (*relax)(struct hwspinlock *lock);
};

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may _not_ sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may _not_ sleep.

The ->relax() callback is optional. It is called by the hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may _not_ sleep.

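As an end-to-end illustration of the implementor side (an editor's sketch, not
part of the original text), the fragment below shows how a hypothetical driver
with one 32-bit lock register per lock might implement the mandatory callbacks
and register a bank with hwspin_lock_register(). The register semantics (a read
returning 0 means the lock was acquired), the bank size, the devm_* helpers and
all names are assumptions; in-tree drivers get the driver-side definitions from
"hwspinlock_internal.h":

#include <linux/err.h>
#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include "hwspinlock_internal.h"	/* struct hwspinlock_device, hwspinlock_ops */

static int example_hwlock_trylock(struct hwspinlock *lock)
{
	void __iomem *reg = lock->priv;

	/* assumed semantics: reading 0 means we now own the lock */
	return readl(reg) == 0;
}

static void example_hwlock_unlock(struct hwspinlock *lock)
{
	void __iomem *reg = lock->priv;

	/* write back the "free" value to release the lock */
	writel(0, reg);
}

static const struct hwspinlock_ops example_hwlock_ops = {
	.trylock = example_hwlock_trylock,
	.unlock	 = example_hwlock_unlock,
	/* .relax is optional and omitted here */
};

static int example_hwlock_probe(struct platform_device *pdev)
{
	struct hwspinlock_device *bank;
	void __iomem *io_base;
	int i, num_locks = 32;	/* assumed bank size */

	io_base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(io_base))
		return PTR_ERR(io_base);

	bank = devm_kzalloc(&pdev->dev,
			    sizeof(*bank) + num_locks * sizeof(*bank->lock),
			    GFP_KERNEL);
	if (!bank)
		return -ENOMEM;

	/* only priv needs to be set; the core initializes the other members */
	for (i = 0; i < num_locks; i++)
		bank->lock[i].priv = io_base + i * sizeof(u32);

	return hwspin_lock_register(bank, &pdev->dev, &example_hwlock_ops,
				    0, num_locks);
}

A corresponding remove path would call hwspin_lock_unregister(bank) before the
device goes away.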