# The on-demand SLG solver

The basis of the solver is the `Forest` type. A *forest* stores a
collection of *tables* as well as a *stack*. Each *table* represents
the stored results of a particular query that is being performed, as
well as the various *strands*, which are basically suspended
computations that may be used to find more answers. Tables are
interdependent: solving one query may require solving others.

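The relationships above can be sketched with illustrative type
definitions; the names and fields here are simplifications, not the
actual chalk-engine types (which carry inference tables, canonicalized
goals, a table stack, and more):

```rust
use std::collections::VecDeque;

#[derive(Clone, Debug, PartialEq)]
struct Goal(String); // stand-in for a u-canonicalized goal

#[derive(Clone, Debug)]
struct Strand {
    // a suspended computation that may yield further answers
    subgoals: Vec<Goal>,
}

struct Table {
    goal: Goal,                // the query this table answers
    answers: Vec<String>,      // cached answers, filled in on demand
    strands: VecDeque<Strand>, // processed round-robin
}

struct Forest {
    // tables are interdependent: one may query another
    tables: Vec<Table>,
}

fn new_forest() -> Forest {
    // Start with one table for the root query; more tables are added
    // on demand as subqueries arise.
    let root = Table {
        goal: Goal("Rc<?0>: Debug".to_string()),
        answers: Vec::new(),
        strands: VecDeque::new(),
    };
    Forest { tables: vec![root] }
}

fn main() {
    let forest = new_forest();
    assert_eq!(forest.tables.len(), 1);
    assert!(forest.tables[0].answers.is_empty());
}
```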
## Walkthrough

Perhaps the easiest way to explain how the solver works is to walk
through an example. Let's imagine that we have the following program:

```rust
trait Debug { }

struct u32 { }
impl Debug for u32 { }

struct Rc<T> { }
impl<T: Debug> Debug for Rc<T> { }

struct Vec<T> { }
impl<T: Debug> Debug for Vec<T> { }
```

Now imagine that we want to find answers for the query `exists<T> {
Rc<T>: Debug }`. The first step would be to u-canonicalize this query; this
is the act of giving canonical names to all the unbound inference variables based on the
order of their left-most appearance, as well as canonicalizing the universes of any
universally bound names (e.g., the `T` in `forall<T> { ... }`). In this case, there are no
universally bound names, but the canonical form Q of the query might look something like:

    Rc<?0>: Debug

where `?0` is a variable in the root universe U0. We would then go and
look for a table with this as the key: since the forest is empty, this
lookup will fail, and we will create a new table T0, corresponding to
the u-canonical goal Q.

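The variable-renumbering half of u-canonicalization can be sketched as
follows; this toy version renames each distinct inference variable to
`?0`, `?1`, ... in order of left-most appearance, and omits universe
canonicalization entirely:

```rust
use std::collections::HashMap;

// Rename inference variables (here just u32 indices) to canonical
// numbers in order of first appearance.
fn canonicalize(vars: &[u32]) -> Vec<u32> {
    let mut map: HashMap<u32, u32> = HashMap::new();
    let mut next = 0;
    vars.iter()
        .map(|&v| {
            *map.entry(v).or_insert_with(|| {
                let n = next;
                next += 1;
                n
            })
        })
        .collect()
}

fn main() {
    // A goal mentioning ?7, ?3, ?7 canonicalizes to ?0, ?1, ?0 --
    // the same goal always produces the same table key.
    assert_eq!(canonicalize(&[7, 3, 7]), vec![0, 1, 0]);
}
```

Because canonicalization is deterministic, two syntactically different
but structurally identical queries map to the same table key.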
### Ignoring negative reasoning and regions

To start, we'll ignore the possibility of negative goals like `not {
Foo }`. We'll phase them in later, as they bring several
complications.

### Creating a table

When we first create a table, we also initialize it with a set of
*initial strands*. A "strand" is kind of like a "thread" for the
solver: it contains a particular way to produce an answer. The initial
set of strands for a goal like `Rc<?0>: Debug` (i.e., a "domain goal")
is determined by looking for *clauses* in the environment. In Rust,
these clauses derive from impls, but also from where-clauses that are
in scope. In the case of our example, there would be three clauses,
each coming from the program. Using a Prolog-like notation, these look
like:

```
(u32: Debug).
(Rc<T>: Debug) :- (T: Debug).
(Vec<T>: Debug) :- (T: Debug).
```

To create our initial strands, then, we will try to apply each of
these clauses to our goal of `Rc<?0>: Debug`. The first and third
clauses are inapplicable because `u32` and `Vec<?0>` cannot be unified
with `Rc<?0>`. The second clause, however, will work.

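The applicability check can be approximated with a toy filter that
compares only the outermost type constructor; real unification recurses
into the arguments and binds inference variables, but the constructor
comparison is what rules out the `u32` and `Vec` clauses here:

```rust
// Extract the outermost type constructor: "Rc<T>" -> "Rc", "u32" -> "u32".
fn head_constructor(ty: &str) -> &str {
    ty.split('<').next().unwrap_or(ty).trim()
}

// A clause can only apply if its head's constructor matches the goal's.
fn clause_applies(goal_self_ty: &str, clause_self_ty: &str) -> bool {
    head_constructor(goal_self_ty) == head_constructor(clause_self_ty)
}

fn main() {
    let goal = "Rc<?0>";
    // The three program clauses from above:
    assert!(!clause_applies(goal, "u32"));    // u32 vs Rc: inapplicable
    assert!(clause_applies(goal, "Rc<T>"));   // Rc vs Rc: applicable
    assert!(!clause_applies(goal, "Vec<T>")); // Vec vs Rc: inapplicable
}
```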
### What is a strand?

Let's talk a bit more about what a strand *is*. In the code, a strand
is the combination of an inference table, an X-clause, and (possibly)
a selected subgoal from that X-clause. But what is an X-clause
(`ExClause`, in the code)? An X-clause pulls together a few things:

- The current state of the goal we are trying to prove;
- A set of subgoals that have yet to be proven;
- A set of floundered subgoals (see the section on floundering below);
- A few things we're ignoring for now: delayed literals and region
  constraints.

The general form of an X-clause is written much like a Prolog clause,
but with somewhat different semantics. Since we're ignoring delayed
literals and region constraints, an X-clause just looks like this:

    G :- L

where G is a goal and L is a set of subgoals that must be proven.
(The L stands for *literal* -- when we address negative reasoning, a
literal will be either a positive or negative subgoal.) The idea is
that if we are able to prove L, then the goal G can be considered true.

In the case of our example, we would wind up creating one strand, with
an X-clause like so:

    (Rc<?T>: Debug) :- (?T: Debug)

Here, the `?T` refers to one of the inference variables created in the
inference table that accompanies the strand. (I'll use named variables
to refer to inference variables, and numbered variables like `?0` to
refer to variables in a canonicalized goal; in the code, however, they
are both represented with an index.)

For each strand, we also optionally store a *selected subgoal*. This
is the subgoal after the turnstile (`:-`) that we are currently trying
to prove in this strand. Initially, when a strand is first created,
there is no selected subgoal.

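An X-clause and its optionally selected subgoal can be sketched like
this; the field names are illustrative simplifications, and the real
`ExClause` also carries the delayed literals and region constraints we
are ignoring:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Literal(String);

#[derive(Clone, Debug)]
struct ExClause {
    goal: Literal,            // G: what this strand would prove
    subgoals: Vec<Literal>,   // L: subgoals still to be proven
    floundered: Vec<Literal>, // subgoals set aside as floundered
}

struct Strand {
    ex_clause: ExClause,
    // (index of the selected subgoal, answer index we are waiting on);
    // None until the strand is first activated.
    selected: Option<(usize, usize)>,
}

fn select_first_subgoal(strand: &mut Strand) {
    // On first activation, pick a subgoal and start asking for its
    // 0th answer.
    if strand.selected.is_none() && !strand.ex_clause.subgoals.is_empty() {
        strand.selected = Some((0, 0));
    }
}

fn main() {
    let mut s = Strand {
        ex_clause: ExClause {
            goal: Literal("Rc<?T>: Debug".into()),
            subgoals: vec![Literal("?T: Debug".into())],
            floundered: vec![],
        },
        selected: None,
    };
    select_first_subgoal(&mut s);
    // The strand is now `(Rc<?T>: Debug) :- selected(?T: Debug, A0)`.
    assert_eq!(s.selected, Some((0, 0)));
}
```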
### Activating a strand

Now that we have created the table T0 and initialized it with strands,
we have to actually try and produce an answer. We do this by invoking
the `ensure_answer` operation on the table: specifically, we say
`ensure_answer(T0, A0)`, meaning "ensure that there is a 0th answer".

Remember that tables store not only strands, but also a vector of
cached answers. The first thing that `ensure_answer` does is to check
whether answer 0 is in this vector. If so, we can just return
immediately. In this case, the vector will be empty, and hence that
does not apply (this becomes important for cyclic checks later on).

When there is no cached answer, `ensure_answer` will try to produce
one. It does this by selecting a strand from the set of active
strands -- the strands are stored in a `VecDeque` and hence processed
in a round-robin fashion. Right now, we have only one strand, storing
the following X-clause with no selected subgoal:

    (Rc<?T>: Debug) :- (?T: Debug)

When we activate the strand, we see that we have no selected subgoal,
and so we first pick one of the subgoals to process. Here, there is only
one (`?T: Debug`), so that becomes the selected subgoal, changing
the state of the strand to:

    (Rc<?T>: Debug) :- selected(?T: Debug, A0)

Here, we write `selected(L, An)` to indicate that (a) the literal `L`
is the selected subgoal and (b) which answer `An` we are looking for. We
start out looking for `A0`.

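The cache-then-activate shape of `ensure_answer` can be sketched as
follows. This is a deliberately toy version: each "strand" here simply
yields a canned answer, where the real solver would recursively process
the strand's subgoals.

```rust
use std::collections::VecDeque;

struct Table {
    answers: Vec<String>,      // cached answers, grown on demand
    strands: VecDeque<String>, // toy strands: each yields one answer
}

#[derive(Debug, PartialEq)]
enum EnsureResult {
    Answer(String),
    NoMoreAnswers,
}

fn ensure_answer(table: &mut Table, n: usize) -> EnsureResult {
    loop {
        // Fast path: answer n is already cached.
        if let Some(a) = table.answers.get(n) {
            return EnsureResult::Answer(a.clone());
        }
        // Otherwise activate the next strand, round-robin, until we
        // either produce answer n or run out of strands.
        match table.strands.pop_front() {
            Some(answer) => table.answers.push(answer),
            None => return EnsureResult::NoMoreAnswers,
        }
    }
}

fn main() {
    let mut t1 = Table {
        answers: vec![],
        strands: VecDeque::from(vec!["?0 = u32".to_string()]),
    };
    // First request does real work; repeating it would hit the cache.
    assert_eq!(
        ensure_answer(&mut t1, 0),
        EnsureResult::Answer("?0 = u32".to_string())
    );
    // Asking for a second answer exhausts the strands.
    assert_eq!(ensure_answer(&mut t1, 1), EnsureResult::NoMoreAnswers);
}
```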
### Processing the selected subgoal

Next, we have to try and find an answer to this selected goal. To do
that, we will u-canonicalize it and try to find an associated
table. In this case, the u-canonical form of the subgoal is `?0:
Debug`: we don't have a table yet for that, so we can create a new
one, T1. As before, we'll initialize T1 with strands. In this case,
there will be three strands, because all the program clauses are
potentially applicable. Those three strands will be:

- `(u32: Debug) :-`, derived from the program clause `(u32: Debug).`.
  - Note: This strand has no subgoals.
- `(Vec<?U>: Debug) :- (?U: Debug)`, derived from the `Vec` impl.
- `(Rc<?V>: Debug) :- (?V: Debug)`, derived from the `Rc` impl.

We can thus summarize the state of the whole forest at this point as
follows:

```
Table T0 [Rc<?0>: Debug]
  Strands:
    (Rc<?T>: Debug) :- selected(?T: Debug, A0)

Table T1 [?0: Debug]
  Strands:
    (u32: Debug) :-
    (Vec<?U>: Debug) :- (?U: Debug)
    (Rc<?V>: Debug) :- (?V: Debug)
```

### Delegation between tables

Now that the active strand from T0 has created the table T1, it can
try to extract an answer. It does this via that same `ensure_answer`
operation we saw before. In this case, the strand would invoke
`ensure_answer(T1, A0)`, since we will start with the first
answer. This will cause T1 to activate its first strand,
`(u32: Debug) :-`.

This strand is somewhat special: it has no subgoals at all. This means
that the goal is proven. We can therefore add `u32: Debug` to the set
of *answers* for our table, calling it answer A0 (it is the first
answer). The strand is then removed from the list of strands.

The state of table T1 is therefore:

```
Table T1 [?0: Debug]
  Answers:
    A0 = [?0 = u32]
  Strands:
    (Vec<?U>: Debug) :- (?U: Debug)
    (Rc<?V>: Debug) :- (?V: Debug)
```

Note that I am writing out the answer A0 as a substitution that can be
applied to the table goal; actually, in the code, the goals for each
X-clause are also represented as substitutions, but in this exposition
I've chosen to write them as full goals, following NFTD.

Since we now have an answer, `ensure_answer(T1, A0)` will return `Ok`
to the table T0, indicating that answer A0 is available. T0 now has
the job of incorporating that result into its active strand. It does
this in two ways. First, it creates a new strand that is looking for
the next possible answer of T1. Next, it incorporates the answer from
A0 and removes the subgoal. The resulting state of table T0 is:

```
Table T0 [Rc<?0>: Debug]
  Strands:
    (Rc<?T>: Debug) :- selected(?T: Debug, A1)
    (Rc<u32>: Debug) :-
```

We then immediately activate the strand that incorporated the answer
(the `(Rc<u32>: Debug) :-` one). In this case, that strand has no further
subgoals, so it becomes an answer to the table T0. This answer can
then be returned up to our caller, and the whole forest goes quiescent
at this point (remember, we only do enough work to generate *one*
answer). The ending state of the forest at this point will be:

```
Table T0 [Rc<?0>: Debug]
  Answers:
    A0 = [?0 = u32]
  Strands:
    (Rc<?T>: Debug) :- selected(?T: Debug, A1)

Table T1 [?0: Debug]
  Answers:
    A0 = [?0 = u32]
  Strands:
    (Vec<?U>: Debug) :- (?U: Debug)
    (Rc<?V>: Debug) :- (?V: Debug)
```

Here you can see how the forest captures both the answers we have
created thus far *and* the strands that will let us try to produce
more answers later on.

### Floundering

The first thing we do when we create a table is to initialize it with
a set of strands. These strands represent all the ways that one can
solve the table's associated goal. For an ordinary trait, we would
effectively create one strand per "relevant impl". But sometimes the
goals are too vague for this to be possible; other times, it may be possible
but just really inefficient, since all of those strands must be explored.

As an example of when it may not be possible, consider a goal like
`?T: Sized`. This goal can in principle enumerate **every sized type**
that exists -- that includes not only struct/enum types, but also
closure types, fn types with arbitrary arity, tuple types with
arbitrary arity, and so forth. In other words, there is not only an
infinite set of **answers** but actually an infinite set of
**strands**. The same applies to auto traits like `Send` as well as
"hybrid" traits like `Clone`, which contain *some* auto-generated sets
of impls.

261
262 Another example of floundering comes from negative logic. In general,
263 we cannot process negative subgoals if they have unbound existential
264 variables, such as `not { Vec<?T>: Foo }`. This is because we can only
265 enumerate things that *do* match a given trait (or which *are*
266 provable, more generally). We cannot enumerate out possible types `?T`
267 that *are not* provable (there is an infinite set, to be sure).
268
269 To handle this, we allow tables to enter a **floundered** state. This
270 state begins when we try to create the program clauses for a table --
271 if that is not possible (e.g., in one of the cases above) then the
272 table enters a floundered state. Attempts to get an answer from a
273 floundered table result in an error (e.g.,
274 `RecursiveSearchFail::Floundered`).
275
276 Whenever a goal results in a floundered result, that goal is placed
277 into a distinct list (the "floundered subgoals"). We then go on and
278 process the rest of the subgoals. Once all the normal subgoals have
279 completed, floundered subgoals are removed from the floundered list
280 and re-attempted: the idea is that we may have accumulated more type
281 information in the meantime. If they continue to flounder, then we
282 stop.
283
284 Let's look at an example. Imagine that we have:
285
286 ```rust
287 trait Foo { }
288 trait Bar { }
289
290 impl<T: Send + Bar> Foo for T { }
291
292 impl Bar for u32 { }
293 impl Bar for i32 { }
294 ```
295
296 Now imagine we are trying to prove `?T: Foo`. There is only one impl,
297 so we will create a state like:
298
299 ```
300 (?T: Foo) :- (?T: Send), (?T: Bar)
301 ```
302
303 When we attempt to solve `?T: Send`, however, that subgoal will
304 flounder, because `Send` is an auto-trait. So it will be put into a
305 floundered list:
306
307 ```
308 (?T: Foo) :- (?T: Bar) [ floundered: (?T: Send) ]
309 ```
310
311 and we will go on to solve `?T: Bar`. `Bar` is an ordinary trait -- so
312 we can enumerate two strands (one for `u32` and one for `i32`). When
313 we process the first answer, we wind up with:
314
315 ```
316 (u32: Foo) :- [ floundered: (u32: Send) ]
317 ```
318
319 At this point we can move the floundered subgoal back into the main
320 list and process:
321
322 ```
323 (u32: Foo) :- (u32: Send)
324 ```
325
326 This time, the goal does not flounder.
327
But how do we detect when it makes sense to move a floundered subgoal
back into the main list? To handle this, we use a timestamp scheme. We
keep a counter as part of the strand -- each time we succeed in
solving some subgoal, we increment the counter, as that *may* have
provided more information about some type variables. When we move a
goal to the floundered list, we also record the current value of the
timestamp. Then, when it comes time to move things *from* the
floundered list, we can check whether the timestamp has changed
since the goal was marked as floundering. If not, then no new
information can have been gained, and we mark the current table
as being floundered itself.

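The timestamp scheme can be sketched like this; the field and method
names are illustrative, not the actual chalk-engine ones:

```rust
struct FlounderedSubgoal {
    goal: String,
    floundered_at: u64, // strand clock value when this subgoal floundered
}

struct Strand {
    clock: u64, // bumped each time a subgoal is solved
    floundered: Vec<FlounderedSubgoal>,
}

impl Strand {
    fn subgoal_solved(&mut self) {
        // Solving a subgoal may have bound more type variables.
        self.clock += 1;
    }

    // Subgoals worth retrying: the clock moved since they floundered.
    // If none qualify, the table itself should be marked floundered.
    fn retryable(&self) -> Vec<&FlounderedSubgoal> {
        self.floundered
            .iter()
            .filter(|f| f.floundered_at < self.clock)
            .collect()
    }
}

fn main() {
    let mut strand = Strand { clock: 0, floundered: vec![] };
    strand.floundered.push(FlounderedSubgoal {
        goal: "?T: Send".to_string(),
        floundered_at: strand.clock,
    });
    // No progress since floundering: retrying would be pointless.
    assert!(strand.retryable().is_empty());
    // Solving `?T: Bar` binds `?T` and bumps the clock...
    strand.subgoal_solved();
    // ...so the floundered subgoal is now worth another attempt.
    assert_eq!(strand.retryable().len(), 1);
}
```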
This mechanism allows floundering to propagate up many levels, e.g.
in an example like this:

```rust
trait Foo { }
trait Bar { }
trait Baz { }

impl<T: Baz + Bar> Foo for T { }

impl Bar for u32 { }
impl Bar for i32 { }

impl<T: Sized> Baz for T { }
```

Here, solving `?T: Baz` will in turn invoke `?T: Sized` -- this
floundering state will be propagated up to the `?T: Foo` table.

## Heritage and acronyms

This solver implements the SLG solving technique, though extended to
accommodate hereditary Harrop (HH) predicates, as well as the needs of
lazy normalization.

Its design is kind of a fusion of [MiniKanren] and the following
papers, which I will refer to as EWFS and NFTD respectively:

> Efficient Top-Down Computation of Queries Under the Well-Founded Semantics
> (Chen, Swift, and Warren; Journal of Logic Programming '95)

> A New Formulation of Tabled Resolution With Delay
> (Swift; EPIA '99)

[MiniKanren]: http://minikanren.org/

In addition, I incorporated extensions from the following papers,
which I will refer to as SA and RR respectively, which describe how to
introduce approximation when processing subgoals and so forth:

> Terminating Evaluation of Logic Programs with Finite Three-Valued Models
> (Riguzzi and Swift; ACM Transactions on Computational Logic 2013)
> (Introduces "subgoal abstraction", hence the name SA)
>
> Radial Restraint
> (Grosof and Swift; 2013)

Another useful paper that gives a kind of high-level overview of the
concepts at play is the following:

> XSB: Extending Prolog with Tabled Logic Programming
> (Swift and Warren; Theory and Practice of Logic Programming '10)

There are a few places where I intentionally diverged from the semantics
as described in the papers -- e.g., by more aggressively approximating
-- which I have marked with a `DIVERGENCE` comment. Those places may
be worth re-evaluating in the future.

A few other acronyms that I use:

- WAM: Warren abstract machine, an efficient way to evaluate Prolog programs.
  See <http://wambook.sourceforge.net/>.
- HH: Hereditary Harrop predicates. What Chalk deals in.
  Popularized by Lambda Prolog.