# Variance of type and lifetime parameters

For a more general background on variance, see the [background] appendix.

[background]: ./appendix/background.html

During type checking we must infer the variance of type and lifetime
parameters. The algorithm is taken from Section 4 of the paper ["Taming the
Wildcards: Combining Definition- and Use-Site Variance"][pldi11] published in
PLDI'11 and written by Altidor et al., and hereafter referred to as The Paper.

[pldi11]: https://people.cs.umass.edu/~yannis/variance-extended2011.pdf

This inference is explicitly designed *not* to consider the uses of
types within code. To determine the variance of type parameters
defined on type `X`, we only consider the definition of the type `X`
and the definitions of any types it references.

We only infer variance for type parameters found on *data types*
like structs and enums. In these cases, there is a fairly straightforward
explanation for what variance means. The variance of the type
or lifetime parameters defines whether `T<A>` is a subtype of `T<B>`
(resp. `T<'a>` and `T<'b>`) based on the relationship of `A` and `B`
(resp. `'a` and `'b`).
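
For example, `Vec<T>` is covariant in `T`, so a `Vec<&'static str>` can be
used as a `Vec<&'a str>` for a shorter lifetime `'a`:

```rust
fn main() {
    let s = String::from("not static");
    let v: Vec<&'static str> = vec!["hello", "world"];
    // Covariance: Vec<&'static str> <: Vec<&'a str>, so the elements of
    // `w` may have a lifetime shorter than 'static...
    let mut w: Vec<&str> = v;
    // ...which is what lets us push a non-'static reference into it.
    w.push(&s);
    assert_eq!(w.len(), 3);
}
```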

We do not infer variance for type parameters found on traits, functions,
or impls. Variance on trait parameters can indeed make sense
(and we used to compute it) but it is actually rather subtle in
meaning and not that useful in practice, so we removed it. See the
[addendum] for some details. Variances on function/impl parameters, on the
other hand, don't make sense, because these parameters are instantiated and
then forgotten; they don't persist in types or compiled byproducts.

[addendum]: #addendum

> **Notation**
>
> We use the notation of The Paper throughout this chapter:
>
> - `+` is _covariance_.
> - `-` is _contravariance_.
> - `*` is _bivariance_.
> - `o` is _invariance_.

## The algorithm

The basic idea is quite straightforward. We iterate over the types
defined and, for each use of a type parameter `X`, accumulate a
constraint indicating that the variance of `X` must be valid for the
variance of that use site. We then iteratively refine the variance of
`X` until all constraints are met. There is *always* a solution, because at
the limit we can declare all type parameters to be invariant and all
constraints will be satisfied.

As a simple example, consider:

```rust,ignore
enum Option<A> { Some(A), None }
enum OptionalFn<B> { Some(fn(B)), None }
enum OptionalMap<C> { Some(fn(C) -> C), None }
```

Here, we will generate the constraints:

```text
1. V(A) <= +
2. V(B) <= -
3. V(C) <= +
4. V(C) <= -
```

These indicate that (1) the variance of A must be at most covariant;
(2) the variance of B must be at most contravariant; and (3, 4) the
variance of C must be at most covariant *and* contravariant. All of these
results are based on a variance lattice defined as follows:

```text
     *      Top (bivariant)
  -     +
     o      Bottom (invariant)
```

Based on this lattice, the solution `V(A)=+`, `V(B)=-`, `V(C)=o` is the
optimal solution. Note that there is always a naive solution which
just declares all variables to be invariant.

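To make the lattice concrete, here is a minimal sketch (not rustc's actual
code) of the lattice and its greatest-lower-bound operation as a Rust enum;
solving the constraints above amounts to starting each `V(X)` at `*` and
meeting it with the variance of every use site:

```rust
/// A minimal sketch of the variance lattice, for illustration only.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Variance {
    Bivariant,     // * (top)
    Covariant,     // +
    Contravariant, // -
    Invariant,     // o (bottom)
}

impl Variance {
    /// Greatest lower bound in the lattice: `*` is the top element,
    /// `+` and `-` are incomparable, and `o` is the bottom.
    fn meet(self, other: Variance) -> Variance {
        use Variance::*;
        match (self, other) {
            (Bivariant, v) | (v, Bivariant) => v,
            (a, b) if a == b => a,
            _ => Invariant,
        }
    }
}

fn main() {
    use Variance::*;
    // Solving `V(C)` from constraints (3) and (4) above:
    let v_c = Bivariant.meet(Covariant).meet(Contravariant);
    assert_eq!(v_c, Invariant);
}
```
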
You may be wondering why fixed-point iteration is required. The reason
is that the variance of a use site may itself be a function of the
variance of other type parameters. In full generality, our constraints
take the form:

```text
V(X) <= Term
Term := + | - | * | o | V(X) | Term x Term
```

Here the notation `V(X)` indicates the variance of a type/region
parameter `X` with respect to its defining class. `Term x Term`
represents the "variance transform" as defined in The Paper:

> If the variance of a type variable `X` in type expression `E` is `V2`
> and the definition-site variance of the corresponding type parameter
> of a class `C` is `V1`, then the variance of `X` in the type expression
> `C<E>` is `V3 = V1.xform(V2)`.

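Sketched in code (a simplification along the lines of the compiler's
`Variance::xform`; the `Variance` enum is the one from the sketch above):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Variance { Bivariant, Covariant, Contravariant, Invariant }

/// A sketch of `v1.xform(v2)`: the variance of a use at variance `v2`
/// nested inside a position of variance `v1`.
fn xform(v1: Variance, v2: Variance) -> Variance {
    use Variance::*;
    match (v1, v2) {
        (Covariant, v) => v,                         // + x v = v
        (Contravariant, Covariant) => Contravariant, // - x + = -
        (Contravariant, Contravariant) => Covariant, // - x - = +
        (Contravariant, v) => v,                     // - x o = o, - x * = *
        (Invariant, _) => Invariant,                 // o x v = o
        (Bivariant, _) => Bivariant,                 // * x v = *
    }
}

fn main() {
    use Variance::*;
    // For example, an argument of an argument is covariant: - x - = +.
    assert_eq!(xform(Contravariant, Contravariant), Covariant);
}
```
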
## Constraints

If I have a struct or enum with where clauses:

```rust,ignore
struct Foo<T: Bar> { ... }
```

you might wonder whether the variance of `T` with respect to `Bar` affects the
variance of `T` with respect to `Foo`. I claim no. The reason: assume that `T` is
invariant with respect to `Bar` but covariant with respect to `Foo`. And then
we have a `Foo<X>` that is upcast to `Foo<Y>`, where `X <: Y`. However, while
`X : Bar` holds, `Y : Bar` does not. In that case, the upcast will be illegal,
not because of a variance failure, but rather because the target type
`Foo<Y>` is itself just not well-formed. Basically we get to assume
well-formedness of all types involved before considering variance.

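As a hypothetical sketch using lifetimes, with `&'static str` playing the
role of `X` and `&'a str` the role of `Y` (the names `Bar`, `Foo`, and
`upcast` are invented for illustration):

```rust,ignore
trait Bar {}
impl Bar for &'static str {}

struct Foo<T: Bar>(T);

// `&'static str <: &'a str` and `Foo` is covariant in `T`, but this is
// still rejected: `Foo<&'a str>` is not well-formed, because
// `&'a str: Bar` does not hold for an arbitrary `'a`.
fn upcast<'a>(x: Foo<&'static str>) -> Foo<&'a str> {
    x // error: the trait bound `&'a str: Bar` is not satisfied
}
```
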
### Dependency graph management

Because variance is a whole-crate inference, its dependency graph
can become quite muddled if we are not careful. To resolve this, we refactor
into two queries:

- `crate_variances` computes the variance for all items in the current crate.
- `variances_of` accesses the variance for an individual item; it
  works by requesting `crate_variances` and extracting the relevant data.

If you limit yourself to reading `variances_of`, your code will then
depend only on the inference of that particular item.

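For instance (a hypothetical caller inside the compiler; `tcx` and
`item_def_id` are assumed to be in scope):

```rust,ignore
// Reading one item's variances records a dependency on
// `variances_of(item_def_id)` only, even though the result is extracted
// from the crate-wide `crate_variances` computation.
let variances = tcx.variances_of(item_def_id);
```
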
Ultimately, this setup relies on the [red-green algorithm][rga]. In particular,
every variance query effectively depends on all type definitions in the entire
crate (through `crate_variances`), but since most changes will not result in a
change to the actual results from variance inference, the `variances_of` query
will wind up being considered green after it is re-evaluated.

[rga]: ./queries/incremental-compilation.html

<a name="addendum"></a>

## Addendum: Variance on traits

As mentioned above, we used to permit variance on traits. This was
computed based on the appearance of trait type parameters in
method signatures and was used to represent the compatibility of
vtables in trait objects (and also "virtual" vtables or dictionaries
for trait bounds). One complication was that variance for
associated types is less obvious, since they can be projected out
and put to myriad uses, so it's not clear when it is safe to allow
`X<A>::Bar` to vary (or indeed just what that means). Moreover (as
covered below) all inputs on any trait with an associated type had
to be invariant, limiting the applicability. Finally, the
annotations (`MarkerTrait`, `PhantomFn`) needed to ensure that all
trait type parameters had a variance were confusing and annoying
for little benefit.

Just for historical reference, I am going to preserve some text indicating how
one could interpret variance and trait matching.

### Variance and object types

Just as with structs and enums, we can decide the subtyping
relationship between two object types `&Trait<A>` and `&Trait<B>`
based on the relationship of `A` and `B`. Note that for object
types we ignore the `Self` type parameter – it is unknown, and
the nature of dynamic dispatch ensures that we will always call a
function that expects the appropriate `Self` type. However, we
must be careful with the other type parameters, or else we could
end up calling a function that is expecting one type but is provided
with another.

To see what I mean, consider a trait like so:

```rust
trait ConvertTo<A> {
    fn convertTo(&self) -> A;
}
```

Intuitively, if we had one object `O=&ConvertTo<Object>` and another
`S=&ConvertTo<String>`, then `S <: O` because `String <: Object`
(presuming Java-like "string" and "object" types, my go-to examples
for subtyping). The actual algorithm would be to compare the
(explicit) type parameters pairwise respecting their variance: here,
the type parameter `A` is covariant (it appears only in a return
position), and hence we require that `String <: Object`.

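Today's Rust treats trait object type parameters invariantly, but the same
"return position is covariant" intuition is visible with function pointer
types (`widen` is an invented name):

```rust
// `fn() -> T` is covariant in `T`: a producer of `&'static str` can be
// used wherever a producer of `&'a str` is expected, because
// `&'static str <: &'a str`.
fn widen<'a>(f: fn() -> &'static str) -> fn() -> &'a str {
    f // accepted: covariance in the return position
}
```

By contrast, `fn(T)` puts `T` in argument position and is contravariant
in `T`.
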
You'll note though that we did not consider the binding for the
(implicit) `Self` type parameter: in fact, it is unknown, so that's
good. The reason we can ignore that parameter is precisely because we
don't need to know its value until a call occurs, and at that time (as
discussed above) the dynamic nature of virtual dispatch means the code we run
will be correct for whatever value `Self` happens to be bound to for
the particular object whose method we called. `Self` is thus different
from `A`, because the caller requires that `A` be known in order to
know the return type of the method `convertTo()`. (As an aside, we
have rules preventing methods where `Self` appears outside of the
receiver position from being called via an object.)

### Trait variance and vtable resolution

But traits aren't only used with objects. They're also used when
deciding whether a given impl satisfies a given trait bound. To set the
scene here, imagine I had a function:

```rust,ignore
fn convertAll<A, T: ConvertTo<A>>(v: &[T]) { ... }
```

Now imagine that I have an implementation of `ConvertTo` for `Object`:

```rust,ignore
impl ConvertTo<i32> for Object { ... }
```

And I want to call `convertAll` on an array of strings. Suppose
further that for whatever reason I specifically supply the value of
`String` for the type parameter `T`:

```rust,ignore
let mut vector = vec!["string", ...];
convertAll::<i32, String>(vector);
```

Is this legal? To put it another way, can we apply the `impl` for
`Object` to the type `String`? The answer is yes, but to see why
we have to expand out what will happen:

- `convertAll` will create a pointer to one of the entries in the
  vector, which will have type `&String`.
- It will then call the impl of `convertTo()` that is intended
  for use with objects. This has the type `fn(self: &Object) -> i32`.

It is OK to provide a value for `self` of type `&String` because
`&String <: &Object`.

OK, so intuitively we want this to be legal, so let's bring this back
to variance and see whether we are computing the correct result. We
must first figure out how to phrase the question "is an impl for
`Object, i32` usable where an impl for `String, i32` is expected?"

Maybe it's helpful to think of a dictionary-passing implementation of
type classes. In that case, `convertAll()` takes an implicit parameter
representing the impl. In short, we *have* an impl of type:

```text
V_O = ConvertTo<i32> for Object
```

and the function prototype expects an impl of type:

```text
V_S = ConvertTo<i32> for String
```

As with any argument, this is legal if the type of the value given
(`V_O`) is a subtype of the type expected (`V_S`). So is `V_O <: V_S`?
The answer will depend on the variance of the various parameters. In
this case, because the `Self` parameter is contravariant and `A` is
covariant, it means that:

```text
V_O <: V_S iff
    i32 <: i32
    String <: Object
```

These conditions are satisfied and so we are happy.

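A compiling Rust sketch of this dictionary-passing view, using lifetimes
for the subtyping: `&'a str` stands in for `Object` (the supertype) and
`&'static str` for `String` (the subtype); the `Dict` type and function
names are invented for illustration:

```rust
// The impl becomes a record of function pointers. `fn(S) -> i32` is
// contravariant in `S`, so `Dict<S>` is contravariant in `S`.
struct Dict<S> {
    convert_to: fn(S) -> i32,
}

// A dictionary written for the supertype is usable where one for the
// subtype is expected, mirroring `V_O <: V_S` above.
fn use_as_subtype_dict<'a>(d: Dict<&'a str>) -> Dict<&'static str> {
    d // accepted: contravariance of `Dict` in `S`
}
```
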
### Variance and associated types

Traits with associated types – or at minimum projection
expressions – must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:

```text
<T as Trait> <: <U as Trait>
```

means that if I know that `T as Trait`, I also know that `U as
Trait`. Moreover, if you think of it as dictionary-passing style,
it means that a dictionary for `<T as Trait>` is safe to use where
a dictionary for `<U as Trait>` is expected.

The problem is that when you can project types out from `<T as
Trait>`, the relationship to types projected out of `<U as Trait>`
is completely unknown unless `T==U` (see #21726 for more
details). Making `Trait` invariant ensures that this is true.

Another related reason is that if we didn't make traits with
associated types invariant, then projection is no longer a
function with a single result. Consider:

```rust,ignore
trait Identity { type Out; fn foo(&self); }
impl<T> Identity for T { type Out = T; fn foo(&self) {} }
```

Now if I have `<&'static () as Identity>::Out`, this can be
validly derived as `&'a ()` for any `'a`:

```text
<&'a () as Identity> <: <&'static () as Identity>
    if &'static () <: &'a ()   -- Identity is contravariant in Self
    if 'static : 'a            -- Subtyping rules for relations
```

This change, on the other hand, means that `<&'static () as Identity>::Out` is
always `&'static ()` (which might then be upcast to `&'a ()`,
separately). This was helpful in solving #21750.
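
A compiling sketch of that final point (`project` is an invented name):

```rust
trait Identity { type Out; }
impl<T> Identity for T { type Out = T; }

// Because the trait's inputs are invariant, the projection normalizes to
// exactly one type: `<&'static () as Identity>::Out` is `&'static ()`,
// which can then be upcast to `&'a ()` as a separate subtyping step.
fn project<'a>(x: <&'static () as Identity>::Out) -> &'a () {
    x
}
```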