/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
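
Plain Rust (not compiler code) can illustrate the two properties this
relies on: an early exit only runs the drops scheduled so far, and
scheduled drops run in LIFO order. A minimal sketch; the `Noisy` and
`drop_order` names are invented for illustration:

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which values are dropped.
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Noisy(&'static str);
impl Drop for Noisy {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push(self.0));
    }
}

fn drop_order(cond: bool) -> Vec<&'static str> {
    LOG.with(|l| l.borrow_mut().clear());
    loop {
        let _x = Noisy("x");
        // Exiting here drops only `x`; `y` has not been scheduled yet.
        if cond { break; }
        let _y = Noisy("y");
        // Exiting here drops `y` first, then `x`: drops are a stack,
        // last scheduled, first dropped.
        break;
    }
    LOG.with(|l| l.borrow().clone())
}

fn main() {
    assert_eq!(drop_order(true), vec!["x"]);
    assert_eq!(drop_order(false), vec!["y", "x"]);
}
```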

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.
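
The [DropTree] stores each recorded drop as a node whose parent is the
next drop to run, so exit paths that share a suffix of drops share
nodes. A much-simplified model of that deduplication, with invented
names and plain `usize` indices standing in for `DropIdx`:

```rust
use std::collections::HashMap;

// Node 0 is the root: the point reached once every drop on a path is
// done. Every other node is one drop; `parents[i]` is the drop to run
// after node `i`.
struct DropTree {
    parents: Vec<usize>,
    previous: HashMap<(usize, &'static str), usize>,
}

impl DropTree {
    fn new() -> Self {
        DropTree { parents: vec![0], previous: HashMap::new() }
    }

    // Mirrors the real `add_drop`: reuse the existing node for
    // (next, local) if there is one, so shared suffixes share nodes.
    fn add_drop(&mut self, local: &'static str, next: usize) -> usize {
        let parents = &mut self.parents;
        *self.previous.entry((next, local)).or_insert_with(|| {
            parents.push(next);
            parents.len() - 1
        })
    }
}

fn main() {
    let mut tree = DropTree::new();
    // One early exit with drops scheduled for `x` then `y`...
    let a = tree.add_drop("x", 0);
    let a = tree.add_drop("y", a);
    // ...and a second exit from deeper in, dropping `x`, `y`, `z`.
    let b = tree.add_drop("x", 0);
    let b = tree.add_drop("y", b);
    let b = tree.add_drop("z", b);
    // The x/y prefix is shared; only `z` needed a new node.
    assert_eq!(tree.parents.len(), 4); // root + x + y + z
    assert_eq!(tree.parents[b], a); // after `z`, the shared `y` node runs
}
```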

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
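
Resolving a `break` or `continue` target amounts to searching this
stack from the innermost scope outwards, as the builder does with
`rposition`. A toy sketch with invented names (`find_breakable`,
string labels instead of `region::Scope`):

```rust
// Search the breakable-scope stack from the innermost scope outwards
// for the scope a `break`/`continue` names (cf. `get_scope_index` in
// `break_scope`, which uses `rposition` on `breakable_scopes`).
fn find_breakable(stack: &[&'static str], target: &'static str) -> usize {
    stack
        .iter()
        .rposition(|scope| *scope == target)
        .expect("no enclosing breakable scope found")
}

fn main() {
    // An outer loop containing a labelled block containing an inner loop.
    let stack = ["outer_loop", "labelled_block", "inner_loop"];
    // A plain `break` resolves to the innermost breakable scope.
    assert_eq!(find_breakable(&stack, "inner_loop"), 2);
    // A labelled `break 'outer` skips past the inner scopes.
    assert_eq!(find_breakable(&stack, "outer_loop"), 0);
}
```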

*/

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::thir::{Expr, ExprRef, LintLevel};
use rustc_data_structures::fx::FxHashMap;
use rustc_hir as hir;
use rustc_index::vec::IndexVec;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};

#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,
    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// Locals that have been moved out within this scope, and so do not
    /// need to be dropped when exiting it.
    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop or block.
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression).
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    struct DropIdx { .. }
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on generator drop paths (when a suspended generator is dropped)
/// * Drops on return and loop exit paths
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there's no actual destructors
    /// * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_generator_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }

    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }

    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let block = if let Some(block) = blocks[drop_idx] {
                block
            } else {
                continue;
            };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    crate fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir.root_lint_level,
            );
            let current_root =
                tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir.root_lint_level);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprRef<'tcx>>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.into(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.hir.tcx())
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);

        self.cfg.start_new_block().unit()
    }

    crate fn exit_top_scope(
        &mut self,
        mut block: BasicBlock,
        target: BasicBlock,
        source_info: SourceInfo,
    ) {
        block = self.leave_top_scope(block);
        self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
        let is_generator = self.generator_kind.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }

    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    crate fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const | hir::BodyOwnerKind::Static(_) =>
            // No need to free storage in this context.
            {
                None
            }
            hir::BodyOwnerKind::Closure | hir::BodyOwnerKind::Fn => Some(self.scopes.topmost()),
        }
    }
749 | ||
dc9dc135 XL |
750 | // Scheduling drops |
751 | // ================ | |
dfeec247 | 752 | crate fn schedule_drop_storage_and_value( |
8faf50e0 XL |
753 | &mut self, |
754 | span: Span, | |
755 | region_scope: region::Scope, | |
dc9dc135 | 756 | local: Local, |
8faf50e0 | 757 | ) { |
e74abb32 XL |
758 | self.schedule_drop(span, region_scope, local, DropKind::Storage); |
759 | self.schedule_drop(span, region_scope, local, DropKind::Value); | |
8faf50e0 XL |
760 | } |
761 | ||
29967ef6 | 762 | /// Indicates that `place` should be dropped on exit from `region_scope`. |
8faf50e0 | 763 | /// |
29967ef6 XL |
764 | /// When called with `DropKind::Storage`, `place` shouldn't be the return |
765 | /// place, or a function parameter. | |
dfeec247 | 766 | crate fn schedule_drop( |
8faf50e0 XL |
767 | &mut self, |
768 | span: Span, | |
769 | region_scope: region::Scope, | |
dc9dc135 | 770 | local: Local, |
8faf50e0 XL |
771 | drop_kind: DropKind, |
772 | ) { | |
e74abb32 XL |
773 | let needs_drop = match drop_kind { |
774 | DropKind::Value => { | |
dfeec247 XL |
775 | if !self.hir.needs_drop(self.local_decls[local].ty) { |
776 | return; | |
777 | } | |
e74abb32 | 778 | true |
dfeec247 | 779 | } |
8faf50e0 | 780 | DropKind::Storage => { |
dc9dc135 XL |
781 | if local.index() <= self.arg_count { |
782 | span_bug!( | |
dfeec247 XL |
783 | span, |
784 | "`schedule_drop` called with local {:?} and arg_count {}", | |
dc9dc135 XL |
785 | local, |
786 | self.arg_count, | |
787 | ) | |
8faf50e0 | 788 | } |
e74abb32 | 789 | false |
5bcae85e | 790 | } |
e74abb32 | 791 | }; |
5bcae85e | 792 | |
29967ef6 XL |
793 | // When building drops, we try to cache chains of drops to reduce the |
794 | // number of `DropTree::add_drop` calls. This, however, means that | |
795 | // whenever we add a drop into a scope which already had some entries | |
796 | // in the drop tree built (and thus, cached) for it, we must invalidate | |
797 | // all caches which might branch into the scope which had a drop just | |
798 | // added to it. This is necessary, because otherwise some other code | |
799 | // might use the cache to branch into an already built chain of drops, | |
800 | // essentially ignoring the newly added drop. | |
801 | // | |
802 | // For example, consider two scopes with a drop in each. These | |
803 | // are built and thus the caches are filled: | |
804 | // | |
805 | // +--------------------------------------------------------+ | |
806 | // | +---------------------------------+ | | |
807 | // | | +--------+ +-------------+ | +---------------+ | | |
808 | // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | | | |
809 | // | | +--------+ +-------------+ | +---------------+ | | |
810 | // | +------------|outer_scope cache|--+ | | |
811 | // +------------------------------|middle_scope cache|------+ | |
812 | // | |
813 | // Now, a new, inner-most scope is added along with a new drop into | |
814 | // both inner-most and outer-most scopes: | |
815 | // | |
816 | // +------------------------------------------------------------+ | |
817 | // | +----------------------------------+ | | |
818 | // | | +--------+ +-------------+ | +---------------+ | +-------------+ | |
819 | // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) | | |
820 | // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+ | |
821 | // | | +-+ +-------------+ | | | |
822 | // | +---|invalid outer_scope cache|----+ | | |
823 | // +----=----------------|invalid middle_scope cache|-----------+ | |
824 | // | |
825 | // If, when adding `drop(new)` we do not invalidate the cached blocks for both | |
826 | // outer_scope and middle_scope, then, when building drops for the inner (right-most) | |
827 | // scope, the old, cached blocks, without `drop(new)` will get used, producing the | |
828 | // wrong results. | |
829 | // | |
830 | // Note that this code iterates scopes from the inner-most to the outer-most, | |
831 | // invalidating the cache of each scope visited. This way only the bare | |
832 | // minimum of caches gets invalidated; i.e., if a new drop is added into the | |
833 | // middle scope, the cache of the outer scope stays intact. | |
834 | // | |
835 | // Since we only cache drops for the unwind path and the generator drop | |
836 | // path, we only need to invalidate the cache for drops that happen on | |
837 | // the unwind or generator drop paths. This means that for | |
838 | // non-generators we don't need to invalidate caches for `DropKind::Storage`. | |
839 | let invalidate_caches = needs_drop || self.generator_kind.is_some(); | |
840 | for scope in self.scopes.scopes.iter_mut().rev() { | |
841 | if invalidate_caches { | |
842 | scope.invalidate_cache(); | |
843 | } | |
844 | ||
845 | if scope.region_scope == region_scope { | |
dfeec247 XL |
846 | let region_scope_span = |
847 | region_scope.span(self.hir.tcx(), &self.hir.region_scope_tree); | |
2c00a5a8 | 848 | // Attribute scope exit drops to scope's closing brace. |
b7449926 | 849 | let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span); |
2c00a5a8 | 850 | |
7453a54e | 851 | scope.drops.push(DropData { |
29967ef6 | 852 | source_info: SourceInfo { span: scope_end, scope: scope.source_scope }, |
dc9dc135 XL |
853 | local, |
854 | kind: drop_kind, | |
7453a54e | 855 | }); |
29967ef6 | 856 | |
7453a54e | 857 | return; |
7453a54e SL |
858 | } |
859 | } | |
29967ef6 | 860 | |
dc9dc135 | 861 | span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local); |
7453a54e SL |
862 | } |
863 | ||
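The invalidation walk in `schedule_drop` can be sketched in isolation. The following is a minimal model, not the compiler's real code: `Scope`, `push_drop`, and the `u32` stand-ins for locals and cached blocks are all hypothetical names, and the model invalidates unconditionally where the real code first checks whether invalidation is needed at all.

```rust
/// Illustrative stand-in for a lexical scope on the builder's stack.
struct Scope {
    id: u32,
    drops: Vec<u32>,
    cached_unwind_block: Option<u32>,
}

/// Walk the scope stack inner-most first, invalidating every cache we
/// pass, and push the drop once the target scope is reached.
fn push_drop(scopes: &mut [Scope], target: u32, local: u32) -> bool {
    for scope in scopes.iter_mut().rev() {
        scope.cached_unwind_block = None;
        if scope.id == target {
            scope.drops.push(local);
            return true;
        }
    }
    false // target scope not on the stack; the real code calls `span_bug!` here
}

fn main() {
    // Outer-most scope first, inner-most last, all caches filled.
    let mut scopes = vec![
        Scope { id: 0, drops: vec![], cached_unwind_block: Some(7) },
        Scope { id: 1, drops: vec![], cached_unwind_block: Some(8) },
        Scope { id: 2, drops: vec![], cached_unwind_block: Some(9) },
    ];
    // A drop scheduled in the middle scope invalidates the middle and
    // inner caches, while the outer-most cache stays intact.
    assert!(push_drop(&mut scopes, 1, 42));
    assert_eq!(scopes[0].cached_unwind_block, Some(7));
    assert_eq!(scopes[1].cached_unwind_block, None);
    assert_eq!(scopes[2].cached_unwind_block, None);
    assert_eq!(scopes[1].drops, vec![42]);
}
```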
e1599b0c XL |
864 | /// Indicates that the "local operand" stored in `local` is |
865 | /// *moved* at some point during execution (see `local_scope` for | |
866 | /// more information about what a "local operand" is -- in short, | |
867 | /// it's an intermediate operand created as part of preparing some | |
868 | /// MIR instruction). We use this information to suppress | |
869 | /// redundant drops on the non-unwind paths. This results in less | |
870 | /// MIR, but also avoids spurious borrow check errors | |
871 | /// (c.f. #64391). | |
872 | /// | |
873 | /// Example: when compiling the call to `foo` here: | |
874 | /// | |
875 | /// ```rust | |
876 | /// foo(bar(), ...) | |
877 | /// ``` | |
878 | /// | |
879 | /// we would evaluate `bar()` to an operand `_X`. We would also | |
880 | /// schedule `_X` to be dropped when the expression scope for | |
881 | /// `foo(bar())` is exited. This is relevant, for example, if the | |
882 | /// later arguments should unwind (it would ensure that `_X` gets | |
883 | /// dropped). However, if no unwind occurs, then `_X` will be | |
884 | /// unconditionally consumed by the `call`: | |
885 | /// | |
886 | /// ``` | |
887 | /// bb { | |
888 | /// ... | |
889 | /// _R = CALL(foo, _X, ...) | |
890 | /// } | |
891 | /// ``` | |
892 | /// | |
893 | /// However, `_X` is still registered to be dropped, and so if we | |
894 | /// do nothing else, we would generate a `DROP(_X)` that occurs | |
895 | /// after the call. This will later be optimized out by the | |
896 | /// drop-elaboration code, but in the meantime it can lead to | |
897 | /// spurious borrow-check errors -- the problem, ironically, is | |
898 | /// not the `DROP(_X)` itself, but the (spurious) unwind pathways | |
899 | /// that it creates. See #64391 for an example. | |
dfeec247 | 900 | crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) { |
e1599b0c XL |
901 | let scope = match self.local_scope() { |
902 | None => { | |
903 | // if there is no local scope, operands won't be dropped anyway | |
904 | return; | |
905 | } | |
906 | ||
dfeec247 | 907 | Some(local_scope) => self |
29967ef6 | 908 | .scopes |
dfeec247 XL |
909 | .scopes |
910 | .iter_mut() | |
29967ef6 | 911 | .rfind(|scope| scope.region_scope == local_scope) |
dfeec247 | 912 | .unwrap_or_else(|| bug!("scope {:?} not found in scope list!", local_scope)), |
e1599b0c XL |
913 | }; |
914 | ||
915 | // look for moves of a local variable, like `MOVE(_X)` | |
916 | let locals_moved = operands.iter().flat_map(|operand| match operand { | |
917 | Operand::Copy(_) | Operand::Constant(_) => None, | |
918 | Operand::Move(place) => place.as_local(), | |
919 | }); | |
920 | ||
921 | for local in locals_moved { | |
922 | // check if we have a Drop for this operand and -- if so | |
923 | // -- add it to the list of moved operands. Note that this | |
924 | // local might not have been an operand created for this | |
925 | // call, it could come from other places too. | |
926 | if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) { | |
927 | scope.moved_locals.push(local); | |
928 | } | |
929 | } | |
930 | } | |
931 | ||
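The operand filter in `record_operands_moved` can be modeled on its own. This is a sketch under stated assumptions: the `Operand` enum and `moved_locals` below are illustrative stand-ins (the real `Operand` holds places, not `u32` locals, and the real code additionally checks that a matching value drop is scheduled).

```rust
/// Illustrative stand-in for MIR operands; not rustc's `Operand` type.
enum Operand {
    Copy(u32),
    Move(u32),
    Constant,
}

/// Only `Move`s of a plain local can suppress a scheduled value drop;
/// copies and constants never do.
fn moved_locals(operands: &[Operand]) -> Vec<u32> {
    operands
        .iter()
        .filter_map(|op| match op {
            Operand::Move(local) => Some(*local),
            Operand::Copy(_) | Operand::Constant => None,
        })
        .collect()
}

fn main() {
    let ops = [Operand::Move(3), Operand::Copy(4), Operand::Constant, Operand::Move(5)];
    // Only the moved locals survive the filter.
    assert_eq!(moved_locals(&ops), vec![3, 5]);
}
```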
7453a54e SL |
932 | // Other |
933 | // ===== | |
dc9dc135 XL |
934 | /// Branch based on a boolean condition. |
935 | /// | |
936 | /// This is a special case because the temporary for the condition needs to | |
937 | /// be dropped on both the true and the false arm. | |
dfeec247 | 938 | crate fn test_bool( |
dc9dc135 XL |
939 | &mut self, |
940 | mut block: BasicBlock, | |
941 | condition: Expr<'tcx>, | |
942 | source_info: SourceInfo, | |
943 | ) -> (BasicBlock, BasicBlock) { | |
944 | let cond = unpack!(block = self.as_local_operand(block, condition)); | |
945 | let true_block = self.cfg.start_new_block(); | |
946 | let false_block = self.cfg.start_new_block(); | |
dfeec247 | 947 | let term = TerminatorKind::if_(self.hir.tcx(), cond.clone(), true_block, false_block); |
dc9dc135 XL |
948 | self.cfg.terminate(block, source_info, term); |
949 | ||
950 | match cond { | |
951 | // Don't try to drop a constant | |
952 | Operand::Constant(_) => (), | |
953 | // In constants and statics, we don't generate StorageLive for this | |
954 | // temporary, so don't try to generate StorageDead for it either. | |
955 | _ if self.local_scope().is_none() => (), | |
dfeec247 | 956 | Operand::Copy(place) | Operand::Move(place) => { |
e74abb32 XL |
957 | if let Some(cond_temp) = place.as_local() { |
958 | // Manually drop the condition on both branches. | |
959 | let top_scope = self.scopes.scopes.last_mut().unwrap(); | |
960 | let top_drop_data = top_scope.drops.pop().unwrap(); | |
29967ef6 XL |
961 | if self.generator_kind.is_some() { |
962 | top_scope.invalidate_cache(); | |
963 | } | |
e74abb32 XL |
964 | |
965 | match top_drop_data.kind { | |
966 | DropKind::Value { .. } => { | |
967 | bug!("Drop scheduled on top of condition variable") | |
968 | } | |
969 | DropKind::Storage => { | |
29967ef6 | 970 | let source_info = top_drop_data.source_info; |
e74abb32 XL |
971 | let local = top_drop_data.local; |
972 | assert_eq!(local, cond_temp, "Drop scheduled on top of condition"); | |
973 | self.cfg.push( | |
974 | true_block, | |
dfeec247 | 975 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
976 | ); |
977 | self.cfg.push( | |
978 | false_block, | |
dfeec247 | 979 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
980 | ); |
981 | } | |
dc9dc135 | 982 | } |
e74abb32 XL |
983 | } else { |
984 | bug!("Expected as_local_operand to produce a temporary"); | |
985 | } | |
dc9dc135 | 986 | } |
dc9dc135 XL |
987 | } |
988 | ||
989 | (true_block, false_block) | |
990 | } | |
991 | ||
29967ef6 XL |
992 | /// Returns the [DropIdx] for the innermost drop if the function unwound at |
993 | /// this point. The `DropIdx` will be created if it doesn't already exist. | |
994 | fn diverge_cleanup(&mut self) -> DropIdx { | |
995 | let is_generator = self.generator_kind.is_some(); | |
996 | let (uncached_scope, mut cached_drop) = self | |
997 | .scopes | |
998 | .scopes | |
999 | .iter() | |
1000 | .enumerate() | |
1001 | .rev() | |
1002 | .find_map(|(scope_idx, scope)| { | |
1003 | scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
1004 | }) | |
1005 | .unwrap_or((0, ROOT_NODE)); | |
1006 | ||
1007 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
1008 | for drop in &scope.drops { | |
1009 | if is_generator || drop.kind == DropKind::Value { | |
1010 | cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop); | |
1011 | } | |
1012 | } | |
1013 | scope.cached_unwind_block = Some(cached_drop); | |
7453a54e | 1014 | } |
29967ef6 XL |
1015 | |
1016 | cached_drop | |
ff7c6d11 XL |
1017 | } |
1018 | ||
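The cache lookup at the top of `diverge_cleanup` (and again in `generator_drop_cleanup`) follows a small pattern: find the inner-most scope whose block is already cached, then rebuild only the scopes above it. A minimal model, with hypothetical names and `u32` ids standing in for `DropIdx`:

```rust
/// Given the per-scope caches (outer-most first), return the index of
/// the first scope whose drops still need to be added, and the cached
/// drop to continue from.
fn uncached_start(caches: &[Option<u32>]) -> (usize, u32) {
    const ROOT_NODE: u32 = 0; // stand-in for the drop tree's root
    caches
        .iter()
        .enumerate()
        .rev()
        .find_map(|(idx, cached)| cached.map(|block| (idx + 1, block)))
        .unwrap_or((0, ROOT_NODE))
}

fn main() {
    // Scopes 0 and 1 (outer) are cached; rebuilding starts at scope 2,
    // continuing from scope 1's cached block.
    assert_eq!(uncached_start(&[Some(10), Some(11), None, None]), (2, 11));
    // Nothing cached: start from the outer-most scope at the root.
    assert_eq!(uncached_start(&[None, None]), (0, 0));
}
```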
29967ef6 XL |
1019 | /// Prepares to create a path that performs all required cleanup for a |
1020 | /// terminator that can unwind at the given basic block. | |
1021 | /// | |
1022 | /// This path terminates in Resume. The path isn't created until after all | |
1023 | /// of the non-unwind paths in this item have been lowered. | |
1024 | crate fn diverge_from(&mut self, start: BasicBlock) { | |
1025 | debug_assert!( | |
1026 | matches!( | |
1027 | self.cfg.block_data(start).terminator().kind, | |
1028 | TerminatorKind::Assert { .. } | |
1029 | | TerminatorKind::Call {..} | |
1030 | | TerminatorKind::DropAndReplace { .. } | |
1031 | | TerminatorKind::FalseUnwind { .. } | |
1032 | ), | |
1033 | "diverge_from called on block with terminator that cannot unwind." | |
1034 | ); | |
a1dfa0c6 | 1035 | |
29967ef6 XL |
1036 | let next_drop = self.diverge_cleanup(); |
1037 | self.scopes.unwind_drops.add_entry(start, next_drop); | |
1038 | } | |
1039 | ||
1040 | /// Sets up a path that performs all required cleanup for dropping a | |
1041 | /// generator, starting from the given block that ends in | |
1042 | /// [TerminatorKind::Yield]. | |
1043 | /// | |
1044 | /// This path terminates in GeneratorDrop. | |
1045 | crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) { | |
1046 | debug_assert!( | |
1047 | matches!( | |
1048 | self.cfg.block_data(yield_block).terminator().kind, | |
1049 | TerminatorKind::Yield { .. } | |
1050 | ), | |
1051 | "generator_drop_cleanup called on block with non-yield terminator." | |
1052 | ); | |
1053 | let (uncached_scope, mut cached_drop) = self | |
1054 | .scopes | |
1055 | .scopes | |
1056 | .iter() | |
1057 | .enumerate() | |
1058 | .rev() | |
1059 | .find_map(|(scope_idx, scope)| { | |
1060 | scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
1061 | }) | |
1062 | .unwrap_or((0, ROOT_NODE)); | |
1063 | ||
1064 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
1065 | for drop in &scope.drops { | |
1066 | cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop); | |
1067 | } | |
1068 | scope.cached_generator_drop_block = Some(cached_drop); | |
7453a54e | 1069 | } |
ff7c6d11 | 1070 | |
29967ef6 | 1071 | self.scopes.generator_drops.add_entry(yield_block, cached_drop); |
e9174d1e SL |
1072 | } |
1073 | ||
3157f602 | 1074 | /// Utility function for *non*-scope code to build their own drops |
dfeec247 XL |
1075 | crate fn build_drop_and_replace( |
1076 | &mut self, | |
1077 | block: BasicBlock, | |
1078 | span: Span, | |
f035d41b | 1079 | place: Place<'tcx>, |
dfeec247 XL |
1080 | value: Operand<'tcx>, |
1081 | ) -> BlockAnd<()> { | |
3157f602 XL |
1082 | let source_info = self.source_info(span); |
1083 | let next_target = self.cfg.start_new_block(); | |
29967ef6 | 1084 | |
dfeec247 XL |
1085 | self.cfg.terminate( |
1086 | block, | |
1087 | source_info, | |
29967ef6 | 1088 | TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None }, |
dfeec247 | 1089 | ); |
29967ef6 XL |
1090 | self.diverge_from(block); |
1091 | ||
3157f602 | 1092 | next_target.unit() |
e9174d1e SL |
1093 | } |
1094 | ||
29967ef6 | 1095 | /// Creates an `Assert` terminator and returns the success block. |
3157f602 XL |
1096 | /// If the boolean condition operand is not the expected value, |
1097 | /// a runtime panic will be caused with the given message. | |
dfeec247 XL |
1098 | crate fn assert( |
1099 | &mut self, | |
1100 | block: BasicBlock, | |
1101 | cond: Operand<'tcx>, | |
1102 | expected: bool, | |
1103 | msg: AssertMessage<'tcx>, | |
1104 | span: Span, | |
1105 | ) -> BasicBlock { | |
3157f602 | 1106 | let source_info = self.source_info(span); |
3157f602 | 1107 | let success_block = self.cfg.start_new_block(); |
e9174d1e | 1108 | |
dfeec247 XL |
1109 | self.cfg.terminate( |
1110 | block, | |
1111 | source_info, | |
29967ef6 | 1112 | TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None }, |
dfeec247 | 1113 | ); |
29967ef6 | 1114 | self.diverge_from(block); |
e9174d1e | 1115 | |
3157f602 | 1116 | success_block |
9cc50fc6 | 1117 | } |
dc9dc135 | 1118 | |
dc9dc135 XL |
1119 | /// Unschedules any drops in the top scope. |
1120 | /// | |
1121 | /// This is only needed for `match` arm scopes, because they have one | |
1122 | /// entrance per pattern, but only one exit. | |
29967ef6 | 1123 | crate fn clear_top_scope(&mut self, region_scope: region::Scope) { |
dc9dc135 XL |
1124 | let top_scope = self.scopes.scopes.last_mut().unwrap(); |
1125 | ||
1126 | assert_eq!(top_scope.region_scope, region_scope); | |
1127 | ||
1128 | top_scope.drops.clear(); | |
29967ef6 | 1129 | top_scope.invalidate_cache(); |
dc9dc135 | 1130 | } |
7453a54e SL |
1131 | } |
1132 | ||
29967ef6 | 1133 | /// Builds drops for `pop_scope` and `leave_top_scope`. |
a1dfa0c6 XL |
1134 | fn build_scope_drops<'tcx>( |
1135 | cfg: &mut CFG<'tcx>, | |
29967ef6 | 1136 | unwind_drops: &mut DropTree, |
dc9dc135 | 1137 | scope: &Scope, |
a1dfa0c6 | 1138 | mut block: BasicBlock, |
29967ef6 XL |
1139 | mut unwind_to: DropIdx, |
1140 | storage_dead_on_unwind: bool, | |
a1dfa0c6 | 1141 | arg_count: usize, |
a1dfa0c6 | 1142 | ) -> BlockAnd<()> { |
dc9dc135 | 1143 | debug!("build_scope_drops({:?} -> {:?})", block, scope); |
a1dfa0c6 XL |
1144 | |
1145 | // Build up the drops in evaluation order. The end result will | |
1146 | // look like: | |
1147 | // | |
1148 | // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]] | |
1149 | // | | | | |
1150 | // : | | | |
1151 | // V V | |
1152 | // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to] | |
1153 | // | |
1154 | // The horizontal arrows represent the execution path when the drops return | |
1155 | // successfully. The downwards arrows represent the execution path when the | |
1156 | // drops panic (panicking while unwinding will abort, so there's no need for | |
dc9dc135 | 1157 | // another set of arrows). |
a1dfa0c6 | 1158 | // |
dc9dc135 XL |
1159 | // For generators, we unwind from a drop on a local to its StorageDead |
1160 | // statement. For other functions we don't worry about StorageDead. The | |
1161 | // drops for the unwind path should have already been generated by | |
1162 | // `diverge_cleanup`. | |
a1dfa0c6 | 1163 | |
29967ef6 XL |
1164 | for drop_data in scope.drops.iter().rev() { |
1165 | let source_info = drop_data.source_info; | |
dc9dc135 | 1166 | let local = drop_data.local; |
e1599b0c | 1167 | |
5bcae85e | 1168 | match drop_data.kind { |
dc9dc135 | 1169 | DropKind::Value => { |
29967ef6 XL |
1170 | // `unwind_to` should drop the value that we're about to |
1171 | // schedule. If dropping this value panics, then we continue | |
1172 | // with the *next* value on the unwind path. | |
1173 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1174 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1175 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1176 | ||
e1599b0c XL |
1177 | // If the operand has been moved, and we are not on an unwind |
1178 | // path, then don't generate the drop. (We only take this into | |
1179 | // account for non-unwind paths so as not to disturb the | |
1180 | // caching mechanism.) | |
29967ef6 | 1181 | if scope.moved_locals.iter().any(|&o| o == local) { |
e1599b0c XL |
1182 | continue; |
1183 | } | |
1184 | ||
29967ef6 | 1185 | unwind_drops.add_entry(block, unwind_to); |
abe05a73 | 1186 | |
3b2f2976 | 1187 | let next = cfg.start_new_block(); |
dfeec247 XL |
1188 | cfg.terminate( |
1189 | block, | |
1190 | source_info, | |
29967ef6 | 1191 | TerminatorKind::Drop { place: local.into(), target: next, unwind: None }, |
dfeec247 | 1192 | ); |
3b2f2976 XL |
1193 | block = next; |
1194 | } | |
8faf50e0 | 1195 | DropKind::Storage => { |
29967ef6 XL |
1196 | if storage_dead_on_unwind { |
1197 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1198 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1199 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1200 | } | |
8faf50e0 | 1201 | // Only temps and vars need their storage dead. |
dc9dc135 | 1202 | assert!(local.index() > arg_count); |
dfeec247 | 1203 | cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) }); |
5bcae85e SL |
1204 | } |
1205 | } | |
7453a54e SL |
1206 | } |
1207 | block.unit() | |
1208 | } | |
1209 | ||
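The emission loop in `build_scope_drops` combines two of the behaviors described above: drops are stored in schedule order but emitted in reverse (last created, first dropped), and value drops for locals recorded as moved are skipped on the success path. A sketch with hypothetical names (`emitted_drops`, `u32` locals), ignoring the unwind bookkeeping:

```rust
/// Return the order in which value drops would be emitted for a scope,
/// given the scheduled locals (in declaration order) and the locals
/// whose values were moved into a call.
fn emitted_drops(scheduled: &[u32], moved: &[u32]) -> Vec<u32> {
    scheduled
        .iter()
        .rev() // drops run in reverse of scheduling order (LIFO)
        .copied()
        .filter(|local| !moved.contains(local)) // moved values aren't dropped
        .collect()
}

fn main() {
    // Locals 1..=4 were scheduled in declaration order; local 2 was moved.
    assert_eq!(emitted_drops(&[1, 2, 3, 4], &[2]), vec![4, 3, 1]);
}
```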
29967ef6 XL |
1210 | impl<'a, 'tcx: 'a> Builder<'a, 'tcx> { |
1211 | /// Build a drop tree for a breakable scope. | |
1212 | /// | |
1213 | /// If `continue_block` is `Some`, then the tree is for `continue` inside a | |
1214 | /// loop. Otherwise this is for `break` or `return`. | |
1215 | fn build_exit_tree( | |
1216 | &mut self, | |
1217 | mut drops: DropTree, | |
1218 | continue_block: Option<BasicBlock>, | |
1219 | ) -> Option<BlockAnd<()>> { | |
1220 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1221 | blocks[ROOT_NODE] = continue_block; | |
1222 | ||
1223 | drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks); | |
1224 | ||
1225 | // Link the exit drop tree to unwind drop tree. | |
1226 | if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) { | |
1227 | let unwind_target = self.diverge_cleanup(); | |
1228 | let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1); | |
1229 | for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) { | |
1230 | match drop_data.0.kind { | |
1231 | DropKind::Storage => { | |
1232 | if self.generator_kind.is_some() { | |
1233 | let unwind_drop = self | |
1234 | .scopes | |
1235 | .unwind_drops | |
1236 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1237 | unwind_indices.push(unwind_drop); | |
1238 | } else { | |
1239 | unwind_indices.push(unwind_indices[drop_data.1]); | |
1240 | } | |
1241 | } | |
1242 | DropKind::Value => { | |
1243 | let unwind_drop = self | |
1244 | .scopes | |
1245 | .unwind_drops | |
1246 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1247 | self.scopes | |
1248 | .unwind_drops | |
1249 | .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]); | |
1250 | unwind_indices.push(unwind_drop); | |
1251 | } | |
1252 | } | |
dc9dc135 | 1253 | } |
dc9dc135 | 1254 | } |
29967ef6 | 1255 | blocks[ROOT_NODE].map(BasicBlock::unit) |
dc9dc135 | 1256 | } |
dc9dc135 | 1257 | |
29967ef6 XL |
1258 | /// Build the unwind and generator drop trees. |
1259 | crate fn build_drop_trees(&mut self, should_abort: bool) { | |
1260 | if self.generator_kind.is_some() { | |
1261 | self.build_generator_drop_trees(should_abort); | |
1262 | } else { | |
1263 | Self::build_unwind_tree( | |
1264 | &mut self.cfg, | |
1265 | &mut self.scopes.unwind_drops, | |
1266 | self.fn_span, | |
1267 | should_abort, | |
1268 | &mut None, | |
1269 | ); | |
1270 | } | |
1271 | } | |
1272 | ||
1273 | fn build_generator_drop_trees(&mut self, should_abort: bool) { | |
1274 | // Build the drop tree for dropping the generator while it's suspended. | |
1275 | let drops = &mut self.scopes.generator_drops; | |
1276 | let cfg = &mut self.cfg; | |
1277 | let fn_span = self.fn_span; | |
1278 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1279 | drops.build_mir::<GeneratorDrop>(cfg, &mut blocks); | |
1280 | if let Some(root_block) = blocks[ROOT_NODE] { | |
1281 | cfg.terminate( | |
1282 | root_block, | |
1283 | SourceInfo::outermost(fn_span), | |
1284 | TerminatorKind::GeneratorDrop, | |
1285 | ); | |
1286 | } | |
1287 | ||
1288 | // Build the drop tree for unwinding in the normal control flow paths. | |
1289 | let resume_block = &mut None; | |
1290 | let unwind_drops = &mut self.scopes.unwind_drops; | |
1291 | Self::build_unwind_tree(cfg, unwind_drops, fn_span, should_abort, resume_block); | |
1292 | ||
1293 | // Build the drop tree for unwinding when dropping a suspended | |
1294 | // generator. | |
041b39d2 | 1295 | // |
29967ef6 XL |
1296 | // This is a different tree from the standard unwind paths here to | |
1297 | // prevent drop elaboration from creating drop flags that would have | |
1298 | // to be captured by the generator. I'm not sure how important this | |
1299 | // optimization is, but it is here. | |
1300 | for (drop_idx, drop_data) in drops.drops.iter_enumerated() { | |
1301 | if let DropKind::Value = drop_data.0.kind { | |
1302 | debug_assert!(drop_data.1 < drops.drops.next_index()); | |
1303 | drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap())); | |
dc9dc135 | 1304 | } |
29967ef6 XL |
1305 | } |
1306 | Self::build_unwind_tree(cfg, drops, fn_span, should_abort, resume_block); | |
7453a54e | 1307 | } |
abe05a73 | 1308 | |
29967ef6 XL |
1309 | fn build_unwind_tree( |
1310 | cfg: &mut CFG<'tcx>, | |
1311 | drops: &mut DropTree, | |
1312 | fn_span: Span, | |
1313 | should_abort: bool, | |
1314 | resume_block: &mut Option<BasicBlock>, | |
1315 | ) { | |
1316 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1317 | blocks[ROOT_NODE] = *resume_block; | |
1318 | drops.build_mir::<Unwind>(cfg, &mut blocks); | |
1319 | if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) { | |
1320 | // `TerminatorKind::Abort` is used for `#[unwind(aborts)]` | |
1321 | // functions. | |
1322 | let terminator = | |
1323 | if should_abort { TerminatorKind::Abort } else { TerminatorKind::Resume }; | |
1324 | ||
1325 | cfg.terminate(resume, SourceInfo::outermost(fn_span), terminator); | |
1326 | ||
1327 | *resume_block = blocks[ROOT_NODE]; | |
1328 | } | |
1329 | } | |
1330 | } | |
041b39d2 | 1331 | |
29967ef6 XL |
1332 | // DropTreeBuilder implementations. |
1333 | ||
1334 | struct ExitScopes; | |
1335 | ||
1336 | impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes { | |
1337 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1338 | cfg.start_new_block() | |
1339 | } | |
1340 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1341 | cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to }; | |
1342 | } | |
7453a54e | 1343 | } |
dc9dc135 | 1344 | |
29967ef6 XL |
1345 | struct GeneratorDrop; |
1346 | ||
1347 | impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop { | |
1348 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1349 | cfg.start_new_block() | |
1350 | } | |
1351 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1352 | let term = cfg.block_data_mut(from).terminator_mut(); | |
1353 | if let TerminatorKind::Yield { ref mut drop, .. } = term.kind { | |
1354 | *drop = Some(to); | |
1355 | } else { | |
1356 | span_bug!( | |
1357 | term.source_info.span, | |
1358 | "cannot enter generator drop tree from {:?}", | |
1359 | term.kind | |
1360 | ) | |
1361 | } | |
1362 | } | |
1363 | } | |
1364 | ||
1365 | struct Unwind; | |
1366 | ||
1367 | impl<'tcx> DropTreeBuilder<'tcx> for Unwind { | |
1368 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1369 | cfg.start_new_cleanup_block() | |
1370 | } | |
1371 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1372 | let term = &mut cfg.block_data_mut(from).terminator_mut(); | |
1373 | match &mut term.kind { | |
1374 | TerminatorKind::Drop { unwind, .. } | |
1375 | | TerminatorKind::DropAndReplace { unwind, .. } | |
1376 | | TerminatorKind::FalseUnwind { unwind, .. } | |
1377 | | TerminatorKind::Call { cleanup: unwind, .. } | |
1378 | | TerminatorKind::Assert { cleanup: unwind, .. } => { | |
1379 | *unwind = Some(to); | |
1380 | } | |
1381 | TerminatorKind::Goto { .. } | |
1382 | | TerminatorKind::SwitchInt { .. } | |
1383 | | TerminatorKind::Resume | |
1384 | | TerminatorKind::Abort | |
1385 | | TerminatorKind::Return | |
1386 | | TerminatorKind::Unreachable | |
1387 | | TerminatorKind::Yield { .. } | |
1388 | | TerminatorKind::GeneratorDrop | |
1389 | | TerminatorKind::FalseEdge { .. } | |
1390 | | TerminatorKind::InlineAsm {.. } => { | |
1391 | span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind) | |
1392 | } | |
1393 | } | |
dfeec247 | 1394 | } |
dc9dc135 | 1395 | } |