// compiler/rustc_mir_build/src/build/scope.rs
/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
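
A rough sketch of the control flow this produces (the block names are made up
for illustration only):

```text
loop_head: x = ...;  if cond -> [true: on_break, false: rest]
on_break:  drop(x);  goto -> after_loop          // only `x` was scheduled so far
rest:      y = ...;  drop(y);  drop(x);  goto -> loop_head
```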

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.
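
For example (a sketch; `Droppable` just stands in for any type with a
destructor):

```
# struct Droppable;
# impl Drop for Droppable { fn drop(&mut self) {} }
# let cond = true;
loop {
    let _guard = Droppable;
    if cond { break; }
}
```

The `break` edge must drop `_guard` before leaving the loop, so `break_scope`
records that drop in the loop's `DropTree`, and the corresponding drop block is
emitted when `in_breakable_scope` finishes lowering the loop.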

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
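
For example (a sketch, with a made-up label), in

```
# let mut n = 0;
'outer: loop {
    loop {
        n += 1;
        if n > 9 { break 'outer; }
        if n % 2 == 0 { continue 'outer; }
    }
}
```

the `break 'outer` and `continue 'outer` are resolved against this stack of
breakable scopes rather than against the innermost loop.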

*/

use std::mem;

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use rustc_data_structures::fx::FxHashMap;
use rustc_index::vec::IndexVec;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_middle::thir::{Expr, LintLevel};

use rustc_span::{Span, DUMMY_SP};

#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// the region span of this scope within source code.
    region_scope: region::Scope,

    /// set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared)
    source_info: SourceInfo,

    /// local to drop
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

29967ef6 157#[derive(Debug)]
dc9dc135 158struct BreakableScope<'tcx> {
ea8adc8c 159 /// Region scope of the loop
dc9dc135 160 region_scope: region::Scope,
e74abb32 161 /// The destination of the loop/block expression itself (i.e., where to put
29967ef6 162 /// the result of a `break` or `return` expression)
dc9dc135 163 break_destination: Place<'tcx>,
29967ef6
XL
164 /// Drops that happen on the `break`/`return` path.
165 break_drops: DropTree,
166 /// Drops that happen on the `continue` path.
167 continue_drops: Option<DropTree>,
dc9dc135
XL
168}
169
94222f64
XL
170#[derive(Debug)]
171struct IfThenScope {
172 /// The if-then scope or arm scope
173 region_scope: region::Scope,
174 /// Drops that happen on the `else` path.
175 else_drops: DropTree,
176}
177
dc9dc135
XL
178/// The target of an expression that breaks out of a scope
179#[derive(Clone, Copy, Debug)]
dfeec247 180crate enum BreakableTarget {
dc9dc135
XL
181 Continue(region::Scope),
182 Break(region::Scope),
183 Return,
7453a54e
SL
184}
185
rustc_index::newtype_index! {
    struct DropIdx { .. }
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on generator drop paths (when a suspended generator is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
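///
/// A hypothetical tree for three `break`s, two of which only need to drop
/// `_2` while the third also needs to drop `_3` first, would look like
///
/// ```text
/// ROOT <- drop(_2) <- drop(_3)
/// ```
///
/// where each arrow is a `next_drop` link, the two `_2`-only entry points
/// attach at the `drop(_2)` node, and the third attaches at `drop(_3)`.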
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

dc9dc135 213impl Scope {
e1599b0c
XL
214 /// Whether there's anything to do for the cleanup path, that is,
215 /// when unwinding through this scope. This includes destructors,
216 /// but not StorageDead statements, which don't get emitted at all
217 /// for unwinding, for several reasons:
218 /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
219 /// * LLVM's memory dependency analysis can't handle it atm
220 /// * polluting the cleanup MIR with StorageDead creates
221 /// landing pads even though there's no actual destructors
222 /// * freeing up stack space has no effect during unwinding
223 /// Note that for generators we do emit StorageDeads, for the
224 /// use of optimizations in the MIR generator transform.
225 fn needs_cleanup(&self) -> bool {
226 self.drops.iter().any(|drop| match drop.kind {
227 DropKind::Value => true,
228 DropKind::Storage => false,
229 })
230 }
29967ef6
XL
231
232 fn invalidate_cache(&mut self) {
233 self.cached_unwind_block = None;
234 self.cached_generator_drop_block = None;
235 }
e9174d1e
SL
236}
237
/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }

    /// Adds a drop that is executed before the drop at `next`, reusing an
    /// existing node if an identical drop already points at `next`, and
    /// returns the index of the (new or existing) node.
    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }

    /// Registers `from`, a block outside the tree, as an entry point into the
    /// drop at `to`.
    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    crate fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    ///
    /// There are three possible ways the condition can be false and we may have
    /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
    /// To handle this correctly we use a `DropTree` in a similar way to a
    /// `loop` expression and 'break' out on all of the 'else' paths.
    ///
    /// Notes:
    ///  - We don't need to keep a stack of scopes in the `Builder` because the
    ///    'else' paths will only leave the innermost scope.
    ///  - This is also used for match guards (see the sketch below).
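    ///
    /// A sketch of the match-guard shape this also covers; if the guard
    /// created temporaries or borrows, they would have to be dropped on the
    /// path where the guard fails:
    ///
    /// ```
    /// # let opt = Some(1);
    /// match opt {
    ///     Some(x) if x > 0 => x,
    ///     _ => 0,
    /// };
    /// ```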
    crate fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = unpack!(f(self));

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block = self
            .build_exit_tree(if_then_scope.else_drops, None)
            .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));

        (then_block, else_block)
    }

    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.tcx;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir_id,
            );
            let current_root = tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir_id);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<&Expr<'tcx>>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.expr_into_dest(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
            if self.tcx.sess.instrument_coverage() {
                // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
                // a Coverage code region can be generated, `continue` needs no `Assign`; but
                // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
                // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
                self.add_dummy_assignment(span, block, source_info);
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);

        self.cfg.start_new_block().unit()
    }

    crate fn break_for_else(
        &mut self,
        block: BasicBlock,
        target: region::Scope,
        source_info: SourceInfo,
    ) {
        let scope_index = self.scopes.scope_index(target, source_info.span);
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_mut()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);
    }

    // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
    // statement.
    fn add_dummy_assignment(&mut self, span: Span, block: BasicBlock, source_info: SourceInfo) {
        let local_decl = LocalDecl::new(self.tcx.mk_unit(), span).internal();
        let temp_place = Place::from(self.local_decls.push(local_decl));
        self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
        let is_generator = self.generator_kind.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }

    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    crate fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }

    // Scheduling drops
    // ================
    crate fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` shouldn't be the return
    /// place, or a function parameter.
    crate fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into already built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example consider there’s two scopes with a drop in each. These
        // are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, inner-most scope is added along with a new drop into
        // both inner-most and outer-most scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
        // | |             +-+ +-------------+  |                       |
        // | +---|invalid outer_scope cache|----+                       |
        // +----=----------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (right-most)
        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the inner-most to the outer-most,
        // invalidating caches of each scope visited. This way bare minimum of the
        // caches gets invalidated. i.e., if a new drop is added into the middle scope, the
        // cache of outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the generator drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or generator drop paths. This means that for
        // non-generators we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.generator_kind.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }

    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    /// ```rust
    /// foo(bar(), ...)
    /// ```
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    /// ```
    /// bb {
    ///     ...
    ///     _R = CALL(foo, _X, ...)
    /// }
    /// ```
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
        let local_scope = self.local_scope();
        let scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }

    // Other
    // =====
    /// Returns the [DropIdx] for the innermost drop if the function unwound at
    /// this point. The `DropIdx` will be created if it doesn't already exist.
    fn diverge_cleanup(&mut self) -> DropIdx {
        let is_generator = self.generator_kind.is_some();
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                if is_generator || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }

    /// Prepares to create a path that performs all required cleanup for a
    /// terminator that can unwind at the given basic block.
    ///
    /// This path terminates in Resume. The path isn't created until after all
    /// of the non-unwind paths in this item have been lowered.
    crate fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
                TerminatorKind::Assert { .. }
                    | TerminatorKind::Call { .. }
                    | TerminatorKind::DropAndReplace { .. }
                    | TerminatorKind::FalseUnwind { .. }
                    | TerminatorKind::InlineAsm { .. }
            ),
            "diverge_from called on block with terminator that cannot unwind."
        );

        let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry(start, next_drop);
    }

    /// Sets up a path that performs all required cleanup for dropping a
    /// generator, starting from the given block that ends in
    /// [TerminatorKind::Yield].
    ///
    /// This path terminates in GeneratorDrop.
    crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "generator_drop_cleanup called on block with non-yield terminator."
        );
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_generator_drop_block = Some(cached_drop);
        }

        self.scopes.generator_drops.add_entry(yield_block, cached_drop);
    }

    /// Utility function for *non*-scope code to build their own drops
    crate fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Operand<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None },
        );
        self.diverge_from(block);

        next_target.unit()
    }

    /// Creates an `Assert` terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    crate fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);
        let success_block = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None },
        );
        self.diverge_from(block);

        success_block
    }

    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
    crate fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
}

/// Builds drops for `pop_scope` and `leave_top_scope`.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    scope: &Scope,
    mut block: BasicBlock,
    mut unwind_to: DropIdx,
    storage_dead_on_unwind: bool,
    arg_count: usize,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |               |
    //        :                    |               |
    //                             V               V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For generators, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.

    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                unwind_to = unwind_drops.drops[unwind_to].1;

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                unwind_drops.add_entry(block, unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop { place: local.into(), target: next, unwind: None },
                );
                block = next;
            }
            DropKind::Storage => {
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].1;
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}

impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Build a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`.
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        continue_block: Option<BasicBlock>,
    ) -> Option<BlockAnd<()>> {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = continue_block;

        drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);

        // Link the exit drop tree to unwind drop tree.
        if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
            let unwind_target = self.diverge_cleanup();
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
            for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
                match drop_data.0.kind {
                    DropKind::Storage => {
                        if self.generator_kind.is_some() {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_data.1]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                        self.scopes
                            .unwind_drops
                            .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }

    /// Build the unwind and generator drop trees.
    crate fn build_drop_trees(&mut self) {
        if self.generator_kind.is_some() {
            self.build_generator_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }

    fn build_generator_drop_trees(&mut self) {
        // Build the drop tree for dropping the generator while it's suspended.
        let drops = &mut self.scopes.generator_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::GeneratorDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // generator.
        //
        // This is a different tree to the standard unwind paths here to
        // prevent drop elaboration from creating drop flags that would have
        // to be captured by the generator. I'm not sure how important this
        // optimization is, but it is here.
        for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
            if let DropKind::Value = drop_data.0.kind {
                debug_assert!(drop_data.1 < drops.drops.next_index());
                drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }

    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = *resume_block;
        drops.build_mir::<Unwind>(cfg, &mut blocks);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::Resume);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}

// DropTreeBuilder implementations.

struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
    }
}

struct GeneratorDrop;

impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter generator drop tree from {:?}",
                term.kind
            )
        }
    }
}

struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. }
            | TerminatorKind::DropAndReplace { unwind, .. }
            | TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { cleanup: unwind, .. }
            | TerminatorKind::Assert { cleanup: unwind, .. }
            | TerminatorKind::InlineAsm { cleanup: unwind, .. } => {
                *unwind = Some(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::Resume
            | TerminatorKind::Abort
            | TerminatorKind::Return
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::GeneratorDrop
            | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}