1 /*!
2 Managing the scope stack. The scopes are tied to lexical scopes, so as
3 we descend the THIR, we push a scope on the stack, build its
4 contents, and then pop it off. Every scope is named by a
5 `region::Scope`.
6
7 ### SEME Regions
8
9 When pushing a new [Scope], we record the current point in the graph (a
10 basic block); this marks the entry to the scope. We then generate more
11 stuff in the control-flow graph. Whenever the scope is exited, either
12 via a `break` or `return` or just by fallthrough, that marks an exit
13 from the scope. Each lexical scope thus corresponds to a single-entry,
14 multiple-exit (SEME) region in the control-flow graph.
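
For instance, here is a minimal illustrative sketch (not taken from the
compiler itself): the block initializing `x` is entered at a single point but
can be exited either through the `return` or by falling through to `2`:

```
# fn f() -> i32 {
let x = {
    if true {
        return 1; // one exit from the block's scope
    }
    2             // another exit: normal fallthrough
};
# x }
```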
15
16 For now, we associate each SEME region with its `region::Scope` for later reference
17 (see caveat in next paragraph). This is because destruction scopes are tied to
18 them. This may change in the future so that MIR lowering determines its own
19 destruction scopes.
20
21 ### Not so SEME Regions
22
23 In the course of building matches, it sometimes happens that certain code
24 (namely guards) gets executed multiple times. This means that a single lexical
25 scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26 mapping is from one scope to a vector of SEME regions. Since the SEME regions
27 are disjoint, the mapping is still one-to-one for the set of SEME regions that
28 we're currently in.
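
A minimal sketch of the guard situation (illustrative only): the guard below is
reachable from either pattern, so the guard's lexical scope corresponds to more
than one region in the lowered control-flow graph:

```
# let e: Option<i32> = Some(0);
match e {
    Some(0) | Some(1) if e.is_some() => {} // guard code has two entry points
    _ => {}
}
```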
29
30 Also in matches, the scopes assigned to arms are sometimes not even SEME regions!
31 Each arm has a single region with one entry for each pattern. We manually
32 manipulate the scheduled drops in this scope to avoid dropping things multiple
33 times.
34
35 ### Drops
36
37 The primary purpose for scopes is to insert drops: while building
38 the contents, we also accumulate places that need to be dropped upon
39 exit from each scope. This is done by calling `schedule_drop`. Once a
40 drop is scheduled, whenever we branch out we will insert drops of all
41 those places onto the outgoing edge. Note that we don't know the full
42 set of scheduled drops up front, and so whenever we exit from the
43 scope we only drop the values scheduled thus far. For example, consider
44 the scope S corresponding to this loop:
45
46 ```
47 # let cond = true;
48 loop {
49 let x = ..;
50 if cond { break; }
51 let y = ..;
52 }
53 ```
54
55 When processing the `let x`, we will add one drop to the scope for
56 `x`. The break will then insert a drop for `x`. When we process `let
57 y`, we will add another drop (in fact, to a subscope, but let's ignore
58 that for now); any later drops would also drop `y`.
59
60 ### Early exit
61
62 There are numerous "normal" ways to exit a scope early: `break`,
63 `continue`, `return` (panics are handled separately). Whenever an
64 early exit occurs, the method `break_scope` is called. It is given the
65 current point in execution where the early exit occurs, as well as the
66 scope you want to branch to (note that all early exits go to some
67 other enclosing scope). `break_scope` will record the set of drops currently
68 scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69 will be added to the CFG.
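
For example, in a sketch like the following (`cond` and `use_it` are
illustrative stand-ins), the `break` exits the loop scope while `s` still
needs to be dropped; `break_scope` records that pending drop in the loop's
[DropTree] before the branch out of the scope is emitted:

```
# fn cond() -> bool { true }
# fn use_it(_: &String) {}
loop {
    let s = String::from("hello");
    if cond() {
        break; // `break_scope` is called here; the drop of `s` is recorded
    }
    use_it(&s);
}
```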
70
71 Panics are handled in a similar fashion, except that the drops are added to the
72 MIR once the rest of the function has finished being lowered. If a terminator
73 can panic, call `diverge_from(block)` with the block containing the terminator
74 `block`.
75
76 ### Breakable scopes
77
78 In addition to the normal scope stack, we track a loop scope stack
79 that contains only loops and breakable blocks. It tracks where a `break`,
80 `continue` or `return` should go to.
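
For instance (illustrative only), a labeled `break` is resolved against this
stack so that it targets the outer loop's breakable scope rather than the
innermost one:

```
# let mut n = 0;
'outer: loop {
    loop {
        n += 1;
        if n > 3 {
            break 'outer; // targets the outer breakable scope, skipping the inner one
        }
    }
}
```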
81
82 */
83
84 use std::mem;
85
86 use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
87 use rustc_data_structures::fx::FxHashMap;
88 use rustc_hir::HirId;
89 use rustc_index::vec::{IndexSlice, IndexVec};
90 use rustc_middle::middle::region;
91 use rustc_middle::mir::*;
92 use rustc_middle::thir::{Expr, LintLevel};
93
94 use rustc_span::{DesugaringKind, Span, DUMMY_SP};
95
96 #[derive(Debug)]
97 pub struct Scopes<'tcx> {
98 scopes: Vec<Scope>,
99
100 /// The current set of breakable scopes. See module comment for more details.
101 breakable_scopes: Vec<BreakableScope<'tcx>>,
102
103 /// The scope of the innermost if-then currently being lowered.
104 if_then_scope: Option<IfThenScope>,
105
106 /// Drops that need to be done on unwind paths. See the comment on
107 /// [DropTree] for more details.
108 unwind_drops: DropTree,
109
110 /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
111 generator_drops: DropTree,
112 }
113
114 #[derive(Debug)]
115 struct Scope {
116 /// The source scope this scope was created in.
117 source_scope: SourceScope,
118
119 /// The `region::Scope` that this scope corresponds to within the source code.
120 region_scope: region::Scope,
121
122 /// set of places to drop when exiting this scope. This starts
123 /// out empty but grows as variables are declared during the
124 /// building process. This is a stack, so we always drop from the
125 /// end of the vector (top of the stack) first.
126 drops: Vec<DropData>,
127
128 moved_locals: Vec<Local>,
129
130 /// The drop index that will drop everything in and below this scope on an
131 /// unwind path.
132 cached_unwind_block: Option<DropIdx>,
133
134 /// The drop index that will drop everything in and below this scope on a
135 /// generator drop path.
136 cached_generator_drop_block: Option<DropIdx>,
137 }
138
139 #[derive(Clone, Copy, Debug)]
140 struct DropData {
141 /// The `Span` where the drop obligation was incurred (typically where the place was
142 /// declared).
143 source_info: SourceInfo,
144
145 /// local to drop
146 local: Local,
147
148 /// Whether this is a value Drop or a StorageDead.
149 kind: DropKind,
150 }
151
152 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
153 pub(crate) enum DropKind {
154 Value,
155 Storage,
156 }
157
158 #[derive(Debug)]
159 struct BreakableScope<'tcx> {
160 /// Region scope of the loop
161 region_scope: region::Scope,
162 /// The destination of the loop/block expression itself (i.e., where to put
163 /// the result of a `break` or `return` expression)
164 break_destination: Place<'tcx>,
165 /// Drops that happen on the `break`/`return` path.
166 break_drops: DropTree,
167 /// Drops that happen on the `continue` path.
168 continue_drops: Option<DropTree>,
169 }
170
171 #[derive(Debug)]
172 struct IfThenScope {
173 /// The if-then scope or arm scope
174 region_scope: region::Scope,
175 /// Drops that happen on the `else` path.
176 else_drops: DropTree,
177 }
178
179 /// The target of an expression that breaks out of a scope
180 #[derive(Clone, Copy, Debug)]
181 pub(crate) enum BreakableTarget {
182 Continue(region::Scope),
183 Break(region::Scope),
184 Return,
185 }
186
187 rustc_index::newtype_index! {
188 struct DropIdx {}
189 }
190
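/// The root of every [DropTree]: the node that is reached once all of the
/// drops scheduled on a given path have been performed (see `DropTree::new`).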
191 const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
192
193 /// A tree of drops that we have deferred lowering. It's used for:
194 ///
195 /// * Drops on unwind paths
196 /// * Drops on generator drop paths (when a suspended generator is dropped)
197 /// * Drops on return and loop exit paths
198 /// * Drops on the else path in an `if let` chain
199 ///
200 /// Once no more nodes can be added to the tree, it is lowered to MIR in one go
201 /// in `build_mir`.
202 #[derive(Debug)]
203 struct DropTree {
204 /// Drops in the tree.
205 drops: IndexVec<DropIdx, (DropData, DropIdx)>,
206 /// Map for finding the inverse of the `next_drop` relation:
207 ///
208 /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
209 previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
210 /// Edges into the `DropTree` that need to be added once it's lowered.
211 entry_points: Vec<(DropIdx, BasicBlock)>,
212 }
213
214 impl Scope {
215 /// Whether there's anything to do for the cleanup path, that is,
216 /// when unwinding through this scope. This includes destructors,
217 /// but not StorageDead statements, which don't get emitted at all
218 /// for unwinding, for several reasons:
219 /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
220 /// * LLVM's memory dependency analysis can't handle it atm
221 /// * polluting the cleanup MIR with StorageDead creates
222 ///   landing pads even though there are no actual destructors
223 /// * freeing up stack space has no effect during unwinding
224 /// Note that for generators we *do* emit StorageDeads, because they are
225 /// used by optimizations in the MIR generator transform.
226 fn needs_cleanup(&self) -> bool {
227 self.drops.iter().any(|drop| match drop.kind {
228 DropKind::Value => true,
229 DropKind::Storage => false,
230 })
231 }
232
233 fn invalidate_cache(&mut self) {
234 self.cached_unwind_block = None;
235 self.cached_generator_drop_block = None;
236 }
237 }
238
239 /// A trait that determines how [DropTree] creates its blocks and
240 /// links to any entry nodes.
241 trait DropTreeBuilder<'tcx> {
242 /// Create a new block for the tree. This should call either
243 /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
244 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
245
246 /// Links a block outside the drop tree, `from`, to the block `to` inside
247 /// the drop tree.
248 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
249 }
250
251 impl DropTree {
252 fn new() -> Self {
253 // The root node of the tree doesn't represent a drop, but instead
254 // represents the block in the tree that should be jumped to once all
255 // of the required drops have been performed.
256 let fake_source_info = SourceInfo::outermost(DUMMY_SP);
257 let fake_data =
258 DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
259 let drop_idx = DropIdx::MAX;
260 let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
261 Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
262 }
263
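/// Adds `drop` so that it runs before the existing node `next`, reusing the
/// node for this `(next, local, kind)` combination if one already exists.
/// Scheduling the same drop twice against the same `next` therefore yields
/// the same `DropIdx`, so chains shared by several exits are built only once.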
264 fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
265 let drops = &mut self.drops;
266 *self
267 .previous_drops
268 .entry((next, drop.local, drop.kind))
269 .or_insert_with(|| drops.push((drop, next)))
270 }
271
272 fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
273 debug_assert!(to < self.drops.next_index());
274 self.entry_points.push((to, from));
275 }
276
277 /// Builds the MIR for a given drop tree.
278 ///
279 /// `blocks` should have the same length as `self.drops`, and may have its
280 /// first value set to some already existing block.
281 fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
282 &mut self,
283 cfg: &mut CFG<'tcx>,
284 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
285 ) {
286 debug!("DropTree::build_mir(drops = {:#?})", self);
287 assert_eq!(blocks.len(), self.drops.len());
288
289 self.assign_blocks::<T>(cfg, blocks);
290 self.link_blocks(cfg, blocks)
291 }
292
293 /// Assign blocks for all of the drops in the drop tree that need them.
294 fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
295 &mut self,
296 cfg: &mut CFG<'tcx>,
297 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
298 ) {
299 // StorageDead statements can share blocks with each other and also with
300 // a Drop terminator. We iterate through the drops to find which drops
301 // need their own block.
302 #[derive(Clone, Copy)]
303 enum Block {
304 // This drop is unreachable
305 None,
306 // This drop is only reachable through the `StorageDead` with the
307 // specified index.
308 Shares(DropIdx),
309 // This drop has more than one way of being reached, or it is
310 // branched to from outside the tree, or its predecessor is a
311 // `Value` drop.
312 Own,
313 }
314
315 let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
316 if blocks[ROOT_NODE].is_some() {
317 // In some cases (such as drops for `continue`) the root node
318 // already has a block. In this case, make sure that we don't
319 // override it.
320 needs_block[ROOT_NODE] = Block::Own;
321 }
322
323 // Sort so that we only need to check the last value.
324 let entry_points = &mut self.entry_points;
325 entry_points.sort();
326
327 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
328 if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
329 let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
330 needs_block[drop_idx] = Block::Own;
331 while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
332 let entry_block = entry_points.pop().unwrap().1;
333 T::add_entry(cfg, entry_block, block);
334 }
335 }
336 match needs_block[drop_idx] {
337 Block::None => continue,
338 Block::Own => {
339 blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
340 }
341 Block::Shares(pred) => {
342 blocks[drop_idx] = blocks[pred];
343 }
344 }
345 if let DropKind::Value = drop_data.0.kind {
346 needs_block[drop_data.1] = Block::Own;
347 } else if drop_idx != ROOT_NODE {
348 match &mut needs_block[drop_data.1] {
349 pred @ Block::None => *pred = Block::Shares(drop_idx),
350 pred @ Block::Shares(_) => *pred = Block::Own,
351 Block::Own => (),
352 }
353 }
354 }
355
356 debug!("assign_blocks: blocks = {:#?}", blocks);
357 assert!(entry_points.is_empty());
358 }
359
360 fn link_blocks<'tcx>(
361 &self,
362 cfg: &mut CFG<'tcx>,
363 blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
364 ) {
365 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
366 let Some(block) = blocks[drop_idx] else { continue };
367 match drop_data.0.kind {
368 DropKind::Value => {
369 let terminator = TerminatorKind::Drop {
370 target: blocks[drop_data.1].unwrap(),
371 // The caller will handle this if needed.
372 unwind: UnwindAction::Terminate,
373 place: drop_data.0.local.into(),
374 };
375 cfg.terminate(block, drop_data.0.source_info, terminator);
376 }
377 // Root nodes don't correspond to a drop.
378 DropKind::Storage if drop_idx == ROOT_NODE => {}
379 DropKind::Storage => {
380 let stmt = Statement {
381 source_info: drop_data.0.source_info,
382 kind: StatementKind::StorageDead(drop_data.0.local),
383 };
384 cfg.push(block, stmt);
385 let target = blocks[drop_data.1].unwrap();
386 if target != block {
387 // Diagnostics don't use this `Span` but debuginfo
388 // might. Since we don't want breakpoints to be placed
389 // here, especially when this is on an unwind path, we
390 // use `DUMMY_SP`.
391 let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
392 let terminator = TerminatorKind::Goto { target };
393 cfg.terminate(block, source_info, terminator);
394 }
395 }
396 }
397 }
398 }
399 }
400
401 impl<'tcx> Scopes<'tcx> {
402 pub(crate) fn new() -> Self {
403 Self {
404 scopes: Vec::new(),
405 breakable_scopes: Vec::new(),
406 if_then_scope: None,
407 unwind_drops: DropTree::new(),
408 generator_drops: DropTree::new(),
409 }
410 }
411
412 fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
413 debug!("push_scope({:?})", region_scope);
414 self.scopes.push(Scope {
415 source_scope: vis_scope,
416 region_scope: region_scope.0,
417 drops: vec![],
418 moved_locals: vec![],
419 cached_unwind_block: None,
420 cached_generator_drop_block: None,
421 });
422 }
423
424 fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
425 let scope = self.scopes.pop().unwrap();
426 assert_eq!(scope.region_scope, region_scope.0);
427 scope
428 }
429
430 fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
431 self.scopes
432 .iter()
433 .rposition(|scope| scope.region_scope == region_scope)
434 .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
435 }
436
437 /// Returns the topmost active scope, which is known to be alive until
438 /// the next scope expression.
439 fn topmost(&self) -> region::Scope {
440 self.scopes.last().expect("topmost_scope: no scopes present").region_scope
441 }
442 }
443
444 impl<'a, 'tcx> Builder<'a, 'tcx> {
445 // Adding and removing scopes
446 // ==========================
447
448 /// Start a breakable scope, which tracks where `continue`, `break` and
449 /// `return` should branch to.
450 pub(crate) fn in_breakable_scope<F>(
451 &mut self,
452 loop_block: Option<BasicBlock>,
453 break_destination: Place<'tcx>,
454 span: Span,
455 f: F,
456 ) -> BlockAnd<()>
457 where
458 F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
459 {
460 let region_scope = self.scopes.topmost();
461 let scope = BreakableScope {
462 region_scope,
463 break_destination,
464 break_drops: DropTree::new(),
465 continue_drops: loop_block.map(|_| DropTree::new()),
466 };
467 self.scopes.breakable_scopes.push(scope);
468 let normal_exit_block = f(self);
469 let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
470 assert!(breakable_scope.region_scope == region_scope);
471 let break_block =
472 self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
473 if let Some(drops) = breakable_scope.continue_drops {
474 self.build_exit_tree(drops, region_scope, span, loop_block);
475 }
476 match (normal_exit_block, break_block) {
477 (Some(block), None) | (None, Some(block)) => block,
478 (None, None) => self.cfg.start_new_block().unit(),
479 (Some(normal_block), Some(exit_block)) => {
480 let target = self.cfg.start_new_block();
481 let source_info = self.source_info(span);
482 self.cfg.terminate(
483 unpack!(normal_block),
484 source_info,
485 TerminatorKind::Goto { target },
486 );
487 self.cfg.terminate(
488 unpack!(exit_block),
489 source_info,
490 TerminatorKind::Goto { target },
491 );
492 target.unit()
493 }
494 }
495 }
496
497 /// Start an if-then scope which tracks drops for `if` expressions and `if`
498 /// guards.
499 ///
500 /// For an if-let chain:
501 ///
502 /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
503 ///
504 /// There are three possible ways the condition can be false and we may have
505 /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
506 /// To handle this correctly we use a `DropTree` in a similar way to a
507 /// `loop` expression and 'break' out on all of the 'else' paths.
508 ///
509 /// Notes:
510 /// - We don't need to keep a stack of scopes in the `Builder` because the
511 /// 'else' paths will only leave the innermost scope.
512 /// - This is also used for match guards.
513 pub(crate) fn in_if_then_scope<F>(
514 &mut self,
515 region_scope: region::Scope,
516 span: Span,
517 f: F,
518 ) -> (BasicBlock, BasicBlock)
519 where
520 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
521 {
522 let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
523 let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
524
525 let then_block = unpack!(f(self));
526
527 let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
528 assert!(if_then_scope.region_scope == region_scope);
529
530 let else_block = self
531 .build_exit_tree(if_then_scope.else_drops, region_scope, span, None)
532 .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));
533
534 (then_block, else_block)
535 }
536
537 pub(crate) fn in_opt_scope<F, R>(
538 &mut self,
539 opt_scope: Option<(region::Scope, SourceInfo)>,
540 f: F,
541 ) -> BlockAnd<R>
542 where
543 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
544 {
545 debug!("in_opt_scope(opt_scope={:?})", opt_scope);
546 if let Some(region_scope) = opt_scope {
547 self.push_scope(region_scope);
548 }
549 let mut block;
550 let rv = unpack!(block = f(self));
551 if let Some(region_scope) = opt_scope {
552 unpack!(block = self.pop_scope(region_scope, block));
553 }
554 debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
555 block.and(rv)
556 }
557
558 /// Convenience wrapper that pushes a scope and then executes `f`
559 /// to build its contents, popping the scope afterwards.
560 #[instrument(skip(self, f), level = "debug")]
561 pub(crate) fn in_scope<F, R>(
562 &mut self,
563 region_scope: (region::Scope, SourceInfo),
564 lint_level: LintLevel,
565 f: F,
566 ) -> BlockAnd<R>
567 where
568 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
569 {
570 let source_scope = self.source_scope;
571 if let LintLevel::Explicit(current_hir_id) = lint_level {
572 let parent_id =
573 self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root;
574 self.maybe_new_source_scope(region_scope.1.span, None, current_hir_id, parent_id);
575 }
576 self.push_scope(region_scope);
577 let mut block;
578 let rv = unpack!(block = f(self));
579 unpack!(block = self.pop_scope(region_scope, block));
580 self.source_scope = source_scope;
581 debug!(?block);
582 block.and(rv)
583 }
584
585 /// Push a scope onto the stack. You can then build code in this
586 /// scope and call `pop_scope` afterwards. Note that these two
587 /// calls must be paired; using `in_scope` as a convenience
588 /// wrapper may be preferable.
589 pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
590 self.scopes.push_scope(region_scope, self.source_scope);
591 }
592
593 /// Pops a scope, which should have region scope `region_scope`,
594 /// adding any drops onto the end of `block` that are needed.
595 /// This must match 1-to-1 with `push_scope`.
596 pub(crate) fn pop_scope(
597 &mut self,
598 region_scope: (region::Scope, SourceInfo),
599 mut block: BasicBlock,
600 ) -> BlockAnd<()> {
601 debug!("pop_scope({:?}, {:?})", region_scope, block);
602
603 block = self.leave_top_scope(block);
604
605 self.scopes.pop_scope(region_scope);
606
607 block.unit()
608 }
609
610 /// Sets up the drops for breaking from `block` to `target`.
611 pub(crate) fn break_scope(
612 &mut self,
613 mut block: BasicBlock,
614 value: Option<&Expr<'tcx>>,
615 target: BreakableTarget,
616 source_info: SourceInfo,
617 ) -> BlockAnd<()> {
618 let span = source_info.span;
619
620 let get_scope_index = |scope: region::Scope| {
621 // find the loop-scope by its `region::Scope`.
622 self.scopes
623 .breakable_scopes
624 .iter()
625 .rposition(|breakable_scope| breakable_scope.region_scope == scope)
626 .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
627 };
628 let (break_index, destination) = match target {
629 BreakableTarget::Return => {
630 let scope = &self.scopes.breakable_scopes[0];
631 if scope.break_destination != Place::return_place() {
632 span_bug!(span, "`return` in item with no return scope");
633 }
634 (0, Some(scope.break_destination))
635 }
636 BreakableTarget::Break(scope) => {
637 let break_index = get_scope_index(scope);
638 let scope = &self.scopes.breakable_scopes[break_index];
639 (break_index, Some(scope.break_destination))
640 }
641 BreakableTarget::Continue(scope) => {
642 let break_index = get_scope_index(scope);
643 (break_index, None)
644 }
645 };
646
647 if let Some(destination) = destination {
648 if let Some(value) = value {
649 debug!("stmt_expr Break val block_context.push(SubExpr)");
650 self.block_context.push(BlockFrame::SubExpr);
651 unpack!(block = self.expr_into_dest(destination, block, value));
652 self.block_context.pop();
653 } else {
654 self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
655 }
656 } else {
657 assert!(value.is_none(), "`return` and `break` should have a destination");
658 if self.tcx.sess.instrument_coverage() {
659 // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
660 // a Coverage code region can be generated, `continue` needs no `Assign`; but
661 // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
662 // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
663 self.add_dummy_assignment(span, block, source_info);
664 }
665 }
666
667 let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
668 let scope_index = self.scopes.scope_index(region_scope, span);
669 let drops = if destination.is_some() {
670 &mut self.scopes.breakable_scopes[break_index].break_drops
671 } else {
672 self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
673 };
674 let mut drop_idx = ROOT_NODE;
675 for scope in &self.scopes.scopes[scope_index + 1..] {
676 for drop in &scope.drops {
677 drop_idx = drops.add_drop(*drop, drop_idx);
678 }
679 }
680 drops.add_entry(block, drop_idx);
681
682 // `build_drop_trees` doesn't have access to our source_info, so we
683 // create a dummy terminator now. `TerminatorKind::Resume` is used
684 // because MIR type checking will panic if it hasn't been overwritten.
685 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
686
687 self.cfg.start_new_block().unit()
688 }
689
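/// Sets up the drops for branching from `block` to the `else` block of the
/// innermost if-then scope (see [IfThenScope]), e.g. when a match guard or a
/// `let` condition in an `if`/guard chain fails.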
690 pub(crate) fn break_for_else(
691 &mut self,
692 block: BasicBlock,
693 target: region::Scope,
694 source_info: SourceInfo,
695 ) {
696 let scope_index = self.scopes.scope_index(target, source_info.span);
697 let if_then_scope = self
698 .scopes
699 .if_then_scope
700 .as_mut()
701 .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
702
703 assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");
704
705 let mut drop_idx = ROOT_NODE;
706 let drops = &mut if_then_scope.else_drops;
707 for scope in &self.scopes.scopes[scope_index + 1..] {
708 for drop in &scope.drops {
709 drop_idx = drops.add_drop(*drop, drop_idx);
710 }
711 }
712 drops.add_entry(block, drop_idx);
713
714 // `build_drop_trees` doesn't have access to our source_info, so we
715 // create a dummy terminator now. `TerminatorKind::Resume` is used
716 // because MIR type checking will panic if it hasn't been overwritten.
717 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
718 }
719
720 // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
721 // statement.
722 fn add_dummy_assignment(&mut self, span: Span, block: BasicBlock, source_info: SourceInfo) {
723 let local_decl = LocalDecl::new(self.tcx.mk_unit(), span).internal();
724 let temp_place = Place::from(self.local_decls.push(local_decl));
725 self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
726 }
727
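/// Builds the drops needed to leave the innermost scope at `block`, without
/// popping the scope off the stack (callers such as `pop_scope` do that).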
728 fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
729 // If we are emitting a `drop` statement, we need to have the cached
730 // diverge cleanup pads ready in case that drop panics.
731 let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
732 let is_generator = self.generator_kind.is_some();
733 let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
734
735 let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
736 unpack!(build_scope_drops(
737 &mut self.cfg,
738 &mut self.scopes.unwind_drops,
739 scope,
740 block,
741 unwind_to,
742 is_generator && needs_cleanup,
743 self.arg_count,
744 ))
745 }
746
747 /// Possibly creates a new source scope if `current_root` and `parent_root`
748 /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
749 pub(crate) fn maybe_new_source_scope(
750 &mut self,
751 span: Span,
752 safety: Option<Safety>,
753 current_id: HirId,
754 parent_id: HirId,
755 ) {
756 let (current_root, parent_root) =
757 if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
758 // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
759 // the only part of rustc that tracks MIR -> HIR is the `SourceScopeLocalData::lint_root`
760 // field that tracks lint levels for MIR locations. Normally the number of source scopes
761 // is limited to the set of nodes with lint annotations. The -Zmaximal-hir-to-mir-coverage
762 // flag changes this behavior to maximize the number of source scopes, increasing the
763 // granularity of the MIR->HIR mapping.
764 (current_id, parent_id)
765 } else {
766 // Use `maybe_lint_level_root_bounded` with `self.hir_id` as a bound
767 // to avoid adding Hir dependencies on our parents.
768 // We estimate the true lint roots here to avoid creating a lot of source scopes.
769 (
770 self.tcx.maybe_lint_level_root_bounded(current_id, self.hir_id),
771 self.tcx.maybe_lint_level_root_bounded(parent_id, self.hir_id),
772 )
773 };
774
775 if current_root != parent_root {
776 let lint_level = LintLevel::Explicit(current_root);
777 self.source_scope = self.new_source_scope(span, lint_level, safety);
778 }
779 }
780
781 /// Creates a new source scope, nested in the current one.
782 pub(crate) fn new_source_scope(
783 &mut self,
784 span: Span,
785 lint_level: LintLevel,
786 safety: Option<Safety>,
787 ) -> SourceScope {
788 let parent = self.source_scope;
789 debug!(
790 "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
791 span,
792 lint_level,
793 safety,
794 parent,
795 self.source_scopes.get(parent)
796 );
797 let scope_local_data = SourceScopeLocalData {
798 lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
799 lint_root
800 } else {
801 self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
802 },
803 safety: safety.unwrap_or_else(|| {
804 self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
805 }),
806 };
807 self.source_scopes.push(SourceScopeData {
808 span,
809 parent_scope: Some(parent),
810 inlined: None,
811 inlined_parent_scope: None,
812 local_data: ClearCrossCrate::Set(scope_local_data),
813 })
814 }
815
816 /// Given a span and the current source scope, make a SourceInfo.
817 pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
818 SourceInfo { span, scope: self.source_scope }
819 }
820
821 // Finding scopes
822 // ==============
823
824 /// Returns the scope that we should use as the lifetime of an
825 /// operand. Basically, an operand must live until it is consumed.
826 /// This is similar to, but not quite the same as, the temporary
827 /// scope (which can be larger or smaller).
828 ///
829 /// Consider:
830 /// ```ignore (illustrative)
831 /// let x = foo(bar(X, Y));
832 /// ```
833 /// We wish to pop the storage for X and Y after `bar()` is
834 /// called, not after the whole `let` is completed.
835 ///
836 /// As another example, if the second argument diverges:
837 /// ```ignore (illustrative)
838 /// foo(Box::new(2), panic!())
839 /// ```
840 /// We would allocate the box but then free it on the unwinding
841 /// path; we would also emit a free on the 'success' path from
842 /// panic, but that will turn out to be removed as dead-code.
843 pub(crate) fn local_scope(&self) -> region::Scope {
844 self.scopes.topmost()
845 }
846
847 // Scheduling drops
848 // ================
849
850 pub(crate) fn schedule_drop_storage_and_value(
851 &mut self,
852 span: Span,
853 region_scope: region::Scope,
854 local: Local,
855 ) {
856 self.schedule_drop(span, region_scope, local, DropKind::Storage);
857 self.schedule_drop(span, region_scope, local, DropKind::Value);
858 }
859
860 /// Indicates that `place` should be dropped on exit from `region_scope`.
861 ///
862 /// When called with `DropKind::Storage`, `place` shouldn't be the return
863 /// place, or a function parameter.
864 pub(crate) fn schedule_drop(
865 &mut self,
866 span: Span,
867 region_scope: region::Scope,
868 local: Local,
869 drop_kind: DropKind,
870 ) {
871 let needs_drop = match drop_kind {
872 DropKind::Value => {
873 if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
874 return;
875 }
876 true
877 }
878 DropKind::Storage => {
879 if local.index() <= self.arg_count {
880 span_bug!(
881 span,
882 "`schedule_drop` called with local {:?} and arg_count {}",
883 local,
884 self.arg_count,
885 )
886 }
887 false
888 }
889 };
890
891 // When building drops, we try to cache chains of drops to reduce the
892 // number of `DropTree::add_drop` calls. This, however, means that
893 // whenever we add a drop into a scope which already had some entries
894 // in the drop tree built (and thus, cached) for it, we must invalidate
895 // all caches which might branch into the scope which had a drop just
896 // added to it. This is necessary, because otherwise some other code
897 // might use the cache to branch into already built chain of drops,
898 // essentially ignoring the newly added drop.
899 //
900 // For example, consider two scopes with a drop in each. These are built
901 // and thus the caches are filled:
902 //
903 // +--------------------------------------------------------+
904 // | +---------------------------------+                    |
905 // | | +--------+     +-------------+  |  +---------------+ |
906 // | | | return | <-+ | drop(outer) |<-+  | drop(middle)  | |
907 // | | +--------+     +-------------+  |  +---------------+ |
908 // | +------------|outer_scope cache|--+                    |
909 // +------------------------------|middle_scope cache|------+
910 //
911 // Now, a new, inner-most scope is added along with a new drop into
912 // both inner-most and outer-most scopes:
913 //
914 // +------------------------------------------------------------+
915 // | +----------------------------------+                       |
916 // | | +--------+      +-------------+  |   +---------------+   | +-------------+
917 // | | | return | <+   | drop(new)   |<-+   | drop(middle)  |<--+ | drop(inner) |
918 // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
919 // | |             +-+ +-------------+  |                       |
920 // | +---|invalid outer_scope cache|----+                       |
921 // +----=----------------|invalid middle_scope cache|-----------+
922 //
923 // If, when adding `drop(new)` we do not invalidate the cached blocks for both
924 // outer_scope and middle_scope, then, when building drops for the inner (right-most)
925 // scope, the old, cached blocks, without `drop(new)` will get used, producing the
926 // wrong results.
927 //
928 // Note that this code iterates scopes from the inner-most to the outer-most,
929 // invalidating the caches of each scope visited. This way only the bare minimum
930 // of caches gets invalidated; i.e., if a new drop is added into the middle scope,
931 // the cache of the outer scope stays intact.
932 //
933 // Since we only cache drops for the unwind path and the generator drop
934 // path, we only need to invalidate the cache for drops that happen on
935 // the unwind or generator drop paths. This means that for
936 // non-generators we don't need to invalidate caches for `DropKind::Storage`.
937 let invalidate_caches = needs_drop || self.generator_kind.is_some();
938 for scope in self.scopes.scopes.iter_mut().rev() {
939 if invalidate_caches {
940 scope.invalidate_cache();
941 }
942
943 if scope.region_scope == region_scope {
944 let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree);
945 // Attribute scope exit drops to scope's closing brace.
946 let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
947
948 scope.drops.push(DropData {
949 source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
950 local,
951 kind: drop_kind,
952 });
953
954 return;
955 }
956 }
957
958 span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
959 }
960
961 /// Indicates that the "local operand" stored in `local` is
962 /// *moved* at some point during execution (see `local_scope` for
963 /// more information about what a "local operand" is -- in short,
964 /// it's an intermediate operand created as part of preparing some
965 /// MIR instruction). We use this information to suppress
966 /// redundant drops on the non-unwind paths. This results in less
967 /// MIR, but also avoids spurious borrow check errors
968 /// (c.f. #64391).
969 ///
970 /// Example: when compiling the call to `foo` here:
971 ///
972 /// ```ignore (illustrative)
973 /// foo(bar(), ...)
974 /// ```
975 ///
976 /// we would evaluate `bar()` to an operand `_X`. We would also
977 /// schedule `_X` to be dropped when the expression scope for
978 /// `foo(bar())` is exited. This is relevant, for example, if the
979 /// later arguments should unwind (it would ensure that `_X` gets
980 /// dropped). However, if no unwind occurs, then `_X` will be
981 /// unconditionally consumed by the `call`:
982 ///
983 /// ```ignore (illustrative)
984 /// bb {
985 /// ...
986 /// _R = CALL(foo, _X, ...)
987 /// }
988 /// ```
989 ///
990 /// However, `_X` is still registered to be dropped, and so if we
991 /// do nothing else, we would generate a `DROP(_X)` that occurs
992 /// after the call. This will later be optimized out by the
993 /// drop-elaboration code, but in the meantime it can lead to
994 /// spurious borrow-check errors -- the problem, ironically, is
995 /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
996 /// that it creates. See #64391 for an example.
997 pub(crate) fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
998 let local_scope = self.local_scope();
999 let scope = self.scopes.scopes.last_mut().unwrap();
1000
1001 assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1002
1003 // look for moves of a local variable, like `MOVE(_X)`
1004 let locals_moved = operands.iter().flat_map(|operand| match operand {
1005 Operand::Copy(_) | Operand::Constant(_) => None,
1006 Operand::Move(place) => place.as_local(),
1007 });
1008
1009 for local in locals_moved {
1010 // check if we have a Drop for this operand and -- if so
1011 // -- add it to the list of moved operands. Note that this
1012 // local might not have been an operand created for this
1013 // call, it could come from other places too.
1014 if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1015 scope.moved_locals.push(local);
1016 }
1017 }
1018 }
1019
1020 // Other
1021 // =====
1022
1023 /// Returns the [DropIdx] for the innermost drop if the function unwound at
1024 /// this point. The `DropIdx` will be created if it doesn't already exist.
1025 fn diverge_cleanup(&mut self) -> DropIdx {
1026 // It is okay to use a dummy span because getting the scope index of the topmost
1027 // scope must always succeed.
1028 self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1029 }
1030
1031 /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1032 /// some ancestor scope instead of the current scope.
1033 /// It is possible to unwind to some ancestor scope if some drop panics as
1034 /// the program breaks out of an if-then scope.
1035 fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1036 let target = self.scopes.scope_index(target_scope, span);
1037 let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1038 .iter()
1039 .enumerate()
1040 .rev()
1041 .find_map(|(scope_idx, scope)| {
1042 scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1043 })
1044 .unwrap_or((0, ROOT_NODE));
1045
1046 if uncached_scope > target {
1047 return cached_drop;
1048 }
1049
1050 let is_generator = self.generator_kind.is_some();
1051 for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1052 for drop in &scope.drops {
1053 if is_generator || drop.kind == DropKind::Value {
1054 cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1055 }
1056 }
1057 scope.cached_unwind_block = Some(cached_drop);
1058 }
1059
1060 cached_drop
1061 }
1062
1063 /// Prepares to create a path that performs all required cleanup for a
1064 /// terminator that can unwind at the given basic block.
1065 ///
1066 /// This path terminates in Resume. The path isn't created until after all
1067 /// of the non-unwind paths in this item have been lowered.
1068 pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1069 debug_assert!(
1070 matches!(
1071 self.cfg.block_data(start).terminator().kind,
1072 TerminatorKind::Assert { .. }
1073 | TerminatorKind::Call { .. }
1074 | TerminatorKind::Drop { .. }
1075 | TerminatorKind::FalseUnwind { .. }
1076 | TerminatorKind::InlineAsm { .. }
1077 ),
1078 "diverge_from called on block with terminator that cannot unwind."
1079 );
1080
1081 let next_drop = self.diverge_cleanup();
1082 self.scopes.unwind_drops.add_entry(start, next_drop);
1083 }
1084
1085 /// Sets up a path that performs all required cleanup for dropping a
1086 /// generator, starting from the given block that ends in
1087 /// [TerminatorKind::Yield].
1088 ///
1089 /// This path terminates in GeneratorDrop.
1090 pub(crate) fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
1091 debug_assert!(
1092 matches!(
1093 self.cfg.block_data(yield_block).terminator().kind,
1094 TerminatorKind::Yield { .. }
1095 ),
1096 "generator_drop_cleanup called on block with non-yield terminator."
1097 );
1098 let (uncached_scope, mut cached_drop) = self
1099 .scopes
1100 .scopes
1101 .iter()
1102 .enumerate()
1103 .rev()
1104 .find_map(|(scope_idx, scope)| {
1105 scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1106 })
1107 .unwrap_or((0, ROOT_NODE));
1108
1109 for scope in &mut self.scopes.scopes[uncached_scope..] {
1110 for drop in &scope.drops {
1111 cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
1112 }
1113 scope.cached_generator_drop_block = Some(cached_drop);
1114 }
1115
1116 self.scopes.generator_drops.add_entry(yield_block, cached_drop);
1117 }
1118
1119 /// Utility function for *non*-scope code to build their own drops. Forces a
1120 /// drop at this point in the MIR by creating a new block.
1121 pub(crate) fn build_drop_and_replace(
1122 &mut self,
1123 block: BasicBlock,
1124 span: Span,
1125 place: Place<'tcx>,
1126 value: Rvalue<'tcx>,
1127 ) -> BlockAnd<()> {
1128 let span = self.tcx.with_stable_hashing_context(|hcx| {
1129 span.mark_with_reason(None, DesugaringKind::Replace, self.tcx.sess.edition(), hcx)
1130 });
1131 let source_info = self.source_info(span);
1132
1133 // create the new block for the assignment
1134 let assign = self.cfg.start_new_block();
1135 self.cfg.push_assign(assign, source_info, place, value.clone());
1136
1137 // create the new block for the assignment in the case of unwinding
1138 let assign_unwind = self.cfg.start_new_cleanup_block();
1139 self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1140
1141 self.cfg.terminate(
1142 block,
1143 source_info,
1144 TerminatorKind::Drop {
1145 place,
1146 target: assign,
1147 unwind: UnwindAction::Cleanup(assign_unwind),
1148 },
1149 );
1150 self.diverge_from(block);
1151
1152 assign.unit()
1153 }
1154
1155 /// Creates an `Assert` terminator and returns the success block.
1156 /// If the boolean condition operand is not the expected value,
1157 /// a runtime panic will be caused with the given message.
1158 pub(crate) fn assert(
1159 &mut self,
1160 block: BasicBlock,
1161 cond: Operand<'tcx>,
1162 expected: bool,
1163 msg: AssertMessage<'tcx>,
1164 span: Span,
1165 ) -> BasicBlock {
1166 let source_info = self.source_info(span);
1167 let success_block = self.cfg.start_new_block();
1168
1169 self.cfg.terminate(
1170 block,
1171 source_info,
1172 TerminatorKind::Assert {
1173 cond,
1174 expected,
1175 msg,
1176 target: success_block,
1177 unwind: UnwindAction::Continue,
1178 },
1179 );
1180 self.diverge_from(block);
1181
1182 success_block
1183 }
1184
1185 /// Unschedules any drops in the top scope.
1186 ///
1187 /// This is only needed for `match` arm scopes, because they have one
1188 /// entrance per pattern, but only one exit.
1189 pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1190 let top_scope = self.scopes.scopes.last_mut().unwrap();
1191
1192 assert_eq!(top_scope.region_scope, region_scope);
1193
1194 top_scope.drops.clear();
1195 top_scope.invalidate_cache();
1196 }
1197 }
1198
1199 /// Builds drops for `pop_scope` and `leave_top_scope`.
1200 fn build_scope_drops<'tcx>(
1201 cfg: &mut CFG<'tcx>,
1202 unwind_drops: &mut DropTree,
1203 scope: &Scope,
1204 mut block: BasicBlock,
1205 mut unwind_to: DropIdx,
1206 storage_dead_on_unwind: bool,
1207 arg_count: usize,
1208 ) -> BlockAnd<()> {
1209 debug!("build_scope_drops({:?} -> {:?})", block, scope);
1210
1211 // Build up the drops in evaluation order. The end result will
1212 // look like:
1213 //
1214 // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1215 //        |                    |                 |
1216 //        :                    |                 |
1217 //                             V                 V
1218 //     [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1219 //
1220 // The horizontal arrows represent the execution path when the drops return
1221 // successfully. The downwards arrows represent the execution path when the
1222 // drops panic (panicking while unwinding will abort, so there's no need for
1223 // another set of arrows).
1224 //
1225 // For generators, we unwind from a drop on a local to its StorageDead
1226 // statement. For other functions we don't worry about StorageDead. The
1227 // drops for the unwind path should have already been generated by
1228 // `diverge_cleanup_gen`.
1229
1230 for drop_data in scope.drops.iter().rev() {
1231 let source_info = drop_data.source_info;
1232 let local = drop_data.local;
1233
1234 match drop_data.kind {
1235 DropKind::Value => {
1236 // `unwind_to` should drop the value that we're about to
1237 // schedule. If dropping this value panics, then we continue
1238 // with the *next* value on the unwind path.
1239 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1240 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1241 unwind_to = unwind_drops.drops[unwind_to].1;
1242
1243 // If the operand has been moved, and we are not on an unwind
1244 // path, then don't generate the drop. (We only take this into
1245 // account for non-unwind paths so as not to disturb the
1246 // caching mechanism.)
1247 if scope.moved_locals.iter().any(|&o| o == local) {
1248 continue;
1249 }
1250
1251 unwind_drops.add_entry(block, unwind_to);
1252
1253 let next = cfg.start_new_block();
1254 cfg.terminate(
1255 block,
1256 source_info,
1257 TerminatorKind::Drop {
1258 place: local.into(),
1259 target: next,
1260 unwind: UnwindAction::Continue,
1261 },
1262 );
1263 block = next;
1264 }
1265 DropKind::Storage => {
1266 if storage_dead_on_unwind {
1267 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1268 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1269 unwind_to = unwind_drops.drops[unwind_to].1;
1270 }
1271 // Only temps and vars need their storage dead.
1272 assert!(local.index() > arg_count);
1273 cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
1274 }
1275 }
1276 }
1277 block.unit()
1278 }
1279
1280 impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1281 /// Build a drop tree for a breakable scope.
1282 ///
1283 /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1284 /// loop. Otherwise this is for `break` or `return`.
1285 fn build_exit_tree(
1286 &mut self,
1287 mut drops: DropTree,
1288 else_scope: region::Scope,
1289 span: Span,
1290 continue_block: Option<BasicBlock>,
1291 ) -> Option<BlockAnd<()>> {
1292 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1293 blocks[ROOT_NODE] = continue_block;
1294
1295 drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
1296 let is_generator = self.generator_kind.is_some();
1297
1298 // Link the exit drop tree to unwind drop tree.
1299 if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
1300 let unwind_target = self.diverge_cleanup_target(else_scope, span);
1301 let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1302 for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
1303 match drop_data.0.kind {
1304 DropKind::Storage => {
1305 if is_generator {
1306 let unwind_drop = self
1307 .scopes
1308 .unwind_drops
1309 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1310 unwind_indices.push(unwind_drop);
1311 } else {
1312 unwind_indices.push(unwind_indices[drop_data.1]);
1313 }
1314 }
1315 DropKind::Value => {
1316 let unwind_drop = self
1317 .scopes
1318 .unwind_drops
1319 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1320 self.scopes
1321 .unwind_drops
1322 .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
1323 unwind_indices.push(unwind_drop);
1324 }
1325 }
1326 }
1327 }
1328 blocks[ROOT_NODE].map(BasicBlock::unit)
1329 }
1330
1331 /// Build the unwind and generator drop trees.
1332 pub(crate) fn build_drop_trees(&mut self) {
1333 if self.generator_kind.is_some() {
1334 self.build_generator_drop_trees();
1335 } else {
1336 Self::build_unwind_tree(
1337 &mut self.cfg,
1338 &mut self.scopes.unwind_drops,
1339 self.fn_span,
1340 &mut None,
1341 );
1342 }
1343 }
1344
1345 fn build_generator_drop_trees(&mut self) {
1346 // Build the drop tree for dropping the generator while it's suspended.
1347 let drops = &mut self.scopes.generator_drops;
1348 let cfg = &mut self.cfg;
1349 let fn_span = self.fn_span;
1350 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1351 drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
1352 if let Some(root_block) = blocks[ROOT_NODE] {
1353 cfg.terminate(
1354 root_block,
1355 SourceInfo::outermost(fn_span),
1356 TerminatorKind::GeneratorDrop,
1357 );
1358 }
1359
1360 // Build the drop tree for unwinding in the normal control flow paths.
1361 let resume_block = &mut None;
1362 let unwind_drops = &mut self.scopes.unwind_drops;
1363 Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
1364
1365 // Build the drop tree for unwinding when dropping a suspended
1366 // generator.
1367 //
1368 // This is a different tree to the standard unwind paths here to
1369 // prevent drop elaboration from creating drop flags that would have
1370 // to be captured by the generator. I'm not sure how important this
1371 // optimization is, but it is here.
1372 for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
1373 if let DropKind::Value = drop_data.0.kind {
1374 debug_assert!(drop_data.1 < drops.drops.next_index());
1375 drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
1376 }
1377 }
1378 Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
1379 }
1380
1381 fn build_unwind_tree(
1382 cfg: &mut CFG<'tcx>,
1383 drops: &mut DropTree,
1384 fn_span: Span,
1385 resume_block: &mut Option<BasicBlock>,
1386 ) {
1387 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1388 blocks[ROOT_NODE] = *resume_block;
1389 drops.build_mir::<Unwind>(cfg, &mut blocks);
1390 if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1391 cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::Resume);
1392
1393 *resume_block = blocks[ROOT_NODE];
1394 }
1395 }
1396 }
1397
1398 // DropTreeBuilder implementations.
1399
1400 struct ExitScopes;
1401
1402 impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1403 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1404 cfg.start_new_block()
1405 }
1406 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1407 cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
1408 }
1409 }
1410
1411 struct GeneratorDrop;
1412
1413 impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
1414 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1415 cfg.start_new_block()
1416 }
1417 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1418 let term = cfg.block_data_mut(from).terminator_mut();
1419 if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
1420 *drop = Some(to);
1421 } else {
1422 span_bug!(
1423 term.source_info.span,
1424 "cannot enter generator drop tree from {:?}",
1425 term.kind
1426 )
1427 }
1428 }
1429 }
1430
1431 struct Unwind;
1432
1433 impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1434 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1435 cfg.start_new_cleanup_block()
1436 }
1437 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1438 let term = &mut cfg.block_data_mut(from).terminator_mut();
1439 match &mut term.kind {
1440 TerminatorKind::Drop { unwind, .. } => {
1441 if let UnwindAction::Cleanup(unwind) = *unwind {
1442 let source_info = term.source_info;
1443 cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
1444 } else {
1445 *unwind = UnwindAction::Cleanup(to);
1446 }
1447 }
1448 TerminatorKind::FalseUnwind { unwind, .. }
1449 | TerminatorKind::Call { unwind, .. }
1450 | TerminatorKind::Assert { unwind, .. }
1451 | TerminatorKind::InlineAsm { unwind, .. } => {
1452 *unwind = UnwindAction::Cleanup(to);
1453 }
1454 TerminatorKind::Goto { .. }
1455 | TerminatorKind::SwitchInt { .. }
1456 | TerminatorKind::Resume
1457 | TerminatorKind::Terminate
1458 | TerminatorKind::Return
1459 | TerminatorKind::Unreachable
1460 | TerminatorKind::Yield { .. }
1461 | TerminatorKind::GeneratorDrop
1462 | TerminatorKind::FalseEdge { .. } => {
1463 span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
1464 }
1465 }
1466 }
1467 }