1 /*!
2 Managing the scope stack. The scopes are tied to lexical scopes, so as
3 we descend the THIR, we push a scope on the stack, build its
4 contents, and then pop it off. Every scope is named by a
5 `region::Scope`.
6
7 ### SEME Regions
8
9 When pushing a new [Scope], we record the current point in the graph (a
10 basic block); this marks the entry to the scope. We then generate more
11 stuff in the control-flow graph. Whenever the scope is exited, either
12 via a `break` or `return` or just by fallthrough, that marks an exit
13 from the scope. Each lexical scope thus corresponds to a single-entry,
14 multiple-exit (SEME) region in the control-flow graph.
15
16 For now, we record the `region::Scope` to each SEME region for later reference
17 (see caveat in next paragraph). This is because destruction scopes are tied to
18 them. This may change in the future so that MIR lowering determines its own
19 destruction scopes.
20
21 ### Not so SEME Regions
22
23 In the course of building matches, it sometimes happens that certain code
24 (namely guards) gets executed multiple times. This means that the lexical
25 scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26 mapping is from one scope to a vector of SEME regions. Since the SEME regions
27 are disjoint, the mapping is still one-to-one for the set of SEME regions that
28 we're currently in.
29
30 Also in matches, the scopes assigned to arms are not always even SEME regions!
31 Each arm has a single region with one entry for each pattern. We manually
32 manipulate the scheduled drops in this scope to avoid dropping things multiple
33 times.
34
35 ### Drops
36
37 The primary purpose for scopes is to insert drops: while building
38 the contents, we also accumulate places that need to be dropped upon
39 exit from each scope. This is done by calling `schedule_drop`. Once a
40 drop is scheduled, whenever we branch out we will insert drops of all
41 those places onto the outgoing edge. Note that we don't know the full
42 set of scheduled drops up front, and so whenever we exit from the
43 scope we only drop the values scheduled thus far. For example, consider
44 the scope S corresponding to this loop:
45
46 ```
47 # let cond = true;
48 loop {
49 let x = ..;
50 if cond { break; }
51 let y = ..;
52 }
53 ```
54
55 When processing the `let x`, we will add one drop to the scope for
56 `x`. The break will then insert a drop for `x`. When we process `let
57 y`, we will add another drop (in fact, to a subscope, but let's ignore
58 that for now); any later drops would also drop `y`.
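
Conceptually, the `break` edge ends up carrying the drop that was scheduled at
that point (a rough sketch, not actual MIR output; the block names are made up):

```text
bb_break:                    // edge created for the `break`
    drop(x) -> bb_after_loop // only `x` is scheduled so far, so only it is dropped
```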
59
60 ### Early exit
61
62 There are numerous "normal" ways to early exit a scope: `break`,
63 `continue`, `return` (panics are handled separately). Whenever an
64 early exit occurs, the method `break_scope` is called. It is given the
65 current point in execution where the early exit occurs, as well as the
66 scope you want to branch to (note that all early exits go to some
67 other enclosing scope). `break_scope` will record the set of drops currently
68 scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69 will be added to the CFG.
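
For example (an illustrative sketch; the function and names below are made up,
not taken from the compiler), the `return` is an early exit handled by
`break_scope`, while the fallthrough exit gets its drops from `pop_scope`; each
path receives its own copy of the drop of `s`:

```
fn f(cond: bool) -> usize {
    let s = String::from("x");
    if cond {
        return 0; // early exit: `break_scope` emits the drop of `s` on this edge
    }
    s.len() // fallthrough: `pop_scope` emits the drop of `s` when the scope ends
}
# f(true);
```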
70
71 Panics are handled in a similar fashion, except that the drops are added to the
72 MIR once the rest of the function has finished being lowered. If a terminator
73 can panic, call `diverge_from(block)` with the block containing the terminator
74 `block`.
75
76 ### Breakable scopes
77
78 In addition to the normal scope stack, we track a loop scope stack
79 that contains only loops and breakable blocks. It tracks where a `break`,
80 `continue` or `return` should go to.
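
For example (illustrative), a labeled `break` resolves against this stack and
targets an outer entry rather than the innermost one:

```
'outer: loop {
    loop {
        break 'outer; // targets the breakable scope of the outer loop
    }
}
```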
81
82 */
83
84 use std::mem;
85
86 use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
87 use rustc_data_structures::fx::FxHashMap;
88 use rustc_hir::HirId;
89 use rustc_index::{IndexSlice, IndexVec};
90 use rustc_middle::middle::region;
91 use rustc_middle::mir::*;
92 use rustc_middle::thir::{Expr, LintLevel};
93
94 use rustc_span::{Span, DUMMY_SP};
95
96 #[derive(Debug)]
97 pub struct Scopes<'tcx> {
98 scopes: Vec<Scope>,
99
100 /// The current set of breakable scopes. See module comment for more details.
101 breakable_scopes: Vec<BreakableScope<'tcx>>,
102
103 /// The scope of the innermost if-then currently being lowered.
104 if_then_scope: Option<IfThenScope>,
105
106 /// Drops that need to be done on unwind paths. See the comment on
107 /// [DropTree] for more details.
108 unwind_drops: DropTree,
109
110 /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
111 generator_drops: DropTree,
112 }
113
114 #[derive(Debug)]
115 struct Scope {
116 /// The source scope this scope was created in.
117 source_scope: SourceScope,
118
119 /// the region span of this scope within source code.
120 region_scope: region::Scope,
121
122 /// set of places to drop when exiting this scope. This starts
123 /// out empty but grows as variables are declared during the
124 /// building process. This is a stack, so we always drop from the
125 /// end of the vector (top of the stack) first.
126 drops: Vec<DropData>,
127
128 moved_locals: Vec<Local>,
129
130 /// The drop index that will drop everything in and below this scope on an
131 /// unwind path.
132 cached_unwind_block: Option<DropIdx>,
133
134 /// The drop index that will drop everything in and below this scope on a
135 /// generator drop path.
136 cached_generator_drop_block: Option<DropIdx>,
137 }
138
139 #[derive(Clone, Copy, Debug)]
140 struct DropData {
141 /// The `Span` where drop obligation was incurred (typically where place was
142 /// declared)
143 source_info: SourceInfo,
144
145 /// local to drop
146 local: Local,
147
148 /// Whether this is a value Drop or a StorageDead.
149 kind: DropKind,
150 }
151
152 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
153 pub(crate) enum DropKind {
154 Value,
155 Storage,
156 }
157
158 #[derive(Debug)]
159 struct BreakableScope<'tcx> {
160 /// Region scope of the loop or breakable block
161 region_scope: region::Scope,
162 /// The destination of the loop/block expression itself (i.e., where to put
163 /// the result of a `break` or `return` expression)
164 break_destination: Place<'tcx>,
165 /// Drops that happen on the `break`/`return` path.
166 break_drops: DropTree,
167 /// Drops that happen on the `continue` path.
168 continue_drops: Option<DropTree>,
169 }
170
171 #[derive(Debug)]
172 struct IfThenScope {
173 /// The if-then scope or arm scope
174 region_scope: region::Scope,
175 /// Drops that happen on the `else` path.
176 else_drops: DropTree,
177 }
178
179 /// The target of an expression that breaks out of a scope
180 #[derive(Clone, Copy, Debug)]
181 pub(crate) enum BreakableTarget {
182 Continue(region::Scope),
183 Break(region::Scope),
184 Return,
185 }
186
187 rustc_index::newtype_index! {
188 struct DropIdx {}
189 }
190
191 const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
192
193 /// A tree of drops whose lowering we have deferred. It's used for:
194 ///
195 /// * Drops on unwind paths
196 /// * Drops on generator drop paths (when a suspended generator is dropped)
197 /// * Drops on return and loop exit paths
198 /// * Drops on the else path in an `if let` chain
199 ///
200 /// Once no more nodes can be added to the tree, we lower it to MIR in one go
201 /// in `build_mir`.
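///
/// For example (a sketch): if one exit path must drop `c`, then `b`, then `a`,
/// and another must drop `d`, then `b`, then `a`, the shared suffix is
/// deduplicated (via `previous_drops`) and the tree looks like:
///
/// ```text
/// ROOT <- a <- b <- c
///              ^
///              +--- d
/// ```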
202 #[derive(Debug)]
203 struct DropTree {
204 /// Drops in the tree.
205 drops: IndexVec<DropIdx, (DropData, DropIdx)>,
206 /// Map for finding the inverse of the `next_drop` relation:
207 ///
208 /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
209 previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
210 /// Edges into the `DropTree` that need to be added once it's lowered.
211 entry_points: Vec<(DropIdx, BasicBlock)>,
212 }
213
214 impl Scope {
215 /// Whether there's anything to do for the cleanup path, that is,
216 /// when unwinding through this scope. This includes destructors,
217 /// but not StorageDead statements, which don't get emitted at all
218 /// for unwinding, for several reasons:
219 /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
220 /// * LLVM's memory dependency analysis can't handle it atm
221 /// * polluting the cleanup MIR with StorageDead creates
222 landing pads even though there are no actual destructors
223 /// * freeing up stack space has no effect during unwinding
224 /// Note that for generators we do emit StorageDeads, for use by
225 /// optimizations in the MIR generator transform.
226 fn needs_cleanup(&self) -> bool {
227 self.drops.iter().any(|drop| match drop.kind {
228 DropKind::Value => true,
229 DropKind::Storage => false,
230 })
231 }
232
233 fn invalidate_cache(&mut self) {
234 self.cached_unwind_block = None;
235 self.cached_generator_drop_block = None;
236 }
237 }
238
239 /// A trait that determines how [DropTree] creates its blocks and
240 /// links to any entry nodes.
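///
/// The implementations (`ExitScopes`, `GeneratorDrop` and `Unwind`) are at the
/// bottom of this file.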
241 trait DropTreeBuilder<'tcx> {
242 /// Create a new block for the tree. This should call either
243 /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
244 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
245
246 /// Links a block outside the drop tree, `from`, to the block `to` inside
247 /// the drop tree.
248 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
249 }
250
251 impl DropTree {
252 fn new() -> Self {
253 // The root node of the tree doesn't represent a drop, but instead
254 // represents the block in the tree that should be jumped to once all
255 // of the required drops have been performed.
256 let fake_source_info = SourceInfo::outermost(DUMMY_SP);
257 let fake_data =
258 DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
259 let drop_idx = DropIdx::MAX;
260 let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
261 Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
262 }
263
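/// Adds a drop node whose successor on the exit path is `next`, reusing an
/// existing node if the same `(next, local, kind)` combination has been added
/// before (see `previous_drops`), and returns its index.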
264 fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
265 let drops = &mut self.drops;
266 *self
267 .previous_drops
268 .entry((next, drop.local, drop.kind))
269 .or_insert_with(|| drops.push((drop, next)))
270 }
271
272 fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
273 debug_assert!(to < self.drops.next_index());
274 self.entry_points.push((to, from));
275 }
276
277 /// Builds the MIR for a given drop tree.
278 ///
279 /// `blocks` should have the same length as `self.drops`, and may have its
280 /// first value set to some already existing block.
281 fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
282 &mut self,
283 cfg: &mut CFG<'tcx>,
284 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
285 ) {
286 debug!("DropTree::build_mir(drops = {:#?})", self);
287 assert_eq!(blocks.len(), self.drops.len());
288
289 self.assign_blocks::<T>(cfg, blocks);
290 self.link_blocks(cfg, blocks)
291 }
292
293 /// Assign blocks for all of the drops in the drop tree that need them.
294 fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
295 &mut self,
296 cfg: &mut CFG<'tcx>,
297 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
298 ) {
299 // StorageDead statements can share blocks with each other and also with
300 // a Drop terminator. We iterate through the drops to find which drops
301 // need their own block.
302 #[derive(Clone, Copy)]
303 enum Block {
304 // This drop is unreachable
305 None,
306 // This drop is only reachable through the `StorageDead` with the
307 // specified index.
308 Shares(DropIdx),
309 // This drop has more than one way of being reached, or it is
310 // branched to from outside the tree, or its predecessor is a
311 // `Value` drop.
312 Own,
313 }
314
315 let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
316 if blocks[ROOT_NODE].is_some() {
317 // In some cases (such as drops for `continue`) the root node
318 // already has a block. In this case, make sure that we don't
319 // override it.
320 needs_block[ROOT_NODE] = Block::Own;
321 }
322
323 // Sort so that we only need to check the last value.
324 let entry_points = &mut self.entry_points;
325 entry_points.sort();
326
327 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
328 if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
329 let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
330 needs_block[drop_idx] = Block::Own;
331 while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
332 let entry_block = entry_points.pop().unwrap().1;
333 T::add_entry(cfg, entry_block, block);
334 }
335 }
336 match needs_block[drop_idx] {
337 Block::None => continue,
338 Block::Own => {
339 blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
340 }
341 Block::Shares(pred) => {
342 blocks[drop_idx] = blocks[pred];
343 }
344 }
345 if let DropKind::Value = drop_data.0.kind {
346 needs_block[drop_data.1] = Block::Own;
347 } else if drop_idx != ROOT_NODE {
348 match &mut needs_block[drop_data.1] {
349 pred @ Block::None => *pred = Block::Shares(drop_idx),
350 pred @ Block::Shares(_) => *pred = Block::Own,
351 Block::Own => (),
352 }
353 }
354 }
355
356 debug!("assign_blocks: blocks = {:#?}", blocks);
357 assert!(entry_points.is_empty());
358 }
359
360 fn link_blocks<'tcx>(
361 &self,
362 cfg: &mut CFG<'tcx>,
363 blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
364 ) {
365 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
366 let Some(block) = blocks[drop_idx] else { continue };
367 match drop_data.0.kind {
368 DropKind::Value => {
369 let terminator = TerminatorKind::Drop {
370 target: blocks[drop_data.1].unwrap(),
371 // The caller will handle this if needed.
372 unwind: UnwindAction::Terminate,
373 place: drop_data.0.local.into(),
374 replace: false,
375 };
376 cfg.terminate(block, drop_data.0.source_info, terminator);
377 }
378 // Root nodes don't correspond to a drop.
379 DropKind::Storage if drop_idx == ROOT_NODE => {}
380 DropKind::Storage => {
381 let stmt = Statement {
382 source_info: drop_data.0.source_info,
383 kind: StatementKind::StorageDead(drop_data.0.local),
384 };
385 cfg.push(block, stmt);
386 let target = blocks[drop_data.1].unwrap();
387 if target != block {
388 // Diagnostics don't use this `Span` but debuginfo
389 // might. Since we don't want breakpoints to be placed
390 // here, especially when this is on an unwind path, we
391 // use `DUMMY_SP`.
392 let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
393 let terminator = TerminatorKind::Goto { target };
394 cfg.terminate(block, source_info, terminator);
395 }
396 }
397 }
398 }
399 }
400 }
401
402 impl<'tcx> Scopes<'tcx> {
403 pub(crate) fn new() -> Self {
404 Self {
405 scopes: Vec::new(),
406 breakable_scopes: Vec::new(),
407 if_then_scope: None,
408 unwind_drops: DropTree::new(),
409 generator_drops: DropTree::new(),
410 }
411 }
412
413 fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
414 debug!("push_scope({:?})", region_scope);
415 self.scopes.push(Scope {
416 source_scope: vis_scope,
417 region_scope: region_scope.0,
418 drops: vec![],
419 moved_locals: vec![],
420 cached_unwind_block: None,
421 cached_generator_drop_block: None,
422 });
423 }
424
425 fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
426 let scope = self.scopes.pop().unwrap();
427 assert_eq!(scope.region_scope, region_scope.0);
428 scope
429 }
430
431 fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
432 self.scopes
433 .iter()
434 .rposition(|scope| scope.region_scope == region_scope)
435 .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
436 }
437
438 /// Returns the topmost active scope, which is known to be alive until
439 /// the next scope expression.
440 fn topmost(&self) -> region::Scope {
441 self.scopes.last().expect("topmost_scope: no scopes present").region_scope
442 }
443 }
444
445 impl<'a, 'tcx> Builder<'a, 'tcx> {
446 // Adding and removing scopes
447 // ==========================
448
449 /// Start a breakable scope, which tracks where `continue`, `break` and
450 /// `return` should branch to.
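///
/// A rough usage sketch (illustrative only; the real callers live in the
/// expression-lowering code and differ in detail):
///
/// ```ignore (illustrative)
/// let loop_block = this.cfg.start_new_block();
/// this.in_breakable_scope(Some(loop_block), destination, expr_span, |this| {
///     // ... build the loop body; any `break`/`continue` lowered in here goes
///     // through `break_scope`, which records its drops in this scope's trees ...
///     None // a `loop` has no normal fallthrough exit
/// });
/// ```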
451 pub(crate) fn in_breakable_scope<F>(
452 &mut self,
453 loop_block: Option<BasicBlock>,
454 break_destination: Place<'tcx>,
455 span: Span,
456 f: F,
457 ) -> BlockAnd<()>
458 where
459 F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
460 {
461 let region_scope = self.scopes.topmost();
462 let scope = BreakableScope {
463 region_scope,
464 break_destination,
465 break_drops: DropTree::new(),
466 continue_drops: loop_block.map(|_| DropTree::new()),
467 };
468 self.scopes.breakable_scopes.push(scope);
469 let normal_exit_block = f(self);
470 let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
471 assert!(breakable_scope.region_scope == region_scope);
472 let break_block =
473 self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
474 if let Some(drops) = breakable_scope.continue_drops {
475 self.build_exit_tree(drops, region_scope, span, loop_block);
476 }
477 match (normal_exit_block, break_block) {
478 (Some(block), None) | (None, Some(block)) => block,
479 (None, None) => self.cfg.start_new_block().unit(),
480 (Some(normal_block), Some(exit_block)) => {
481 let target = self.cfg.start_new_block();
482 let source_info = self.source_info(span);
483 self.cfg.terminate(
484 unpack!(normal_block),
485 source_info,
486 TerminatorKind::Goto { target },
487 );
488 self.cfg.terminate(
489 unpack!(exit_block),
490 source_info,
491 TerminatorKind::Goto { target },
492 );
493 target.unit()
494 }
495 }
496 }
497
498 /// Start an if-then scope which tracks drop for `if` expressions and `if`
499 /// guards.
500 ///
501 /// For an if-let chain:
502 ///
503 /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
504 ///
505 /// There are three possible ways the condition can be false and we may have
506 /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
507 /// To handle this correctly we use a `DropTree` in a similar way to a
508 /// `loop` expression and 'break' out on all of the 'else' paths.
509 ///
510 /// Notes:
511 /// - We don't need to keep a stack of scopes in the `Builder` because the
512 /// 'else' paths will only leave the innermost scope.
513 /// - This is also used for match guards.
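///
/// A rough usage sketch (illustrative only):
///
/// ```ignore (illustrative)
/// let (then_blk, else_blk) = this.in_if_then_scope(cond_scope, expr_span, |this| {
///     // ... lower the condition; paths where it fails call `break_for_else`,
///     // which records the drops needed on the `else` path ...
/// });
/// ```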
514 pub(crate) fn in_if_then_scope<F>(
515 &mut self,
516 region_scope: region::Scope,
517 span: Span,
518 f: F,
519 ) -> (BasicBlock, BasicBlock)
520 where
521 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
522 {
523 let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
524 let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
525
526 let then_block = unpack!(f(self));
527
528 let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
529 assert!(if_then_scope.region_scope == region_scope);
530
531 let else_block = self
532 .build_exit_tree(if_then_scope.else_drops, region_scope, span, None)
533 .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));
534
535 (then_block, else_block)
536 }
537
538 pub(crate) fn in_opt_scope<F, R>(
539 &mut self,
540 opt_scope: Option<(region::Scope, SourceInfo)>,
541 f: F,
542 ) -> BlockAnd<R>
543 where
544 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
545 {
546 debug!("in_opt_scope(opt_scope={:?})", opt_scope);
547 if let Some(region_scope) = opt_scope {
548 self.push_scope(region_scope);
549 }
550 let mut block;
551 let rv = unpack!(block = f(self));
552 if let Some(region_scope) = opt_scope {
553 unpack!(block = self.pop_scope(region_scope, block));
554 }
555 debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
556 block.and(rv)
557 }
558
559 /// Convenience wrapper that pushes a scope and then executes `f`
560 /// to build its contents, popping the scope afterwards.
561 #[instrument(skip(self, f), level = "debug")]
562 pub(crate) fn in_scope<F, R>(
563 &mut self,
564 region_scope: (region::Scope, SourceInfo),
565 lint_level: LintLevel,
566 f: F,
567 ) -> BlockAnd<R>
568 where
569 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
570 {
571 let source_scope = self.source_scope;
572 if let LintLevel::Explicit(current_hir_id) = lint_level {
573 let parent_id =
574 self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root;
575 self.maybe_new_source_scope(region_scope.1.span, None, current_hir_id, parent_id);
576 }
577 self.push_scope(region_scope);
578 let mut block;
579 let rv = unpack!(block = f(self));
580 unpack!(block = self.pop_scope(region_scope, block));
581 self.source_scope = source_scope;
582 debug!(?block);
583 block.and(rv)
584 }
585
586 /// Push a scope onto the stack. You can then build code in this
587 /// scope and call `pop_scope` afterwards. Note that these two
588 /// calls must be paired; using `in_scope` as a convenience
589 /// wrapper may be preferable.
590 pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
591 self.scopes.push_scope(region_scope, self.source_scope);
592 }
593
594 /// Pops a scope, which should have region scope `region_scope`,
595 /// adding any drops onto the end of `block` that are needed.
596 /// This must match 1-to-1 with `push_scope`.
597 pub(crate) fn pop_scope(
598 &mut self,
599 region_scope: (region::Scope, SourceInfo),
600 mut block: BasicBlock,
601 ) -> BlockAnd<()> {
602 debug!("pop_scope({:?}, {:?})", region_scope, block);
603
604 block = self.leave_top_scope(block);
605
606 self.scopes.pop_scope(region_scope);
607
608 block.unit()
609 }
610
611 /// Sets up the drops for breaking from `block` to `target`.
612 pub(crate) fn break_scope(
613 &mut self,
614 mut block: BasicBlock,
615 value: Option<&Expr<'tcx>>,
616 target: BreakableTarget,
617 source_info: SourceInfo,
618 ) -> BlockAnd<()> {
619 let span = source_info.span;
620
621 let get_scope_index = |scope: region::Scope| {
622 // find the loop-scope by its `region::Scope`.
623 self.scopes
624 .breakable_scopes
625 .iter()
626 .rposition(|breakable_scope| breakable_scope.region_scope == scope)
627 .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
628 };
629 let (break_index, destination) = match target {
630 BreakableTarget::Return => {
631 let scope = &self.scopes.breakable_scopes[0];
632 if scope.break_destination != Place::return_place() {
633 span_bug!(span, "`return` in item with no return scope");
634 }
635 (0, Some(scope.break_destination))
636 }
637 BreakableTarget::Break(scope) => {
638 let break_index = get_scope_index(scope);
639 let scope = &self.scopes.breakable_scopes[break_index];
640 (break_index, Some(scope.break_destination))
641 }
642 BreakableTarget::Continue(scope) => {
643 let break_index = get_scope_index(scope);
644 (break_index, None)
645 }
646 };
647
648 match (destination, value) {
649 (Some(destination), Some(value)) => {
650 debug!("stmt_expr Break val block_context.push(SubExpr)");
651 self.block_context.push(BlockFrame::SubExpr);
652 unpack!(block = self.expr_into_dest(destination, block, value));
653 self.block_context.pop();
654 }
655 (Some(destination), None) => {
656 self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
657 }
658 (None, Some(_)) => {
659 panic!("`return`, `become` and `break` with value and must have a destination")
660 }
661 (None, None) if self.tcx.sess.instrument_coverage() => {
662 // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
663 // a Coverage code region can be generated, `continue` needs no `Assign`; but
664 // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
665 // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
666 self.add_dummy_assignment(span, block, source_info);
667 }
668 (None, None) => {}
669 }
670
671 let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
672 let scope_index = self.scopes.scope_index(region_scope, span);
673 let drops = if destination.is_some() {
674 &mut self.scopes.breakable_scopes[break_index].break_drops
675 } else {
676 self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
677 };
678
679 let drop_idx = self.scopes.scopes[scope_index + 1..]
680 .iter()
681 .flat_map(|scope| &scope.drops)
682 .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
683
684 drops.add_entry(block, drop_idx);
685
686 // `build_drop_trees` doesn't have access to our source_info, so we
687 // create a dummy terminator now. `TerminatorKind::Resume` is used
688 // because MIR type checking will panic if it hasn't been overwritten.
689 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
690
691 self.cfg.start_new_block().unit()
692 }
693
694 pub(crate) fn break_for_else(
695 &mut self,
696 block: BasicBlock,
697 target: region::Scope,
698 source_info: SourceInfo,
699 ) {
700 let scope_index = self.scopes.scope_index(target, source_info.span);
701 let if_then_scope = self
702 .scopes
703 .if_then_scope
704 .as_mut()
705 .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
706
707 assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");
708
709 let mut drop_idx = ROOT_NODE;
710 let drops = &mut if_then_scope.else_drops;
711 for scope in &self.scopes.scopes[scope_index + 1..] {
712 for drop in &scope.drops {
713 drop_idx = drops.add_drop(*drop, drop_idx);
714 }
715 }
716 drops.add_entry(block, drop_idx);
717
718 // `build_drop_trees` doesn't have access to our source_info, so we
719 // create a dummy terminator now. `TerminatorKind::Resume` is used
720 // because MIR type checking will panic if it hasn't been overwritten.
721 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
722 }
723
724 // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
725 // statement.
726 fn add_dummy_assignment(&mut self, span: Span, block: BasicBlock, source_info: SourceInfo) {
727 let local_decl = LocalDecl::new(self.tcx.mk_unit(), span).internal();
728 let temp_place = Place::from(self.local_decls.push(local_decl));
729 self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
730 }
731
732 fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
733 // If we are emitting a `drop` statement, we need to have the cached
734 // diverge cleanup pads ready in case that drop panics.
735 let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
736 let is_generator = self.generator_kind.is_some();
737 let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
738
739 let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
740 unpack!(build_scope_drops(
741 &mut self.cfg,
742 &mut self.scopes.unwind_drops,
743 scope,
744 block,
745 unwind_to,
746 is_generator && needs_cleanup,
747 self.arg_count,
748 ))
749 }
750
751 /// Possibly creates a new source scope if `current_root` and `parent_root`
752 /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
753 pub(crate) fn maybe_new_source_scope(
754 &mut self,
755 span: Span,
756 safety: Option<Safety>,
757 current_id: HirId,
758 parent_id: HirId,
759 ) {
760 let (current_root, parent_root) =
761 if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
762 // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
763 // the only part of rustc that tracks MIR -> HIR is the `SourceScopeLocalData::lint_root`
764 // field that tracks lint levels for MIR locations. Normally the number of source scopes
765 // is limited to the set of nodes with lint annotations. The -Zmaximal-hir-to-mir-coverage
766 // flag changes this behavior to maximize the number of source scopes, increasing the
767 // granularity of the MIR->HIR mapping.
768 (current_id, parent_id)
769 } else {
770 // Use `maybe_lint_level_root_bounded` with `self.hir_id` as a bound
771 // to avoid adding Hir dependencies on our parents.
772 // We estimate the true lint roots here to avoid creating a lot of source scopes.
773 (
774 self.tcx.maybe_lint_level_root_bounded(current_id, self.hir_id),
775 self.tcx.maybe_lint_level_root_bounded(parent_id, self.hir_id),
776 )
777 };
778
779 if current_root != parent_root {
780 let lint_level = LintLevel::Explicit(current_root);
781 self.source_scope = self.new_source_scope(span, lint_level, safety);
782 }
783 }
784
785 /// Creates a new source scope, nested in the current one.
786 pub(crate) fn new_source_scope(
787 &mut self,
788 span: Span,
789 lint_level: LintLevel,
790 safety: Option<Safety>,
791 ) -> SourceScope {
792 let parent = self.source_scope;
793 debug!(
794 "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
795 span,
796 lint_level,
797 safety,
798 parent,
799 self.source_scopes.get(parent)
800 );
801 let scope_local_data = SourceScopeLocalData {
802 lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
803 lint_root
804 } else {
805 self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
806 },
807 safety: safety.unwrap_or_else(|| {
808 self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
809 }),
810 };
811 self.source_scopes.push(SourceScopeData {
812 span,
813 parent_scope: Some(parent),
814 inlined: None,
815 inlined_parent_scope: None,
816 local_data: ClearCrossCrate::Set(scope_local_data),
817 })
818 }
819
820 /// Given a span and the current source scope, make a SourceInfo.
821 pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
822 SourceInfo { span, scope: self.source_scope }
823 }
824
825 // Finding scopes
826 // ==============
827
828 /// Returns the scope that we should use as the lifetime of an
829 /// operand. Basically, an operand must live until it is consumed.
830 /// This is similar to, but not quite the same as, the temporary
831 /// scope (which can be larger or smaller).
832 ///
833 /// Consider:
834 /// ```ignore (illustrative)
835 /// let x = foo(bar(X, Y));
836 /// ```
837 /// We wish to pop the storage for X and Y after `bar()` is
838 /// called, not after the whole `let` is completed.
839 ///
840 /// As another example, if the second argument diverges:
841 /// ```ignore (illustrative)
842 /// foo(Box::new(2), panic!())
843 /// ```
844 /// We would allocate the box but then free it on the unwinding
845 /// path; we would also emit a free on the 'success' path from
846 /// panic, but that will turn out to be removed as dead-code.
847 pub(crate) fn local_scope(&self) -> region::Scope {
848 self.scopes.topmost()
849 }
850
851 // Scheduling drops
852 // ================
853
854 pub(crate) fn schedule_drop_storage_and_value(
855 &mut self,
856 span: Span,
857 region_scope: region::Scope,
858 local: Local,
859 ) {
860 self.schedule_drop(span, region_scope, local, DropKind::Storage);
861 self.schedule_drop(span, region_scope, local, DropKind::Value);
862 }
863
864 /// Indicates that `place` should be dropped on exit from `region_scope`.
865 ///
866 /// When called with `DropKind::Storage`, `place` shouldn't be the return
867 /// place, or a function parameter.
868 pub(crate) fn schedule_drop(
869 &mut self,
870 span: Span,
871 region_scope: region::Scope,
872 local: Local,
873 drop_kind: DropKind,
874 ) {
875 let needs_drop = match drop_kind {
876 DropKind::Value => {
877 if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
878 return;
879 }
880 true
881 }
882 DropKind::Storage => {
883 if local.index() <= self.arg_count {
884 span_bug!(
885 span,
886 "`schedule_drop` called with local {:?} and arg_count {}",
887 local,
888 self.arg_count,
889 )
890 }
891 false
892 }
893 };
894
895 // When building drops, we try to cache chains of drops to reduce the
896 // number of `DropTree::add_drop` calls. This, however, means that
897 // whenever we add a drop into a scope which already had some entries
898 // in the drop tree built (and thus, cached) for it, we must invalidate
899 // all caches which might branch into the scope which had a drop just
900 // added to it. This is necessary, because otherwise some other code
901 // might use the cache to branch into an already built chain of drops,
902 // essentially ignoring the newly added drop.
903 //
904 // For example, consider two scopes with a drop in each. These
905 // are built and thus the caches are filled:
906 //
907 // +--------------------------------------------------------+
908 // | +---------------------------------+ |
909 // | | +--------+ +-------------+ | +---------------+ |
910 // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | |
911 // | | +--------+ +-------------+ | +---------------+ |
912 // | +------------|outer_scope cache|--+ |
913 // +------------------------------|middle_scope cache|------+
914 //
915 // Now, a new, inner-most scope is added along with a new drop into
916 // both inner-most and outer-most scopes:
917 //
918 // +------------------------------------------------------------+
919 // | +----------------------------------+ |
920 // | | +--------+ +-------------+ | +---------------+ | +-------------+
921 // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) |
922 // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+
923 // | | +-+ +-------------+ | |
924 // | +---|invalid outer_scope cache|----+ |
925 // +---------------------|invalid middle_scope cache|-----------+
926 //
927 // If, when adding `drop(new)` we do not invalidate the cached blocks for both
928 // outer_scope and middle_scope, then, when building drops for the inner (right-most)
929 // scope, the old, cached blocks, without `drop(new)` will get used, producing the
930 // wrong results.
931 //
932 // Note that this code iterates scopes from the inner-most to the outer-most,
933 // invalidating the caches of each scope visited. This way the bare minimum of
934 // caches gets invalidated, i.e., if a new drop is added into the middle scope, the
935 // cache of the outer scope stays intact.
936 //
937 // Since we only cache drops for the unwind path and the generator drop
938 // path, we only need to invalidate the cache for drops that happen on
939 // the unwind or generator drop paths. This means that for
940 // non-generators we don't need to invalidate caches for `DropKind::Storage`.
941 let invalidate_caches = needs_drop || self.generator_kind.is_some();
942 for scope in self.scopes.scopes.iter_mut().rev() {
943 if invalidate_caches {
944 scope.invalidate_cache();
945 }
946
947 if scope.region_scope == region_scope {
948 let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree);
949 // Attribute scope exit drops to scope's closing brace.
950 let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
951
952 scope.drops.push(DropData {
953 source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
954 local,
955 kind: drop_kind,
956 });
957
958 return;
959 }
960 }
961
962 span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
963 }
964
965 /// Indicates that the "local operand" stored in `local` is
966 /// *moved* at some point during execution (see `local_scope` for
967 /// more information about what a "local operand" is -- in short,
968 /// it's an intermediate operand created as part of preparing some
969 /// MIR instruction). We use this information to suppress
970 /// redundant drops on the non-unwind paths. This results in less
971 /// MIR, but also avoids spurious borrow check errors
972 /// (c.f. #64391).
973 ///
974 /// Example: when compiling the call to `foo` here:
975 ///
976 /// ```ignore (illustrative)
977 /// foo(bar(), ...)
978 /// ```
979 ///
980 /// we would evaluate `bar()` to an operand `_X`. We would also
981 /// schedule `_X` to be dropped when the expression scope for
982 /// `foo(bar())` is exited. This is relevant, for example, if the
983 /// later arguments should unwind (it would ensure that `_X` gets
984 /// dropped). However, if no unwind occurs, then `_X` will be
985 /// unconditionally consumed by the `call`:
986 ///
987 /// ```ignore (illustrative)
988 /// bb {
989 /// ...
990 /// _R = CALL(foo, _X, ...)
991 /// }
992 /// ```
993 ///
994 /// However, `_X` is still registered to be dropped, and so if we
995 /// do nothing else, we would generate a `DROP(_X)` that occurs
996 /// after the call. This will later be optimized out by the
997 /// drop-elaboration code, but in the meantime it can lead to
998 /// spurious borrow-check errors -- the problem, ironically, is
999 /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1000 /// that it creates. See #64391 for an example.
1001 pub(crate) fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
1002 let local_scope = self.local_scope();
1003 let scope = self.scopes.scopes.last_mut().unwrap();
1004
1005 assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1006
1007 // look for moves of a local variable, like `MOVE(_X)`
1008 let locals_moved = operands.iter().flat_map(|operand| match operand {
1009 Operand::Copy(_) | Operand::Constant(_) => None,
1010 Operand::Move(place) => place.as_local(),
1011 });
1012
1013 for local in locals_moved {
1014 // check if we have a Drop for this operand and -- if so
1015 // -- add it to the list of moved operands. Note that this
1016 // local might not have been an operand created for this
1017 // call, it could come from other places too.
1018 if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1019 scope.moved_locals.push(local);
1020 }
1021 }
1022 }
1023
1024 // Other
1025 // =====
1026
1027 /// Returns the [DropIdx] for the innermost drop if the function unwound at
1028 /// this point. The `DropIdx` will be created if it doesn't already exist.
1029 fn diverge_cleanup(&mut self) -> DropIdx {
1030 // It is okay to use a dummy span because getting the scope index of the topmost scope
1031 // must always succeed.
1032 self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1033 }
1034
1035 /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1036 /// some ancestor scope instead of the current scope.
1037 /// It is possible to unwind to some ancestor scope if some drop panics as
1038 /// the program breaks out of an if-then scope.
1039 fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1040 let target = self.scopes.scope_index(target_scope, span);
1041 let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1042 .iter()
1043 .enumerate()
1044 .rev()
1045 .find_map(|(scope_idx, scope)| {
1046 scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1047 })
1048 .unwrap_or((0, ROOT_NODE));
1049
1050 if uncached_scope > target {
1051 return cached_drop;
1052 }
1053
1054 let is_generator = self.generator_kind.is_some();
1055 for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1056 for drop in &scope.drops {
1057 if is_generator || drop.kind == DropKind::Value {
1058 cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1059 }
1060 }
1061 scope.cached_unwind_block = Some(cached_drop);
1062 }
1063
1064 cached_drop
1065 }
1066
1067 /// Prepares to create a path that performs all required cleanup for a
1068 /// terminator that can unwind at the given basic block.
1069 ///
1070 /// This path terminates in Resume. The path isn't created until after all
1071 /// of the non-unwind paths in this item have been lowered.
1072 pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1073 debug_assert!(
1074 matches!(
1075 self.cfg.block_data(start).terminator().kind,
1076 TerminatorKind::Assert { .. }
1077 | TerminatorKind::Call { .. }
1078 | TerminatorKind::Drop { .. }
1079 | TerminatorKind::FalseUnwind { .. }
1080 | TerminatorKind::InlineAsm { .. }
1081 ),
1082 "diverge_from called on block with terminator that cannot unwind."
1083 );
1084
1085 let next_drop = self.diverge_cleanup();
1086 self.scopes.unwind_drops.add_entry(start, next_drop);
1087 }
1088
1089 /// Sets up a path that performs all required cleanup for dropping a
1090 /// generator, starting from the given block that ends in
1091 /// [TerminatorKind::Yield].
1092 ///
1093 /// This path terminates in GeneratorDrop.
1094 pub(crate) fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
1095 debug_assert!(
1096 matches!(
1097 self.cfg.block_data(yield_block).terminator().kind,
1098 TerminatorKind::Yield { .. }
1099 ),
1100 "generator_drop_cleanup called on block with non-yield terminator."
1101 );
1102 let (uncached_scope, mut cached_drop) = self
1103 .scopes
1104 .scopes
1105 .iter()
1106 .enumerate()
1107 .rev()
1108 .find_map(|(scope_idx, scope)| {
1109 scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1110 })
1111 .unwrap_or((0, ROOT_NODE));
1112
1113 for scope in &mut self.scopes.scopes[uncached_scope..] {
1114 for drop in &scope.drops {
1115 cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
1116 }
1117 scope.cached_generator_drop_block = Some(cached_drop);
1118 }
1119
1120 self.scopes.generator_drops.add_entry(yield_block, cached_drop);
1121 }
1122
1123 /// Utility function for *non*-scope code to build its own drops.
1124 /// Force a drop at this point in the MIR by creating a new block.
1125 pub(crate) fn build_drop_and_replace(
1126 &mut self,
1127 block: BasicBlock,
1128 span: Span,
1129 place: Place<'tcx>,
1130 value: Rvalue<'tcx>,
1131 ) -> BlockAnd<()> {
1132 let source_info = self.source_info(span);
1133
1134 // create the new block for the assignment
1135 let assign = self.cfg.start_new_block();
1136 self.cfg.push_assign(assign, source_info, place, value.clone());
1137
1138 // create the new block for the assignment in the case of unwinding
1139 let assign_unwind = self.cfg.start_new_cleanup_block();
1140 self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1141
1142 self.cfg.terminate(
1143 block,
1144 source_info,
1145 TerminatorKind::Drop {
1146 place,
1147 target: assign,
1148 unwind: UnwindAction::Cleanup(assign_unwind),
1149 replace: true,
1150 },
1151 );
1152 self.diverge_from(block);
1153
1154 assign.unit()
1155 }
1156
1157 /// Creates an `Assert` terminator and returns the success block.
1158 /// If the boolean condition operand is not the expected value,
1159 /// a runtime panic will be caused with the given message.
1160 pub(crate) fn assert(
1161 &mut self,
1162 block: BasicBlock,
1163 cond: Operand<'tcx>,
1164 expected: bool,
1165 msg: AssertMessage<'tcx>,
1166 span: Span,
1167 ) -> BasicBlock {
1168 let source_info = self.source_info(span);
1169 let success_block = self.cfg.start_new_block();
1170
1171 self.cfg.terminate(
1172 block,
1173 source_info,
1174 TerminatorKind::Assert {
1175 cond,
1176 expected,
1177 msg: Box::new(msg),
1178 target: success_block,
1179 unwind: UnwindAction::Continue,
1180 },
1181 );
1182 self.diverge_from(block);
1183
1184 success_block
1185 }
1186
1187 /// Unschedules any drops in the top scope.
1188 ///
1189 /// This is only needed for `match` arm scopes, because they have one
1190 /// entrance per pattern, but only one exit.
1191 pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1192 let top_scope = self.scopes.scopes.last_mut().unwrap();
1193
1194 assert_eq!(top_scope.region_scope, region_scope);
1195
1196 top_scope.drops.clear();
1197 top_scope.invalidate_cache();
1198 }
1199 }
1200
1201 /// Builds drops for `pop_scope` and `leave_top_scope`.
1202 fn build_scope_drops<'tcx>(
1203 cfg: &mut CFG<'tcx>,
1204 unwind_drops: &mut DropTree,
1205 scope: &Scope,
1206 mut block: BasicBlock,
1207 mut unwind_to: DropIdx,
1208 storage_dead_on_unwind: bool,
1209 arg_count: usize,
1210 ) -> BlockAnd<()> {
1211 debug!("build_scope_drops({:?} -> {:?})", block, scope);
1212
1213 // Build up the drops in evaluation order. The end result will
1214 // look like:
1215 //
1216 // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1217 // | | |
1218 // : | |
1219 // V V
1220 // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1221 //
1222 // The horizontal arrows represent the execution path when the drops return
1223 // successfully. The downwards arrows represent the execution path when the
1224 // drops panic (panicking while unwinding will abort, so there's no need for
1225 // another set of arrows).
1226 //
1227 // For generators, we unwind from a drop on a local to its StorageDead
1228 // statement. For other functions we don't worry about StorageDead. The
1229 // drops for the unwind path should have already been generated by
1230 // `diverge_cleanup_gen`.
1231
1232 for drop_data in scope.drops.iter().rev() {
1233 let source_info = drop_data.source_info;
1234 let local = drop_data.local;
1235
1236 match drop_data.kind {
1237 DropKind::Value => {
1238 // `unwind_to` should drop the value that we're about to
1239 // schedule. If dropping this value panics, then we continue
1240 // with the *next* value on the unwind path.
1241 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1242 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1243 unwind_to = unwind_drops.drops[unwind_to].1;
1244
1245 // If the operand has been moved, and we are not on an unwind
1246 // path, then don't generate the drop. (We only take this into
1247 // account for non-unwind paths so as not to disturb the
1248 // caching mechanism.)
1249 if scope.moved_locals.iter().any(|&o| o == local) {
1250 continue;
1251 }
1252
1253 unwind_drops.add_entry(block, unwind_to);
1254
1255 let next = cfg.start_new_block();
1256 cfg.terminate(
1257 block,
1258 source_info,
1259 TerminatorKind::Drop {
1260 place: local.into(),
1261 target: next,
1262 unwind: UnwindAction::Continue,
1263 replace: false,
1264 },
1265 );
1266 block = next;
1267 }
1268 DropKind::Storage => {
1269 if storage_dead_on_unwind {
1270 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1271 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1272 unwind_to = unwind_drops.drops[unwind_to].1;
1273 }
1274 // Only temps and vars need their storage dead.
1275 assert!(local.index() > arg_count);
1276 cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
1277 }
1278 }
1279 }
1280 block.unit()
1281 }
1282
1283 impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1284 /// Build a drop tree for a breakable scope.
1285 ///
1286 /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1287 /// loop. Otherwise this is for `break` or `return`.
1288 fn build_exit_tree(
1289 &mut self,
1290 mut drops: DropTree,
1291 else_scope: region::Scope,
1292 span: Span,
1293 continue_block: Option<BasicBlock>,
1294 ) -> Option<BlockAnd<()>> {
1295 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1296 blocks[ROOT_NODE] = continue_block;
1297
1298 drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
1299 let is_generator = self.generator_kind.is_some();
1300
1301 // Link the exit drop tree to unwind drop tree.
1302 if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
1303 let unwind_target = self.diverge_cleanup_target(else_scope, span);
1304 let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1305 for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
1306 match drop_data.0.kind {
1307 DropKind::Storage => {
1308 if is_generator {
1309 let unwind_drop = self
1310 .scopes
1311 .unwind_drops
1312 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1313 unwind_indices.push(unwind_drop);
1314 } else {
1315 unwind_indices.push(unwind_indices[drop_data.1]);
1316 }
1317 }
1318 DropKind::Value => {
1319 let unwind_drop = self
1320 .scopes
1321 .unwind_drops
1322 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1323 self.scopes
1324 .unwind_drops
1325 .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
1326 unwind_indices.push(unwind_drop);
1327 }
1328 }
1329 }
1330 }
1331 blocks[ROOT_NODE].map(BasicBlock::unit)
1332 }
1333
1334 /// Build the unwind and generator drop trees.
1335 pub(crate) fn build_drop_trees(&mut self) {
1336 if self.generator_kind.is_some() {
1337 self.build_generator_drop_trees();
1338 } else {
1339 Self::build_unwind_tree(
1340 &mut self.cfg,
1341 &mut self.scopes.unwind_drops,
1342 self.fn_span,
1343 &mut None,
1344 );
1345 }
1346 }
1347
1348 fn build_generator_drop_trees(&mut self) {
1349 // Build the drop tree for dropping the generator while it's suspended.
1350 let drops = &mut self.scopes.generator_drops;
1351 let cfg = &mut self.cfg;
1352 let fn_span = self.fn_span;
1353 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1354 drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
1355 if let Some(root_block) = blocks[ROOT_NODE] {
1356 cfg.terminate(
1357 root_block,
1358 SourceInfo::outermost(fn_span),
1359 TerminatorKind::GeneratorDrop,
1360 );
1361 }
1362
1363 // Build the drop tree for unwinding in the normal control flow paths.
1364 let resume_block = &mut None;
1365 let unwind_drops = &mut self.scopes.unwind_drops;
1366 Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
1367
1368 // Build the drop tree for unwinding when dropping a suspended
1369 // generator.
1370 //
1371 // This is a separate tree from the standard unwind paths, to
1372 // prevent drop elaboration from creating drop flags that would have
1373 // to be captured by the generator. I'm not sure how important this
1374 // optimization is, but it is here.
1375 for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
1376 if let DropKind::Value = drop_data.0.kind {
1377 debug_assert!(drop_data.1 < drops.drops.next_index());
1378 drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
1379 }
1380 }
1381 Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
1382 }
1383
1384 fn build_unwind_tree(
1385 cfg: &mut CFG<'tcx>,
1386 drops: &mut DropTree,
1387 fn_span: Span,
1388 resume_block: &mut Option<BasicBlock>,
1389 ) {
1390 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1391 blocks[ROOT_NODE] = *resume_block;
1392 drops.build_mir::<Unwind>(cfg, &mut blocks);
1393 if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1394 cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::Resume);
1395
1396 *resume_block = blocks[ROOT_NODE];
1397 }
1398 }
1399 }
1400
1401 // DropTreeBuilder implementations.
1402
1403 struct ExitScopes;
1404
1405 impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1406 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1407 cfg.start_new_block()
1408 }
1409 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1410 cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
1411 }
1412 }
1413
1414 struct GeneratorDrop;
1415
1416 impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
1417 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1418 cfg.start_new_block()
1419 }
1420 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1421 let term = cfg.block_data_mut(from).terminator_mut();
1422 if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
1423 *drop = Some(to);
1424 } else {
1425 span_bug!(
1426 term.source_info.span,
1427 "cannot enter generator drop tree from {:?}",
1428 term.kind
1429 )
1430 }
1431 }
1432 }
1433
1434 struct Unwind;
1435
1436 impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1437 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1438 cfg.start_new_cleanup_block()
1439 }
1440 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1441 let term = &mut cfg.block_data_mut(from).terminator_mut();
1442 match &mut term.kind {
1443 TerminatorKind::Drop { unwind, .. } => {
1444 if let UnwindAction::Cleanup(unwind) = *unwind {
1445 let source_info = term.source_info;
1446 cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
1447 } else {
1448 *unwind = UnwindAction::Cleanup(to);
1449 }
1450 }
1451 TerminatorKind::FalseUnwind { unwind, .. }
1452 | TerminatorKind::Call { unwind, .. }
1453 | TerminatorKind::Assert { unwind, .. }
1454 | TerminatorKind::InlineAsm { unwind, .. } => {
1455 *unwind = UnwindAction::Cleanup(to);
1456 }
1457 TerminatorKind::Goto { .. }
1458 | TerminatorKind::SwitchInt { .. }
1459 | TerminatorKind::Resume
1460 | TerminatorKind::Terminate
1461 | TerminatorKind::Return
1462 | TerminatorKind::Unreachable
1463 | TerminatorKind::Yield { .. }
1464 | TerminatorKind::GeneratorDrop
1465 | TerminatorKind::FalseEdge { .. } => {
1466 span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
1467 }
1468 }
1469 }
1470 }