1 /*!
2 Managing the scope stack. The scopes are tied to lexical scopes, so as
3 we descend the THIR, we push a scope on the stack, build its
4 contents, and then pop it off. Every scope is named by a
5 `region::Scope`.
6
7 ### SEME Regions
8
9 When pushing a new [Scope], we record the current point in the graph (a
10 basic block); this marks the entry to the scope. We then generate more
11 stuff in the control-flow graph. Whenever the scope is exited, either
12 via a `break` or `return` or just by fallthrough, that marks an exit
13 from the scope. Each lexical scope thus corresponds to a single-entry,
14 multiple-exit (SEME) region in the control-flow graph.
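
For example (an illustrative sketch, not drawn from real code), the block below
has a single entry but two exits: the early `return` and the normal
fallthrough:

```
# fn f(cond: bool) -> u32 {
let x = {
    if cond { return 0; } // one exit from the block's scope
    1                     // falling through is a second exit
};
# x
# }
```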
15
16 For now, we record the `region::Scope` for each SEME region for later reference
17 (see the caveat in the next paragraph). This is because destruction scopes are tied to
18 them. This may change in the future so that MIR lowering determines its own
19 destruction scopes.
20
21 ### Not so SEME Regions
22
23 In the course of building matches, it sometimes happens that certain code
24 (namely guards) gets executed multiple times. This means that the lexical
25 scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26 mapping is from one scope to a vector of SEME regions. Since the SEME regions
27 are disjoint, the mapping is still one-to-one for the set of SEME regions that
28 we're currently in.
29
30 Also in matches, the scopes assigned to arms are not always even SEME regions!
31 Each arm has a single region with one entry for each pattern. We manually
32 manipulate the scheduled drops in this scope to avoid dropping things multiple
33 times.
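
As a rough illustration (hypothetical code, chosen only to show the shape of
the problem), both patterns of the first arm below funnel into the same arm
scope, so that scope has one entry per pattern and the guard may run more than
once:

```
# let e = Some(1);
# let threshold = 10;
match e {
    Some(1) | Some(2) if threshold > 0 => { /* one arm scope, two pattern entries */ }
    _ => {}
}
```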
34
35 ### Drops
36
37 The primary purpose for scopes is to insert drops: while building
38 the contents, we also accumulate places that need to be dropped upon
39 exit from each scope. This is done by calling `schedule_drop`. Once a
40 drop is scheduled, whenever we branch out we will insert drops of all
41 those places onto the outgoing edge. Note that we don't know the full
42 set of scheduled drops up front, and so whenever we exit from the
43 scope we only drop the values scheduled thus far. For example, consider
44 the scope S corresponding to this loop:
45
46 ```
47 # let cond = true;
48 loop {
49 let x = String::new();
50 if cond { break; }
51 let y = String::new();
52 }
53 ```
54
55 When processing the `let x`, we will add one drop to the scope for
56 `x`. The break will then insert a drop for `x`. When we process `let
57 y`, we will add another drop (in fact, to a subscope, but let's ignore
58 that for now); any later drops would also drop `y`.
59
60 ### Early exit
61
62 There are numerous "normal" ways to exit a scope early: `break`,
63 `continue`, `return` (panics are handled separately). Whenever an
64 early exit occurs, the method `break_scope` is called. It is given the
65 current point in execution where the early exit occurs, as well as the
66 scope you want to branch to (note that all early exits from a scope are to some
67 other enclosing scope). `break_scope` will record the set of drops currently
68 scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69 will be added to the CFG.
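
For instance (a sketch, not an excerpt from any test), the `break` below exits
the loop's scope while `s` still has a scheduled drop; `break_scope` records
that drop so that it is emitted on the exit edge:

```
# let limit = 1;
let _len = loop {
    let s = String::from("needs drop");
    if s.len() > limit {
        break s.len(); // `s` is dropped on this exit edge
    }
};
```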
70
71 Panics are handled in a similar fashion, except that the drops are added to the
72 MIR once the rest of the function has finished being lowered. If a terminator
73 can panic, call `diverge_from(block)`, where `block` is the basic block that
74 contains the terminator.
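
As a sketch (mirroring `Builder::assert` further down in this file), lowering a
panicking terminator looks roughly like this; the `cleanup`/`unwind` edge is
left as `None` and is patched in later, when the unwind drop tree is lowered:

```ignore (illustrative)
let success = self.cfg.start_new_block();
self.cfg.terminate(
    block,
    source_info,
    TerminatorKind::Assert { cond, expected, msg, target: success, cleanup: None },
);
// Register `block` as an entry into the unwind drop tree.
self.diverge_from(block);
```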
75
76 ### Breakable scopes
77
78 In addition to the normal scope stack, we track a loop scope stack
79 that contains only loops and breakable blocks. It tracks where a `break`,
80 `continue` or `return` should go to.
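
For example (illustrative only), the labeled `break` below resolves against the
outer loop's breakable scope, while the unlabeled `continue` resolves against
the innermost one:

```
# let mut n = 0;
'outer: loop {
    loop {
        n += 1;
        if n % 2 == 0 { continue; }  // targets the inner loop's breakable scope
        if n > 10 { break 'outer; }  // targets the outer loop's breakable scope
    }
}
```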
81
82 */
83
84 use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
85 use crate::thir::{Expr, ExprRef, LintLevel};
86 use rustc_data_structures::fx::FxHashMap;
87 use rustc_index::vec::IndexVec;
88 use rustc_middle::middle::region;
89 use rustc_middle::mir::*;
90 use rustc_span::{Span, DUMMY_SP};
91
92 #[derive(Debug)]
93 pub struct Scopes<'tcx> {
94 scopes: Vec<Scope>,
95 /// The current set of breakable scopes. See module comment for more details.
96 breakable_scopes: Vec<BreakableScope<'tcx>>,
97
98 /// Drops that need to be done on unwind paths. See the comment on
99 /// [DropTree] for more details.
100 unwind_drops: DropTree,
101
102 /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
103 generator_drops: DropTree,
104 }
105
106 #[derive(Debug)]
107 struct Scope {
108 /// The source scope this scope was created in.
109 source_scope: SourceScope,
110
111 /// the region span of this scope within source code.
112 region_scope: region::Scope,
113
114 /// the span of that region_scope
115 region_scope_span: Span,
116
117 /// set of places to drop when exiting this scope. This starts
118 /// out empty but grows as variables are declared during the
119 /// building process. This is a stack, so we always drop from the
120 /// end of the vector (top of the stack) first.
121 drops: Vec<DropData>,
122
123 moved_locals: Vec<Local>,
124
125 /// The drop index that will drop everything in and below this scope on an
126 /// unwind path.
127 cached_unwind_block: Option<DropIdx>,
128
129 /// The drop index that will drop everything in and below this scope on a
130 /// generator drop path.
131 cached_generator_drop_block: Option<DropIdx>,
132 }
133
134 #[derive(Clone, Copy, Debug)]
135 struct DropData {
136 /// The `Span` where drop obligation was incurred (typically where place was
137 /// declared)
138 source_info: SourceInfo,
139
140 /// local to drop
141 local: Local,
142
143 /// Whether this is a value Drop or a StorageDead.
144 kind: DropKind,
145 }
146
147 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
148 pub(crate) enum DropKind {
149 Value,
150 Storage,
151 }
152
153 #[derive(Debug)]
154 struct BreakableScope<'tcx> {
155 /// Region scope of the loop
156 region_scope: region::Scope,
157 /// The destination of the loop/block expression itself (i.e., where to put
158 /// the result of a `break` or `return` expression)
159 break_destination: Place<'tcx>,
160 /// Drops that happen on the `break`/`return` path.
161 break_drops: DropTree,
162 /// Drops that happen on the `continue` path.
163 continue_drops: Option<DropTree>,
164 }
165
166 /// The target of an expression that breaks out of a scope
167 #[derive(Clone, Copy, Debug)]
168 crate enum BreakableTarget {
169 Continue(region::Scope),
170 Break(region::Scope),
171 Return,
172 }
173
174 rustc_index::newtype_index! {
175 struct DropIdx { .. }
176 }
177
178 const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
179
180 /// A tree of drops that we have deferred lowering. It's used for:
181 ///
182 /// * Drops on unwind paths
183 /// * Drops on generator drop paths (when a suspended generator is dropped)
184 /// * Drops on return and loop exit paths
185 ///
186 /// Once no more nodes can be added to the tree, we lower it to MIR in one go
187 /// in `build_mir`.
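188 ///
188 /// As an illustrative sketch (the names `drop_a`, `drop_b`, and `entry_block`
188 /// are hypothetical; the methods are the ones defined on this type): adding
188 /// two drops along the same path builds a chain back to `ROOT_NODE`, and an
188 /// entry point at the end of the chain executes the drops in reverse order of
188 /// insertion before continuing to the block assigned to the root.
188 ///
188 /// ```ignore (illustrative)
188 /// let mut tree = DropTree::new();
188 /// let first = tree.add_drop(drop_a, ROOT_NODE); // ROOT <- drop_a
188 /// let second = tree.add_drop(drop_b, first);    // ROOT <- drop_a <- drop_b
188 /// tree.add_entry(entry_block, second);          // runs drop_b, then drop_a
188 /// ```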
188 #[derive(Debug)]
189 struct DropTree {
190 /// Drops in the tree.
191 drops: IndexVec<DropIdx, (DropData, DropIdx)>,
192 /// Map for finding the inverse of the `next_drop` relation:
193 ///
194 /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
195 previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
196 /// Edges into the `DropTree` that need to be added once it's lowered.
197 entry_points: Vec<(DropIdx, BasicBlock)>,
198 }
199
200 impl Scope {
201 /// Whether there's anything to do for the cleanup path, that is,
202 /// when unwinding through this scope. This includes destructors,
203 /// but not StorageDead statements, which don't get emitted at all
204 /// for unwinding, for several reasons:
205 /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
206 /// * LLVM's memory dependency analysis can't handle it atm
207 /// * polluting the cleanup MIR with StorageDead creates
208 /// landing pads even though there are no actual destructors
209 /// * freeing up stack space has no effect during unwinding
210 /// Note that for generators we do emit StorageDeads, for use by
211 /// optimizations in the MIR generator transform.
212 fn needs_cleanup(&self) -> bool {
213 self.drops.iter().any(|drop| match drop.kind {
214 DropKind::Value => true,
215 DropKind::Storage => false,
216 })
217 }
218
219 fn invalidate_cache(&mut self) {
220 self.cached_unwind_block = None;
221 self.cached_generator_drop_block = None;
222 }
223 }
224
225 /// A trait that determines how [DropTree] creates its blocks and
226 /// links to any entry nodes.
227 trait DropTreeBuilder<'tcx> {
228 /// Create a new block for the tree. This should call either
229 /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
230 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
231
232 /// Links a block outside the drop tree, `from`, to the block `to` inside
233 /// the drop tree.
234 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
235 }
236
237 impl DropTree {
238 fn new() -> Self {
239 // The root node of the tree doesn't represent a drop, but instead
240 // represents the block in the tree that should be jumped to once all
241 // of the required drops have been performed.
242 let fake_source_info = SourceInfo::outermost(DUMMY_SP);
243 let fake_data =
244 DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
245 let drop_idx = DropIdx::MAX;
246 let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
247 Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
248 }
249
250 fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
251 let drops = &mut self.drops;
252 *self
253 .previous_drops
254 .entry((next, drop.local, drop.kind))
255 .or_insert_with(|| drops.push((drop, next)))
256 }
257
258 fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
259 debug_assert!(to < self.drops.next_index());
260 self.entry_points.push((to, from));
261 }
262
263 /// Builds the MIR for a given drop tree.
264 ///
265 /// `blocks` should have the same length as `self.drops`, and may have its
266 /// first value set to some already existing block.
267 fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
268 &mut self,
269 cfg: &mut CFG<'tcx>,
270 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
271 ) {
272 debug!("DropTree::build_mir(drops = {:#?})", self);
273 assert_eq!(blocks.len(), self.drops.len());
274
275 self.assign_blocks::<T>(cfg, blocks);
276 self.link_blocks(cfg, blocks)
277 }
278
279 /// Assign blocks for all of the drops in the drop tree that need them.
280 fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
281 &mut self,
282 cfg: &mut CFG<'tcx>,
283 blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
284 ) {
285 // StorageDead statements can share blocks with each other and also with
286 // a Drop terminator. We iterate through the drops to find which drops
287 // need their own block.
288 #[derive(Clone, Copy)]
289 enum Block {
290 // This drop is unreachable
291 None,
292 // This drop is only reachable through the `StorageDead` with the
293 // specified index.
294 Shares(DropIdx),
295 // This drop has more than one way of being reached, or it is
296 // branched to from outside the tree, or its predecessor is a
297 // `Value` drop.
298 Own,
299 }
300
301 let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
302 if blocks[ROOT_NODE].is_some() {
303 // In some cases (such as drops for `continue`) the root node
304 // already has a block. In this case, make sure that we don't
305 // override it.
306 needs_block[ROOT_NODE] = Block::Own;
307 }
308
309 // Sort so that we only need to check the last value.
310 let entry_points = &mut self.entry_points;
311 entry_points.sort();
312
313 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
314 if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
315 let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
316 needs_block[drop_idx] = Block::Own;
317 while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
318 let entry_block = entry_points.pop().unwrap().1;
319 T::add_entry(cfg, entry_block, block);
320 }
321 }
322 match needs_block[drop_idx] {
323 Block::None => continue,
324 Block::Own => {
325 blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
326 }
327 Block::Shares(pred) => {
328 blocks[drop_idx] = blocks[pred];
329 }
330 }
331 if let DropKind::Value = drop_data.0.kind {
332 needs_block[drop_data.1] = Block::Own;
333 } else if drop_idx != ROOT_NODE {
334 match &mut needs_block[drop_data.1] {
335 pred @ Block::None => *pred = Block::Shares(drop_idx),
336 pred @ Block::Shares(_) => *pred = Block::Own,
337 Block::Own => (),
338 }
339 }
340 }
341
342 debug!("assign_blocks: blocks = {:#?}", blocks);
343 assert!(entry_points.is_empty());
344 }
345
346 fn link_blocks<'tcx>(
347 &self,
348 cfg: &mut CFG<'tcx>,
349 blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
350 ) {
351 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
352 let block = if let Some(block) = blocks[drop_idx] {
353 block
354 } else {
355 continue;
356 };
357 match drop_data.0.kind {
358 DropKind::Value => {
359 let terminator = TerminatorKind::Drop {
360 target: blocks[drop_data.1].unwrap(),
361 // The caller will handle this if needed.
362 unwind: None,
363 place: drop_data.0.local.into(),
364 };
365 cfg.terminate(block, drop_data.0.source_info, terminator);
366 }
367 // Root nodes don't correspond to a drop.
368 DropKind::Storage if drop_idx == ROOT_NODE => {}
369 DropKind::Storage => {
370 let stmt = Statement {
371 source_info: drop_data.0.source_info,
372 kind: StatementKind::StorageDead(drop_data.0.local),
373 };
374 cfg.push(block, stmt);
375 let target = blocks[drop_data.1].unwrap();
376 if target != block {
377 // Diagnostics don't use this `Span` but debuginfo
378 // might. Since we don't want breakpoints to be placed
379 // here, especially when this is on an unwind path, we
380 // use `DUMMY_SP`.
381 let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
382 let terminator = TerminatorKind::Goto { target };
383 cfg.terminate(block, source_info, terminator);
384 }
385 }
386 }
387 }
388 }
389 }
390
391 impl<'tcx> Scopes<'tcx> {
392 pub(crate) fn new() -> Self {
393 Self {
394 scopes: Vec::new(),
395 breakable_scopes: Vec::new(),
396 unwind_drops: DropTree::new(),
397 generator_drops: DropTree::new(),
398 }
399 }
400
401 fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
402 debug!("push_scope({:?})", region_scope);
403 self.scopes.push(Scope {
404 source_scope: vis_scope,
405 region_scope: region_scope.0,
406 region_scope_span: region_scope.1.span,
407 drops: vec![],
408 moved_locals: vec![],
409 cached_unwind_block: None,
410 cached_generator_drop_block: None,
411 });
412 }
413
414 fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
415 let scope = self.scopes.pop().unwrap();
416 assert_eq!(scope.region_scope, region_scope.0);
417 scope
418 }
419
420 fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
421 self.scopes
422 .iter()
423 .rposition(|scope| scope.region_scope == region_scope)
424 .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
425 }
426
427 /// Returns the topmost active scope, which is known to be alive until
428 /// the next scope expression.
429 fn topmost(&self) -> region::Scope {
430 self.scopes.last().expect("topmost_scope: no scopes present").region_scope
431 }
432 }
433
434 impl<'a, 'tcx> Builder<'a, 'tcx> {
435 // Adding and removing scopes
436 // ==========================
437 // Start a breakable scope, which tracks where `continue`, `break` and
438 // `return` should branch to.
439 crate fn in_breakable_scope<F>(
440 &mut self,
441 loop_block: Option<BasicBlock>,
442 break_destination: Place<'tcx>,
443 span: Span,
444 f: F,
445 ) -> BlockAnd<()>
446 where
447 F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
448 {
449 let region_scope = self.scopes.topmost();
450 let scope = BreakableScope {
451 region_scope,
452 break_destination,
453 break_drops: DropTree::new(),
454 continue_drops: loop_block.map(|_| DropTree::new()),
455 };
456 self.scopes.breakable_scopes.push(scope);
457 let normal_exit_block = f(self);
458 let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
459 assert!(breakable_scope.region_scope == region_scope);
460 let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
461 if let Some(drops) = breakable_scope.continue_drops {
462 self.build_exit_tree(drops, loop_block);
463 }
464 match (normal_exit_block, break_block) {
465 (Some(block), None) | (None, Some(block)) => block,
466 (None, None) => self.cfg.start_new_block().unit(),
467 (Some(normal_block), Some(exit_block)) => {
468 let target = self.cfg.start_new_block();
469 let source_info = self.source_info(span);
470 self.cfg.terminate(
471 unpack!(normal_block),
472 source_info,
473 TerminatorKind::Goto { target },
474 );
475 self.cfg.terminate(
476 unpack!(exit_block),
477 source_info,
478 TerminatorKind::Goto { target },
479 );
480 target.unit()
481 }
482 }
483 }
484
485 crate fn in_opt_scope<F, R>(
486 &mut self,
487 opt_scope: Option<(region::Scope, SourceInfo)>,
488 f: F,
489 ) -> BlockAnd<R>
490 where
491 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
492 {
493 debug!("in_opt_scope(opt_scope={:?})", opt_scope);
494 if let Some(region_scope) = opt_scope {
495 self.push_scope(region_scope);
496 }
497 let mut block;
498 let rv = unpack!(block = f(self));
499 if let Some(region_scope) = opt_scope {
500 unpack!(block = self.pop_scope(region_scope, block));
501 }
502 debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
503 block.and(rv)
504 }
505
506 /// Convenience wrapper that pushes a scope and then executes `f`
507 /// to build its contents, popping the scope afterwards.
508 crate fn in_scope<F, R>(
509 &mut self,
510 region_scope: (region::Scope, SourceInfo),
511 lint_level: LintLevel,
512 f: F,
513 ) -> BlockAnd<R>
514 where
515 F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
516 {
517 debug!("in_scope(region_scope={:?})", region_scope);
518 let source_scope = self.source_scope;
519 let tcx = self.hir.tcx();
520 if let LintLevel::Explicit(current_hir_id) = lint_level {
521 // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
522 // to avoid adding HIR dependencies on our parents.
523 // We estimate the true lint roots here to avoid creating a lot of source scopes.
524
525 let parent_root = tcx.maybe_lint_level_root_bounded(
526 self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
527 self.hir.root_lint_level,
528 );
529 let current_root =
530 tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir.root_lint_level);
531
532 if parent_root != current_root {
533 self.source_scope = self.new_source_scope(
534 region_scope.1.span,
535 LintLevel::Explicit(current_root),
536 None,
537 );
538 }
539 }
540 self.push_scope(region_scope);
541 let mut block;
542 let rv = unpack!(block = f(self));
543 unpack!(block = self.pop_scope(region_scope, block));
544 self.source_scope = source_scope;
545 debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
546 block.and(rv)
547 }
548
549 /// Push a scope onto the stack. You can then build code in this
550 /// scope and call `pop_scope` afterwards. Note that these two
551 /// calls must be paired; using `in_scope` as a convenience
552 /// wrapper may be preferable.
553 crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
554 self.scopes.push_scope(region_scope, self.source_scope);
555 }
556
557 /// Pops a scope, which should have region scope `region_scope`,
558 /// adding any drops onto the end of `block` that are needed.
559 /// This must match 1-to-1 with `push_scope`.
560 crate fn pop_scope(
561 &mut self,
562 region_scope: (region::Scope, SourceInfo),
563 mut block: BasicBlock,
564 ) -> BlockAnd<()> {
565 debug!("pop_scope({:?}, {:?})", region_scope, block);
566
567 block = self.leave_top_scope(block);
568
569 self.scopes.pop_scope(region_scope);
570
571 block.unit()
572 }
573
574 /// Sets up the drops for breaking from `block` to `target`.
575 crate fn break_scope(
576 &mut self,
577 mut block: BasicBlock,
578 value: Option<ExprRef<'tcx>>,
579 target: BreakableTarget,
580 source_info: SourceInfo,
581 ) -> BlockAnd<()> {
582 let span = source_info.span;
583
584 let get_scope_index = |scope: region::Scope| {
585 // find the loop-scope by its `region::Scope`.
586 self.scopes
587 .breakable_scopes
588 .iter()
589 .rposition(|breakable_scope| breakable_scope.region_scope == scope)
590 .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
591 };
592 let (break_index, destination) = match target {
593 BreakableTarget::Return => {
594 let scope = &self.scopes.breakable_scopes[0];
595 if scope.break_destination != Place::return_place() {
596 span_bug!(span, "`return` in item with no return scope");
597 }
598 (0, Some(scope.break_destination))
599 }
600 BreakableTarget::Break(scope) => {
601 let break_index = get_scope_index(scope);
602 let scope = &self.scopes.breakable_scopes[break_index];
603 (break_index, Some(scope.break_destination))
604 }
605 BreakableTarget::Continue(scope) => {
606 let break_index = get_scope_index(scope);
607 (break_index, None)
608 }
609 };
610
611 if let Some(destination) = destination {
612 if let Some(value) = value {
613 debug!("stmt_expr Break val block_context.push(SubExpr)");
614 self.block_context.push(BlockFrame::SubExpr);
615 unpack!(block = self.into(destination, block, value));
616 self.block_context.pop();
617 } else {
618 self.cfg.push_assign_unit(block, source_info, destination, self.hir.tcx())
619 }
620 } else {
621 assert!(value.is_none(), "`return` and `break` should have a destination");
622 }
623
624 let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
625 let scope_index = self.scopes.scope_index(region_scope, span);
626 let drops = if destination.is_some() {
627 &mut self.scopes.breakable_scopes[break_index].break_drops
628 } else {
629 self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
630 };
631 let mut drop_idx = ROOT_NODE;
632 for scope in &self.scopes.scopes[scope_index + 1..] {
633 for drop in &scope.drops {
634 drop_idx = drops.add_drop(*drop, drop_idx);
635 }
636 }
637 drops.add_entry(block, drop_idx);
638
639 // `build_drop_tree` doesn't have access to our source_info, so we
640 // create a dummy terminator now. `TerminatorKind::Resume` is used
641 // because MIR type checking will panic if it hasn't been overwritten.
642 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
643
644 self.cfg.start_new_block().unit()
645 }
646
647 crate fn exit_top_scope(
648 &mut self,
649 mut block: BasicBlock,
650 target: BasicBlock,
651 source_info: SourceInfo,
652 ) {
653 block = self.leave_top_scope(block);
654 self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
655 }
656
657 fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
658 // If we are emitting a `drop` statement, we need to have the cached
659 // diverge cleanup pads ready in case that drop panics.
660 let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
661 let is_generator = self.generator_kind.is_some();
662 let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
663
664 let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
665 unpack!(build_scope_drops(
666 &mut self.cfg,
667 &mut self.scopes.unwind_drops,
668 scope,
669 block,
670 unwind_to,
671 is_generator && needs_cleanup,
672 self.arg_count,
673 ))
674 }
675
676 /// Creates a new source scope, nested in the current one.
677 crate fn new_source_scope(
678 &mut self,
679 span: Span,
680 lint_level: LintLevel,
681 safety: Option<Safety>,
682 ) -> SourceScope {
683 let parent = self.source_scope;
684 debug!(
685 "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
686 span,
687 lint_level,
688 safety,
689 parent,
690 self.source_scopes.get(parent)
691 );
692 let scope_local_data = SourceScopeLocalData {
693 lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
694 lint_root
695 } else {
696 self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
697 },
698 safety: safety.unwrap_or_else(|| {
699 self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
700 }),
701 };
702 self.source_scopes.push(SourceScopeData {
703 span,
704 parent_scope: Some(parent),
705 inlined: None,
706 inlined_parent_scope: None,
707 local_data: ClearCrossCrate::Set(scope_local_data),
708 })
709 }
710
711 /// Given a span and the current source scope, make a SourceInfo.
712 crate fn source_info(&self, span: Span) -> SourceInfo {
713 SourceInfo { span, scope: self.source_scope }
714 }
715
716 // Finding scopes
717 // ==============
718 /// Returns the scope that we should use as the lifetime of an
719 /// operand. Basically, an operand must live until it is consumed.
720 /// This is similar to, but not quite the same as, the temporary
721 /// scope (which can be larger or smaller).
722 ///
723 /// Consider:
724 ///
725 /// let x = foo(bar(X, Y));
726 ///
727 /// We wish to pop the storage for X and Y after `bar()` is
728 /// called, not after the whole `let` is completed.
729 ///
730 /// As another example, if the second argument diverges:
731 ///
732 /// foo(Box::new(2), panic!())
733 ///
734 /// We would allocate the box but then free it on the unwinding
735 /// path; we would also emit a free on the 'success' path from
736 /// panic, but that will turn out to be removed as dead-code.
737 crate fn local_scope(&self) -> region::Scope {
738 self.scopes.topmost()
739 }
740
741 // Scheduling drops
742 // ================
743 crate fn schedule_drop_storage_and_value(
744 &mut self,
745 span: Span,
746 region_scope: region::Scope,
747 local: Local,
748 ) {
749 self.schedule_drop(span, region_scope, local, DropKind::Storage);
750 self.schedule_drop(span, region_scope, local, DropKind::Value);
751 }
752
753 /// Indicates that `local` should be dropped on exit from `region_scope`.
754 ///
755 /// When called with `DropKind::Storage`, `local` shouldn't be the return
756 /// place, or a function parameter.
757 crate fn schedule_drop(
758 &mut self,
759 span: Span,
760 region_scope: region::Scope,
761 local: Local,
762 drop_kind: DropKind,
763 ) {
764 let needs_drop = match drop_kind {
765 DropKind::Value => {
766 if !self.hir.needs_drop(self.local_decls[local].ty) {
767 return;
768 }
769 true
770 }
771 DropKind::Storage => {
772 if local.index() <= self.arg_count {
773 span_bug!(
774 span,
775 "`schedule_drop` called with local {:?} and arg_count {}",
776 local,
777 self.arg_count,
778 )
779 }
780 false
781 }
782 };
783
784 // When building drops, we try to cache chains of drops to reduce the
785 // number of `DropTree::add_drop` calls. This, however, means that
786 // whenever we add a drop into a scope which already had some entries
787 // in the drop tree built (and thus, cached) for it, we must invalidate
788 // all caches which might branch into the scope which had a drop just
789 // added to it. This is necessary, because otherwise some other code
790 // might use the cache to branch into already built chain of drops,
791 // essentially ignoring the newly added drop.
792 //
793 // For example, consider two scopes with a drop in each. These
794 // are built and thus the caches are filled:
795 //
796 // +--------------------------------------------------------+
797 // | +---------------------------------+ |
798 // | | +--------+ +-------------+ | +---------------+ |
799 // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | |
800 // | | +--------+ +-------------+ | +---------------+ |
801 // | +------------|outer_scope cache|--+ |
802 // +------------------------------|middle_scope cache|------+
803 //
804 // Now, a new, inner-most scope is added along with a new drop into
805 // both inner-most and outer-most scopes:
806 //
807 // +------------------------------------------------------------+
808 // | +----------------------------------+ |
809 // | | +--------+ +-------------+ | +---------------+ | +-------------+
810 // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) |
811 // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+
812 // | | +-+ +-------------+ | |
813 // | +---|invalid outer_scope cache|----+ |
814 // +----=----------------|invalid middle_scope cache|-----------+
815 //
816 // If, when adding `drop(new)` we do not invalidate the cached blocks for both
817 // outer_scope and middle_scope, then, when building drops for the inner (right-most)
818 // scope, the old, cached blocks, without `drop(new)` will get used, producing the
819 // wrong results.
820 //
821 // Note that this code iterates scopes from the innermost to the outermost,
822 // invalidating the caches of each scope visited. This way the bare minimum of
823 // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
824 // cache of the outer scope stays intact.
825 //
826 // Since we only cache drops for the unwind path and the generator drop
827 // path, we only need to invalidate the cache for drops that happen on
828 // the unwind or generator drop paths. This means that for
829 // non-generators we don't need to invalidate caches for `DropKind::Storage`.
830 let invalidate_caches = needs_drop || self.generator_kind.is_some();
831 for scope in self.scopes.scopes.iter_mut().rev() {
832 if invalidate_caches {
833 scope.invalidate_cache();
834 }
835
836 if scope.region_scope == region_scope {
837 let region_scope_span =
838 region_scope.span(self.hir.tcx(), &self.hir.region_scope_tree);
839 // Attribute scope exit drops to scope's closing brace.
840 let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);
841
842 scope.drops.push(DropData {
843 source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
844 local,
845 kind: drop_kind,
846 });
847
848 return;
849 }
850 }
851
852 span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
853 }
854
855 /// Indicates that the "local operand" stored in `local` is
856 /// *moved* at some point during execution (see `local_scope` for
857 /// more information about what a "local operand" is -- in short,
858 /// it's an intermediate operand created as part of preparing some
859 /// MIR instruction). We use this information to suppress
860 /// redundant drops on the non-unwind paths. This results in less
861 /// MIR, but also avoids spurious borrow check errors
862 /// (c.f. #64391).
863 ///
864 /// Example: when compiling the call to `foo` here:
865 ///
866 /// ```rust
867 /// foo(bar(), ...)
868 /// ```
869 ///
870 /// we would evaluate `bar()` to an operand `_X`. We would also
871 /// schedule `_X` to be dropped when the expression scope for
872 /// `foo(bar())` is exited. This is relevant, for example, if the
873 /// later arguments should unwind (it would ensure that `_X` gets
874 /// dropped). However, if no unwind occurs, then `_X` will be
875 /// unconditionally consumed by the `call`:
876 ///
877 /// ```
878 /// bb {
879 /// ...
880 /// _R = CALL(foo, _X, ...)
881 /// }
882 /// ```
883 ///
884 /// However, `_X` is still registered to be dropped, and so if we
885 /// do nothing else, we would generate a `DROP(_X)` that occurs
886 /// after the call. This will later be optimized out by the
887 /// drop-elaboation code, but in the meantime it can lead to
888 /// spurious borrow-check errors -- the problem, ironically, is
889 /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
890 /// that it creates. See #64391 for an example.
891 crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
892 let local_scope = self.local_scope();
893 let scope = self.scopes.scopes.last_mut().unwrap();
894
895 assert_eq!(
896 scope.region_scope, local_scope,
897 "local scope is not the topmost scope!",
898 );
899
900 // look for moves of a local variable, like `MOVE(_X)`
901 let locals_moved = operands.iter().flat_map(|operand| match operand {
902 Operand::Copy(_) | Operand::Constant(_) => None,
903 Operand::Move(place) => place.as_local(),
904 });
905
906 for local in locals_moved {
907 // check if we have a Drop for this operand and -- if so
908 // -- add it to the list of moved operands. Note that this
909 // local might not have been an operand created for this
910 // call, it could come from other places too.
911 if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
912 scope.moved_locals.push(local);
913 }
914 }
915 }
916
917 // Other
918 // =====
919 /// Branch based on a boolean condition.
920 ///
921 /// This is a special case because the temporary for the condition needs to
922 /// be dropped on both the true and the false arm.
923 crate fn test_bool(
924 &mut self,
925 mut block: BasicBlock,
926 condition: Expr<'tcx>,
927 source_info: SourceInfo,
928 ) -> (BasicBlock, BasicBlock) {
929 let cond = unpack!(block = self.as_local_operand(block, condition));
930 let true_block = self.cfg.start_new_block();
931 let false_block = self.cfg.start_new_block();
932 let term = TerminatorKind::if_(self.hir.tcx(), cond.clone(), true_block, false_block);
933 self.cfg.terminate(block, source_info, term);
934
935 match cond {
936 // Don't try to drop a constant
937 Operand::Constant(_) => (),
938 Operand::Copy(place) | Operand::Move(place) => {
939 if let Some(cond_temp) = place.as_local() {
940 // Manually drop the condition on both branches.
941 let top_scope = self.scopes.scopes.last_mut().unwrap();
942 let top_drop_data = top_scope.drops.pop().unwrap();
943 if self.generator_kind.is_some() {
944 top_scope.invalidate_cache();
945 }
946
947 match top_drop_data.kind {
948 DropKind::Value { .. } => {
949 bug!("Drop scheduled on top of condition variable")
950 }
951 DropKind::Storage => {
952 let source_info = top_drop_data.source_info;
953 let local = top_drop_data.local;
954 assert_eq!(local, cond_temp, "Drop scheduled on top of condition");
955 self.cfg.push(
956 true_block,
957 Statement { source_info, kind: StatementKind::StorageDead(local) },
958 );
959 self.cfg.push(
960 false_block,
961 Statement { source_info, kind: StatementKind::StorageDead(local) },
962 );
963 }
964 }
965 } else {
966 bug!("Expected as_local_operand to produce a temporary");
967 }
968 }
969 }
970
971 (true_block, false_block)
972 }
973
974 /// Returns the [DropIdx] for the innermost drop if the function unwound at
975 /// this point. The `DropIdx` will be created if it doesn't already exist.
976 fn diverge_cleanup(&mut self) -> DropIdx {
977 let is_generator = self.generator_kind.is_some();
978 let (uncached_scope, mut cached_drop) = self
979 .scopes
980 .scopes
981 .iter()
982 .enumerate()
983 .rev()
984 .find_map(|(scope_idx, scope)| {
985 scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
986 })
987 .unwrap_or((0, ROOT_NODE));
988
989 for scope in &mut self.scopes.scopes[uncached_scope..] {
990 for drop in &scope.drops {
991 if is_generator || drop.kind == DropKind::Value {
992 cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
993 }
994 }
995 scope.cached_unwind_block = Some(cached_drop);
996 }
997
998 cached_drop
999 }
1000
1001 /// Prepares to create a path that performs all required cleanup for a
1002 /// terminator that can unwind at the given basic block.
1003 ///
1004 /// This path terminates in Resume. The path isn't created until after all
1005 /// of the non-unwind paths in this item have been lowered.
1006 crate fn diverge_from(&mut self, start: BasicBlock) {
1007 debug_assert!(
1008 matches!(
1009 self.cfg.block_data(start).terminator().kind,
1010 TerminatorKind::Assert { .. }
1011 | TerminatorKind::Call {..}
1012 | TerminatorKind::DropAndReplace { .. }
1013 | TerminatorKind::FalseUnwind { .. }
1014 ),
1015 "diverge_from called on block with terminator that cannot unwind."
1016 );
1017
1018 let next_drop = self.diverge_cleanup();
1019 self.scopes.unwind_drops.add_entry(start, next_drop);
1020 }
1021
1022 /// Sets up a path that performs all required cleanup for dropping a
1023 /// generator, starting from the given block that ends in
1024 /// [TerminatorKind::Yield].
1025 ///
1026 /// This path terminates in GeneratorDrop.
1027 crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
1028 debug_assert!(
1029 matches!(
1030 self.cfg.block_data(yield_block).terminator().kind,
1031 TerminatorKind::Yield { .. }
1032 ),
1033 "generator_drop_cleanup called on block with non-yield terminator."
1034 );
1035 let (uncached_scope, mut cached_drop) = self
1036 .scopes
1037 .scopes
1038 .iter()
1039 .enumerate()
1040 .rev()
1041 .find_map(|(scope_idx, scope)| {
1042 scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1043 })
1044 .unwrap_or((0, ROOT_NODE));
1045
1046 for scope in &mut self.scopes.scopes[uncached_scope..] {
1047 for drop in &scope.drops {
1048 cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
1049 }
1050 scope.cached_generator_drop_block = Some(cached_drop);
1051 }
1052
1053 self.scopes.generator_drops.add_entry(yield_block, cached_drop);
1054 }
1055
1056 /// Utility function for *non*-scope code to build its own drops.
1057 crate fn build_drop_and_replace(
1058 &mut self,
1059 block: BasicBlock,
1060 span: Span,
1061 place: Place<'tcx>,
1062 value: Operand<'tcx>,
1063 ) -> BlockAnd<()> {
1064 let source_info = self.source_info(span);
1065 let next_target = self.cfg.start_new_block();
1066
1067 self.cfg.terminate(
1068 block,
1069 source_info,
1070 TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None },
1071 );
1072 self.diverge_from(block);
1073
1074 next_target.unit()
1075 }
1076
1077 /// Creates an `Assert` terminator and returns the success block.
1078 /// If the boolean condition operand is not the expected value,
1079 /// a runtime panic will be caused with the given message.
1080 crate fn assert(
1081 &mut self,
1082 block: BasicBlock,
1083 cond: Operand<'tcx>,
1084 expected: bool,
1085 msg: AssertMessage<'tcx>,
1086 span: Span,
1087 ) -> BasicBlock {
1088 let source_info = self.source_info(span);
1089 let success_block = self.cfg.start_new_block();
1090
1091 self.cfg.terminate(
1092 block,
1093 source_info,
1094 TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None },
1095 );
1096 self.diverge_from(block);
1097
1098 success_block
1099 }
1100
1101 /// Unschedules any drops in the top scope.
1102 ///
1103 /// This is only needed for `match` arm scopes, because they have one
1104 /// entrance per pattern, but only one exit.
1105 crate fn clear_top_scope(&mut self, region_scope: region::Scope) {
1106 let top_scope = self.scopes.scopes.last_mut().unwrap();
1107
1108 assert_eq!(top_scope.region_scope, region_scope);
1109
1110 top_scope.drops.clear();
1111 top_scope.invalidate_cache();
1112 }
1113 }
1114
1115 /// Builds drops for `pop_scope` and `leave_top_scope`.
1116 fn build_scope_drops<'tcx>(
1117 cfg: &mut CFG<'tcx>,
1118 unwind_drops: &mut DropTree,
1119 scope: &Scope,
1120 mut block: BasicBlock,
1121 mut unwind_to: DropIdx,
1122 storage_dead_on_unwind: bool,
1123 arg_count: usize,
1124 ) -> BlockAnd<()> {
1125 debug!("build_scope_drops({:?} -> {:?})", block, scope);
1126
1127 // Build up the drops in evaluation order. The end result will
1128 // look like:
1129 //
1130 // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1131 // | | |
1132 // : | |
1133 // V V
1134 // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1135 //
1136 // The horizontal arrows represent the execution path when the drops return
1137 // successfully. The downwards arrows represent the execution path when the
1138 // drops panic (panicking while unwinding will abort, so there's no need for
1139 // another set of arrows).
1140 //
1141 // For generators, we unwind from a drop on a local to its StorageDead
1142 // statement. For other functions we don't worry about StorageDead. The
1143 // drops for the unwind path should have already been generated by
1144 // `diverge_cleanup`.
1145
1146 for drop_data in scope.drops.iter().rev() {
1147 let source_info = drop_data.source_info;
1148 let local = drop_data.local;
1149
1150 match drop_data.kind {
1151 DropKind::Value => {
1152 // `unwind_to` should drop the value that we're about to
1153 // schedule. If dropping this value panics, then we continue
1154 // with the *next* value on the unwind path.
1155 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1156 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1157 unwind_to = unwind_drops.drops[unwind_to].1;
1158
1159 // If the operand has been moved, and we are not on an unwind
1160 // path, then don't generate the drop. (We only take this into
1161 // account for non-unwind paths so as not to disturb the
1162 // caching mechanism.)
1163 if scope.moved_locals.iter().any(|&o| o == local) {
1164 continue;
1165 }
1166
1167 unwind_drops.add_entry(block, unwind_to);
1168
1169 let next = cfg.start_new_block();
1170 cfg.terminate(
1171 block,
1172 source_info,
1173 TerminatorKind::Drop { place: local.into(), target: next, unwind: None },
1174 );
1175 block = next;
1176 }
1177 DropKind::Storage => {
1178 if storage_dead_on_unwind {
1179 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1180 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1181 unwind_to = unwind_drops.drops[unwind_to].1;
1182 }
1183 // Only temps and vars need their storage dead.
1184 assert!(local.index() > arg_count);
1185 cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
1186 }
1187 }
1188 }
1189 block.unit()
1190 }
1191
1192 impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1193 /// Build a drop tree for a breakable scope.
1194 ///
1195 /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1196 /// loop. Otherwise this is for `break` or `return`.
1197 fn build_exit_tree(
1198 &mut self,
1199 mut drops: DropTree,
1200 continue_block: Option<BasicBlock>,
1201 ) -> Option<BlockAnd<()>> {
1202 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1203 blocks[ROOT_NODE] = continue_block;
1204
1205 drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
1206
1207 // Link the exit drop tree to the unwind drop tree.
1208 if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
1209 let unwind_target = self.diverge_cleanup();
1210 let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1211 for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
1212 match drop_data.0.kind {
1213 DropKind::Storage => {
1214 if self.generator_kind.is_some() {
1215 let unwind_drop = self
1216 .scopes
1217 .unwind_drops
1218 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1219 unwind_indices.push(unwind_drop);
1220 } else {
1221 unwind_indices.push(unwind_indices[drop_data.1]);
1222 }
1223 }
1224 DropKind::Value => {
1225 let unwind_drop = self
1226 .scopes
1227 .unwind_drops
1228 .add_drop(drop_data.0, unwind_indices[drop_data.1]);
1229 self.scopes
1230 .unwind_drops
1231 .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
1232 unwind_indices.push(unwind_drop);
1233 }
1234 }
1235 }
1236 }
1237 blocks[ROOT_NODE].map(BasicBlock::unit)
1238 }
1239
1240 /// Build the unwind and generator drop trees.
1241 crate fn build_drop_trees(&mut self, should_abort: bool) {
1242 if self.generator_kind.is_some() {
1243 self.build_generator_drop_trees(should_abort);
1244 } else {
1245 Self::build_unwind_tree(
1246 &mut self.cfg,
1247 &mut self.scopes.unwind_drops,
1248 self.fn_span,
1249 should_abort,
1250 &mut None,
1251 );
1252 }
1253 }
1254
1255 fn build_generator_drop_trees(&mut self, should_abort: bool) {
1256 // Build the drop tree for dropping the generator while it's suspended.
1257 let drops = &mut self.scopes.generator_drops;
1258 let cfg = &mut self.cfg;
1259 let fn_span = self.fn_span;
1260 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1261 drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
1262 if let Some(root_block) = blocks[ROOT_NODE] {
1263 cfg.terminate(
1264 root_block,
1265 SourceInfo::outermost(fn_span),
1266 TerminatorKind::GeneratorDrop,
1267 );
1268 }
1269
1270 // Build the drop tree for unwinding in the normal control flow paths.
1271 let resume_block = &mut None;
1272 let unwind_drops = &mut self.scopes.unwind_drops;
1273 Self::build_unwind_tree(cfg, unwind_drops, fn_span, should_abort, resume_block);
1274
1275 // Build the drop tree for unwinding when dropping a suspended
1276 // generator.
1277 //
1278 // This is a different tree from the standard unwind paths, to
1279 // prevent drop elaboration from creating drop flags that would have
1280 // to be captured by the generator. I'm not sure how important this
1281 // optimization is, but it is here.
1282 for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
1283 if let DropKind::Value = drop_data.0.kind {
1284 debug_assert!(drop_data.1 < drops.drops.next_index());
1285 drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
1286 }
1287 }
1288 Self::build_unwind_tree(cfg, drops, fn_span, should_abort, resume_block);
1289 }
1290
1291 fn build_unwind_tree(
1292 cfg: &mut CFG<'tcx>,
1293 drops: &mut DropTree,
1294 fn_span: Span,
1295 should_abort: bool,
1296 resume_block: &mut Option<BasicBlock>,
1297 ) {
1298 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1299 blocks[ROOT_NODE] = *resume_block;
1300 drops.build_mir::<Unwind>(cfg, &mut blocks);
1301 if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1302 // `TerminatorKind::Abort` is used for `#[unwind(aborts)]`
1303 // functions.
1304 let terminator =
1305 if should_abort { TerminatorKind::Abort } else { TerminatorKind::Resume };
1306
1307 cfg.terminate(resume, SourceInfo::outermost(fn_span), terminator);
1308
1309 *resume_block = blocks[ROOT_NODE];
1310 }
1311 }
1312 }
1313
1314 // DropTreeBuilder implementations.
1315
1316 struct ExitScopes;
1317
1318 impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1319 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1320 cfg.start_new_block()
1321 }
1322 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1323 cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
1324 }
1325 }
1326
1327 struct GeneratorDrop;
1328
1329 impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
1330 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1331 cfg.start_new_block()
1332 }
1333 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1334 let term = cfg.block_data_mut(from).terminator_mut();
1335 if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
1336 *drop = Some(to);
1337 } else {
1338 span_bug!(
1339 term.source_info.span,
1340 "cannot enter generator drop tree from {:?}",
1341 term.kind
1342 )
1343 }
1344 }
1345 }
1346
1347 struct Unwind;
1348
1349 impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1350 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1351 cfg.start_new_cleanup_block()
1352 }
1353 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1354 let term = &mut cfg.block_data_mut(from).terminator_mut();
1355 match &mut term.kind {
1356 TerminatorKind::Drop { unwind, .. }
1357 | TerminatorKind::DropAndReplace { unwind, .. }
1358 | TerminatorKind::FalseUnwind { unwind, .. }
1359 | TerminatorKind::Call { cleanup: unwind, .. }
1360 | TerminatorKind::Assert { cleanup: unwind, .. } => {
1361 *unwind = Some(to);
1362 }
1363 TerminatorKind::Goto { .. }
1364 | TerminatorKind::SwitchInt { .. }
1365 | TerminatorKind::Resume
1366 | TerminatorKind::Abort
1367 | TerminatorKind::Return
1368 | TerminatorKind::Unreachable
1369 | TerminatorKind::Yield { .. }
1370 | TerminatorKind::GeneratorDrop
1371 | TerminatorKind::FalseEdge { .. }
1372 | TerminatorKind::InlineAsm { .. } => {
1373 span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
1374 }
1375 }
1376 }
1377 }