/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

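To make the guard situation concrete, here is a small stand-alone sketch
(ordinary Rust, not code from this module): with an or-pattern, the guard
below is lowered once per candidate pattern, so the arm's single lexical
scope corresponds to multiple disjoint SEME regions.

```rust
fn check(x: i32) -> bool {
    match x {
        // With an or-pattern, MIR lowering tests the guard once per
        // candidate pattern, giving this arm one entry per pattern.
        1 | 2 if x < 10 => true,
        _ => false,
    }
}

fn main() {
    assert!(check(1));
    assert!(check(2));
    assert!(!check(3));
    assert!(!check(12));
}
```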
### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
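For instance (an ordinary Rust sketch, not code from this module), a labeled
`break` selects an outer entry on this stack rather than the innermost loop:

```rust
// Counts inner-loop iterations until the labeled break fires.
fn count_hits() -> u32 {
    let mut hits = 0;
    // Both loops are breakable scopes; `break 'outer` targets the
    // outer one on the loop scope stack, not the innermost loop.
    'outer: for i in 0..10 {
        for j in 0..10 {
            hits += 1;
            if i * j >= 6 {
                break 'outer;
            }
        }
    }
    hits
}

fn main() {
    // 10 iterations for i == 0, then 7 more until 1 * 6 >= 6.
    assert_eq!(count_hits(), 17);
}
```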

*/

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::thir::{Expr, LintLevel};
use rustc_data_structures::fx::FxHashMap;
use rustc_index::vec::IndexVec;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};

#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,
    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared).
    source_info: SourceInfo,

    /// Local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression)
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    struct DropIdx { .. }
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on generator drop paths (when a suspended generator is dropped)
/// * Drops on return and loop exit paths
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_generator_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }

    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }

    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let block = if let Some(block) = blocks[drop_idx] {
                block
            } else {
                continue;
            };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    //  Start a breakable scope, which tracks where `continue`, `break` and
    //  `return` should branch to.
    crate fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.tcx;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir_id,
            );
            let current_root = tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir_id);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<&Expr<'_, 'tcx>>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.expr_into_dest(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);

        self.cfg.start_new_block().unit()
    }

    crate fn exit_top_scope(
        &mut self,
        mut block: BasicBlock,
        target: BasicBlock,
        source_info: SourceInfo,
    ) {
        block = self.leave_top_scope(block);
        self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
        let is_generator = self.generator_kind.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }

    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    crate fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }
739 | ||
dc9dc135 XL |
740 | // Scheduling drops |
741 | // ================ | |
dfeec247 | 742 | crate fn schedule_drop_storage_and_value( |
8faf50e0 XL |
743 | &mut self, |
744 | span: Span, | |
745 | region_scope: region::Scope, | |
dc9dc135 | 746 | local: Local, |
8faf50e0 | 747 | ) { |
e74abb32 XL |
748 | self.schedule_drop(span, region_scope, local, DropKind::Storage); |
749 | self.schedule_drop(span, region_scope, local, DropKind::Value); | |
8faf50e0 XL |
750 | } |
751 | ||
29967ef6 | 752 | /// Indicates that `local` should be dropped on exit from `region_scope`. | |
8faf50e0 | 753 | /// |
29967ef6 XL |
754 | /// When called with `DropKind::Storage`, `local` shouldn't be the return | |
755 | /// place or a function parameter. | |
dfeec247 | 756 | crate fn schedule_drop( |
8faf50e0 XL |
757 | &mut self, |
758 | span: Span, | |
759 | region_scope: region::Scope, | |
dc9dc135 | 760 | local: Local, |
8faf50e0 XL |
761 | drop_kind: DropKind, |
762 | ) { | |
e74abb32 XL |
763 | let needs_drop = match drop_kind { |
764 | DropKind::Value => { | |
6a06907d | 765 | if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) { |
dfeec247 XL |
766 | return; |
767 | } | |
e74abb32 | 768 | true |
dfeec247 | 769 | } |
8faf50e0 | 770 | DropKind::Storage => { |
dc9dc135 XL |
771 | if local.index() <= self.arg_count { |
772 | span_bug!( | |
dfeec247 XL |
773 | span, |
774 | "`schedule_drop` called with local {:?} and arg_count {}", | |
dc9dc135 XL |
775 | local, |
776 | self.arg_count, | |
777 | ) | |
8faf50e0 | 778 | } |
e74abb32 | 779 | false |
5bcae85e | 780 | } |
e74abb32 | 781 | }; |
5bcae85e | 782 | |
29967ef6 XL |
783 | // When building drops, we try to cache chains of drops to reduce the |
784 | // number of `DropTree::add_drop` calls. This, however, means that | |
785 | // whenever we add a drop into a scope which already had some entries | |
786 | // in the drop tree built (and thus, cached) for it, we must invalidate | |
787 | // all caches which might branch into the scope which had a drop just | |
788 | // added to it. This is necessary, because otherwise some other code | |
789 | // might use the cache to branch into already built chain of drops, | |
790 | // essentially ignoring the newly added drop. | |
791 | // | |
792 | // For example, consider two scopes with a drop in each. These | |
793 | // are built, and thus the caches are filled: | |
794 | // | |
795 | // +--------------------------------------------------------+ | |
796 | // | +---------------------------------+ | | |
797 | // | | +--------+ +-------------+ | +---------------+ | | |
798 | // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | | | |
799 | // | | +--------+ +-------------+ | +---------------+ | | |
800 | // | +------------|outer_scope cache|--+ | | |
801 | // +------------------------------|middle_scope cache|------+ | |
802 | // | |
803 | // Now, a new, inner-most scope is added along with a new drop into | |
804 | // both inner-most and outer-most scopes: | |
805 | // | |
806 | // +------------------------------------------------------------+ | |
807 | // | +----------------------------------+ | | |
808 | // | | +--------+ +-------------+ | +---------------+ | +-------------+ | |
809 | // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) | | |
810 | // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+ | |
811 | // | | +-+ +-------------+ | | | |
812 | // | +---|invalid outer_scope cache|----+ | | |
813 | // +---------------------|invalid middle_scope cache|-----------+ | |
814 | // | |
815 | // If, when adding `drop(new)` we do not invalidate the cached blocks for both | |
816 | // outer_scope and middle_scope, then, when building drops for the inner (right-most) | |
817 | // scope, the old, cached blocks, without `drop(new)` will get used, producing the | |
818 | // wrong results. | |
819 | // | |
820 | // Note that this code iterates scopes from the inner-most to the outer-most, | |
821 | // invalidating the caches of each scope visited. This way, the bare | |
822 | // minimum of caches gets invalidated; i.e., if a new drop is added into | |
823 | // the middle scope, the cache of the outer scope stays intact. | |
824 | // | |
825 | // Since we only cache drops for the unwind path and the generator drop | |
826 | // path, we only need to invalidate the cache for drops that happen on | |
827 | // the unwind or generator drop paths. This means that for | |
828 | // non-generators we don't need to invalidate caches for `DropKind::Storage`. | |
829 | let invalidate_caches = needs_drop || self.generator_kind.is_some(); | |
830 | for scope in self.scopes.scopes.iter_mut().rev() { | |
831 | if invalidate_caches { | |
832 | scope.invalidate_cache(); | |
833 | } | |
834 | ||
835 | if scope.region_scope == region_scope { | |
6a06907d | 836 | let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree); |
2c00a5a8 | 837 | // Attribute scope exit drops to scope's closing brace. |
6a06907d | 838 | let scope_end = self.tcx.sess.source_map().end_point(region_scope_span); |
2c00a5a8 | 839 | |
7453a54e | 840 | scope.drops.push(DropData { |
29967ef6 | 841 | source_info: SourceInfo { span: scope_end, scope: scope.source_scope }, |
dc9dc135 XL |
842 | local, |
843 | kind: drop_kind, | |
7453a54e | 844 | }); |
29967ef6 | 845 | |
7453a54e | 846 | return; |
7453a54e SL |
847 | } |
848 | } | |
29967ef6 | 849 | |
dc9dc135 | 850 | span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local); |
7453a54e SL |
851 | } |
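The invalidation walk described in the long comment above can be modeled in isolation. The following is a hypothetical, simplified sketch, not the real rustc API: `Scope` and the integer cache are invented stand-ins for the real `Scope` and cached `DropIdx`, and the "always invalidate" behavior corresponds to the case where `invalidate_caches` is true. Scheduling a drop walks the scope stack from the innermost scope outwards, clearing each cache it passes, so scopes outside the target keep theirs:

```rust
// Toy model of the cache-invalidation walk described above. `Scope` and
// the integer cache are invented stand-ins, not rustc's real types.
#[derive(Debug)]
struct Scope {
    id: u32,
    drops: Vec<&'static str>,
    // Stand-in for `cached_unwind_block: Option<DropIdx>`.
    cached_unwind_block: Option<u32>,
}

fn schedule_drop(scopes: &mut [Scope], target: u32, drop: &'static str) {
    // Walk from the innermost scope outwards, invalidating caches as we go,
    // and stop once the target scope has received the drop. Scopes outside
    // the target are never visited, so their caches stay intact.
    for scope in scopes.iter_mut().rev() {
        scope.cached_unwind_block = None;
        if scope.id == target {
            scope.drops.push(drop);
            return;
        }
    }
    panic!("region scope {} not in scope", target);
}

fn main() {
    let mut scopes = vec![
        Scope { id: 0, drops: vec![], cached_unwind_block: Some(10) },
        Scope { id: 1, drops: vec![], cached_unwind_block: Some(11) },
        Scope { id: 2, drops: vec![], cached_unwind_block: Some(12) },
    ];
    // Drop scheduled into the middle scope: the middle and inner caches are
    // cleared, the outer scope's cache survives.
    schedule_drop(&mut scopes, 1, "drop(new)");
    assert_eq!(scopes[0].cached_unwind_block, Some(10));
    assert_eq!(scopes[1].cached_unwind_block, None);
    assert_eq!(scopes[2].cached_unwind_block, None);
}
```

This mirrors why only inner caches need invalidation: no cached chain in an outer scope can branch into a scope it encloses.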
852 | ||
e1599b0c XL |
853 | /// Indicates that the "local operand" stored in `local` is |
854 | /// *moved* at some point during execution (see `local_scope` for | |
855 | /// more information about what a "local operand" is -- in short, | |
856 | /// it's an intermediate operand created as part of preparing some | |
857 | /// MIR instruction). We use this information to suppress | |
858 | /// redundant drops on the non-unwind paths. This results in less | |
859 | /// MIR, but also avoids spurious borrow check errors | |
860 | /// (c.f. #64391). | |
861 | /// | |
862 | /// Example: when compiling the call to `foo` here: | |
863 | /// | |
864 | /// ```rust | |
865 | /// foo(bar(), ...) | |
866 | /// ``` | |
867 | /// | |
868 | /// we would evaluate `bar()` to an operand `_X`. We would also | |
869 | /// schedule `_X` to be dropped when the expression scope for | |
870 | /// `foo(bar())` is exited. This is relevant, for example, if the | |
871 | /// later arguments should unwind (it would ensure that `_X` gets | |
872 | /// dropped). However, if no unwind occurs, then `_X` will be | |
873 | /// unconditionally consumed by the `call`: | |
874 | /// | |
875 | /// ``` | |
876 | /// bb { | |
877 | /// ... | |
878 | /// _R = CALL(foo, _X, ...) | |
879 | /// } | |
880 | /// ``` | |
881 | /// | |
882 | /// However, `_X` is still registered to be dropped, and so if we | |
883 | /// do nothing else, we would generate a `DROP(_X)` that occurs | |
884 | /// after the call. This will later be optimized out by the | |
885 | /// drop-elaboration code, but in the meantime it can lead to | |
886 | /// spurious borrow-check errors -- the problem, ironically, is | |
887 | /// not the `DROP(_X)` itself, but the (spurious) unwind pathways | |
888 | /// that it creates. See #64391 for an example. | |
dfeec247 | 889 | crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) { |
fc512014 XL |
890 | let local_scope = self.local_scope(); |
891 | let scope = self.scopes.scopes.last_mut().unwrap(); | |
e1599b0c | 892 | |
5869c6ff | 893 | assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!"); | |
e1599b0c XL |
894 | |
895 | // look for moves of a local variable, like `MOVE(_X)` | |
896 | let locals_moved = operands.iter().flat_map(|operand| match operand { | |
897 | Operand::Copy(_) | Operand::Constant(_) => None, | |
898 | Operand::Move(place) => place.as_local(), | |
899 | }); | |
900 | ||
901 | for local in locals_moved { | |
902 | // check if we have a Drop for this operand and -- if so | |
903 | // -- add it to the list of moved operands. Note that this | |
904 | // local might not have been an operand created for this | |
905 | // call, it could come from other places too. | |
906 | if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) { | |
907 | scope.moved_locals.push(local); | |
908 | } | |
909 | } | |
910 | } | |
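A minimal sketch of the suppression mechanism documented above, with invented stand-in types (`Local`, `Operand`, and `Scope` here are simplified models, not the real MIR types): moved locals are recorded, and success-path drop emission later filters them out, which is what suppresses the spurious `DROP(_X)` described in the doc comment.

```rust
// Simplified model of move-based drop suppression. All types here are
// invented stand-ins for the real MIR `Local`, `Operand`, and `Scope`.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Local(u32);

enum Operand {
    Copy(Local),
    Move(Local),
    Constant,
}

struct Scope {
    scheduled_value_drops: Vec<Local>,
    moved_locals: Vec<Local>,
}

impl Scope {
    // Record which operand locals were moved; only a `Move` of a local that
    // has a scheduled value-drop counts, mirroring the check above.
    fn record_operands_moved(&mut self, operands: &[Operand]) {
        for op in operands {
            if let Operand::Move(local) = op {
                if self.scheduled_value_drops.contains(local) {
                    self.moved_locals.push(*local);
                }
            }
        }
    }

    // On the non-unwind path, drops run in reverse scheduling order and
    // skip anything that was consumed by the call.
    fn success_path_drops(&self) -> Vec<Local> {
        self.scheduled_value_drops
            .iter()
            .rev()
            .filter(|l| !self.moved_locals.contains(l))
            .copied()
            .collect()
    }
}

fn main() {
    let mut scope = Scope {
        scheduled_value_drops: vec![Local(1), Local(2)],
        moved_locals: vec![],
    };
    // `_1` is moved into the call, `_2` is only copied: only `_2` still
    // needs a drop on the success path.
    scope.record_operands_moved(&[Operand::Move(Local(1)), Operand::Copy(Local(2))]);
    assert_eq!(scope.success_path_drops(), vec![Local(2)]);
}
```

Note that, as the real code does, the unwind path is left untouched: only the non-unwind drops are filtered.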
911 | ||
7453a54e SL |
912 | // Other |
913 | // ===== | |
dc9dc135 XL |
914 | /// Branch based on a boolean condition. |
915 | /// | |
916 | /// This is a special case because the temporary for the condition needs to | |
917 | /// be dropped on both the true and the false arm. | |
dfeec247 | 918 | crate fn test_bool( |
dc9dc135 XL |
919 | &mut self, |
920 | mut block: BasicBlock, | |
6a06907d | 921 | condition: &Expr<'_, 'tcx>, |
dc9dc135 XL |
922 | source_info: SourceInfo, |
923 | ) -> (BasicBlock, BasicBlock) { | |
924 | let cond = unpack!(block = self.as_local_operand(block, condition)); | |
925 | let true_block = self.cfg.start_new_block(); | |
926 | let false_block = self.cfg.start_new_block(); | |
6a06907d | 927 | let term = TerminatorKind::if_(self.tcx, cond.clone(), true_block, false_block); |
dc9dc135 XL |
928 | self.cfg.terminate(block, source_info, term); |
929 | ||
930 | match cond { | |
931 | // Don't try to drop a constant | |
932 | Operand::Constant(_) => (), | |
dfeec247 | 933 | Operand::Copy(place) | Operand::Move(place) => { |
e74abb32 XL |
934 | if let Some(cond_temp) = place.as_local() { |
935 | // Manually drop the condition on both branches. | |
936 | let top_scope = self.scopes.scopes.last_mut().unwrap(); | |
937 | let top_drop_data = top_scope.drops.pop().unwrap(); | |
29967ef6 XL |
938 | if self.generator_kind.is_some() { |
939 | top_scope.invalidate_cache(); | |
940 | } | |
e74abb32 XL |
941 | |
942 | match top_drop_data.kind { | |
943 | DropKind::Value { .. } => { | |
944 | bug!("Drop scheduled on top of condition variable") | |
945 | } | |
946 | DropKind::Storage => { | |
29967ef6 | 947 | let source_info = top_drop_data.source_info; |
e74abb32 XL |
948 | let local = top_drop_data.local; |
949 | assert_eq!(local, cond_temp, "Drop scheduled on top of condition"); | |
950 | self.cfg.push( | |
951 | true_block, | |
dfeec247 | 952 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
953 | ); |
954 | self.cfg.push( | |
955 | false_block, | |
dfeec247 | 956 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
957 | ); |
958 | } | |
dc9dc135 | 959 | } |
e74abb32 XL |
960 | } else { |
961 | bug!("Expected as_local_operand to produce a temporary"); | |
962 | } | |
dc9dc135 | 963 | } |
dc9dc135 XL |
964 | } |
965 | ||
966 | (true_block, false_block) | |
967 | } | |
968 | ||
29967ef6 XL |
969 | /// Returns the [DropIdx] for the innermost drop if the function unwound at |
970 | /// this point. The `DropIdx` will be created if it doesn't already exist. | |
971 | fn diverge_cleanup(&mut self) -> DropIdx { | |
972 | let is_generator = self.generator_kind.is_some(); | |
973 | let (uncached_scope, mut cached_drop) = self | |
974 | .scopes | |
975 | .scopes | |
976 | .iter() | |
977 | .enumerate() | |
978 | .rev() | |
979 | .find_map(|(scope_idx, scope)| { | |
980 | scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
981 | }) | |
982 | .unwrap_or((0, ROOT_NODE)); | |
983 | ||
984 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
985 | for drop in &scope.drops { | |
986 | if is_generator || drop.kind == DropKind::Value { | |
987 | cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop); | |
988 | } | |
989 | } | |
990 | scope.cached_unwind_block = Some(cached_drop); | |
7453a54e | 991 | } |
29967ef6 XL |
992 | |
993 | cached_drop | |
ff7c6d11 XL |
994 | } |
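The caching strategy in `diverge_cleanup` can be sketched with a plain vector of `(name, parent)` pairs standing in for the real `DropTree` (an assumption-laden simplification; indices model `DropIdx`, with 0 as `ROOT_NODE`): find the innermost scope whose unwind chain is already built, then append only the drops of the scopes inside it.

```rust
// Toy model of `diverge_cleanup`'s cache reuse. The `(name, parent)` vector
// stands in for the real `DropTree`.
struct Scope {
    drops: Vec<&'static str>,
    cached_unwind_block: Option<usize>,
}

fn diverge_cleanup(
    scopes: &mut [Scope],
    unwind_drops: &mut Vec<(&'static str, usize)>,
) -> usize {
    const ROOT_NODE: usize = 0;
    // Innermost-first: find the first scope with a cached chain and resume
    // building from the scope just inside it.
    let (uncached_scope, mut cached_drop) = scopes
        .iter()
        .enumerate()
        .rev()
        .find_map(|(i, s)| s.cached_unwind_block.map(|c| (i + 1, c)))
        .unwrap_or((0, ROOT_NODE));

    // Extend the chain with the drops of the still-uncached scopes, caching
    // the resulting node on each scope as we go.
    for scope in &mut scopes[uncached_scope..] {
        for drop in &scope.drops {
            unwind_drops.push((drop, cached_drop));
            cached_drop = unwind_drops.len() - 1;
        }
        scope.cached_unwind_block = Some(cached_drop);
    }
    cached_drop
}

fn main() {
    let mut tree = vec![("root", 0)];
    let mut scopes = vec![
        Scope { drops: vec!["a"], cached_unwind_block: None },
        Scope { drops: vec!["b"], cached_unwind_block: None },
    ];
    let first = diverge_cleanup(&mut scopes, &mut tree);
    // A second call finds everything cached and adds no new tree nodes.
    let second = diverge_cleanup(&mut scopes, &mut tree);
    assert_eq!(first, second);
    assert_eq!(tree.len(), 3);
}
```

Repeated calls between drop schedulings are therefore cheap: they terminate at the first cached scope.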
995 | ||
29967ef6 XL |
996 | /// Prepares to create a path that performs all required cleanup for a |
997 | /// terminator that can unwind at the given basic block. | |
998 | /// | |
999 | /// This path terminates in Resume. The path isn't created until after all | |
1000 | /// of the non-unwind paths in this item have been lowered. | |
1001 | crate fn diverge_from(&mut self, start: BasicBlock) { | |
1002 | debug_assert!( | |
1003 | matches!( | |
1004 | self.cfg.block_data(start).terminator().kind, | |
1005 | TerminatorKind::Assert { .. } | |
5869c6ff XL |
1006 | | TerminatorKind::Call { .. } |
1007 | | TerminatorKind::DropAndReplace { .. } | |
1008 | | TerminatorKind::FalseUnwind { .. } | |
29967ef6 XL |
1009 | ), |
1010 | "diverge_from called on block with terminator that cannot unwind." | |
1011 | ); | |
a1dfa0c6 | 1012 | |
29967ef6 XL |
1013 | let next_drop = self.diverge_cleanup(); |
1014 | self.scopes.unwind_drops.add_entry(start, next_drop); | |
1015 | } | |
1016 | ||
1017 | /// Sets up a path that performs all required cleanup for dropping a | |
1018 | /// generator, starting from the given block that ends in | |
1019 | /// [TerminatorKind::Yield]. | |
1020 | /// | |
1021 | /// This path terminates in GeneratorDrop. | |
1022 | crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) { | |
1023 | debug_assert!( | |
1024 | matches!( | |
1025 | self.cfg.block_data(yield_block).terminator().kind, | |
1026 | TerminatorKind::Yield { .. } | |
1027 | ), | |
1028 | "generator_drop_cleanup called on block with non-yield terminator." | |
1029 | ); | |
1030 | let (uncached_scope, mut cached_drop) = self | |
1031 | .scopes | |
1032 | .scopes | |
1033 | .iter() | |
1034 | .enumerate() | |
1035 | .rev() | |
1036 | .find_map(|(scope_idx, scope)| { | |
1037 | scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
1038 | }) | |
1039 | .unwrap_or((0, ROOT_NODE)); | |
1040 | ||
1041 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
1042 | for drop in &scope.drops { | |
1043 | cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop); | |
1044 | } | |
1045 | scope.cached_generator_drop_block = Some(cached_drop); | |
7453a54e | 1046 | } |
ff7c6d11 | 1047 | |
29967ef6 | 1048 | self.scopes.generator_drops.add_entry(yield_block, cached_drop); |
e9174d1e SL |
1049 | } |
1050 | ||
3157f602 | 1051 | /// Utility function for *non*-scope code to build their own drops |
dfeec247 XL |
1052 | crate fn build_drop_and_replace( |
1053 | &mut self, | |
1054 | block: BasicBlock, | |
1055 | span: Span, | |
f035d41b | 1056 | place: Place<'tcx>, |
dfeec247 XL |
1057 | value: Operand<'tcx>, |
1058 | ) -> BlockAnd<()> { | |
3157f602 XL |
1059 | let source_info = self.source_info(span); |
1060 | let next_target = self.cfg.start_new_block(); | |
29967ef6 | 1061 | |
dfeec247 XL |
1062 | self.cfg.terminate( |
1063 | block, | |
1064 | source_info, | |
29967ef6 | 1065 | TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None }, |
dfeec247 | 1066 | ); |
29967ef6 XL |
1067 | self.diverge_from(block); |
1068 | ||
3157f602 | 1069 | next_target.unit() |
e9174d1e SL |
1070 | } |
1071 | ||
29967ef6 | 1072 | /// Creates an `Assert` terminator and returns the success block. | |
3157f602 XL |
1073 | /// If the boolean condition operand is not the expected value, |
1074 | /// a runtime panic will be caused with the given message. | |
dfeec247 XL |
1075 | crate fn assert( |
1076 | &mut self, | |
1077 | block: BasicBlock, | |
1078 | cond: Operand<'tcx>, | |
1079 | expected: bool, | |
1080 | msg: AssertMessage<'tcx>, | |
1081 | span: Span, | |
1082 | ) -> BasicBlock { | |
3157f602 | 1083 | let source_info = self.source_info(span); |
3157f602 | 1084 | let success_block = self.cfg.start_new_block(); |
e9174d1e | 1085 | |
dfeec247 XL |
1086 | self.cfg.terminate( |
1087 | block, | |
1088 | source_info, | |
29967ef6 | 1089 | TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None }, |
dfeec247 | 1090 | ); |
29967ef6 | 1091 | self.diverge_from(block); |
e9174d1e | 1092 | |
3157f602 | 1093 | success_block |
9cc50fc6 | 1094 | } |
dc9dc135 | 1095 | |
dc9dc135 XL |
1096 | /// Unschedules any drops in the top scope. |
1097 | /// | |
1098 | /// This is only needed for `match` arm scopes, because they have one | |
1099 | /// entrance per pattern, but only one exit. | |
29967ef6 | 1100 | crate fn clear_top_scope(&mut self, region_scope: region::Scope) { |
dc9dc135 XL |
1101 | let top_scope = self.scopes.scopes.last_mut().unwrap(); |
1102 | ||
1103 | assert_eq!(top_scope.region_scope, region_scope); | |
1104 | ||
1105 | top_scope.drops.clear(); | |
29967ef6 | 1106 | top_scope.invalidate_cache(); |
dc9dc135 | 1107 | } |
7453a54e SL |
1108 | } |
1109 | ||
29967ef6 | 1110 | /// Builds drops for `pop_scope` and `leave_top_scope`. |
a1dfa0c6 XL |
1111 | fn build_scope_drops<'tcx>( |
1112 | cfg: &mut CFG<'tcx>, | |
29967ef6 | 1113 | unwind_drops: &mut DropTree, |
dc9dc135 | 1114 | scope: &Scope, |
a1dfa0c6 | 1115 | mut block: BasicBlock, |
29967ef6 XL |
1116 | mut unwind_to: DropIdx, |
1117 | storage_dead_on_unwind: bool, | |
a1dfa0c6 | 1118 | arg_count: usize, |
a1dfa0c6 | 1119 | ) -> BlockAnd<()> { |
dc9dc135 | 1120 | debug!("build_scope_drops({:?} -> {:?})", block, scope); |
a1dfa0c6 XL |
1121 | |
1122 | // Build up the drops in evaluation order. The end result will | |
1123 | // look like: | |
1124 | // | |
1125 | // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]] | |
1126 | // | | | | |
1127 | // : | | | |
1128 | // V V | |
1129 | // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to] | |
1130 | // | |
1131 | // The horizontal arrows represent the execution path when the drops return | |
1132 | // successfully. The downwards arrows represent the execution path when the | |
1133 | // drops panic (panicking while unwinding will abort, so there's no need for | |
dc9dc135 | 1134 | // another set of arrows). |
a1dfa0c6 | 1135 | // |
dc9dc135 XL |
1136 | // For generators, we unwind from a drop on a local to its StorageDead |
1137 | // statement. For other functions we don't worry about StorageDead. The | |
1138 | // drops for the unwind path should have already been generated by | |
1139 | // `diverge_cleanup`. | |
a1dfa0c6 | 1140 | |
29967ef6 XL |
1141 | for drop_data in scope.drops.iter().rev() { |
1142 | let source_info = drop_data.source_info; | |
dc9dc135 | 1143 | let local = drop_data.local; |
e1599b0c | 1144 | |
5bcae85e | 1145 | match drop_data.kind { |
dc9dc135 | 1146 | DropKind::Value => { |
29967ef6 XL |
1147 | // `unwind_to` should drop the value that we're about to |
1148 | // schedule. If dropping this value panics, then we continue | |
1149 | // with the *next* value on the unwind path. | |
1150 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1151 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1152 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1153 | ||
e1599b0c XL |
1154 | // If the operand has been moved, and we are not on an unwind |
1155 | // path, then don't generate the drop. (We only take this into | |
1156 | // account for non-unwind paths so as not to disturb the | |
1157 | // caching mechanism.) | |
29967ef6 | 1158 | if scope.moved_locals.iter().any(|&o| o == local) { |
e1599b0c XL |
1159 | continue; |
1160 | } | |
1161 | ||
29967ef6 | 1162 | unwind_drops.add_entry(block, unwind_to); |
abe05a73 | 1163 | |
3b2f2976 | 1164 | let next = cfg.start_new_block(); |
dfeec247 XL |
1165 | cfg.terminate( |
1166 | block, | |
1167 | source_info, | |
29967ef6 | 1168 | TerminatorKind::Drop { place: local.into(), target: next, unwind: None }, |
dfeec247 | 1169 | ); |
3b2f2976 XL |
1170 | block = next; |
1171 | } | |
8faf50e0 | 1172 | DropKind::Storage => { |
29967ef6 XL |
1173 | if storage_dead_on_unwind { |
1174 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1175 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1176 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1177 | } | |
8faf50e0 | 1178 | // Only temps and vars need their storage dead. |
dc9dc135 | 1179 | assert!(local.index() > arg_count); |
dfeec247 | 1180 | cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) }); |
5bcae85e SL |
1181 | } |
1182 | } | |
7453a54e SL |
1183 | } |
1184 | block.unit() | |
1185 | } | |
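The diagram at the top of `build_scope_drops` can be modeled as follows. This is a hedged sketch with an invented `DropBlock` type: drops are emitted in reverse scheduling order, and each emitted drop's unwind edge points at the *next* node of the prebuilt unwind chain (if dropping one value panics, the remaining values are still dropped). It assumes the unwind chain exactly mirrors the scheduled drops, as the `debug_assert_eq!` calls above check.

```rust
// Sketch of the success/unwind edge layout from the diagram above.
// `DropBlock` is an invented stand-in, not a real MIR type.
struct DropBlock {
    local: u32,
    // Entry into the unwind chain used if this drop panics.
    unwind: usize,
}

// `unwind_to` starts at the chain node for the last-scheduled drop; nodes
// are assumed to be numbered 1..=drops.len(), with 0 as the resume root.
fn build_scope_drops(drops: &[u32], mut unwind_to: usize) -> Vec<DropBlock> {
    let mut blocks = Vec::new();
    for &local in drops.iter().rev() {
        // If dropping this value panics, we continue with the *next* value
        // on the unwind path, so step past this drop's own node first.
        unwind_to -= 1;
        blocks.push(DropBlock { local, unwind: unwind_to });
    }
    blocks
}

fn main() {
    // Locals _1, _2, _3 were scheduled in that order; the unwind chain has
    // one node per drop plus the root (node 0).
    let blocks = build_scope_drops(&[1, 2, 3], 3);
    let order: Vec<u32> = blocks.iter().map(|b| b.local).collect();
    // The last-scheduled local is dropped first.
    assert_eq!(order, vec![3, 2, 1]);
    // Dropping _3 unwinds into the node that drops _2; dropping _1 unwinds
    // straight to the root.
    assert_eq!(blocks[0].unwind, 2);
    assert_eq!(blocks[2].unwind, 0);
}
```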
1186 | ||
29967ef6 XL |
1187 | impl<'a, 'tcx: 'a> Builder<'a, 'tcx> { |
1188 | /// Build a drop tree for a breakable scope. | |
1189 | /// | |
1190 | /// If `continue_block` is `Some`, then the tree is for `continue` inside a | |
1191 | /// loop. Otherwise this is for `break` or `return`. | |
1192 | fn build_exit_tree( | |
1193 | &mut self, | |
1194 | mut drops: DropTree, | |
1195 | continue_block: Option<BasicBlock>, | |
1196 | ) -> Option<BlockAnd<()>> { | |
1197 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1198 | blocks[ROOT_NODE] = continue_block; | |
1199 | ||
1200 | drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks); | |
1201 | ||
1202 | // Link the exit drop tree to unwind drop tree. | |
1203 | if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) { | |
1204 | let unwind_target = self.diverge_cleanup(); | |
1205 | let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1); | |
1206 | for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) { | |
1207 | match drop_data.0.kind { | |
1208 | DropKind::Storage => { | |
1209 | if self.generator_kind.is_some() { | |
1210 | let unwind_drop = self | |
1211 | .scopes | |
1212 | .unwind_drops | |
1213 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1214 | unwind_indices.push(unwind_drop); | |
1215 | } else { | |
1216 | unwind_indices.push(unwind_indices[drop_data.1]); | |
1217 | } | |
1218 | } | |
1219 | DropKind::Value => { | |
1220 | let unwind_drop = self | |
1221 | .scopes | |
1222 | .unwind_drops | |
1223 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1224 | self.scopes | |
1225 | .unwind_drops | |
1226 | .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]); | |
1227 | unwind_indices.push(unwind_drop); | |
1228 | } | |
1229 | } | |
dc9dc135 | 1230 | } |
dc9dc135 | 1231 | } |
29967ef6 | 1232 | blocks[ROOT_NODE].map(BasicBlock::unit) |
dc9dc135 | 1233 | } |
dc9dc135 | 1234 | |
29967ef6 XL |
1235 | /// Build the unwind and generator drop trees. |
1236 | crate fn build_drop_trees(&mut self, should_abort: bool) { | |
1237 | if self.generator_kind.is_some() { | |
1238 | self.build_generator_drop_trees(should_abort); | |
1239 | } else { | |
1240 | Self::build_unwind_tree( | |
1241 | &mut self.cfg, | |
1242 | &mut self.scopes.unwind_drops, | |
1243 | self.fn_span, | |
1244 | should_abort, | |
1245 | &mut None, | |
1246 | ); | |
1247 | } | |
1248 | } | |
1249 | ||
1250 | fn build_generator_drop_trees(&mut self, should_abort: bool) { | |
1251 | // Build the drop tree for dropping the generator while it's suspended. | |
1252 | let drops = &mut self.scopes.generator_drops; | |
1253 | let cfg = &mut self.cfg; | |
1254 | let fn_span = self.fn_span; | |
1255 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1256 | drops.build_mir::<GeneratorDrop>(cfg, &mut blocks); | |
1257 | if let Some(root_block) = blocks[ROOT_NODE] { | |
1258 | cfg.terminate( | |
1259 | root_block, | |
1260 | SourceInfo::outermost(fn_span), | |
1261 | TerminatorKind::GeneratorDrop, | |
1262 | ); | |
1263 | } | |
1264 | ||
1265 | // Build the drop tree for unwinding in the normal control flow paths. | |
1266 | let resume_block = &mut None; | |
1267 | let unwind_drops = &mut self.scopes.unwind_drops; | |
1268 | Self::build_unwind_tree(cfg, unwind_drops, fn_span, should_abort, resume_block); | |
1269 | ||
1270 | // Build the drop tree for unwinding when dropping a suspended | |
1271 | // generator. | |
041b39d2 | 1272 | // |
29967ef6 XL |
1273 | // This is a different tree from the standard unwind paths here, to | |
1274 | // prevent drop elaboration from creating drop flags that would have | |
1275 | // to be captured by the generator. I'm not sure how important this | |
1276 | // optimization is, but it is here. | |
1277 | for (drop_idx, drop_data) in drops.drops.iter_enumerated() { | |
1278 | if let DropKind::Value = drop_data.0.kind { | |
1279 | debug_assert!(drop_data.1 < drops.drops.next_index()); | |
1280 | drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap())); | |
dc9dc135 | 1281 | } |
29967ef6 XL |
1282 | } |
1283 | Self::build_unwind_tree(cfg, drops, fn_span, should_abort, resume_block); | |
7453a54e | 1284 | } |
abe05a73 | 1285 | |
29967ef6 XL |
1286 | fn build_unwind_tree( |
1287 | cfg: &mut CFG<'tcx>, | |
1288 | drops: &mut DropTree, | |
1289 | fn_span: Span, | |
1290 | should_abort: bool, | |
1291 | resume_block: &mut Option<BasicBlock>, | |
1292 | ) { | |
1293 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1294 | blocks[ROOT_NODE] = *resume_block; | |
1295 | drops.build_mir::<Unwind>(cfg, &mut blocks); | |
1296 | if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) { | |
1297 | // `TerminatorKind::Abort` is used for `#[unwind(aborts)]` | |
1298 | // functions. | |
1299 | let terminator = | |
1300 | if should_abort { TerminatorKind::Abort } else { TerminatorKind::Resume }; | |
1301 | ||
1302 | cfg.terminate(resume, SourceInfo::outermost(fn_span), terminator); | |
1303 | ||
1304 | *resume_block = blocks[ROOT_NODE]; | |
1305 | } | |
1306 | } | |
1307 | } | |
041b39d2 | 1308 | |
29967ef6 XL |
1309 | // DropTreeBuilder implementations. |
1310 | ||
1311 | struct ExitScopes; | |
1312 | ||
1313 | impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes { | |
1314 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1315 | cfg.start_new_block() | |
1316 | } | |
1317 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1318 | cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to }; | |
1319 | } | |
7453a54e | 1320 | } |
dc9dc135 | 1321 | |
29967ef6 XL |
1322 | struct GeneratorDrop; |
1323 | ||
1324 | impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop { | |
1325 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1326 | cfg.start_new_block() | |
1327 | } | |
1328 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1329 | let term = cfg.block_data_mut(from).terminator_mut(); | |
1330 | if let TerminatorKind::Yield { ref mut drop, .. } = term.kind { | |
1331 | *drop = Some(to); | |
1332 | } else { | |
1333 | span_bug!( | |
1334 | term.source_info.span, | |
1335 | "cannot enter generator drop tree from {:?}", | |
1336 | term.kind | |
1337 | ) | |
1338 | } | |
1339 | } | |
1340 | } | |
1341 | ||
1342 | struct Unwind; | |
1343 | ||
1344 | impl<'tcx> DropTreeBuilder<'tcx> for Unwind { | |
1345 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1346 | cfg.start_new_cleanup_block() | |
1347 | } | |
1348 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1349 | let term = &mut cfg.block_data_mut(from).terminator_mut(); | |
1350 | match &mut term.kind { | |
1351 | TerminatorKind::Drop { unwind, .. } | |
1352 | | TerminatorKind::DropAndReplace { unwind, .. } | |
1353 | | TerminatorKind::FalseUnwind { unwind, .. } | |
1354 | | TerminatorKind::Call { cleanup: unwind, .. } | |
1355 | | TerminatorKind::Assert { cleanup: unwind, .. } => { | |
1356 | *unwind = Some(to); | |
1357 | } | |
1358 | TerminatorKind::Goto { .. } | |
1359 | | TerminatorKind::SwitchInt { .. } | |
1360 | | TerminatorKind::Resume | |
1361 | | TerminatorKind::Abort | |
1362 | | TerminatorKind::Return | |
1363 | | TerminatorKind::Unreachable | |
1364 | | TerminatorKind::Yield { .. } | |
1365 | | TerminatorKind::GeneratorDrop | |
1366 | | TerminatorKind::FalseEdge { .. } | |
fc512014 | 1367 | | TerminatorKind::InlineAsm { .. } => { |
29967ef6 XL |
1368 | span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind) |
1369 | } | |
1370 | } | |
dfeec247 | 1371 | } |
dc9dc135 | 1372 | } |