/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
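This scheduling behavior is observable from ordinary Rust code. The sketch below (the `Noisy` type and `drop_order` helper are invented for the example) shows that the `break` edge drops only the values scheduled so far, so `y` never runs a destructor:

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which values are dropped.
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Noisy(&'static str);
impl Drop for Noisy {
    fn drop(&mut self) {
        LOG.with(|log| log.borrow_mut().push(self.0));
    }
}

// Runs the loop from the scope example above and returns the drop order.
fn drop_order() -> Vec<&'static str> {
    let cond = true;
    loop {
        let _x = Noisy("x");
        if cond { break; } // exits scope S: only `x` is scheduled so far
        let _y = Noisy("y");
    }
    LOG.with(|log| log.borrow().clone())
}

fn main() {
    // `x` is dropped on the break edge; `y` was never scheduled.
    assert_eq!(drop_order(), vec!["x"]);
}
```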

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits are to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.
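The effect of the unwind drop tree is visible from user code: a panicking terminator diverts control onto the unwind path, which still runs the drops scheduled in the enclosing scopes. A sketch (the `Guard` type and `panics` function are invented for the example):

```rust
use std::panic;

struct Guard(&'static str);
impl Drop for Guard {
    fn drop(&mut self) {
        println!("unwind drop: {}", self.0);
    }
}

// The `panic!` terminator can unwind, so MIR lowering wires it into the
// unwind drop tree; `_g`'s scheduled drop still runs on that path.
fn panics() {
    let _g = Guard("g");
    panic!("boom"); // `_g` is dropped during unwinding
}

fn main() {
    let result = panic::catch_unwind(panics);
    assert!(result.is_err()); // the panic unwound through the scope
}
```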

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
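A sketch of what this machinery supports at the source level (the `labeled_break` helper is invented for the example): a `break` with a value writes to the breakable scope's `break_destination`, and a labeled break can exit through several nested scopes at once, running their scheduled drops on the way out:

```rust
// `break 'outer i * 10` targets the outer loop's breakable scope, so
// `break_scope` records the drops of every scope between the `break`
// and the outer loop, and the value lands in `break_destination`.
fn labeled_break() -> i32 {
    'outer: loop {
        let mut i = 0;
        loop {
            i += 1;
            if i == 3 {
                break 'outer i * 10; // exits both loops
            }
        }
    }
}

fn main() {
    assert_eq!(labeled_break(), 30);
}
```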

*/

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::thir::{Expr, LintLevel};
use rustc_data_structures::fx::FxHashMap;
use rustc_index::vec::IndexVec;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};

#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,
    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared).
    source_info: SourceInfo,

    /// Local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop.
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression).
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    struct DropIdx { .. }
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on generator drop paths (when a suspended generator is dropped)
/// * Drops on return and loop exit paths
///
/// Once no more nodes could be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_generator_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }

    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }

    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let block = if let Some(block) = blocks[drop_idx] {
                block
            } else {
                continue;
            };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    crate fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.tcx;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir_id,
            );
            let current_root = tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir_id);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<&Expr<'_, 'tcx>>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.expr_into_dest(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
            if self.tcx.sess.instrument_coverage() {
                // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
                // a Coverage code region can be generated, `continue` needs no `Assign`; but
                // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
                // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
                self.add_dummy_assignment(&span, block, source_info);
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);

        self.cfg.start_new_block().unit()
    }

    // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
    // statement.
    fn add_dummy_assignment(&mut self, span: &Span, block: BasicBlock, source_info: SourceInfo) {
        let local_decl = LocalDecl::new(self.tcx.mk_unit(), *span).internal();
        let temp_place = Place::from(self.local_decls.push(local_decl));
        self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
    }

    crate fn exit_top_scope(
        &mut self,
        mut block: BasicBlock,
        target: BasicBlock,
        source_info: SourceInfo,
    ) {
        block = self.leave_top_scope(block);
        self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
        let is_generator = self.generator_kind.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }

    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }
724 | ||
94b46f34 | 725 | /// Given a span and the current source scope, make a SourceInfo. |
dfeec247 XL |
726 | crate fn source_info(&self, span: Span) -> SourceInfo { |
727 | SourceInfo { span, scope: self.source_scope } | |
e9174d1e SL |
728 | } |
729 | ||
dc9dc135 XL |
730 | // Finding scopes |
731 | // ============== | |
7cac9316 XL |
732 | /// Returns the scope that we should use as the lifetime of an |
733 | /// operand. Basically, an operand must live until it is consumed. | |
734 | /// This is similar to, but not quite the same as, the temporary | |
735 | /// scope (which can be larger or smaller). | |
736 | /// | |
737 | /// Consider: | |
738 | /// | |
739 | /// let x = foo(bar(X, Y)); | |
740 | /// | |
741 | /// We wish to pop the storage for X and Y after `bar()` is | |
742 | /// called, not after the whole `let` is completed. | |
743 | /// | |
744 | /// As another example, if the second argument diverges: | |
745 | /// | |
746 | /// foo(Box::new(2), panic!()) | |
747 | /// | |
748 | /// We would allocate the box but then free it on the unwinding | |
749 | /// path; we would also emit a free on the 'success' path from | |
750 | /// panic, but that will turn out to be removed as dead-code. | |
fc512014 XL |
751 | crate fn local_scope(&self) -> region::Scope { |
752 | self.scopes.topmost() | |
7cac9316 XL |
753 | } |
754 | ||
dc9dc135 XL |
755 | // Scheduling drops |
756 | // ================ | |
dfeec247 | 757 | crate fn schedule_drop_storage_and_value( |
8faf50e0 XL |
758 | &mut self, |
759 | span: Span, | |
760 | region_scope: region::Scope, | |
dc9dc135 | 761 | local: Local, |
8faf50e0 | 762 | ) { |
e74abb32 XL |
763 | self.schedule_drop(span, region_scope, local, DropKind::Storage); |
764 | self.schedule_drop(span, region_scope, local, DropKind::Value); | |
8faf50e0 XL |
765 | } |
766 | ||
29967ef6 | 767 | /// Indicates that `local` should be dropped on exit from `region_scope`. |
8faf50e0 | 768 | /// |
29967ef6 XL |
769 | /// When called with `DropKind::Storage`, `local` shouldn't be the return |
770 | /// place or a function parameter. |
dfeec247 | 771 | crate fn schedule_drop( |
8faf50e0 XL |
772 | &mut self, |
773 | span: Span, | |
774 | region_scope: region::Scope, | |
dc9dc135 | 775 | local: Local, |
8faf50e0 XL |
776 | drop_kind: DropKind, |
777 | ) { | |
e74abb32 XL |
778 | let needs_drop = match drop_kind { |
779 | DropKind::Value => { | |
6a06907d | 780 | if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) { |
dfeec247 XL |
781 | return; |
782 | } | |
e74abb32 | 783 | true |
dfeec247 | 784 | } |
8faf50e0 | 785 | DropKind::Storage => { |
dc9dc135 XL |
786 | if local.index() <= self.arg_count { |
787 | span_bug!( | |
dfeec247 XL |
788 | span, |
789 | "`schedule_drop` called with local {:?} and arg_count {}", | |
dc9dc135 XL |
790 | local, |
791 | self.arg_count, | |
792 | ) | |
8faf50e0 | 793 | } |
e74abb32 | 794 | false |
5bcae85e | 795 | } |
e74abb32 | 796 | }; |
5bcae85e | 797 | |
29967ef6 XL |
798 | // When building drops, we try to cache chains of drops to reduce the |
799 | // number of `DropTree::add_drop` calls. This, however, means that | |
800 | // whenever we add a drop into a scope which already had some entries | |
801 | // in the drop tree built (and thus, cached) for it, we must invalidate | |
802 | // all caches which might branch into the scope which had a drop just | |
803 | // added to it. This is necessary, because otherwise some other code | |
804 | // might use the cache to branch into already built chain of drops, | |
805 | // essentially ignoring the newly added drop. | |
806 | // | |
807 | // For example consider there’s two scopes with a drop in each. These | |
808 | // are built and thus the caches are filled: | |
809 | // | |
810 | // +--------------------------------------------------------+ | |
811 | // | +---------------------------------+ | | |
812 | // | | +--------+ +-------------+ | +---------------+ | | |
813 | // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | | | |
814 | // | | +--------+ +-------------+ | +---------------+ | | |
815 | // | +------------|outer_scope cache|--+ | | |
816 | // +------------------------------|middle_scope cache|------+ | |
817 | // | |
818 | // Now, a new, inner-most scope is added along with a new drop into | |
819 | // both inner-most and outer-most scopes: | |
820 | // | |
821 | // +------------------------------------------------------------+ | |
822 | // | +----------------------------------+ | | |
823 | // | | +--------+ +-------------+ | +---------------+ | +-------------+ | |
824 | // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) | | |
825 | // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+ | |
826 | // | | +-+ +-------------+ | | | |
827 | // | +---|invalid outer_scope cache|----+ | | |
828 | // +---------------------|invalid middle_scope cache|-----------+ |
829 | // | |
830 | // If, when adding `drop(new)` we do not invalidate the cached blocks for both | |
831 | // outer_scope and middle_scope, then, when building drops for the inner (right-most) | |
832 | // scope, the old, cached blocks, without `drop(new)` will get used, producing the | |
833 | // wrong results. | |
834 | // | |
835 | // Note that this code iterates scopes from the inner-most to the outer-most, | |
836 | // invalidating caches of each scope visited. This way bare minimum of the | |
837 | // caches gets invalidated. i.e., if a new drop is added into the middle scope, the | |
838 | // cache of outer scope stays intact. | |
839 | // | |
840 | // Since we only cache drops for the unwind path and the generator drop | |
841 | // path, we only need to invalidate the cache for drops that happen on | |
842 | // the unwind or generator drop paths. This means that for | |
843 | // non-generators we don't need to invalidate caches for `DropKind::Storage`. | |
844 | let invalidate_caches = needs_drop || self.generator_kind.is_some(); | |
845 | for scope in self.scopes.scopes.iter_mut().rev() { | |
846 | if invalidate_caches { | |
847 | scope.invalidate_cache(); | |
848 | } | |
849 | ||
850 | if scope.region_scope == region_scope { | |
6a06907d | 851 | let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree); |
2c00a5a8 | 852 | // Attribute scope exit drops to scope's closing brace. |
6a06907d | 853 | let scope_end = self.tcx.sess.source_map().end_point(region_scope_span); |
2c00a5a8 | 854 | |
7453a54e | 855 | scope.drops.push(DropData { |
29967ef6 | 856 | source_info: SourceInfo { span: scope_end, scope: scope.source_scope }, |
dc9dc135 XL |
857 | local, |
858 | kind: drop_kind, | |
7453a54e | 859 | }); |
29967ef6 | 860 | |
7453a54e | 861 | return; |
7453a54e SL |
862 | } |
863 | } | |
29967ef6 | 864 | |
dc9dc135 | 865 | span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local); |
7453a54e SL |
866 | } |
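The long comment in `schedule_drop` describes the cache-invalidation walk in prose. A minimal standalone sketch of the same walk may help; `ToyScope` and `schedule` below are illustrative stand-ins, not the real builder types:

```rust
// Toy model of the invalidation walk in `schedule_drop`: scopes are
// visited innermost-first, every visited cache is cleared, and the walk
// stops at the scope that receives the new drop.
#[derive(Debug, PartialEq)]
struct ToyScope {
    id: u32,
    cached_unwind: Option<u32>, // stand-in for the cached drop-tree node
    drops: Vec<&'static str>,
}

fn schedule(scopes: &mut [ToyScope], target: u32, drop: &'static str) {
    for scope in scopes.iter_mut().rev() {
        scope.cached_unwind = None; // invalidate cache of every scope visited
        if scope.id == target {
            scope.drops.push(drop);
            return;
        }
    }
    panic!("target scope not in scope stack");
}

fn main() {
    // Outermost scope first, as in `self.scopes.scopes`.
    let mut scopes = vec![
        ToyScope { id: 0, cached_unwind: Some(10), drops: vec![] },
        ToyScope { id: 1, cached_unwind: Some(11), drops: vec![] },
        ToyScope { id: 2, cached_unwind: Some(12), drops: vec![] },
    ];
    // Scheduling a drop in the middle scope invalidates the middle and
    // innermost caches but leaves the outermost cache intact.
    schedule(&mut scopes, 1, "x");
    assert_eq!(scopes[0].cached_unwind, Some(10));
    assert_eq!(scopes[1].cached_unwind, None);
    assert_eq!(scopes[2].cached_unwind, None);
    assert_eq!(scopes[1].drops, vec!["x"]);
    println!("ok");
}
```

As in the ASCII diagram, only the bare minimum of caches is cleared: the outermost cache survives a drop added to an inner scope.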
867 | ||
e1599b0c XL |
868 | /// Indicates that the "local operand" stored in `local` is |
869 | /// *moved* at some point during execution (see `local_scope` for | |
870 | /// more information about what a "local operand" is -- in short, | |
871 | /// it's an intermediate operand created as part of preparing some | |
872 | /// MIR instruction). We use this information to suppress | |
873 | /// redundant drops on the non-unwind paths. This results in less | |
874 | /// MIR, but also avoids spurious borrow check errors | |
875 | /// (cf. #64391). |
876 | /// | |
877 | /// Example: when compiling the call to `foo` here: | |
878 | /// | |
879 | /// ```rust | |
880 | /// foo(bar(), ...) | |
881 | /// ``` | |
882 | /// | |
883 | /// we would evaluate `bar()` to an operand `_X`. We would also | |
884 | /// schedule `_X` to be dropped when the expression scope for | |
885 | /// `foo(bar())` is exited. This is relevant, for example, if the | |
886 | /// later arguments should unwind (it would ensure that `_X` gets | |
887 | /// dropped). However, if no unwind occurs, then `_X` will be | |
888 | /// unconditionally consumed by the `call`: | |
889 | /// | |
890 | /// ``` | |
891 | /// bb { | |
892 | /// ... | |
893 | /// _R = CALL(foo, _X, ...) | |
894 | /// } | |
895 | /// ``` | |
896 | /// | |
897 | /// However, `_X` is still registered to be dropped, and so if we | |
898 | /// do nothing else, we would generate a `DROP(_X)` that occurs | |
899 | /// after the call. This will later be optimized out by the | |
900 | /// drop-elaboration code, but in the meantime it can lead to |
901 | /// spurious borrow-check errors -- the problem, ironically, is | |
902 | /// not the `DROP(_X)` itself, but the (spurious) unwind pathways | |
903 | /// that it creates. See #64391 for an example. | |
dfeec247 | 904 | crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) { |
fc512014 XL |
905 | let local_scope = self.local_scope(); |
906 | let scope = self.scopes.scopes.last_mut().unwrap(); | |
e1599b0c | 907 | |
5869c6ff | 908 | assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!"); |
e1599b0c XL |
909 | |
910 | // look for moves of a local variable, like `MOVE(_X)` | |
911 | let locals_moved = operands.iter().flat_map(|operand| match operand { | |
912 | Operand::Copy(_) | Operand::Constant(_) => None, | |
913 | Operand::Move(place) => place.as_local(), | |
914 | }); | |
915 | ||
916 | for local in locals_moved { | |
917 | // check if we have a Drop for this operand and -- if so | |
918 | // -- add it to the list of moved operands. Note that this | |
919 | // local might not have been an operand created for this | |
920 | // call, it could come from other places too. | |
921 | if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) { | |
922 | scope.moved_locals.push(local); | |
923 | } | |
924 | } | |
925 | } | |
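The move-detection logic above can be modeled outside the builder. This is a hedged sketch with toy `Operand` and `moved_locals` stand-ins; the real code works on MIR `Operand`s and the scope's scheduled `DropData`:

```rust
// Toy version of `record_operands_moved`: keep only `Move`s of plain
// locals, then keep only those that actually have a scheduled value-drop,
// so the non-unwind drop can later be skipped.
enum Operand {
    Copy(u32),
    Move(u32),
    Constant,
}

fn moved_locals(operands: &[Operand], scheduled_value_drops: &[u32]) -> Vec<u32> {
    operands
        .iter()
        .filter_map(|op| match op {
            Operand::Move(local) => Some(*local),
            Operand::Copy(_) | Operand::Constant => None,
        })
        .filter(|local| scheduled_value_drops.contains(local))
        .collect()
}

fn main() {
    // _1 is moved and has a pending drop; _2 is only copied; _3 is moved
    // but nothing was scheduled for it.
    let ops = [Operand::Move(1), Operand::Copy(2), Operand::Move(3)];
    assert_eq!(moved_locals(&ops, &[1, 2]), vec![1]);
    println!("ok");
}
```

Note that, like the real code, the sketch ignores `Copy` operands and constants entirely: only a `Move` consumes its local, so only moves can make a scheduled drop redundant.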
926 | ||
7453a54e SL |
927 | // Other |
928 | // ===== | |
dc9dc135 XL |
929 | /// Branch based on a boolean condition. |
930 | /// | |
931 | /// This is a special case because the temporary for the condition needs to | |
932 | /// be dropped on both the true and the false arm. | |
dfeec247 | 933 | crate fn test_bool( |
dc9dc135 XL |
934 | &mut self, |
935 | mut block: BasicBlock, | |
6a06907d | 936 | condition: &Expr<'_, 'tcx>, |
dc9dc135 XL |
937 | source_info: SourceInfo, |
938 | ) -> (BasicBlock, BasicBlock) { | |
939 | let cond = unpack!(block = self.as_local_operand(block, condition)); | |
940 | let true_block = self.cfg.start_new_block(); | |
941 | let false_block = self.cfg.start_new_block(); | |
6a06907d | 942 | let term = TerminatorKind::if_(self.tcx, cond.clone(), true_block, false_block); |
dc9dc135 XL |
943 | self.cfg.terminate(block, source_info, term); |
944 | ||
945 | match cond { | |
946 | // Don't try to drop a constant | |
947 | Operand::Constant(_) => (), | |
dfeec247 | 948 | Operand::Copy(place) | Operand::Move(place) => { |
e74abb32 XL |
949 | if let Some(cond_temp) = place.as_local() { |
950 | // Manually drop the condition on both branches. | |
951 | let top_scope = self.scopes.scopes.last_mut().unwrap(); | |
952 | let top_drop_data = top_scope.drops.pop().unwrap(); | |
29967ef6 XL |
953 | if self.generator_kind.is_some() { |
954 | top_scope.invalidate_cache(); | |
955 | } | |
e74abb32 XL |
956 | |
957 | match top_drop_data.kind { | |
958 | DropKind::Value { .. } => { | |
959 | bug!("Drop scheduled on top of condition variable") | |
960 | } | |
961 | DropKind::Storage => { | |
29967ef6 | 962 | let source_info = top_drop_data.source_info; |
e74abb32 XL |
963 | let local = top_drop_data.local; |
964 | assert_eq!(local, cond_temp, "Drop scheduled on top of condition"); | |
965 | self.cfg.push( | |
966 | true_block, | |
dfeec247 | 967 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
968 | ); |
969 | self.cfg.push( | |
970 | false_block, | |
dfeec247 | 971 | Statement { source_info, kind: StatementKind::StorageDead(local) }, |
e74abb32 XL |
972 | ); |
973 | } | |
dc9dc135 | 974 | } |
e74abb32 XL |
975 | } else { |
976 | bug!("Expected as_local_operand to produce a temporary"); | |
977 | } | |
dc9dc135 | 978 | } |
dc9dc135 XL |
979 | } |
980 | ||
981 | (true_block, false_block) | |
982 | } | |
983 | ||
29967ef6 XL |
984 | /// Returns the [DropIdx] for the innermost drop if the function unwound at |
985 | /// this point. The `DropIdx` will be created if it doesn't already exist. | |
986 | fn diverge_cleanup(&mut self) -> DropIdx { | |
987 | let is_generator = self.generator_kind.is_some(); | |
988 | let (uncached_scope, mut cached_drop) = self | |
989 | .scopes | |
990 | .scopes | |
991 | .iter() | |
992 | .enumerate() | |
993 | .rev() | |
994 | .find_map(|(scope_idx, scope)| { | |
995 | scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
996 | }) | |
997 | .unwrap_or((0, ROOT_NODE)); | |
998 | ||
999 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
1000 | for drop in &scope.drops { | |
1001 | if is_generator || drop.kind == DropKind::Value { | |
1002 | cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop); | |
1003 | } | |
1004 | } | |
1005 | scope.cached_unwind_block = Some(cached_drop); | |
7453a54e | 1006 | } |
29967ef6 XL |
1007 | |
1008 | cached_drop | |
ff7c6d11 XL |
1009 | } |
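The cache-reuse pattern in `diverge_cleanup` — find the innermost scope that already has a cached unwind node, then extend the chain only for the scopes inside it — can be sketched with toy types; `ToyScope` and the `(local, parent)` vector standing in for the `DropTree` are assumptions for illustration:

```rust
// Reduced model of `diverge_cleanup`'s cache extension. The unwind drop
// chain is a vector of (local, parent-index) nodes; index 0 is an
// implicit root, so real node indices are 1-based.
struct ToyScope {
    drops: Vec<u32>,
    cached_unwind: Option<usize>,
}

fn add_drop(chain: &mut Vec<(u32, usize)>, local: u32, parent: usize) -> usize {
    chain.push((local, parent));
    chain.len() // index of the node just added
}

fn diverge_cleanup(scopes: &mut [ToyScope], chain: &mut Vec<(u32, usize)>) -> usize {
    // Resume from the innermost scope that already has a cached node.
    let (uncached, mut cached) = scopes
        .iter()
        .enumerate()
        .rev()
        .find_map(|(i, s)| s.cached_unwind.map(|c| (i + 1, c)))
        .unwrap_or((0, 0));
    for scope in &mut scopes[uncached..] {
        for &local in &scope.drops {
            cached = add_drop(chain, local, cached);
        }
        scope.cached_unwind = Some(cached);
    }
    cached
}

fn main() {
    let mut scopes = vec![
        ToyScope { drops: vec![1], cached_unwind: None },
        ToyScope { drops: vec![2, 3], cached_unwind: None },
    ];
    let mut chain = Vec::new();
    assert_eq!(diverge_cleanup(&mut scopes, &mut chain), 3);
    // A second call finds the warm cache and adds no new nodes.
    assert_eq!(diverge_cleanup(&mut scopes, &mut chain), 3);
    assert_eq!(chain.len(), 3);
    println!("ok");
}
```

This is why `schedule_drop` must invalidate `cached_unwind_block` when it mutates a scope: a warm cache short-circuits the walk entirely.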
1010 | ||
29967ef6 XL |
1011 | /// Prepares to create a path that performs all required cleanup for a |
1012 | /// terminator that can unwind at the given basic block. | |
1013 | /// | |
1014 | /// This path terminates in Resume. The path isn't created until after all | |
1015 | /// of the non-unwind paths in this item have been lowered. | |
1016 | crate fn diverge_from(&mut self, start: BasicBlock) { | |
1017 | debug_assert!( | |
1018 | matches!( | |
1019 | self.cfg.block_data(start).terminator().kind, | |
1020 | TerminatorKind::Assert { .. } | |
5869c6ff XL |
1021 | | TerminatorKind::Call { .. } |
1022 | | TerminatorKind::DropAndReplace { .. } | |
1023 | | TerminatorKind::FalseUnwind { .. } | |
29967ef6 XL |
1024 | ), |
1025 | "diverge_from called on block with terminator that cannot unwind." | |
1026 | ); | |
a1dfa0c6 | 1027 | |
29967ef6 XL |
1028 | let next_drop = self.diverge_cleanup(); |
1029 | self.scopes.unwind_drops.add_entry(start, next_drop); | |
1030 | } | |
1031 | ||
1032 | /// Sets up a path that performs all required cleanup for dropping a | |
1033 | /// generator, starting from the given block that ends in | |
1034 | /// [TerminatorKind::Yield]. | |
1035 | /// | |
1036 | /// This path terminates in GeneratorDrop. | |
1037 | crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) { | |
1038 | debug_assert!( | |
1039 | matches!( | |
1040 | self.cfg.block_data(yield_block).terminator().kind, | |
1041 | TerminatorKind::Yield { .. } | |
1042 | ), | |
1043 | "generator_drop_cleanup called on block with non-yield terminator." | |
1044 | ); | |
1045 | let (uncached_scope, mut cached_drop) = self | |
1046 | .scopes | |
1047 | .scopes | |
1048 | .iter() | |
1049 | .enumerate() | |
1050 | .rev() | |
1051 | .find_map(|(scope_idx, scope)| { | |
1052 | scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block)) | |
1053 | }) | |
1054 | .unwrap_or((0, ROOT_NODE)); | |
1055 | ||
1056 | for scope in &mut self.scopes.scopes[uncached_scope..] { | |
1057 | for drop in &scope.drops { | |
1058 | cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop); | |
1059 | } | |
1060 | scope.cached_generator_drop_block = Some(cached_drop); | |
7453a54e | 1061 | } |
ff7c6d11 | 1062 | |
29967ef6 | 1063 | self.scopes.generator_drops.add_entry(yield_block, cached_drop); |
e9174d1e SL |
1064 | } |
1065 | ||
3157f602 | 1066 | /// Utility function for *non*-scope code to build its own drops |
dfeec247 XL |
1067 | crate fn build_drop_and_replace( |
1068 | &mut self, | |
1069 | block: BasicBlock, | |
1070 | span: Span, | |
f035d41b | 1071 | place: Place<'tcx>, |
dfeec247 XL |
1072 | value: Operand<'tcx>, |
1073 | ) -> BlockAnd<()> { | |
3157f602 XL |
1074 | let source_info = self.source_info(span); |
1075 | let next_target = self.cfg.start_new_block(); | |
29967ef6 | 1076 | |
dfeec247 XL |
1077 | self.cfg.terminate( |
1078 | block, | |
1079 | source_info, | |
29967ef6 | 1080 | TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None }, |
dfeec247 | 1081 | ); |
29967ef6 XL |
1082 | self.diverge_from(block); |
1083 | ||
3157f602 | 1084 | next_target.unit() |
e9174d1e SL |
1085 | } |
1086 | ||
29967ef6 | 1087 | /// Creates an `Assert` terminator and returns the success block. |
3157f602 XL |
1088 | /// If the boolean condition operand is not the expected value, |
1089 | /// a runtime panic will be caused with the given message. | |
dfeec247 XL |
1090 | crate fn assert( |
1091 | &mut self, | |
1092 | block: BasicBlock, | |
1093 | cond: Operand<'tcx>, | |
1094 | expected: bool, | |
1095 | msg: AssertMessage<'tcx>, | |
1096 | span: Span, | |
1097 | ) -> BasicBlock { | |
3157f602 | 1098 | let source_info = self.source_info(span); |
3157f602 | 1099 | let success_block = self.cfg.start_new_block(); |
e9174d1e | 1100 | |
dfeec247 XL |
1101 | self.cfg.terminate( |
1102 | block, | |
1103 | source_info, | |
29967ef6 | 1104 | TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None }, |
dfeec247 | 1105 | ); |
29967ef6 | 1106 | self.diverge_from(block); |
e9174d1e | 1107 | |
3157f602 | 1108 | success_block |
9cc50fc6 | 1109 | } |
dc9dc135 | 1110 | |
dc9dc135 XL |
1111 | /// Unschedules any drops in the top scope. |
1112 | /// | |
1113 | /// This is only needed for `match` arm scopes, because they have one | |
1114 | /// entrance per pattern, but only one exit. | |
29967ef6 | 1115 | crate fn clear_top_scope(&mut self, region_scope: region::Scope) { |
dc9dc135 XL |
1116 | let top_scope = self.scopes.scopes.last_mut().unwrap(); |
1117 | ||
1118 | assert_eq!(top_scope.region_scope, region_scope); | |
1119 | ||
1120 | top_scope.drops.clear(); | |
29967ef6 | 1121 | top_scope.invalidate_cache(); |
dc9dc135 | 1122 | } |
7453a54e SL |
1123 | } |
1124 | ||
29967ef6 | 1125 | /// Builds drops for `pop_scope` and `leave_top_scope`. |
a1dfa0c6 XL |
1126 | fn build_scope_drops<'tcx>( |
1127 | cfg: &mut CFG<'tcx>, | |
29967ef6 | 1128 | unwind_drops: &mut DropTree, |
dc9dc135 | 1129 | scope: &Scope, |
a1dfa0c6 | 1130 | mut block: BasicBlock, |
29967ef6 XL |
1131 | mut unwind_to: DropIdx, |
1132 | storage_dead_on_unwind: bool, | |
a1dfa0c6 | 1133 | arg_count: usize, |
a1dfa0c6 | 1134 | ) -> BlockAnd<()> { |
dc9dc135 | 1135 | debug!("build_scope_drops({:?} -> {:?})", block, scope); |
a1dfa0c6 XL |
1136 | |
1137 | // Build up the drops in evaluation order. The end result will | |
1138 | // look like: | |
1139 | // | |
1140 | // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]] | |
1141 | // | | | | |
1142 | // : | | | |
1143 | // V V | |
1144 | // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to] | |
1145 | // | |
1146 | // The horizontal arrows represent the execution path when the drops return | |
1147 | // successfully. The downwards arrows represent the execution path when the | |
1148 | // drops panic (panicking while unwinding will abort, so there's no need for | |
dc9dc135 | 1149 | // another set of arrows). |
a1dfa0c6 | 1150 | // |
dc9dc135 XL |
1151 | // For generators, we unwind from a drop on a local to its StorageDead |
1152 | // statement. For other functions we don't worry about StorageDead. The | |
1153 | // drops for the unwind path should have already been generated by | |
1154 | // `diverge_cleanup`. |
a1dfa0c6 | 1155 | |
29967ef6 XL |
1156 | for drop_data in scope.drops.iter().rev() { |
1157 | let source_info = drop_data.source_info; | |
dc9dc135 | 1158 | let local = drop_data.local; |
e1599b0c | 1159 | |
5bcae85e | 1160 | match drop_data.kind { |
dc9dc135 | 1161 | DropKind::Value => { |
29967ef6 XL |
1162 | // `unwind_to` should drop the value that we're about to |
1163 | // schedule. If dropping this value panics, then we continue | |
1164 | // with the *next* value on the unwind path. | |
1165 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1166 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1167 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1168 | ||
e1599b0c XL |
1169 | // If the operand has been moved, and we are not on an unwind |
1170 | // path, then don't generate the drop. (We only take this into | |
1171 | // account for non-unwind paths so as not to disturb the | |
1172 | // caching mechanism.) | |
29967ef6 | 1173 | if scope.moved_locals.iter().any(|&o| o == local) { |
e1599b0c XL |
1174 | continue; |
1175 | } | |
1176 | ||
29967ef6 | 1177 | unwind_drops.add_entry(block, unwind_to); |
abe05a73 | 1178 | |
3b2f2976 | 1179 | let next = cfg.start_new_block(); |
dfeec247 XL |
1180 | cfg.terminate( |
1181 | block, | |
1182 | source_info, | |
29967ef6 | 1183 | TerminatorKind::Drop { place: local.into(), target: next, unwind: None }, |
dfeec247 | 1184 | ); |
3b2f2976 XL |
1185 | block = next; |
1186 | } | |
8faf50e0 | 1187 | DropKind::Storage => { |
29967ef6 XL |
1188 | if storage_dead_on_unwind { |
1189 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local); | |
1190 | debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind); | |
1191 | unwind_to = unwind_drops.drops[unwind_to].1; | |
1192 | } | |
8faf50e0 | 1193 | // Only temps and vars need their storage dead. |
dc9dc135 | 1194 | assert!(local.index() > arg_count); |
dfeec247 | 1195 | cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) }); |
5bcae85e SL |
1196 | } |
1197 | } | |
7453a54e SL |
1198 | } |
1199 | block.unit() | |
1200 | } | |
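`build_scope_drops` emits drops in reverse of scheduling order and skips value drops for moved locals. A reduced model of just that filtering follows; `Kind` and `build_drops` are toy stand-ins, not the real CFG-building API:

```rust
// Toy straight-line drop emission: walk the scheduled drops in reverse,
// skip value drops whose local was already consumed by a move, and emit
// StorageDead markers unconditionally.
#[derive(Clone, Copy, PartialEq)]
enum Kind {
    Value,
    Storage,
}

fn build_drops(drops: &[(u32, Kind)], moved: &[u32]) -> Vec<String> {
    let mut out = Vec::new();
    for &(local, kind) in drops.iter().rev() {
        match kind {
            // The operand was moved into a call, so the callee drops it.
            Kind::Value if moved.contains(&local) => continue,
            Kind::Value => out.push(format!("drop(_{})", local)),
            Kind::Storage => out.push(format!("StorageDead(_{})", local)),
        }
    }
    out
}

fn main() {
    // _1 and _2 were scheduled in evaluation order; _2 was moved away.
    let drops = [(1, Kind::Value), (2, Kind::Value), (2, Kind::Storage)];
    assert_eq!(build_drops(&drops, &[2]), vec!["StorageDead(_2)", "drop(_1)"]);
    println!("ok");
}
```

As the comment above notes, the moved-local skip applies only on the non-unwind path; the unwind chain is left alone so its caches stay valid.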
1201 | ||
29967ef6 XL |
1202 | impl<'a, 'tcx: 'a> Builder<'a, 'tcx> { |
1203 | /// Build a drop tree for a breakable scope. | |
1204 | /// | |
1205 | /// If `continue_block` is `Some`, then the tree is for `continue` inside a | |
1206 | /// loop. Otherwise this is for `break` or `return`. | |
1207 | fn build_exit_tree( | |
1208 | &mut self, | |
1209 | mut drops: DropTree, | |
1210 | continue_block: Option<BasicBlock>, | |
1211 | ) -> Option<BlockAnd<()>> { | |
1212 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1213 | blocks[ROOT_NODE] = continue_block; | |
1214 | ||
1215 | drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks); | |
1216 | ||
1217 | // Link the exit drop tree to unwind drop tree. | |
1218 | if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) { | |
1219 | let unwind_target = self.diverge_cleanup(); | |
1220 | let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1); | |
1221 | for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) { | |
1222 | match drop_data.0.kind { | |
1223 | DropKind::Storage => { | |
1224 | if self.generator_kind.is_some() { | |
1225 | let unwind_drop = self | |
1226 | .scopes | |
1227 | .unwind_drops | |
1228 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1229 | unwind_indices.push(unwind_drop); | |
1230 | } else { | |
1231 | unwind_indices.push(unwind_indices[drop_data.1]); | |
1232 | } | |
1233 | } | |
1234 | DropKind::Value => { | |
1235 | let unwind_drop = self | |
1236 | .scopes | |
1237 | .unwind_drops | |
1238 | .add_drop(drop_data.0, unwind_indices[drop_data.1]); | |
1239 | self.scopes | |
1240 | .unwind_drops | |
1241 | .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]); | |
1242 | unwind_indices.push(unwind_drop); | |
1243 | } | |
1244 | } | |
dc9dc135 | 1245 | } |
dc9dc135 | 1246 | } |
29967ef6 | 1247 | blocks[ROOT_NODE].map(BasicBlock::unit) |
dc9dc135 | 1248 | } |
dc9dc135 | 1249 | |
29967ef6 XL |
1250 | /// Build the unwind and generator drop trees. |
1251 | crate fn build_drop_trees(&mut self, should_abort: bool) { | |
1252 | if self.generator_kind.is_some() { | |
1253 | self.build_generator_drop_trees(should_abort); | |
1254 | } else { | |
1255 | Self::build_unwind_tree( | |
1256 | &mut self.cfg, | |
1257 | &mut self.scopes.unwind_drops, | |
1258 | self.fn_span, | |
1259 | should_abort, | |
1260 | &mut None, | |
1261 | ); | |
1262 | } | |
1263 | } | |
1264 | ||
1265 | fn build_generator_drop_trees(&mut self, should_abort: bool) { | |
1266 | // Build the drop tree for dropping the generator while it's suspended. | |
1267 | let drops = &mut self.scopes.generator_drops; | |
1268 | let cfg = &mut self.cfg; | |
1269 | let fn_span = self.fn_span; | |
1270 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1271 | drops.build_mir::<GeneratorDrop>(cfg, &mut blocks); | |
1272 | if let Some(root_block) = blocks[ROOT_NODE] { | |
1273 | cfg.terminate( | |
1274 | root_block, | |
1275 | SourceInfo::outermost(fn_span), | |
1276 | TerminatorKind::GeneratorDrop, | |
1277 | ); | |
1278 | } | |
1279 | ||
1280 | // Build the drop tree for unwinding in the normal control flow paths. | |
1281 | let resume_block = &mut None; | |
1282 | let unwind_drops = &mut self.scopes.unwind_drops; | |
1283 | Self::build_unwind_tree(cfg, unwind_drops, fn_span, should_abort, resume_block); | |
1284 | ||
1285 | // Build the drop tree for unwinding when dropping a suspended | |
1286 | // generator. | |
041b39d2 | 1287 | // |
29967ef6 XL |
1288 | // This is a different tree to the standard unwind paths here to |
1289 | // prevent drop elaboration from creating drop flags that would have | |
1290 | // to be captured by the generator. I'm not sure how important this | |
1291 | // optimization is, but it is here. | |
1292 | for (drop_idx, drop_data) in drops.drops.iter_enumerated() { | |
1293 | if let DropKind::Value = drop_data.0.kind { | |
1294 | debug_assert!(drop_data.1 < drops.drops.next_index()); | |
1295 | drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap())); | |
dc9dc135 | 1296 | } |
29967ef6 XL |
1297 | } |
1298 | Self::build_unwind_tree(cfg, drops, fn_span, should_abort, resume_block); | |
7453a54e | 1299 | } |
abe05a73 | 1300 | |
29967ef6 XL |
1301 | fn build_unwind_tree( |
1302 | cfg: &mut CFG<'tcx>, | |
1303 | drops: &mut DropTree, | |
1304 | fn_span: Span, | |
1305 | should_abort: bool, | |
1306 | resume_block: &mut Option<BasicBlock>, | |
1307 | ) { | |
1308 | let mut blocks = IndexVec::from_elem(None, &drops.drops); | |
1309 | blocks[ROOT_NODE] = *resume_block; | |
1310 | drops.build_mir::<Unwind>(cfg, &mut blocks); | |
1311 | if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) { | |
1312 | // `TerminatorKind::Abort` is used for `#[unwind(aborts)]` | |
1313 | // functions. | |
1314 | let terminator = | |
1315 | if should_abort { TerminatorKind::Abort } else { TerminatorKind::Resume }; | |
1316 | ||
1317 | cfg.terminate(resume, SourceInfo::outermost(fn_span), terminator); | |
1318 | ||
1319 | *resume_block = blocks[ROOT_NODE]; | |
1320 | } | |
1321 | } | |
1322 | } | |
041b39d2 | 1323 | |
29967ef6 XL |
1324 | // DropTreeBuilder implementations. |
1325 | ||
1326 | struct ExitScopes; | |
1327 | ||
1328 | impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes { | |
1329 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1330 | cfg.start_new_block() | |
1331 | } | |
1332 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1333 | cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to }; | |
1334 | } | |
7453a54e | 1335 | } |
dc9dc135 | 1336 | |
29967ef6 XL |
1337 | struct GeneratorDrop; |
1338 | ||
1339 | impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop { | |
1340 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1341 | cfg.start_new_block() | |
1342 | } | |
1343 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1344 | let term = cfg.block_data_mut(from).terminator_mut(); | |
1345 | if let TerminatorKind::Yield { ref mut drop, .. } = term.kind { | |
1346 | *drop = Some(to); | |
1347 | } else { | |
1348 | span_bug!( | |
1349 | term.source_info.span, | |
1350 | "cannot enter generator drop tree from {:?}", | |
1351 | term.kind | |
1352 | ) | |
1353 | } | |
1354 | } | |
1355 | } | |
1356 | ||
1357 | struct Unwind; | |
1358 | ||
1359 | impl<'tcx> DropTreeBuilder<'tcx> for Unwind { | |
1360 | fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock { | |
1361 | cfg.start_new_cleanup_block() | |
1362 | } | |
1363 | fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) { | |
1364 | let term = &mut cfg.block_data_mut(from).terminator_mut(); | |
1365 | match &mut term.kind { | |
1366 | TerminatorKind::Drop { unwind, .. } | |
1367 | | TerminatorKind::DropAndReplace { unwind, .. } | |
1368 | | TerminatorKind::FalseUnwind { unwind, .. } | |
1369 | | TerminatorKind::Call { cleanup: unwind, .. } | |
1370 | | TerminatorKind::Assert { cleanup: unwind, .. } => { | |
1371 | *unwind = Some(to); | |
1372 | } | |
1373 | TerminatorKind::Goto { .. } | |
1374 | | TerminatorKind::SwitchInt { .. } | |
1375 | | TerminatorKind::Resume | |
1376 | | TerminatorKind::Abort | |
1377 | | TerminatorKind::Return | |
1378 | | TerminatorKind::Unreachable | |
1379 | | TerminatorKind::Yield { .. } | |
1380 | | TerminatorKind::GeneratorDrop | |
1381 | | TerminatorKind::FalseEdge { .. } | |
fc512014 | 1382 | | TerminatorKind::InlineAsm { .. } => { |
29967ef6 XL |
1383 | span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind) |
1384 | } | |
1385 | } | |
dfeec247 | 1386 | } |
dc9dc135 | 1387 | } |