// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, translate its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
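
For example (an illustrative sketch), the loop body below is entered
at a single point but exited along two distinct edges:

```
# let cond = true;
loop {
    // single entry: the top of the loop body
    if cond { break; } // exit 1: the `break` edge
    return;            // exit 2: the `return` edge
}
```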

For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see the caveat in the
next paragraph), since region scopes are tied to them. Eventually,
when we shift to non-lexical lifetimes, there should be no need to
remember this mapping.

There is one additional wrinkle, actually, that I wanted to hide from
you but duty compels me to mention. In the course of translating
matches, it sometimes happens that certain code (namely guards) gets
executed multiple times. This means that a single lexical scope may in
fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions.

### Drops

The primary purpose for scopes is to insert drops: while translating
the contents, we also accumulate lvalues that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those lvalues onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

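To make the drop scheduling concrete (an illustrative sketch; the
`String` values simply stand in for anything that needs dropping):

```
# let cond = true;
loop {
    let x = String::new();
    if cond { break; }     // this exit edge drops only `x`
    let y = String::new();
}   // the end of each iteration drops `y`, then `x` (stack order)
```
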
### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
enclosing scope). `exit_scope` will record this exit point and also
add all drops.
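
For example (an illustrative sketch), a labeled `continue` is an early
exit from every scope up to and including the inner loop body, and it
drops whatever those scopes have scheduled so far:

```
'outer: for x in 0..3 {
    for y in 0..3 {
        let s = String::new();
        if x == y { continue 'outer; } // drops `s` on the outgoing edge
    }
}
```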

Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Alternatively, you can call
`diverge_cleanup`, which will produce a block that performs the
appropriate cleanup and then diverges; you can branch to that block.
`panic(p)` simply calls `diverge_cleanup()` and adds an edge from `p`
to the result.

### Loop scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` or `continue`
should go to.
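
For example (an illustrative sketch), labels are resolved against this
stack, so a labeled `break` can target an outer loop:

```
'a: loop {
    loop {
        break 'a; // targets the break block of `'a`, not the inner loop
    }
}
```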

*/

use build::{BlockAnd, BlockAndExtension, Builder, CFG};
use hair::LintLevel;
use rustc::middle::region;
use rustc::ty::{Ty, TyCtxt};
use rustc::hir;
use rustc::hir::def_id::LOCAL_CRATE;
use rustc::mir::*;
use syntax_pos::Span;
use rustc_data_structures::indexed_vec::Idx;
use rustc_data_structures::fx::FxHashMap;

#[derive(Debug)]
pub struct Scope<'tcx> {
    /// The visibility scope this scope was created in.
    visibility_scope: VisibilityScope,

    /// The region scope that this scope corresponds to in the source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    needs_cleanup: bool,

    /// Set of lvalues to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData<'tcx>>,

    /// The cache for drop chain on "normal" exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}

#[derive(Debug)]
struct DropData<'tcx> {
    /// span where drop obligation was incurred (typically where lvalue was declared)
    span: Span,

    /// lvalue to drop
    location: Lvalue<'tcx>,

    /// Whether this is a full value Drop, or just a StorageDead.
    kind: DropKind
}

#[derive(Debug, Default, Clone, Copy)]
struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all the preceding
    /// drops (i.e. those having a lower index in the scope's `drops`
    /// array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop path
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is,
    /// but it is here.
    generator_drop: Option<BasicBlock>,
}

#[derive(Debug)]
enum DropKind {
    Value {
        cached_block: CachedBlock,
    },
    Storage
}

#[derive(Clone, Debug)]
pub struct BreakableScope<'tcx> {
    /// Region scope of the loop
    pub region_scope: region::Scope,
    /// Where the body of the loop begins. `None` if this is a block
    /// rather than a loop.
    pub continue_block: Option<BasicBlock>,
    /// Block to branch into when the loop or block terminates (either
    /// by being broken out of with `break`, or by having its condition
    /// become false)
    pub break_block: BasicBlock,
    /// The destination of the loop/block expression itself (i.e. where to put the result of a
    /// `break` expression)
    pub break_destination: Lvalue<'tcx>,
}

impl CachedBlock {
    fn invalidate(&mut self) {
        self.generator_drop = None;
        self.unwind = None;
    }

    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop {
            self.generator_drop
        } else {
            self.unwind
        }
    }

    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop {
            &mut self.generator_drop
        } else {
            &mut self.unwind
        }
    }
}

impl DropKind {
    fn may_panic(&self) -> bool {
        match *self {
            DropKind::Value { .. } => true,
            DropKind::Storage => false
        }
    }
}

impl<'tcx> Scope<'tcx> {
    /// Invalidate all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(&mut self, storage_only: bool, this_scope_only: bool) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        if !storage_only {
            // the current generator drop and unwind ignore
            // storage but refer to top-of-scope
            self.cached_generator_drop = None;
            self.cached_unwind.invalidate();
        }

        if !storage_only && !this_scope_only {
            for dropdata in &mut self.drops {
                if let DropKind::Value { ref mut cached_block } = dropdata.kind {
                    cached_block.invalidate();
                }
            }
        }
    }

    /// Given a span and this scope's visibility scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.visibility_scope
        }
    }
}

impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
    // Adding and removing scopes
    // ==========================
    /// Start a breakable scope, which tracks where `continue` and `break`
    /// should branch to. See module comment for more details.
    ///
    /// Returns the result of the closure `f`.
    pub fn in_breakable_scope<F, R>(&mut self,
                                    loop_block: Option<BasicBlock>,
                                    break_block: BasicBlock,
                                    break_destination: Lvalue<'tcx>,
                                    f: F) -> R
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> R
    {
        let region_scope = self.topmost_scope();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards; if
    /// `opt_scope` is `None`, `f` is built directly in `block`.
    pub fn in_opt_scope<F, R>(&mut self,
                              opt_scope: Option<(region::Scope, SourceInfo)>,
                              mut block: BasicBlock,
                              f: F)
                              -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_opt_scope(opt_scope={:?}, block={:?})", opt_scope, block);
        if let Some(region_scope) = opt_scope { self.push_scope(region_scope); }
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    pub fn in_scope<F, R>(&mut self,
                          region_scope: (region::Scope, SourceInfo),
                          lint_level: LintLevel,
                          mut block: BasicBlock,
                          f: F)
                          -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_scope(region_scope={:?}, block={:?})", region_scope, block);
        let visibility_scope = self.visibility_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(node_id) = lint_level {
            let same_lint_scopes = tcx.dep_graph.with_ignore(|| {
                let sets = tcx.lint_levels(LOCAL_CRATE);
                let parent_hir_id =
                    tcx.hir.definitions().node_to_hir_id(
                        self.visibility_scope_info[visibility_scope].lint_root
                    );
                let current_hir_id =
                    tcx.hir.definitions().node_to_hir_id(node_id);
                sets.lint_level_set(parent_hir_id) ==
                    sets.lint_level_set(current_hir_id)
            });

            if !same_lint_scopes {
                self.visibility_scope =
                    self.new_visibility_scope(region_scope.1.span, lint_level,
                                              None);
            }
        }
        self.push_scope(region_scope);
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.visibility_scope = visibility_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
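    ///
    /// Roughly speaking (a simplified sketch of what `in_scope` does
    /// with this pair, ignoring its lint-level handling):
    ///
    /// ```ignore
    /// builder.push_scope(region_scope);
    /// let rv = unpack!(block = f(builder));
    /// unpack!(block = builder.pop_scope(region_scope, block));
    /// ```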
    pub fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        debug!("push_scope({:?})", region_scope);
        let vis_scope = self.visibility_scope;
        self.scopes.push(Scope {
            visibility_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            needs_cleanup: false,
            drops: vec![],
            cached_generator_drop: None,
            cached_exits: FxHashMap(),
            cached_unwind: CachedBlock::default(),
        });
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub fn pop_scope(&mut self,
                     region_scope: (region::Scope, SourceInfo),
                     mut block: BasicBlock)
                     -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic =
            self.scopes.last().unwrap().drops.iter().any(|s| s.kind.may_panic());
        if may_panic {
            self.diverge_cleanup();
        }
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);

        self.cfg.push_end_region(self.hir.tcx(), block, region_scope.1, scope.region_scope);
        unpack!(block = build_scope_drops(&mut self.cfg,
                                          &scope,
                                          &self.scopes,
                                          block,
                                          self.arg_count,
                                          false));

        block.unit()
    }

    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed, as well as tracking this exit for the SEME region. See
    /// module comment for details.
    pub fn exit_scope(&mut self,
                      span: Span,
                      region_scope: (region::Scope, SourceInfo),
                      mut block: BasicBlock,
                      target: BasicBlock) {
        debug!("exit_scope(region_scope={:?}, block={:?}, target={:?})",
               region_scope, block, target);
        let scope_count = 1 + self.scopes.iter().rev()
            .position(|scope| scope.region_scope == region_scope.0)
            .unwrap_or_else(|| {
                span_bug!(span, "region_scope {:?} does not enclose", region_scope)
            });
        let len = self.scopes.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes[(len - scope_count)..].iter()
            .any(|s| s.drops.iter().any(|s| s.kind.may_panic()));
        if may_panic {
            self.diverge_cleanup();
        }

        {
            let mut rest = &mut self.scopes[(len - scope_count)..];
            while let Some((scope, rest_)) = {rest}.split_last_mut() {
                rest = rest_;
                block = if let Some(&e) = scope.cached_exits.get(&(target, region_scope.0)) {
                    self.cfg.terminate(block, scope.source_info(span),
                                       TerminatorKind::Goto { target: e });
                    return;
                } else {
                    let b = self.cfg.start_new_block();
                    self.cfg.terminate(block, scope.source_info(span),
                                       TerminatorKind::Goto { target: b });
                    scope.cached_exits.insert((target, region_scope.0), b);
                    b
                };

                // End all regions for scopes out of which we are breaking.
                self.cfg.push_end_region(self.hir.tcx(), block, region_scope.1, scope.region_scope);

                unpack!(block = build_scope_drops(&mut self.cfg,
                                                  scope,
                                                  rest,
                                                  block,
                                                  self.arg_count,
                                                  false));
            }
        }
        let scope = &self.scopes[len - scope_count];
        self.cfg.terminate(block, scope.source_info(span),
                           TerminatorKind::Goto { target: target });
    }

    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// `None` indicates there's no cleanup to do at this point.
    pub fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        if !self.scopes.iter().any(|scope| scope.needs_cleanup) {
            return None;
        }

        // Fill in the cache
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes[0].source_info(self.fn_span);
        let mut block = self.cfg.start_new_block();
        let result = block;
        let mut rest = &mut self.scopes[..];

        while let Some((scope, rest_)) = {rest}.split_last_mut() {
            rest = rest_;
            if !scope.needs_cleanup {
                continue;
            }
            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                b
            };

            // End all regions for scopes out of which we are breaking.
            self.cfg.push_end_region(self.hir.tcx(), block, src_info, scope.region_scope);

            unpack!(block = build_scope_drops(&mut self.cfg,
                                              scope,
                                              rest,
                                              block,
                                              self.arg_count,
                                              true));
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }

    /// Creates a new visibility scope, nested in the current one.
    pub fn new_visibility_scope(&mut self,
                                span: Span,
                                lint_level: LintLevel,
                                safety: Option<Safety>) -> VisibilityScope {
        let parent = self.visibility_scope;
        debug!("new_visibility_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
               span, lint_level, safety,
               parent, self.visibility_scope_info.get(parent));
        let scope = self.visibility_scopes.push(VisibilityScopeData {
            span,
            parent_scope: Some(parent),
        });
        let scope_info = VisibilityScopeInfo {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.visibility_scope_info[parent].lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.visibility_scope_info[parent].safety
            })
        };
        self.visibility_scope_info.push(scope_info);
        scope
    }

    // Finding scopes
    // ==============
    /// Finds the breakable scope for a given label. This is used for
    /// resolving `break` and `continue`.
    pub fn find_breakable_scope(&mut self,
                                span: Span,
                                label: region::Scope)
                                -> &mut BreakableScope<'tcx> {
        // find the loop-scope with the correct id
        self.breakable_scopes.iter_mut()
            .rev()
            .filter(|breakable_scope| breakable_scope.region_scope == label)
            .next()
            .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
    }

    /// Given a span and the current visibility scope, make a SourceInfo.
    pub fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.visibility_scope
        }
    }

    /// Returns the `region::Scope` of the scope which should be exited by a
    /// return.
    pub fn region_scope_of_return_scope(&self) -> region::Scope {
        // The outermost scope (`scopes[0]`) will be the `CallSiteScope`.
        // We want `scopes[1]`, which is the `ParameterScope`.
        assert!(self.scopes.len() >= 2);
        assert!(match self.scopes[1].region_scope.data() {
            region::ScopeData::Arguments(_) => true,
            _ => false,
        });
        self.scopes[1].region_scope
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub fn topmost_scope(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    pub fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const |
            hir::BodyOwnerKind::Static(_) =>
                // No need to free storage in this context.
                None,
            hir::BodyOwnerKind::Fn =>
                Some(self.topmost_scope()),
        }
    }

    // Scheduling drops
    // ================
    /// Indicates that `lvalue` should be dropped on exit from
    /// `region_scope`.
    pub fn schedule_drop(&mut self,
                         span: Span,
                         region_scope: region::Scope,
                         lvalue: &Lvalue<'tcx>,
                         lvalue_ty: Ty<'tcx>) {
        let needs_drop = self.hir.needs_drop(lvalue_ty);
        let drop_kind = if needs_drop {
            DropKind::Value { cached_block: CachedBlock::default() }
        } else {
            // Only temps and vars need their storage dead.
            match *lvalue {
                Lvalue::Local(index) if index.index() > self.arg_count => DropKind::Storage,
                _ => return
            }
        };

        for scope in self.scopes.iter_mut().rev() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way so these drops
            // could be reused by the drops which would branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into an already
            // built chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most and
            // outer-most scopes:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |   +---------------+   | +-------------+
            // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
            // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +------------------------|invalid middle_scope cache|--------+
            //
            // If, when adding `drop(new)` we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong results.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The cache
            // is per-drop, rather than per scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of the
            // caches gets invalidated. i.e. if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, this_scope);
            if this_scope {
                if let DropKind::Value { .. } = drop_kind {
                    scope.needs_cleanup = true;
                }
                let region_scope_span = region_scope.span(self.hir.tcx(),
                                                          &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace
                let scope_end = region_scope_span.with_lo(region_scope_span.hi());
                scope.drops.push(DropData {
                    span: scope_end,
                    location: lvalue.clone(),
                    kind: drop_kind
                });
                return;
            }
        }
        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, lvalue);
    }

    // Other
    // =====
    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details. `None` indicates there's no
    /// cleanup to do at this point.
    pub fn diverge_cleanup(&mut self) -> Option<BasicBlock> {
        self.diverge_cleanup_gen(false)
    }

    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> Option<BasicBlock> {
        if !self.scopes.iter().any(|scope| scope.needs_cleanup) {
            return None;
        }
        assert!(!self.scopes.is_empty()); // or `any` above would be false

        let Builder { ref mut cfg, ref mut scopes,
                      ref mut cached_resume_block, .. } = *self;

        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        //    scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never create anything.

        // To start, create the resume terminator.
        let mut target = if let Some(target) = *cached_resume_block {
            target
        } else {
            let resumeblk = cfg.start_new_cleanup_block();
            cfg.terminate(resumeblk,
                          scopes[0].source_info(self.fn_span),
                          TerminatorKind::Resume);
            *cached_resume_block = Some(resumeblk);
            resumeblk
        };

        for scope in scopes.iter_mut() {
            target = build_diverge_scope(self.hir.tcx(), cfg, scope.region_scope_span,
                                         scope, target, generator_drop);
        }
        Some(target)
    }

    /// Utility function for *non*-scope code to build its own drops.
    pub fn build_drop(&mut self,
                      block: BasicBlock,
                      span: Span,
                      location: Lvalue<'tcx>,
                      ty: Ty<'tcx>) -> BlockAnd<()> {
        if !self.hir.needs_drop(ty) {
            return block.unit();
        }
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::Drop {
                               location,
                               target: next_target,
                               unwind: diverge_target,
                           });
        next_target.unit()
    }

    /// Utility function for *non*-scope code to build its own drops.
    pub fn build_drop_and_replace(&mut self,
                                  block: BasicBlock,
                                  span: Span,
                                  location: Lvalue<'tcx>,
                                  value: Operand<'tcx>) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::DropAndReplace {
                               location,
                               value,
                               target: next_target,
                               unwind: diverge_target,
                           });
        next_target.unit()
    }

    /// Create an Assert terminator and return the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    pub fn assert(&mut self, block: BasicBlock,
                  cond: Operand<'tcx>,
                  expected: bool,
                  msg: AssertMessage<'tcx>,
                  span: Span)
                  -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(block, source_info,
                           TerminatorKind::Assert {
                               cond,
                               expected,
                               msg,
                               target: success_block,
                               cleanup,
                           });

        success_block
    }
}

/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(cfg: &mut CFG<'tcx>,
                           scope: &Scope<'tcx>,
                           earlier_scopes: &[Scope<'tcx>],
                           mut block: BasicBlock,
                           arg_count: usize,
                           generator_drop: bool)
                           -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);
    let mut iter = scope.drops.iter().rev();
    while let Some(drop_data) = iter.next() {
        let source_info = scope.source_info(drop_data.span);
        match drop_data.kind {
            DropKind::Value { .. } => {
                // Try to find the next block with its cached block for us to
                // diverge into, either a previous block in this current scope or
                // the top of the previous scope.
                //
                // If it wasn't for EndRegion, we could just chain all the DropData
                // together and pick the first DropKind::Value. Please do that
                // when we replace EndRegion with NLL.
                let on_diverge = iter.clone().filter_map(|dd| {
                    match dd.kind {
                        DropKind::Value { cached_block } => Some(cached_block),
                        DropKind::Storage => None
                    }
                }).next().or_else(|| {
                    if earlier_scopes.iter().any(|scope| scope.needs_cleanup) {
                        // If *any* scope requires cleanup code to be run,
                        // we must use the cached unwind from the *topmost*
                        // scope, to ensure all EndRegions from surrounding
                        // scopes are executed before the drop code runs.
                        Some(earlier_scopes.last().unwrap().cached_unwind)
                    } else {
                        // We don't need any further cleanup, so return None
                        // to avoid creating a landing pad. We can skip
                        // EndRegions because all local regions end anyway
                        // when the function unwinds.
                        //
                        // This is an important optimization because LLVM is
                        // terrible at optimizing landing pads. FIXME: I think
                        // it would be cleaner and better to do this optimization
                        // in SimplifyCfg instead of here.
                        None
                    }
                });

                let on_diverge = on_diverge.map(|cached_block| {
                    cached_block.get(generator_drop).unwrap_or_else(|| {
                        span_bug!(drop_data.span, "cached block not present?")
                    })
                });

                let next = cfg.start_new_block();
                cfg.terminate(block, source_info, TerminatorKind::Drop {
                    location: drop_data.location.clone(),
                    target: next,
                    unwind: on_diverge
                });
                block = next;
            }
            DropKind::Storage => {}
        }

        // We do not need to emit StorageDead for generator drops
        if generator_drop {
            continue
        }

        // Drop the storage for both value and storage drops.
        // Only temps and vars need their storage dead.
        match drop_data.location {
            Lvalue::Local(index) if index.index() > arg_count => {
                cfg.push(block, Statement {
                    source_info,
                    kind: StatementKind::StorageDead(index)
                });
            }
            _ => continue
        }
    }
    block.unit()
}

fn build_diverge_scope<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
                                       cfg: &mut CFG<'tcx>,
                                       span: Span,
                                       scope: &mut Scope<'tcx>,
                                       mut target: BasicBlock,
                                       generator_drop: bool)
                                       -> BasicBlock
{
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    //    [EndRegion Block] -> [drops[n]] -...-> [drops[0]] -> [Free] -> [target]
    //    |                                                              |
    //    +--------------------------------------------------------------+
    //     code for scope
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let visibility_scope = scope.visibility_scope;
    let source_info = |span| SourceInfo {
        span,
        scope: visibility_scope
    };

    // Next, build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in diagram above).
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not StorageDead.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        let cached_block = match drop_data.kind {
            DropKind::Value { ref mut cached_block } => cached_block.ref_mut(generator_drop),
            DropKind::Storage => continue
        };
        target = if let Some(cached_block) = *cached_block {
            cached_block
        } else {
            let block = cfg.start_new_cleanup_block();
            cfg.terminate(block, source_info(drop_data.span),
                          TerminatorKind::Drop {
                              location: drop_data.location.clone(),
                              target,
                              unwind: None
                          });
            *cached_block = Some(block);
            block
        };
    }

    // Finally, push the EndRegion block, used by mir-borrowck, and set
    // `cached_unwind` to point to it (the block becomes a trivial goto after
    // the pass that removes all EndRegions).
    target = {
        let cached_block = scope.cached_unwind.ref_mut(generator_drop);
        if let Some(cached_block) = *cached_block {
            cached_block
        } else {
            let block = cfg.start_new_cleanup_block();
            cfg.push_end_region(tcx, block, source_info(span), scope.region_scope);
            cfg.terminate(block, source_info(span), TerminatorKind::Goto { target: target });
            *cached_block = Some(block);
            block
        }
    };

    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}