// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! ## The Cleanup module
//!
//! The cleanup module tracks which values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//!
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more information.
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//! AST scope. We push the AST scope when we start generating code for an AST
//! node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//! never scheduled into loop scopes; instead, they are used to record the
//! basic blocks that we should branch to when a `continue` or `break`
//! statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
//! of intermediate values.
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box in the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
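//!
//! Concretely, the `box foo()` case above can be sketched with the
//! scheduling API defined later in this module (a hedged sketch, not the
//! exact trans code; `alloc_box`, `trans_call`, `heap`, and `ty` are
//! stand-ins):
//!
//! ```ignore
//! let custom = fcx.push_custom_cleanup_scope();
//! let ptr = alloc_box(bcx);                                    // allocate the box
//! fcx.schedule_free_value(CustomScope(custom), ptr, heap, ty); // shallow free on unwind
//! let bcx = trans_call(bcx, foo, ptr);                         // may panic
//! fcx.pop_custom_cleanup_scope(custom);                        // success: cleanup discarded
//! ```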
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
//! ### AST and loop cleanup scopes
//!
//! AST cleanup scopes are pushed when we begin processing an AST node and
//! popped when we finish. They are used to house cleanups related to rvalue
//! temporaries that get referenced (e.g., due to an expression like `&Foo()`).
//! Whenever an AST scope is popped, we always trans all the cleanups, adding
//! the cleanup code after the postdominator of the AST node.
//!
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups, it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
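//!
//! For illustration, the loop-scope lifecycle looks roughly like this
//! (a sketch using this module's API; `break_blk` and `cont_blk` are
//! stand-ins for the blocks a `break`/`continue` should target):
//!
//! ```ignore
//! fcx.push_loop_cleanup_scope(loop_id, [break_blk, cont_blk]);
//! // ... trans the loop body; a `break` branches to
//! // fcx.normal_exit_block(loop_id, EXIT_BREAK), a `continue` to
//! // fcx.normal_exit_block(loop_id, EXIT_LOOP) ...
//! fcx.pop_loop_cleanup_scope(loop_id);   // generates no code
//! ```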
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common, though, is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the cleanups
//! scheduled in the custom scope C will run, and hence the box will be
//! freed. The trans code for `expr` itself is responsible for freeing any
//! other byproducts that may be in play.
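The push/schedule/pop discipline above can also be modeled as a small self-contained toy (hypothetical names, deliberately independent of the real trans types), which makes it easy to see why cleanups run only on the unwind path:

```rust
// Toy model of the scope-stack discipline described in the module docs.
// Cleanups scheduled into a custom scope run only if we "unwind" past it;
// popping the scope on the success path discards them.
struct CleanupStack {
    scopes: Vec<Vec<&'static str>>, // each scope holds named cleanups
}

impl CleanupStack {
    fn new() -> CleanupStack {
        CleanupStack { scopes: Vec::new() }
    }

    /// Step 1: push a custom scope, returning its index.
    fn push_custom_scope(&mut self) -> usize {
        self.scopes.push(Vec::new());
        self.scopes.len() - 1
    }

    /// Step 3: schedule a cleanup into an open scope.
    fn schedule_clean(&mut self, scope: usize, cleanup: &'static str) {
        self.scopes[scope].push(cleanup);
    }

    /// Step 5 (success path): drop the scope *without* running its cleanups.
    fn pop_custom_scope(&mut self, scope: usize) {
        assert_eq!(scope + 1, self.scopes.len()); // must be the top scope
        self.scopes.pop();
    }

    /// Unwind path: run every pending cleanup, innermost scope first,
    /// most recently scheduled first. Returns the order they ran in.
    fn unwind(&mut self) -> Vec<&'static str> {
        let mut ran = Vec::new();
        while let Some(scope) = self.scopes.pop() {
            ran.extend(scope.into_iter().rev());
        }
        ran
    }
}

fn main() {
    // `box expr` on the success path: the shallow free never runs.
    let mut stack = CleanupStack::new();
    let c = stack.push_custom_scope();
    stack.schedule_clean(c, "shallow_free_box");
    // ... trans `expr` here; on panic we would call `stack.unwind()` ...
    stack.pop_custom_scope(c);
    assert!(stack.scopes.is_empty());
}
```

Had `expr` panicked before the pop, `unwind()` would have returned `["shallow_free_box"]`, mirroring how only the cleanups scheduled so far are trans'd on the unwind path.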
pub use self::ScopeId::*;
pub use self::CleanupScopeKind::*;
pub use self::EarlyExitLabel::*;
pub use self::Heap::*;

use llvm::{BasicBlockRef, ValueRef};
use base;
use build;
use common;
use common::{Block, FunctionContext, NodeIdAndSpan, LandingPad};
use datum::{Datum, Lvalue};
use debuginfo::{DebugLoc, ToDebugLoc};
use type_::Type;
use value::Value;
use rustc::middle::region;
use rustc::ty::{Ty, TyCtxt};
use syntax::ast;

use std::fmt;
pub struct CleanupScope<'blk, 'tcx: 'blk> {
    // The id of this cleanup scope. If the id is None,
    // this is a *temporary scope* that is pushed during trans to
    // cleanup miscellaneous garbage that trans may generate whose
    // lifetime is a subset of some expression. See module doc for
    // more details.
    kind: CleanupScopeKind<'blk, 'tcx>,

    // Cleanups to run upon scope exit.
    cleanups: Vec<CleanupObj<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // assigned.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}
#[derive(Copy, Clone, Debug)]
pub struct CustomScopeIndex {
    index: usize
}

pub const EXIT_BREAK: usize = 0;
pub const EXIT_LOOP: usize = 1;
pub const EXIT_MAX: usize = 2;
pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
    CustomScopeKind,
    AstScopeKind(ast::NodeId),
    LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
}
impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            CustomScopeKind => write!(f, "CustomScopeKind"),
            AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
            LoopScopeKind(nid, ref blks) => {
                write!(f, "LoopScopeKind({}, [", nid)?;
                for blk in blks {
                    write!(f, "{:p}, ", blk)?;
                }
                write!(f, "])")
            }
        }
    }
}
#[derive(Copy, Clone, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit(UnwindKind),
    ReturnExit,
    LoopExit(ast::NodeId, usize)
}

#[derive(Copy, Clone, Debug)]
pub enum UnwindKind {
    LandingPad,
    CleanupPad(ValueRef),
}

#[derive(Copy, Clone)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
    last_cleanup: usize,
}
pub trait Cleanup<'tcx> {
    fn must_unwind(&self) -> bool;
    fn is_lifetime_end(&self) -> bool;
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx>;
}

pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;
#[derive(Copy, Clone, Debug)]
pub enum ScopeId {
    AstScope(ast::NodeId),
    CustomScope(CustomScopeIndex)
}

#[derive(Copy, Clone, Debug)]
pub struct DropHint<K>(pub ast::NodeId, pub K);

pub type DropHintDatum<'tcx> = DropHint<Datum<'tcx, Lvalue>>;
pub type DropHintValue = DropHint<ValueRef>;
impl<K> DropHint<K> {
    pub fn new(id: ast::NodeId, k: K) -> DropHint<K> { DropHint(id, k) }
}

impl DropHint<ValueRef> {
    pub fn value(&self) -> ValueRef { self.1 }
}

pub trait DropHintMethods {
    type ValueKind;
    fn to_value(&self) -> Self::ValueKind;
}
impl<'tcx> DropHintMethods for DropHintDatum<'tcx> {
    type ValueKind = DropHintValue;
    fn to_value(&self) -> DropHintValue { DropHint(self.0, self.1.val) }
}
impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Invoked when we start to trans the code contained within a new cleanup scope.
    fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
        debug!("push_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(debug_loc.id));

        // FIXME(#2202) -- currently closure bodies have a parent
        // region, which messes up the assertion below, since there
        // are no cleanup scopes on the stack at the start of
        // trans'ing a closure body. I think though that this should
        // eventually be fixed by closure bodies not having a parent
        // region, though that's a touch unclear, and it might also be
        // better just to narrow this assertion more (i.e., by
        // excluding id's that correspond to closure bodies only). For
        // now we just say that if there is already an AST scope on the stack,
        // this new AST scope had better be its immediate child.
        let top_scope = self.top_ast_scope();
        let region_maps = &self.ccx.tcx().region_maps;
        if top_scope.is_some() {
            assert!((region_maps
                     .opt_encl_scope(region_maps.node_extent(debug_loc.id))
                     .map(|s|s.node_id(region_maps)) == top_scope)
                    ||
                    (region_maps
                     .opt_encl_scope(region_maps.lookup_code_extent(
                         region::CodeExtentData::DestructionScope(debug_loc.id)))
                     .map(|s|s.node_id(region_maps)) == top_scope));
        }

        self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
                                          debug_loc.debug_loc()));
    }
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
        debug!("push_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(id));
        assert_eq!(Some(id), self.top_ast_scope());

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .unwrap()
                            .debug_loc;

        self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
    }
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
        CustomScopeIndex { index: index }
    }

    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        self.push_scope(CleanupScope::new(CustomScopeKind,
                                          debug_loc.debug_loc()));
        CustomScopeIndex { index: index }
    }
    /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
    /// stack, and generates the code to do its cleanups for normal exit.
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }
    /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
    /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
    /// branching to a block generated by `normal_exit_block`.
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId) {
        debug!("pop_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));

        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack without executing its cleanups. The top
    /// cleanup scope must be the temporary scope `custom_scope`.
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }
    /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
    /// generates the code to do its cleanups for normal exit.
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }
    /// Returns the id of the top-most loop scope
    fn top_loop_scope(&self) -> ast::NodeId {
        for scope in self.scopes.borrow().iter().rev() {
            if let LoopScopeKind(id, _) = scope.kind {
                return id;
            }
        }
        bug!("no loop scope found");
    }
    /// Returns a block to branch to which will perform all pending cleanups and
    /// then break/continue (depending on `exit`) out of the loop with id
    /// `cleanup_scope`.
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
    }

    /// Returns a block to branch to which will perform all pending cleanups and
    /// then return from this function.
    fn return_exit_block(&'blk self) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(ReturnExit)
    }
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef) {
        let drop = box LifetimeEnd {
            ptr: val,
        };

        debug!("schedule_lifetime_end({:?}, val={:?})",
               cleanup_scope, Value(val));

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Schedules a (deep) drop of `val`, which is a pointer to an instance of
    /// `ty`
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }
        let drop_hint = drop_hint.map(|hint|hint.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_mem({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Schedules a (deep) drop and filling of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }

        let drop_hint = drop_hint.map(|datum|datum.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: true,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_and_fill_mem({:?}, val={:?}, ty={:?},
                fill_on_drop={}, skip_dtor={}, has_drop_hint={})",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor,
               drop_hint.is_some());

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Issue #23611: Schedules a (deep) drop of the contents of
    /// `val`, which is a pointer to an instance of struct/enum type
    /// `ty`. The scheduled code handles extracting the discriminant
    /// and dropping the contents associated with that variant
    /// *without* executing any associated drop implementation.
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>) {
        // `if` below could be "!contents_needs_drop"; skipping drop
        // is just an optimization, so sound to be conservative.
        if !self.type_needs_drop(ty) { return; }

        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: true,
            drop_hint: None,
        };

        debug!("schedule_drop_adt_contents({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Schedules a (deep) drop of `val`, which is an instance of `ty`
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = Box::new(DropValue {
            is_immediate: true,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: None,
        });

        debug!("schedule_drop_immediate({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Schedules a call to `free(val)`. Note that this is a shallow operation.
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>) {
        let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };

        debug!("schedule_free_value({:?}, val={:?}, heap={:?})",
               cleanup_scope, Value(val), heap);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>) {
        match cleanup_scope {
            AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
            CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
        }
    }
    /// Schedules a cleanup to occur upon exit from `cleanup_scope`. If `cleanup_scope` is not
    /// provided, then the cleanup is scheduled in the topmost scope, which must be a temporary
    /// scope.
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
               cleanup_scope);

        for scope in self.scopes.borrow_mut().iter_mut().rev() {
            if scope.kind.is_ast_with_id(cleanup_scope) {
                scope.cleanups.push(cleanup);
                scope.cached_landing_pad = None;
                return;
            } else {
                // will be adding a cleanup to some enclosing scope
                scope.clear_cached_exits();
            }
        }

        bug!("no cleanup scope {} found",
             self.ccx.tcx().map.node_to_string(cleanup_scope));
    }
    /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope.
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_custom_scope(custom_scope={})",
               custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        scope.cached_landing_pad = None;
    }

    /// Returns true if there are pending cleanups that should execute on panic.
    fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }
    /// Returns a basic block to branch to in the event of a panic. This block
    /// will run the panic cleanups and eventually resume the exception that
    /// caused the landing pad to be run.
    fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        loop {
            match popped_scopes.pop() {
                Some(scope) => self.push_scope(scope),
                None => break
            }
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        return llbb;
    }
}
impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Returns the id of the current top-most AST scope, if any.
    fn top_ast_scope(&self) -> Option<ast::NodeId> {
        for scope in self.scopes.borrow().iter().rev() {
            match scope.kind {
                CustomScopeKind | LoopScopeKind(..) => {}
                AstScopeKind(i) => {
                    return Some(i);
                }
            }
        }
        None
    }

    fn top_nonempty_cleanup_scope(&self) -> Option<usize> {
        self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
    }
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
            custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len() &&
            (*scopes)[custom_scope.index].kind.is_temp()
    }
    /// Generates the cleanups for `scope` into `bcx`
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {
        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }
    fn scopes_len(&self) -> usize {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }
    /// Used when the caller wishes to jump to an early exit, such as a return,
    /// break, continue, or unwind. This function will generate all cleanups
    /// between the top of the stack and the exit `label` and return a basic
    /// block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function
    /// would generate a series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target for
    /// breaks. The return value would be the first basic block in that sequence
    /// (`Cleanup(AST 24)`). The caller could then branch to `Cleanup(AST 24)`
    /// and it will perform all cleanups and finally branch to the `break_blk`.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());

        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();
        let mut skip = 0;

        // First we pop off all the cleanup stacks that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit(val) => {
                        // Generate a block that will resume unwinding to the
                        // calling function
                        let bcx = self.new_block("resume", None);
                        match val {
                            UnwindKind::LandingPad => {
                                let addr = self.landingpad_alloca.get()
                                               .unwrap();
                                let lp = build::Load(bcx, addr);
                                base::call_lifetime_end(bcx, addr);
                                base::trans_unwind_resume(bcx, lp);
                            }
                            UnwindKind::CleanupPad(_) => {
                                let pad = build::CleanupPad(bcx, None, &[]);
                                build::CleanupRet(bcx, pad, None);
                            }
                        }
                        prev_llbb = bcx.llbb;
                        break;
                    }

                    ReturnExit => {
                        prev_llbb = self.get_llreturn();
                        break;
                    }

                    LoopExit(id, _) => {
                        bug!("cannot exit from scope {}, not in scope", id);
                    }
                }
            }
            // Pop off the scope, since we may be generating
            // unwinding code for it.
            let top_scope = self.pop_scope();
            let cached_exit = top_scope.cached_early_exit(label);
            popped_scopes.push(top_scope);

            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            if let Some((exit, last_cleanup)) = cached_exit {
                prev_llbb = exit;
                skip = last_cleanup;
                break;
            }

            // If we are searching for a loop exit,
            // and this scope is that loop, then stop popping and set
            // `prev_llbb` to the appropriate exit block from the loop.
            let scope = popped_scopes.last().unwrap();
            match label {
                UnwindExit(..) | ReturnExit => { }
                LoopExit(id, exit) => {
                    if let Some(exit) = scope.kind.early_exit_block(id, exit) {
                        prev_llbb = exit;
                        break;
                    }
                }
            }
        }

        debug!("trans_cleanups_to_exit_scope: popped {} scopes",
               popped_scopes.len());
        // Now push the popped scopes back on. As we go,
        // we track in `prev_llbb` the exit to which this scope
        // should branch when it's done.
        //
        // So, continuing with our example, we will start out with
        // `prev_llbb` being set to `break_blk` (or possibly a cached
        // early exit). We will then pop the scopes from `popped_scopes`
        // and generate a basic block for each one, prepending it in the
        // series and updating `prev_llbb`. So we begin by popping `Custom 2`
        // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
        // branch to `prev_llbb == break_blk`, giving us a sequence like:
        //
        //     Cleanup(Custom 2) -> prev_llbb
        //
        // We then pop `AST 24` and repeat the process, giving us the sequence:
        //
        //     Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
        //
        // At this point, `popped_scopes` is empty, and so the final block
        // that we return to the user is `Cleanup(AST 24)`.
        while let Some(mut scope) = popped_scopes.pop() {
            if !scope.cleanups.is_empty() {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);

                let bcx_in = self.new_block(&name[..], None);
                let exit_label = label.start(bcx_in);
                let mut bcx_out = bcx_in;
                let len = scope.cleanups.len();
                for cleanup in scope.cleanups.iter().rev().take(len - skip) {
                    bcx_out = cleanup.trans(bcx_out, scope.debug_loc);
                }
                skip = 0;
                exit_label.branch(bcx_out, prev_llbb);
                prev_llbb = bcx_in.llbb;

                scope.add_cached_early_exit(exit_label, prev_llbb, len);
            }
            self.push_scope(scope);
        }

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);

        assert_eq!(self.scopes_len(), orig_scopes_len);
        prev_llbb
    }
    /// Creates a landing pad for the top scope, if one does not exist. The
    /// landing pad will perform all cleanups necessary for an unwind and then
    /// `resume` to continue error propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by
    /// `trans_cleanups_to_exit_scope()`, not in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => return llbb,
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(&name[..], None);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        }

        let llpersonality = pad_bcx.fcx.eh_personality();

        let val = if base::wants_msvc_seh(self.ccx.sess()) {
            // A cleanup pad requires a personality function to be specified, so
            // we do that here explicitly (happens implicitly below through
            // creation of the landingpad instruction). We then create a
            // cleanuppad instruction which has no filters to run cleanup on all
            // exceptions.
            build::SetPersonalityFn(pad_bcx, llpersonality);
            let llretval = build::CleanupPad(pad_bcx, None, &[]);
            UnwindKind::CleanupPad(llretval)
        } else {
            // The landing pad return type (the type being propagated). Not sure
            // what this represents but it's determined by the personality
            // function and this is what the EH proposal example uses.
            let llretty = Type::struct_(self.ccx,
                                        &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                        false);

            // The only landing pad clause will be 'cleanup'
            let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

            // The landing pad block is a cleanup
            build::SetCleanup(pad_bcx, llretval);

            let addr = match self.landingpad_alloca.get() {
                Some(addr) => addr,
                None => {
                    let addr = base::alloca(pad_bcx, common::val_ty(llretval),
                                            "");
                    base::call_lifetime_start(pad_bcx, addr);
                    self.landingpad_alloca.set(Some(addr));
                    addr
                }
            };

            build::Store(pad_bcx, llretval, addr);
            UnwindKind::LandingPad
        };

        // Generate the cleanup block and branch to it.
        let label = UnwindExit(val);
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(label);
        label.branch(pad_bcx, cleanup_llbb);

        pad_bcx.llbb
    }
}
impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }
    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<(BasicBlockRef, usize)> {
        self.cached_early_exits.iter().rev().
            find(|e| e.label == label).
            map(|e| (e.cleanup_block, e.last_cleanup))
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef,
                             last_cleanup: usize) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk,
                              last_cleanup: last_cleanup });
    }
    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
            self.cleanups.iter().any(|c| c.must_unwind())
    }

    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }
    /// Manipulate cleanup scope for call arguments. Conceptually, each
    /// argument to a call is an lvalue, and performing the call moves each
    /// of the arguments into a new rvalue (which gets cleaned up by the
    /// callee). As an optimization, instead of actually performing all of
    /// those moves, trans just manipulates the cleanup scope to obtain the
    /// same effect.
    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
        self.clear_cached_exits();
    }
}
impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early exit block `exit`, else `None`
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: usize) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}
impl EarlyExitLabel {
    /// Generates a branch going from `from_bcx` to `to_llbb` where `self` is
    /// the exit label attached to the start of `from_bcx`.
    ///
    /// Transitions from an exit label to other exit labels depend on the type
    /// of label. For example with MSVC exceptions unwind exit labels will use
    /// the `cleanupret` instruction instead of the `br` instruction.
    fn branch(&self, from_bcx: Block, to_llbb: BasicBlockRef) {
        if let UnwindExit(UnwindKind::CleanupPad(pad)) = *self {
            build::CleanupRet(from_bcx, pad, Some(to_llbb));
        } else {
            build::Br(from_bcx, to_llbb, DebugLoc::None);
        }
    }

    /// Generates the necessary instructions at the start of `bcx` to prepare
    /// for the same kind of early exit label that `self` is.
    ///
    /// This function will appropriately configure `bcx` based on the kind of
    /// label this is. For UnwindExit labels, the `lpad` field of the block will
    /// be set to `Some`, and for MSVC exceptions this function will generate a
    /// `cleanuppad` instruction at the start of the block so it may be jumped
    /// to in the future (e.g. so this block can be cached as an early exit).
    ///
    /// Returns a new label which can be used to cache `bcx` in the list of
    /// early exits.
    fn start(&self, bcx: Block) -> EarlyExitLabel {
        match *self {
            UnwindExit(UnwindKind::CleanupPad(..)) => {
                let pad = build::CleanupPad(bcx, None, &[]);
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::msvc(pad))));
                UnwindExit(UnwindKind::CleanupPad(pad))
            }
            UnwindExit(UnwindKind::LandingPad) => {
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::gnu())));
                *self
            }
            label => label,
        }
    }
}

impl PartialEq for UnwindKind {
    fn eq(&self, val: &UnwindKind) -> bool {
        match (*self, *val) {
            (UnwindKind::LandingPad, UnwindKind::LandingPad) |
            (UnwindKind::CleanupPad(..), UnwindKind::CleanupPad(..)) => true,
            _ => false,
        }
    }
}
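// The hand-written `eq` compares only the enum variant and ignores the
// cleanuppad payload. The same variant-only comparison can be sketched with
// `std::mem::discriminant` on a stand-in enum (toy type, not the real
// `UnwindKind`):

```rust
use std::mem;

enum ToyUnwindKind {
    LandingPad,
    CleanupPad(u32), // payload is deliberately ignored by the comparison
}

fn same_kind(a: &ToyUnwindKind, b: &ToyUnwindKind) -> bool {
    // `discriminant` identifies the variant without requiring `PartialEq`
    // on the payload, matching the intent of the manual `eq` above.
    mem::discriminant(a) == mem::discriminant(b)
}
```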

///////////////////////////////////////////////////////////////////////////
// Cleanup types

#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    fill_on_drop: bool,
    skip_dtor: bool,
    drop_hint: Option<DropHintValue>,
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor, self.drop_hint)
        };
        if self.fill_on_drop {
            base::drop_done_fill_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}

#[derive(Copy, Clone, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy, Clone)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}

impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}

#[derive(Copy, Clone)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}

impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool {
        false
    }

    fn is_lifetime_end(&self) -> bool {
        true
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}

pub fn temporary_scope(tcx: &TyCtxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id(&tcx.region_maps));
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            bug!("no temporary scope available for expr {}", id)
        }
    }
}

pub fn var_scope(tcx: &TyCtxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id(&tcx.region_maps));
    debug!("var_scope({}) = {:?}", id, r);
    r
}

///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.

pub trait CleanupMethods<'blk, 'tcx> {
    fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]);
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex;
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx>;
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId);
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex);
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx>;
    fn top_loop_scope(&self) -> ast::NodeId;
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef;
    fn return_exit_block(&'blk self) -> BasicBlockRef;
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef);
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>);
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>);
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>);
    fn needs_invoke(&self) -> bool;
    fn get_landing_pad(&'blk self) -> BasicBlockRef;
}

trait CleanupHelperMethods<'blk, 'tcx> {
    fn top_ast_scope(&self) -> Option<ast::NodeId>;
    fn top_nonempty_cleanup_scope(&self) -> Option<usize>;
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn trans_scope_cleanups(&self,
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef;
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
    fn scopes_len(&self) -> usize;
    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
}