// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! ## The Cleanup module
//!
//! The cleanup module tracks what values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more details.
//!
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//!   AST scope. We push the AST scope when we start generating code for an
//!   AST node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//!   never scheduled into loop scopes; instead, they are used to record the
//!   basic blocks that we should branch to when a `continue` or `break`
//!   statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
//!   of intermediate values.
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box in the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
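//! To make the example concrete, the walk described above can be
//! sketched with a toy scope stack. This is purely illustrative: the
//! names `Scope` and `cleanups_for_exit` below are invented and are not
//! part of the real trans API.

```rust
// Each scope carries the cleanups scheduled into it. Jumping out to a
// target scope runs cleanups innermost-first, through the target.
#[derive(Debug)]
struct Scope {
    name: &'static str,
    cleanups: Vec<&'static str>,
}

// Collect the cleanups to run when exiting out to `target`, walking
// from the innermost scope outward, including `target` itself.
fn cleanups_for_exit(stack: &[Scope], target: &str) -> Vec<&'static str> {
    let mut out = Vec::new();
    for scope in stack.iter().rev() {
        out.extend(scope.cleanups.iter().cloned());
        if scope.name == target {
            break;
        }
    }
    out
}

fn main() {
    // Stack with A outermost and Z innermost; X is a loop scope.
    let stack = vec![
        Scope { name: "A", cleanups: vec!["drop a"] },
        Scope { name: "X", cleanups: vec!["drop x"] },
        Scope { name: "Y", cleanups: vec!["drop y"] },
        Scope { name: "Z", cleanups: vec!["drop z"] },
    ];
    // Breaking out of loop scope X runs the cleanups of Z, Y, then X.
    assert_eq!(cleanups_for_exit(&stack, "X"),
               vec!["drop z", "drop y", "drop x"]);
}
```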
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
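//! The caching policy above can be modeled in a few lines (again with
//! invented names, not the real trans types): a generated exit block is
//! remembered per scope, and scheduling a new cleanup drops the cached
//! block so a fresh one is built on the next request.

```rust
// Toy model of per-scope exit-block caching. Block "code generation"
// is simulated by handing out a fresh numeric block id.
struct ScopeCache {
    cleanups: Vec<&'static str>,
    cached_exit: Option<usize>, // id of the previously generated block
    blocks_built: usize,        // how many blocks we had to generate
}

impl ScopeCache {
    fn new() -> ScopeCache {
        ScopeCache { cleanups: Vec::new(), cached_exit: None, blocks_built: 0 }
    }

    fn schedule_clean(&mut self, c: &'static str) {
        self.cleanups.push(c);
        // A new cleanup makes any previously generated exit block stale.
        self.cached_exit = None;
    }

    fn exit_block(&mut self) -> usize {
        if let Some(b) = self.cached_exit {
            return b; // cache hit: no new code generated
        }
        self.blocks_built += 1; // generate a fresh block of cleanup code
        let b = self.blocks_built;
        self.cached_exit = Some(b);
        b
    }
}

fn main() {
    let mut s = ScopeCache::new();
    s.schedule_clean("drop a");
    let b1 = s.exit_block();
    let b2 = s.exit_block(); // cached: the same block is reused
    assert_eq!(b1, b2);
    s.schedule_clean("drop b"); // invalidates the cache
    let b3 = s.exit_block();
    assert_ne!(b2, b3);
    assert_eq!(s.blocks_built, 2);
}
```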
//! ### AST and loop cleanup scopes
//!
//! AST cleanup scopes are pushed when we begin processing an AST node and
//! popped when we finish. They are used to house cleanups related to rvalue
//! temporaries that get referenced (e.g., due to an expression like `&Foo()`).
//! Whenever an AST scope is popped, we always trans all the cleanups, adding
//! the cleanup code after the postdominator of the AST node.
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups, it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common though is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the shallow free
//! scheduled in the custom cleanup scope C will run, and hence the box
//! will be freed. The trans code for `expr` itself is responsible for
//! freeing any other byproducts that may be in play.
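//! The six steps can be exercised with a runnable sketch. `CustomScope`
//! and `trans_box` here are invented stand-ins for the real trans
//! machinery; the point is only the shape of the success and panic paths.

```rust
// Toy custom scope: just a list of scheduled cleanup actions.
struct CustomScope {
    cleanups: Vec<&'static str>,
}

// Simulate translating `box expr`. On the panic path the scheduled
// shallow free runs; on the success path the scope is popped without
// running its cleanups. `log` records which cleanups actually ran.
fn trans_box(expr_panics: bool, log: &mut Vec<&'static str>) -> Result<&'static str, ()> {
    // 1. Push a custom cleanup scope C.
    let mut c = CustomScope { cleanups: Vec::new() };
    // 2. Allocate the box.  3. Schedule a shallow free in the scope C.
    c.cleanups.push("shallow free of box");
    // 4. Trans `expr` into the box.
    if expr_panics {
        // Unwinding: run the cleanups scheduled so far, innermost first.
        for cleanup in c.cleanups.iter().rev() {
            log.push(cleanup);
        }
        return Err(());
    }
    // 5. Pop the scope C *without* running its cleanups.
    c.cleanups.clear();
    // 6. Return the box as an rvalue.
    Ok("box value")
}

fn main() {
    let mut log = Vec::new();
    assert_eq!(trans_box(false, &mut log), Ok("box value"));
    assert!(log.is_empty()); // success: the shallow free never ran

    assert_eq!(trans_box(true, &mut log), Err(()));
    assert_eq!(log, vec!["shallow free of box"]); // panic path freed the box
}
```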
pub use self::ScopeId::*;
pub use self::CleanupScopeKind::*;
pub use self::EarlyExitLabel::*;
pub use self::Heap::*;

use llvm::{BasicBlockRef, ValueRef};
use trans::common::{Block, FunctionContext, NodeIdAndSpan};
use trans::datum::{Datum, Lvalue};
use trans::debuginfo::{DebugLoc, ToDebugLoc};
use trans::type_::Type;
use middle::ty::{self, Ty};
pub struct CleanupScope<'blk, 'tcx: 'blk> {
    // The id of this cleanup scope. If the id is None,
    // this is a *temporary scope* that is pushed during trans to
    // cleanup miscellaneous garbage that trans may generate whose
    // lifetime is a subset of some expression. See module doc for
    // more details.
    kind: CleanupScopeKind<'blk, 'tcx>,

    // Cleanups to run upon scope exit.
    cleanups: Vec<CleanupObj<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // assigned.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}

#[derive(Copy, Clone, Debug)]
pub struct CustomScopeIndex {
    index: usize
}

pub const EXIT_BREAK: usize = 0;
pub const EXIT_LOOP: usize = 1;
pub const EXIT_MAX: usize = 2;

pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
    CustomScopeKind,
    AstScopeKind(ast::NodeId),
    LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
}
impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            CustomScopeKind => write!(f, "CustomScopeKind"),
            AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
            LoopScopeKind(nid, ref blks) => {
                try!(write!(f, "LoopScopeKind({}, [", nid));
                for blk in blks.iter() {
                    try!(write!(f, "{:p}, ", blk));
                }
                write!(f, "])")
            }
        }
    }
}

#[derive(Copy, Clone, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit,
    ReturnExit,
    LoopExit(ast::NodeId, usize)
}

#[derive(Copy, Clone)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
}

pub trait Cleanup<'tcx> {
    fn must_unwind(&self) -> bool;
    fn is_lifetime_end(&self) -> bool;
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx>;
}

pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;

#[derive(Copy, Clone, Debug)]
pub enum ScopeId {
    AstScope(ast::NodeId),
    CustomScope(CustomScopeIndex)
}

#[derive(Copy, Clone, Debug)]
pub struct DropHint<K>(pub ast::NodeId, pub K);

pub type DropHintDatum<'tcx> = DropHint<Datum<'tcx, Lvalue>>;
pub type DropHintValue = DropHint<ValueRef>;

impl<K> DropHint<K> {
    pub fn new(id: ast::NodeId, k: K) -> DropHint<K> { DropHint(id, k) }
}

impl DropHint<ValueRef> {
    pub fn value(&self) -> ValueRef { self.1 }
}

pub trait DropHintMethods {
    type ValueKind;
    fn to_value(&self) -> Self::ValueKind;
}
impl<'tcx> DropHintMethods for DropHintDatum<'tcx> {
    type ValueKind = DropHintValue;
    fn to_value(&self) -> DropHintValue { DropHint(self.0, self.1.val) }
}
impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Invoked when we start to trans the code contained within a new cleanup scope.
    fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
        debug!("push_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(debug_loc.id));

        // FIXME(#2202) -- currently closure bodies have a parent
        // region, which messes up the assertion below, since there
        // are no cleanup scopes on the stack at the start of
        // trans'ing a closure body. I think though that this should
        // eventually be fixed by closure bodies not having a parent
        // region, though that's a touch unclear, and it might also be
        // better just to narrow this assertion more (i.e., by
        // excluding id's that correspond to closure bodies only). For
        // now we just say that if there is already an AST scope on the stack,
        // this new AST scope had better be its immediate child.
        let top_scope = self.top_ast_scope();
        let region_maps = &self.ccx.tcx().region_maps;
        if top_scope.is_some() {
            assert!((region_maps
                     .opt_encl_scope(region_maps.node_extent(debug_loc.id))
                     .map(|s|s.node_id(region_maps)) == top_scope)
                    ||
                    (region_maps
                     .opt_encl_scope(region_maps.lookup_code_extent(
                         region::CodeExtentData::DestructionScope(debug_loc.id)))
                     .map(|s|s.node_id(region_maps)) == top_scope));
        }

        self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
                                          debug_loc.debug_loc()));
    }
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
        debug!("push_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(id));
        assert_eq!(Some(id), self.top_ast_scope());

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .unwrap()
                            .debug_loc;

        self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
    }

    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
        CustomScopeIndex { index: index }
    }

    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        self.push_scope(CleanupScope::new(CustomScopeKind,
                                          debug_loc.debug_loc()));
        CustomScopeIndex { index: index }
    }
    /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
    /// stack, and generates the code to do its cleanups for normal exit.
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
    /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
    /// branching to a block generated by `normal_exit_block`.
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId) {
        debug!("pop_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));

        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack without executing its cleanups. The top
    /// cleanup scope must be the temporary scope `custom_scope`.
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
    /// generates the code to do its cleanups for normal exit.
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }
    /// Returns the id of the top-most loop scope
    fn top_loop_scope(&self) -> ast::NodeId {
        for scope in self.scopes.borrow().iter().rev() {
            if let LoopScopeKind(id, _) = scope.kind {
                return id;
            }
        }
        self.ccx.sess().bug("no loop scope found");
    }

    /// Returns a block to branch to which will perform all pending cleanups and then
    /// break/continue (depending on `exit`) out of the loop with id `cleanup_scope`
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
    }

    /// Returns a block to branch to which will perform all pending cleanups and then return from
    /// this function
    fn return_exit_block(&'blk self) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(ReturnExit)
    }
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef) {
        let drop = box LifetimeEnd {
            ptr: val,
        };

        debug!("schedule_lifetime_end({:?}, val={})",
               cleanup_scope,
               self.ccx.tn().val_to_string(val));

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }
        let drop_hint = drop_hint.map(|hint|hint.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_mem({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               self.ccx.tn().val_to_string(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop and filling of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }

        let drop_hint = drop_hint.map(|datum|datum.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: true,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_and_fill_mem({:?}, val={}, ty={:?},
                fill_on_drop={}, skip_dtor={}, has_drop_hint={})",
               cleanup_scope,
               self.ccx.tn().val_to_string(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor,
               drop_hint.is_some());

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Issue #23611: Schedules a (deep) drop of the contents of
    /// `val`, which is a pointer to an instance of struct/enum type
    /// `ty`. The scheduled code handles extracting the discriminant
    /// and dropping the contents associated with that variant
    /// *without* executing any associated drop implementation.
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>) {
        // `if` below could be "!contents_needs_drop"; skipping drop
        // is just an optimization, so sound to be conservative.
        if !self.type_needs_drop(ty) { return; }

        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: true,
            drop_hint: None,
        };

        debug!("schedule_drop_adt_contents({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               self.ccx.tn().val_to_string(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop of `val`, which is an instance of `ty`
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = Box::new(DropValue {
            is_immediate: true,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: None,
        });

        debug!("schedule_drop_immediate({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               self.ccx.tn().val_to_string(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a call to `free(val)`. Note that this is a shallow operation.
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>) {
        let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };

        debug!("schedule_free_value({:?}, val={}, heap={:?})",
               cleanup_scope,
               self.ccx.tn().val_to_string(val),
               heap);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>) {
        match cleanup_scope {
            AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
            CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
        }
    }

    /// Schedules a cleanup to occur upon exit from `cleanup_scope`. If `cleanup_scope` is not
    /// provided, then the cleanup is scheduled in the topmost scope, which must be a temporary
    /// scope.
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
               cleanup_scope);

        for scope in self.scopes.borrow_mut().iter_mut().rev() {
            if scope.kind.is_ast_with_id(cleanup_scope) {
                scope.cleanups.push(cleanup);
                scope.clear_cached_exits();
                return;
            } else {
                // will be adding a cleanup to some enclosing scope
                scope.clear_cached_exits();
            }
        }

        self.ccx.sess().bug(
            &format!("no cleanup scope {} found",
                     self.ccx.tcx().map.node_to_string(cleanup_scope)));
    }

    /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope.
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_custom_scope(custom_scope={})",
               custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        scope.clear_cached_exits();
    }
    /// Returns true if there are pending cleanups that should execute on panic.
    fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }

    /// Returns a basic block to branch to in the event of a panic. This block will run the panic
    /// cleanups and eventually invoke the LLVM `Resume` instruction.
    fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        loop {
            match popped_scopes.pop() {
                Some(scope) => self.push_scope(scope),
                None => break
            }
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        return llbb;
    }
}
impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Returns the id of the current top-most AST scope, if any.
    fn top_ast_scope(&self) -> Option<ast::NodeId> {
        for scope in self.scopes.borrow().iter().rev() {
            match scope.kind {
                CustomScopeKind | LoopScopeKind(..) => {}
                AstScopeKind(i) => {
                    return Some(i);
                }
            }
        }
        None
    }

    fn top_nonempty_cleanup_scope(&self) -> Option<usize> {
        self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
    }

    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
            custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len() &&
            (*scopes)[custom_scope.index].kind.is_temp()
    }
    /// Generates the cleanups for `scope` into `bcx`
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {
        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }

    fn scopes_len(&self) -> usize {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }
    /// Used when the caller wishes to jump to an early exit, such as a return, break, continue, or
    /// unwind. This function will generate all cleanups between the top of the stack and the exit
    /// `label` and return a basic block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function would generate a
    /// series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target for breaks. The return
    /// value would be the first basic block in that sequence (`Cleanup(AST 24)`). The caller could
    /// then branch to `Cleanup(AST 24)` and it will perform all cleanups and finally branch to the
    /// exit point.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());

        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();

        // First we pop off all the cleanup stacks that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit => {
                        // Generate a block that will `Resume`.
                        let prev_bcx = self.new_block(true, "resume", None);
                        let personality = self.personality.get().expect(
                            "create_landing_pad() should have set this");
                        let lp = build::Load(prev_bcx, personality);
                        base::call_lifetime_end(prev_bcx, personality);
                        build::Resume(prev_bcx, lp);
                        prev_llbb = prev_bcx.llbb;
                        break;
                    }

                    ReturnExit => {
                        prev_llbb = self.get_llreturn();
                        break;
                    }

                    LoopExit(id, _) => {
                        self.ccx.sess().bug(&format!(
                            "cannot exit from scope {}, \
                             not in scope", id));
                    }
                }
            }
            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            match self.top_scope(|s| s.cached_early_exit(label)) {
                Some(cleanup_block) => {
                    prev_llbb = cleanup_block;
                    break;
                }
                None => { }
            }

            // Pop off the scope, since we will be generating
            // unwinding code for it. If we are searching for a loop exit,
            // and this scope is that loop, then stop popping and set
            // `prev_llbb` to the appropriate exit block from the loop.
            popped_scopes.push(self.pop_scope());
            let scope = popped_scopes.last().unwrap();
            match label {
                UnwindExit | ReturnExit => { }
                LoopExit(id, exit) => {
                    match scope.kind.early_exit_block(id, exit) {
                        Some(exitllbb) => {
                            prev_llbb = exitllbb;
                            break;
                        }

                        None => { }
                    }
                }
            }
        }

        debug!("trans_cleanups_to_exit_scope: popped {} scopes",
               popped_scopes.len());
        // Now push the popped scopes back on. As we go,
        // we track in `prev_llbb` the exit to which this scope
        // should branch when it's done.
        //
        // So, continuing with our example, we will start out with
        // `prev_llbb` being set to `break_blk` (or possibly a cached
        // early exit). We will then pop the scopes from `popped_scopes`
        // and generate a basic block for each one, prepending it in the
        // series and updating `prev_llbb`. So we begin by popping `Custom 2`
        // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
        // branch to `prev_llbb == break_blk`, giving us a sequence like:
        //
        //     Cleanup(Custom 2) -> prev_llbb
        //
        // We then pop `AST 24` and repeat the process, giving us the sequence:
        //
        //     Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
        //
        // At this point, `popped_scopes` is empty, and so the final block
        // that we return to the user is `Cleanup(AST 24)`.
        while let Some(mut scope) = popped_scopes.pop() {
            if !scope.cleanups.is_empty() {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);
                let bcx_in = self.new_block(label.is_unwind(),
                                            &name[..],
                                            None);
                let mut bcx_out = bcx_in;
                for cleanup in scope.cleanups.iter().rev() {
                    bcx_out = cleanup.trans(bcx_out,
                                            scope.debug_loc);
                }
                build::Br(bcx_out, prev_llbb, DebugLoc::None);
                prev_llbb = bcx_in.llbb;

                scope.add_cached_early_exit(label, prev_llbb);
            }
            self.push_scope(scope);
        }

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);

        assert_eq!(self.scopes_len(), orig_scopes_len);
        prev_llbb
    }
    /// Creates a landing pad for the top scope, if one does not exist. The landing pad will
    /// perform all cleanups necessary for an unwind and then `resume` to continue error
    /// propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by `trans_cleanups_to_exit_scope()`, not
    /// in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        self.inject_unwind_resume_hook();

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => { return llbb; }
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(true, &name[..], None);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        }

        // The landing pad return type (the type being propagated). Not sure what
        // this represents but it's determined by the personality function and
        // this is what the EH proposal example uses.
        let llretty = Type::struct_(self.ccx,
                                    &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                    false);

        let llpersonality = pad_bcx.fcx.eh_personality();

        // The only landing pad clause will be 'cleanup'
        let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

        // The landing pad block is a cleanup
        build::SetCleanup(pad_bcx, llretval);

        // We store the retval in a function-central alloca, so that calls to
        // Resume can find it.
        match self.personality.get() {
            Some(addr) => {
                build::Store(pad_bcx, llretval, addr);
            }
            None => {
                let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
                base::call_lifetime_start(pad_bcx, addr);
                self.personality.set(Some(addr));
                build::Store(pad_bcx, llretval, addr);
            }
        }

        // Generate the cleanup block and branch to it.
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(UnwindExit);
        build::Br(pad_bcx, cleanup_llbb, DebugLoc::None);

        pad_bcx.llbb
    }
}
impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<BasicBlockRef> {
        self.cached_early_exits.iter().
            find(|e| e.label == label).
            map(|e| e.cleanup_block)
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk });
    }

    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
            self.cleanups.iter().any(|c| c.must_unwind())
    }
    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }
    /// Manipulate cleanup scope for call arguments. Conceptually, each
    /// argument to a call is an lvalue, and performing the call moves each
    /// of the arguments into a new rvalue (which gets cleaned up by the
    /// callee). As an optimization, instead of actually performing all of
    /// those moves, trans just manipulates the cleanup scope to obtain the
    /// same effect.
    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
        self.clear_cached_exits();
    }
}
impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early exit block `exit`, else `None`
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: usize) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}
impl EarlyExitLabel {
    fn is_unwind(&self) -> bool {
        match *self {
            UnwindExit => true,
            _ => false
        }
    }
}
///////////////////////////////////////////////////////////////////////////
// Cleanup types

#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    fill_on_drop: bool,
    skip_dtor: bool,
    drop_hint: Option<DropHintValue>,
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor, self.drop_hint)
        };
        if self.fill_on_drop {
            base::drop_done_fill_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}
#[derive(Copy, Clone, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy, Clone)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}
impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}
#[derive(Copy, Clone)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}
impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool {
        false
    }

    fn is_lifetime_end(&self) -> bool {
        true
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}
pub fn temporary_scope(tcx: &ty::ctxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id(&tcx.region_maps));
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            tcx.sess.bug(&format!("no temporary scope available for expr {}",
                                  id))
        }
    }
}
pub fn var_scope(tcx: &ty::ctxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id(&tcx.region_maps));
    debug!("var_scope({}) = {:?}", id, r);
    r
}
///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.

pub trait CleanupMethods<'blk, 'tcx> {
    fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]);
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex;
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx>;
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId);
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex);
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx>;
    fn top_loop_scope(&self) -> ast::NodeId;
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef;
    fn return_exit_block(&'blk self) -> BasicBlockRef;
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef);
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>);
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>);
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>);
    fn needs_invoke(&self) -> bool;
    fn get_landing_pad(&'blk self) -> BasicBlockRef;
}
trait CleanupHelperMethods<'blk, 'tcx> {
    fn top_ast_scope(&self) -> Option<ast::NodeId>;
    fn top_nonempty_cleanup_scope(&self) -> Option<usize>;
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn trans_scope_cleanups(&self,
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef;
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
    fn scopes_len(&self) -> usize;
    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
}