// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! ## The Cleanup module
//!
//! The cleanup module tracks what values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST tree. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//!
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more details.
//!
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//!   AST scope. We push the AST scope when we start generating code for an
//!   AST node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//!   never scheduled into loop scopes; instead, they are used to record the
//!   basic blocks that we should branch to when a `continue` or `break`
//!   statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
//!   of intermediate values.
//!
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box in the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
//!
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
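//!
//! As a sketch of the chains of blocks produced (this matches the worked
//! example on `trans_cleanups_to_exit_scope` below; cleanups run from the
//! innermost scope outward):
//!
//! ```text
//! // stack: A ... X (loop) ... Y Z; a `break` targeting loop X:
//! Cleanup(Z) -> Cleanup(Y) -> ... -> break_blk(X)
//!
//! // a panic at the same point unwinds every scope instead:
//! Cleanup(Z) -> Cleanup(Y) -> ... -> Cleanup(A) -> resume-unwind
//! ```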
//!
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
//!
//! ### AST and loop cleanup scopes
//!
//! AST cleanup scopes are pushed when we begin and end processing an AST
//! node. They are used to house cleanups related to rvalue temporaries
//! that get referenced (e.g., due to an expression like `&Foo()`). Whenever
//! an AST scope is popped, we always trans all the cleanups, adding the
//! cleanup code after the postdominator of the AST node.
//!
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups, it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
//!
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common though is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//!
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the custom
//! cleanup scope C is still on the stack and hence the box will be freed.
//! The trans code for `expr` itself is responsible for freeing any other
//! byproducts that may be in play.
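//!
//! In terms of the `CleanupMethods` trait defined at the bottom of this
//! file, that protocol looks roughly like the sketch below. The
//! `alloc_box` and `trans_into_box` helpers are hypothetical stand-ins
//! for the real allocation and translation code, not items from this
//! module:
//!
//! ```ignore
//! let custom = fcx.push_custom_cleanup_scope();              // step 1
//! let boxed = alloc_box(bcx, content_ty);                    // step 2
//! fcx.schedule_free_value(CustomScope(custom),               // step 3
//!                         boxed, HeapExchange, content_ty);
//! let bcx = trans_into_box(bcx, expr, boxed);                // step 4
//! fcx.pop_custom_cleanup_scope(custom);                      // step 5
//! // ... return `boxed` as an rvalue ...                     // step 6
//! ```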

pub use self::ScopeId::*;
pub use self::CleanupScopeKind::*;
pub use self::EarlyExitLabel::*;
pub use self::Heap::*;

use llvm::{BasicBlockRef, ValueRef};
use base;
use build;
use common;
use common::{Block, FunctionContext, NodeIdAndSpan, LandingPad};
use datum::{Datum, Lvalue};
use debuginfo::{DebugLoc, ToDebugLoc};
use glue;
use middle::region;
use type_::Type;
use value::Value;
use rustc::ty::{Ty, TyCtxt};

use std::fmt;
use syntax::ast;

pub struct CleanupScope<'blk, 'tcx: 'blk> {
    // The kind of this cleanup scope. A custom scope is a *temporary
    // scope* that is pushed during trans to clean up miscellaneous
    // garbage that trans may generate whose lifetime is a subset of
    // some expression. See the module doc for more details.
    kind: CleanupScopeKind<'blk, 'tcx>,

    // Cleanups to run upon scope exit.
    cleanups: Vec<CleanupObj<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // associated with.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}

#[derive(Copy, Clone, Debug)]
pub struct CustomScopeIndex {
    index: usize
}

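// Indices into a loop scope's `exits` array (see `LoopScopeKind` below):
// `EXIT_BREAK` is the block targeted by `break`, `EXIT_LOOP` the block
// targeted by `continue`, and `EXIT_MAX` is the array's length.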
pub const EXIT_BREAK: usize = 0;
pub const EXIT_LOOP: usize = 1;
pub const EXIT_MAX: usize = 2;

pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
    CustomScopeKind,
    AstScopeKind(ast::NodeId),
    LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
}

impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            CustomScopeKind => write!(f, "CustomScopeKind"),
            AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
            LoopScopeKind(nid, ref blks) => {
                write!(f, "LoopScopeKind({}, [", nid)?;
                for blk in blks {
                    write!(f, "{:p}, ", blk)?;
                }
                write!(f, "])")
            }
        }
    }
}

#[derive(Copy, Clone, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit(UnwindKind),
    ReturnExit,
    LoopExit(ast::NodeId, usize)
}

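// How unwinding is implemented for the current target: `LandingPad` for
// GNU-style (DWARF) exception handling, `CleanupPad` for MSVC SEH, where
// the payload is the value of the `cleanuppad` instruction. See
// `get_or_create_landing_pad` below.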
#[derive(Copy, Clone, Debug)]
pub enum UnwindKind {
    LandingPad,
    CleanupPad(ValueRef),
}

#[derive(Copy, Clone)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
    last_cleanup: usize,
}

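// A scheduled cleanup. `must_unwind` says whether the cleanup must also run
// on panic (and hence forces a landing pad), `is_lifetime_end` marks pure
// `llvm.lifetime.end` markers (see `drop_non_lifetime_clean` below), and
// `trans` emits the cleanup code into `bcx`.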
pub trait Cleanup<'tcx> {
    fn must_unwind(&self) -> bool;
    fn is_lifetime_end(&self) -> bool;
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx>;
}

pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;

#[derive(Copy, Clone, Debug)]
pub enum ScopeId {
    AstScope(ast::NodeId),
    CustomScope(CustomScopeIndex)
}

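// A `DropHint` pairs a `NodeId` with a drop flag (`K` is an lvalue datum or
// a raw `ValueRef`) that records at runtime whether the associated value
// still needs to be dropped; the hint is consulted by `glue::drop_ty_core`.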
#[derive(Copy, Clone, Debug)]
pub struct DropHint<K>(pub ast::NodeId, pub K);

pub type DropHintDatum<'tcx> = DropHint<Datum<'tcx, Lvalue>>;
pub type DropHintValue = DropHint<ValueRef>;

impl<K> DropHint<K> {
    pub fn new(id: ast::NodeId, k: K) -> DropHint<K> { DropHint(id, k) }
}

impl DropHint<ValueRef> {
    pub fn value(&self) -> ValueRef { self.1 }
}

pub trait DropHintMethods {
    type ValueKind;
    fn to_value(&self) -> Self::ValueKind;
}
impl<'tcx> DropHintMethods for DropHintDatum<'tcx> {
    type ValueKind = DropHintValue;
    fn to_value(&self) -> DropHintValue { DropHint(self.0, self.1.val) }
}

impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Invoked when we start to trans the code contained within a new cleanup scope.
    fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
        debug!("push_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(debug_loc.id));

        // FIXME(#2202) -- currently closure bodies have a parent
        // region, which messes up the assertion below, since there
        // are no cleanup scopes on the stack at the start of
        // trans'ing a closure body. I think though that this should
        // eventually be fixed by closure bodies not having a parent
        // region, though that's a touch unclear, and it might also be
        // better just to narrow this assertion more (i.e., by
        // excluding id's that correspond to closure bodies only). For
        // now we just say that if there is already an AST scope on the stack,
        // this new AST scope had better be its immediate child.
        let top_scope = self.top_ast_scope();
        let region_maps = &self.ccx.tcx().region_maps;
        if top_scope.is_some() {
            assert!((region_maps
                     .opt_encl_scope(region_maps.node_extent(debug_loc.id))
                     .map(|s| s.node_id(region_maps)) == top_scope)
                    ||
                    (region_maps
                     .opt_encl_scope(region_maps.lookup_code_extent(
                         region::CodeExtentData::DestructionScope(debug_loc.id)))
                     .map(|s| s.node_id(region_maps)) == top_scope));
        }

        self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
                                          debug_loc.debug_loc()));
    }

    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
        debug!("push_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(id));
        assert_eq!(Some(id), self.top_ast_scope());

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .unwrap()
                            .debug_loc;

        self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
    }

    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
        CustomScopeIndex { index: index }
    }

    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        self.push_scope(CleanupScope::new(CustomScopeKind,
                                          debug_loc.debug_loc()));
        CustomScopeIndex { index: index }
    }

    /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
    /// stack, and generates the code to do its cleanups for normal exit.
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
    /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
    /// branching to a block generated by `normal_exit_block`.
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId) {
        debug!("pop_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));

        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack without executing its cleanups. The top
    /// cleanup scope must be the temporary scope `custom_scope`.
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
    /// generates the code to do its cleanups for normal exit.
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Returns the id of the top-most loop scope
    fn top_loop_scope(&self) -> ast::NodeId {
        for scope in self.scopes.borrow().iter().rev() {
            if let LoopScopeKind(id, _) = scope.kind {
                return id;
            }
        }
        bug!("no loop scope found");
    }

    /// Returns a block to branch to which will perform all pending cleanups and
    /// then break/continue (depending on `exit`) out of the loop with id
    /// `cleanup_scope`
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
    }

    /// Returns a block to branch to which will perform all pending cleanups and
    /// then return from this function
    fn return_exit_block(&'blk self) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(ReturnExit)
    }

    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef) {
        let drop = box LifetimeEnd {
            ptr: val,
        };

        debug!("schedule_lifetime_end({:?}, val={:?})",
               cleanup_scope, Value(val));

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop of `val`, which is a pointer to an instance of
    /// `ty`
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }
        let drop_hint = drop_hint.map(|hint| hint.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_mem({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop and filling of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>) {
        if !self.type_needs_drop(ty) { return; }

        let drop_hint = drop_hint.map(|datum| datum.to_value());
        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: true,
            skip_dtor: false,
            drop_hint: drop_hint,
        };

        debug!("schedule_drop_and_fill_mem({:?}, val={:?}, ty={:?}, \
                fill_on_drop={}, skip_dtor={}, has_drop_hint={})",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor,
               drop_hint.is_some());

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Issue #23611: Schedules a (deep) drop of the contents of
    /// `val`, which is a pointer to an instance of struct/enum type
    /// `ty`. The scheduled code handles extracting the discriminant
    /// and dropping the contents associated with that variant
    /// *without* executing any associated drop implementation.
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>) {
        // The `if` below could be `!contents_needs_drop`; skipping the
        // drop is just an optimization, so it is sound to be conservative.
        if !self.type_needs_drop(ty) { return; }

        let drop = box DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: true,
            drop_hint: None,
        };

        debug!("schedule_drop_adt_contents({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop of `val`, which is an instance of `ty`
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>) {

        if !self.type_needs_drop(ty) { return; }
        let drop = Box::new(DropValue {
            is_immediate: true,
            val: val,
            ty: ty,
            fill_on_drop: false,
            skip_dtor: false,
            drop_hint: None,
        });

        debug!("schedule_drop_immediate({:?}, val={:?}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.fill_on_drop,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a call to `free(val)`. Note that this is a shallow operation.
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>) {
        let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };

        debug!("schedule_free_value({:?}, val={:?}, heap={:?})",
               cleanup_scope, Value(val), heap);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>) {
        match cleanup_scope {
            AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
            CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
        }
    }

    /// Schedules a cleanup to occur upon exit from the AST scope with id
    /// `cleanup_scope`, which must be somewhere on the cleanup stack.
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
               cleanup_scope);

        for scope in self.scopes.borrow_mut().iter_mut().rev() {
            if scope.kind.is_ast_with_id(cleanup_scope) {
                scope.cleanups.push(cleanup);
                scope.cached_landing_pad = None;
                return;
            } else {
                // will be adding a cleanup to some enclosing scope
                scope.clear_cached_exits();
            }
        }

        bug!("no cleanup scope {} found",
             self.ccx.tcx().map.node_to_string(cleanup_scope));
    }

    /// Schedules a cleanup to occur in the custom scope `custom_scope`, which
    /// must be a valid temporary scope on the cleanup stack.
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_custom_scope(custom_scope={})",
               custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        scope.cached_landing_pad = None;
    }

    /// Returns true if there are pending cleanups that should execute on panic.
    fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }

    /// Returns a basic block to branch to in the event of a panic. This block
    /// will run the panic cleanups and eventually resume the exception that
    /// caused the landing pad to be run.
    fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        while let Some(scope) = popped_scopes.pop() {
            self.push_scope(scope);
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        return llbb;
    }
}

impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Returns the id of the current top-most AST scope, if any.
    fn top_ast_scope(&self) -> Option<ast::NodeId> {
        for scope in self.scopes.borrow().iter().rev() {
            match scope.kind {
                CustomScopeKind | LoopScopeKind(..) => {}
                AstScopeKind(i) => {
                    return Some(i);
                }
            }
        }
        None
    }

    fn top_nonempty_cleanup_scope(&self) -> Option<usize> {
        self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
    }

    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
        custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len() &&
        (*scopes)[custom_scope.index].kind.is_temp()
    }

    /// Generates the cleanups for `scope` into `bcx`
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {

        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }

    fn scopes_len(&self) -> usize {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }

    /// Used when the caller wishes to jump to an early exit, such as a return,
    /// break, continue, or unwind. This function will generate all cleanups
    /// between the top of the stack and the exit `label` and return a basic
    /// block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function
    /// would generate a series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target for
    /// breaks. The return value would be the first basic block in that sequence
    /// (`Cleanup(AST 24)`). The caller could then branch to `Cleanup(AST 24)`
    /// and it will perform all cleanups and finally branch to the `break_blk`.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());

        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();
        let mut skip = 0;

        // First we pop off all the cleanup scopes that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit(val) => {
                        // Generate a block that will resume unwinding to the
                        // calling function
                        let bcx = self.new_block("resume", None);
                        match val {
                            UnwindKind::LandingPad => {
                                let addr = self.landingpad_alloca.get()
                                               .unwrap();
                                let lp = build::Load(bcx, addr);
                                base::call_lifetime_end(bcx, addr);
                                base::trans_unwind_resume(bcx, lp);
                            }
                            UnwindKind::CleanupPad(_) => {
                                let pad = build::CleanupPad(bcx, None, &[]);
                                build::CleanupRet(bcx, pad, None);
                            }
                        }
                        prev_llbb = bcx.llbb;
                        break;
                    }

                    ReturnExit => {
                        prev_llbb = self.get_llreturn();
                        break
                    }

                    LoopExit(id, _) => {
                        bug!("cannot exit from scope {}, not in scope", id);
                    }
                }
            }

            // Pop off the scope, since we may be generating
            // unwinding code for it.
            let top_scope = self.pop_scope();
            let cached_exit = top_scope.cached_early_exit(label);
            popped_scopes.push(top_scope);

            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            if let Some((exit, last_cleanup)) = cached_exit {
                prev_llbb = exit;
                skip = last_cleanup;
                break;
            }

            // If we are searching for a loop exit,
            // and this scope is that loop, then stop popping and set
            // `prev_llbb` to the appropriate exit block from the loop.
            let scope = popped_scopes.last().unwrap();
            match label {
                UnwindExit(..) | ReturnExit => { }
                LoopExit(id, exit) => {
                    if let Some(exit) = scope.kind.early_exit_block(id, exit) {
                        prev_llbb = exit;
                        break
                    }
                }
            }
        }

        debug!("trans_cleanups_to_exit_scope: popped {} scopes",
               popped_scopes.len());

        // Now push the popped scopes back on. As we go,
        // we track in `prev_llbb` the exit to which this scope
        // should branch when it's done.
        //
        // So, continuing with our example, we will start out with
        // `prev_llbb` being set to `break_blk` (or possibly a cached
        // early exit). We will then pop the scopes from `popped_scopes`
        // and generate a basic block for each one, prepending it in the
        // series and updating `prev_llbb`. So we begin by popping `Custom 2`
        // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
        // branch to `prev_llbb == break_blk`, giving us a sequence like:
        //
        //     Cleanup(Custom 2) -> prev_llbb
        //
        // We then pop `AST 24` and repeat the process, giving us the sequence:
        //
        //     Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
        //
        // At this point, `popped_scopes` is empty, and so the final block
        // that we return to the user is `Cleanup(AST 24)`.
        while let Some(mut scope) = popped_scopes.pop() {
            if !scope.cleanups.is_empty() {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);

                let bcx_in = self.new_block(&name[..], None);
                let exit_label = label.start(bcx_in);
                let mut bcx_out = bcx_in;
                let len = scope.cleanups.len();
                for cleanup in scope.cleanups.iter().rev().take(len - skip) {
                    bcx_out = cleanup.trans(bcx_out, scope.debug_loc);
                }
                skip = 0;
                exit_label.branch(bcx_out, prev_llbb);
                prev_llbb = bcx_in.llbb;

                scope.add_cached_early_exit(exit_label, prev_llbb, len);
            }
            self.push_scope(scope);
        }

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);

        assert_eq!(self.scopes_len(), orig_scopes_len);
        prev_llbb
    }

    /// Creates a landing pad for the top scope, if one does not exist. The
    /// landing pad will perform all cleanups necessary for an unwind and then
    /// `resume` to continue error propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by
    /// `trans_cleanups_to_exit_scope()`, not in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => return llbb,
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(&name[..], None);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        };

        let llpersonality = pad_bcx.fcx.eh_personality();

        let val = if base::wants_msvc_seh(self.ccx.sess()) {
            // A cleanup pad requires a personality function to be specified, so
            // we do that here explicitly (happens implicitly below through
            // creation of the landingpad instruction). We then create a
            // cleanuppad instruction which has no filters to run cleanup on all
            // exceptions.
            build::SetPersonalityFn(pad_bcx, llpersonality);
            let llretval = build::CleanupPad(pad_bcx, None, &[]);
            UnwindKind::CleanupPad(llretval)
        } else {
            // The landing pad return type (the type being propagated). Not sure
            // what this represents but it's determined by the personality
            // function and this is what the EH proposal example uses.
            let llretty = Type::struct_(self.ccx,
                                        &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                        false);

            // The only landing pad clause will be 'cleanup'
            let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

            // The landing pad block is a cleanup
            build::SetCleanup(pad_bcx, llretval);

            let addr = match self.landingpad_alloca.get() {
                Some(addr) => addr,
                None => {
                    let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
                    base::call_lifetime_start(pad_bcx, addr);
                    self.landingpad_alloca.set(Some(addr));
                    addr
                }
            };
            build::Store(pad_bcx, llretval, addr);
            UnwindKind::LandingPad
        };

        // Generate the cleanup block and branch to it.
        let label = UnwindExit(val);
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(label);
        label.branch(pad_bcx, cleanup_llbb);

        return pad_bcx.llbb;
    }
}

impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<(BasicBlockRef, usize)> {
        self.cached_early_exits.iter().rev()
            .find(|e| e.label == label)
            .map(|e| (e.cleanup_block, e.last_cleanup))
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef,
                             last_cleanup: usize) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk,
                              last_cleanup: last_cleanup });
    }

    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
        self.cleanups.iter().any(|c| c.must_unwind())
    }

    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }

    /// Manipulate cleanup scope for call arguments. Conceptually, each
    /// argument to a call is an lvalue, and performing the call moves each
    /// of the arguments into a new rvalue (which gets cleaned up by the
    /// callee). As an optimization, instead of actually performing all of
    /// those moves, trans just manipulates the cleanup scope to obtain the
    /// same effect.
    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
        self.clear_cached_exits();
    }
}

impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early exit block for
    /// exit index `exit`; else `None`.
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: usize) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}

impl EarlyExitLabel {
    /// Generates a branch going from `from_bcx` to `to_llbb` where `self` is
    /// the exit label attached to the start of `from_bcx`.
    ///
    /// Transitions from an exit label to other exit labels depend on the type
    /// of label. For example with MSVC exceptions unwind exit labels will use
    /// the `cleanupret` instruction instead of the `br` instruction.
    fn branch(&self, from_bcx: Block, to_llbb: BasicBlockRef) {
        if let UnwindExit(UnwindKind::CleanupPad(pad)) = *self {
            build::CleanupRet(from_bcx, pad, Some(to_llbb));
        } else {
            build::Br(from_bcx, to_llbb, DebugLoc::None);
        }
    }

    /// Generates the necessary instructions at the start of `bcx` to prepare
    /// for the same kind of early exit label that `self` is.
    ///
    /// This function will appropriately configure `bcx` based on the kind of
    /// label this is. For UnwindExit labels, the `lpad` field of the block will
    /// be set to `Some`, and for MSVC exceptions this function will generate a
    /// `cleanuppad` instruction at the start of the block so it may be jumped
    /// to in the future (e.g. so this block can be cached as an early exit).
    ///
    /// Returns a new label which can be used to cache `bcx` in the list of
    /// early exits.
    fn start(&self, bcx: Block) -> EarlyExitLabel {
        match *self {
            UnwindExit(UnwindKind::CleanupPad(..)) => {
                let pad = build::CleanupPad(bcx, None, &[]);
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::msvc(pad))));
                UnwindExit(UnwindKind::CleanupPad(pad))
            }
            UnwindExit(UnwindKind::LandingPad) => {
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::gnu())));
                *self
            }
            label => label,
        }
    }
}

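// Equality deliberately ignores the `ValueRef` payload of `CleanupPad`, so
// cached early exits are matched on the *kind* of unwinding in use rather
// than on a particular `cleanuppad` instruction.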
impl PartialEq for UnwindKind {
    fn eq(&self, val: &UnwindKind) -> bool {
        match (*self, *val) {
            (UnwindKind::LandingPad, UnwindKind::LandingPad) |
            (UnwindKind::CleanupPad(..), UnwindKind::CleanupPad(..)) => true,
            _ => false,
        }
    }
}

///////////////////////////////////////////////////////////////////////////
// Cleanup types

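// A scheduled (deep) drop of `val`, an instance of `ty`. `is_immediate`
// selects `drop_ty_immediate` over `drop_ty_core`; `fill_on_drop` fills the
// memory with the "dropped" pattern after the drop; `skip_dtor` drops only
// the contents, bypassing any `Drop` impl (see `schedule_drop_adt_contents`);
// and `drop_hint` optionally gates the drop on a runtime drop flag.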
#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    fill_on_drop: bool,
    skip_dtor: bool,
    drop_hint: Option<DropHintValue>,
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor, self.drop_hint)
        };
        if self.fill_on_drop {
            base::drop_done_fill_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}

#[derive(Copy, Clone, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy, Clone)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}

impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}

#[derive(Copy, Clone)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}

impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool {
        false
    }

    fn is_lifetime_end(&self) -> bool {
        true
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}

pub fn temporary_scope(tcx: &TyCtxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id(&tcx.region_maps));
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            bug!("no temporary scope available for expr {}", id)
        }
    }
}

pub fn var_scope(tcx: &TyCtxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id(&tcx.region_maps));
    debug!("var_scope({}) = {:?}", id, r);
    r
}

///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.

pub trait CleanupMethods<'blk, 'tcx> {
    fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]);
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex;
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx>;
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId);
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex);
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx>;
    fn top_loop_scope(&self) -> ast::NodeId;
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef;
    fn return_exit_block(&'blk self) -> BasicBlockRef;
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef);
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>);
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>);
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>);
    fn needs_invoke(&self) -> bool;
    fn get_landing_pad(&'blk self) -> BasicBlockRef;
}
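
// A typical caller drives these methods in push/pop pairs around the
// translation of an AST node. A rough sketch (`trans_block`, `bcx`, `dest`,
// and `body_debug_loc` are illustrative stand-ins, not items from this
// module):
//
//     fcx.push_ast_cleanup_scope(body_debug_loc);
//     let bcx = trans_block(bcx, &body, dest);
//     let bcx = fcx.pop_and_trans_ast_cleanup_scope(bcx, body.id);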

trait CleanupHelperMethods<'blk, 'tcx> {
    fn top_ast_scope(&self) -> Option<ast::NodeId>;
    fn top_nonempty_cleanup_scope(&self) -> Option<usize>;
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn trans_scope_cleanups(&self,
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef;
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
    fn scopes_len(&self) -> usize;
    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
}