1 // Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
2 // file at the top-level directory of this distribution and at
3 // http://rust-lang.org/COPYRIGHT.
4 //
5 // Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
6 // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
7 // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
8 // option. This file may not be copied, modified, or distributed
9 // except according to those terms.
10
11 //! ## The Cleanup module
12 //!
13 //! The cleanup module tracks what values need to be cleaned up as scopes
14 //! are exited, either via panic or just normal control flow. The basic
15 //! idea is that the function context maintains a stack of cleanup scopes
16 //! that are pushed/popped as we traverse the AST. There is typically
17 //! at least one cleanup scope per AST node; some AST nodes may introduce
18 //! additional temporary scopes.
19 //!
20 //! Cleanup items can be scheduled into any of the scopes on the stack.
21 //! Typically, when a scope is popped, we will also generate the code for
22 //! each of its cleanups at that time. This corresponds to a normal exit
23 //! from a block (for example, an expression completing evaluation
24 //! successfully without panic). However, it is also possible to pop a
25 //! block *without* executing its cleanups; this is typically used to
26 //! guard intermediate values that must be cleaned up on panic, but not
27 //! if everything goes right. See the section on custom scopes below for
28 //! more details.
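//!
//! In terms of this module's API, the two flavors of popping look like
//! this (an illustrative sketch using methods defined later in this
//! file; it is not a standalone-compilable example):
//!
//! ```ignore
//! // Normal exit: generate the scope's cleanup code into `bcx` now.
//! bcx = fcx.pop_and_trans_custom_cleanup_scope(bcx, custom_scope);
//!
//! // Guarded region succeeded: discard the cleanups without running
//! // them; only unwind paths generated earlier still include them.
//! fcx.pop_custom_cleanup_scope(custom_scope);
//! ```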
29 //!
30 //! Cleanup scopes come in three kinds:
31 //!
32 //! - **AST scopes:** each AST node in a function body has a corresponding
33 //! AST scope. We push the AST scope when we start generating code for an AST
34 //! node and pop it once the AST node has been fully generated.
35 //! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
36 //! never scheduled into loop scopes; instead, they are used to record the
37 //! basic blocks that we should branch to when a `continue` or `break` statement
38 //! is encountered.
39 //! - **Custom scopes:** custom scopes are typically used to ensure cleanup
40 //! of intermediate values.
41 //!
42 //! ### When to schedule cleanup
43 //!
44 //! Although the cleanup system is intended to *feel* fairly declarative,
45 //! it's still important to time calls to `schedule_clean()` correctly.
46 //! Basically, you should not schedule cleanup for memory until it has
47 //! been initialized, because if an unwind should occur before the memory
48 //! is fully initialized, then the cleanup will run and try to free or
49 //! drop uninitialized memory. If the initialization itself produces
50 //! byproducts that need to be freed, then you should use temporary custom
51 //! scopes to ensure that those byproducts will get freed on unwind. For
52 //! example, an expression like `box foo()` will first allocate a box in the
53 //! heap and then call `foo()` -- if `foo()` should panic, this box needs
54 //! to be *shallowly* freed.
55 //!
56 //! ### Long-distance jumps
57 //!
58 //! In addition to popping a scope, which corresponds to normal control
59 //! flow exiting the scope, we may also *jump out* of a scope into some
60 //! earlier scope on the stack. This can occur in response to a `return`,
61 //! `break`, or `continue` statement, but also in response to panic. In
62 //! any of these cases, we will generate a series of cleanup blocks for
63 //! each of the scopes that is exited. So, if the stack contains scopes A
64 //! ... Z, and we break out of a loop whose corresponding cleanup scope is
65 //! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
66 //! After cleanup is done we would branch to the exit point for scope X.
67 //! But if panic should occur, we would generate cleanups for all the
68 //! scopes from A to Z and then resume the unwind process afterwards.
69 //!
70 //! To avoid generating tons of code, we cache the cleanup blocks that we
71 //! create for breaks, returns, unwinds, and other jumps. Whenever a new
72 //! cleanup is scheduled, though, we must clear these cached blocks. A
73 //! possible improvement would be to keep the cached blocks but simply
74 //! generate a new block which performs the additional cleanup and then
75 //! branches to the existing cached blocks.
76 //!
77 //! ### AST and loop cleanup scopes
78 //!
79 //! AST cleanup scopes are pushed when we begin and end processing an AST
80 //! node. They are used to house cleanups related to rvalue temporaries that
81 //! get referenced (e.g., due to an expression like `&Foo()`). Whenever an
82 //! AST scope is popped, we always trans all the cleanups, adding the cleanup
83 //! code after the postdominator of the AST node.
84 //!
85 //! AST nodes that represent breakable loops also push a loop scope; the
86 //! loop scope never has any actual cleanups; it's just used to point to
87 //! the basic blocks where control should flow after a "continue" or
88 //! "break" statement. Popping a loop scope never generates code.
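//!
//! For example, trans code for a `break` obtains its target via
//! `normal_exit_block` (an illustrative sketch using methods and
//! constants defined in this file; `fcx`, `bcx`, and `loop_id` stand
//! for values available in the surrounding trans code):
//!
//! ```ignore
//! // All pending cleanups between here and the loop scope run first,
//! // then control flows to the loop's registered break block.
//! let brk_llbb = fcx.normal_exit_block(loop_id, EXIT_BREAK);
//! build::Br(bcx, brk_llbb, DebugLoc::None);
//! ```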
89 //!
90 //! ### Custom cleanup scopes
91 //!
92 //! Custom cleanup scopes are used for a variety of purposes. The most
93 //! common though is to handle temporary byproducts, where cleanup only
94 //! needs to occur on panic. The general strategy is to push a custom
95 //! cleanup scope, schedule *shallow* cleanups into the custom scope, and
96 //! then pop the custom scope (without transing the cleanups) when
97 //! execution succeeds normally. This way the cleanups are only trans'd on
98 //! unwind, and only up until the point where execution succeeded, at
99 //! which time the complete value should be stored in an lvalue or some
100 //! other place where normal cleanup applies.
101 //!
102 //! To spell it out, here is an example. Imagine an expression `box expr`.
103 //! We would basically:
104 //!
105 //! 1. Push a custom cleanup scope C.
106 //! 2. Allocate the box.
107 //! 3. Schedule a shallow free in the scope C.
108 //! 4. Trans `expr` into the box.
109 //! 5. Pop the scope C.
110 //! 6. Return the box as an rvalue.
111 //!
112 //! This way, if a panic occurs while transing `expr`, the custom
113 //! cleanup scope C is still on the stack and hence the box will be freed. The trans
114 //! code for `expr` itself is responsible for freeing any other byproducts
115 //! that may be in play.
116
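The push/schedule/pop discipline described above can be modeled in a few lines of ordinary (modern) Rust, independent of trans. Each scope is just a list of cleanup actions that either run in reverse order on a normal pop-and-trans, or are discarded on a plain pop. This is a toy model for illustration only; it is not part of this module, and the function names are invented for the sketch:

```rust
// Toy model of the cleanup stack. A "scope" is a list of named cleanup
// actions; `pop_and_run` executes them most-recently-scheduled first
// (mirroring trans_scope_cleanups iterating `scope.cleanups` in
// reverse), while `pop_discard` drops them without running anything.
fn pop_and_run(scope: Vec<&'static str>, log: &mut Vec<&'static str>) {
    for cleanup in scope.iter().rev() {
        log.push(cleanup);
    }
}

fn pop_discard(scope: Vec<&'static str>) {
    drop(scope); // cleanups are simply thrown away
}

fn main() {
    let mut log = Vec::new();

    // `box expr` on the success path: schedule a shallow free, trans
    // `expr`, then pop the custom scope without running the free.
    let custom_scope = vec!["shallow free of box"];
    pop_discard(custom_scope);
    assert!(log.is_empty());

    // Normal exit from an AST scope: cleanups run in LIFO order.
    let ast_scope = vec!["drop temp a", "drop temp b"];
    pop_and_run(ast_scope, &mut log);
    assert_eq!(log, ["drop temp b", "drop temp a"]);

    println!("{:?}", log);
}
```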
117 pub use self::ScopeId::*;
118 pub use self::CleanupScopeKind::*;
119 pub use self::EarlyExitLabel::*;
120 pub use self::Heap::*;
121
122 use llvm::{BasicBlockRef, ValueRef};
123 use trans::base;
124 use trans::build;
125 use trans::common;
126 use trans::common::{Block, FunctionContext, NodeIdAndSpan};
127 use trans::datum::{Datum, Lvalue};
128 use trans::debuginfo::{DebugLoc, ToDebugLoc};
129 use trans::glue;
130 use middle::region;
131 use trans::type_::Type;
132 use middle::ty::{self, Ty};
133 use std::fmt;
134 use syntax::ast;
135
136 pub struct CleanupScope<'blk, 'tcx: 'blk> {
137 // The kind of this cleanup scope. A `CustomScopeKind` scope is a
138 // *temporary scope* that is pushed during trans to clean up
139 // miscellaneous garbage that trans may generate whose lifetime is
140 // a subset of some expression. See the module docs for more
141 // details.
142 kind: CleanupScopeKind<'blk, 'tcx>,
143
144 // Cleanups to run upon scope exit.
145 cleanups: Vec<CleanupObj<'tcx>>,
146
147 // The debug location any drop calls generated for this scope will be
148 // associated with.
149 debug_loc: DebugLoc,
150
151 cached_early_exits: Vec<CachedEarlyExit>,
152 cached_landing_pad: Option<BasicBlockRef>,
153 }
154
155 #[derive(Copy, Clone, Debug)]
156 pub struct CustomScopeIndex {
157 index: usize
158 }
159
160 pub const EXIT_BREAK: usize = 0;
161 pub const EXIT_LOOP: usize = 1;
162 pub const EXIT_MAX: usize = 2;
163
164 pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
165 CustomScopeKind,
166 AstScopeKind(ast::NodeId),
167 LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
168 }
169
170 impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
171 fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
172 match *self {
173 CustomScopeKind => write!(f, "CustomScopeKind"),
174 AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
175 LoopScopeKind(nid, ref blks) => {
176 try!(write!(f, "LoopScopeKind({}, [", nid));
177 for blk in blks {
178 try!(write!(f, "{:p}, ", blk));
179 }
180 write!(f, "])")
181 }
182 }
183 }
184 }
185
186 #[derive(Copy, Clone, PartialEq, Debug)]
187 pub enum EarlyExitLabel {
188 UnwindExit,
189 ReturnExit,
190 LoopExit(ast::NodeId, usize)
191 }
192
193 #[derive(Copy, Clone)]
194 pub struct CachedEarlyExit {
195 label: EarlyExitLabel,
196 cleanup_block: BasicBlockRef,
197 }
198
199 pub trait Cleanup<'tcx> {
200 fn must_unwind(&self) -> bool;
201 fn is_lifetime_end(&self) -> bool;
202 fn trans<'blk>(&self,
203 bcx: Block<'blk, 'tcx>,
204 debug_loc: DebugLoc)
205 -> Block<'blk, 'tcx>;
206 }
207
208 pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;
209
210 #[derive(Copy, Clone, Debug)]
211 pub enum ScopeId {
212 AstScope(ast::NodeId),
213 CustomScope(CustomScopeIndex)
214 }
215
216 #[derive(Copy, Clone, Debug)]
217 pub struct DropHint<K>(pub ast::NodeId, pub K);
218
219 pub type DropHintDatum<'tcx> = DropHint<Datum<'tcx, Lvalue>>;
220 pub type DropHintValue = DropHint<ValueRef>;
221
222 impl<K> DropHint<K> {
223 pub fn new(id: ast::NodeId, k: K) -> DropHint<K> { DropHint(id, k) }
224 }
225
226 impl DropHint<ValueRef> {
227 pub fn value(&self) -> ValueRef { self.1 }
228 }
229
230 pub trait DropHintMethods {
231 type ValueKind;
232 fn to_value(&self) -> Self::ValueKind;
233 }
234 impl<'tcx> DropHintMethods for DropHintDatum<'tcx> {
235 type ValueKind = DropHintValue;
236 fn to_value(&self) -> DropHintValue { DropHint(self.0, self.1.val) }
237 }
238
239 impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
240 /// Invoked when we start to trans the code contained within a new cleanup scope.
241 fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
242 debug!("push_ast_cleanup_scope({})",
243 self.ccx.tcx().map.node_to_string(debug_loc.id));
244
245 // FIXME(#2202) -- currently closure bodies have a parent
246 // region, which messes up the assertion below, since there
247 // are no cleanup scopes on the stack at the start of
248 // trans'ing a closure body. I think though that this should
249 // eventually be fixed by closure bodies not having a parent
250 // region, though that's a touch unclear, and it might also be
251 // better just to narrow this assertion more (i.e., by
252 // excluding id's that correspond to closure bodies only). For
253 // now we just say that if there is already an AST scope on the stack,
254 // this new AST scope had better be its immediate child.
255 let top_scope = self.top_ast_scope();
256 let region_maps = &self.ccx.tcx().region_maps;
257 if top_scope.is_some() {
258 assert!((region_maps
259 .opt_encl_scope(region_maps.node_extent(debug_loc.id))
260 .map(|s|s.node_id(region_maps)) == top_scope)
261 ||
262 (region_maps
263 .opt_encl_scope(region_maps.lookup_code_extent(
264 region::CodeExtentData::DestructionScope(debug_loc.id)))
265 .map(|s|s.node_id(region_maps)) == top_scope));
266 }
267
268 self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
269 debug_loc.debug_loc()));
270 }
271
272 fn push_loop_cleanup_scope(&self,
273 id: ast::NodeId,
274 exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
275 debug!("push_loop_cleanup_scope({})",
276 self.ccx.tcx().map.node_to_string(id));
277 assert_eq!(Some(id), self.top_ast_scope());
278
279 // Just copy the debuginfo source location from the enclosing scope
280 let debug_loc = self.scopes
281 .borrow()
282 .last()
283 .unwrap()
284 .debug_loc;
285
286 self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
287 }
288
289 fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
290 let index = self.scopes_len();
291 debug!("push_custom_cleanup_scope(): {}", index);
292
293 // Just copy the debuginfo source location from the enclosing scope
294 let debug_loc = self.scopes
295 .borrow()
296 .last()
297 .map(|opt_scope| opt_scope.debug_loc)
298 .unwrap_or(DebugLoc::None);
299
300 self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
301 CustomScopeIndex { index: index }
302 }
303
304 fn push_custom_cleanup_scope_with_debug_loc(&self,
305 debug_loc: NodeIdAndSpan)
306 -> CustomScopeIndex {
307 let index = self.scopes_len();
308 debug!("push_custom_cleanup_scope(): {}", index);
309
310 self.push_scope(CleanupScope::new(CustomScopeKind,
311 debug_loc.debug_loc()));
312 CustomScopeIndex { index: index }
313 }
314
315 /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
316 /// stack, and generates the code to do its cleanups for normal exit.
317 fn pop_and_trans_ast_cleanup_scope(&self,
318 bcx: Block<'blk, 'tcx>,
319 cleanup_scope: ast::NodeId)
320 -> Block<'blk, 'tcx> {
321 debug!("pop_and_trans_ast_cleanup_scope({})",
322 self.ccx.tcx().map.node_to_string(cleanup_scope));
323
324 assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));
325
326 let scope = self.pop_scope();
327 self.trans_scope_cleanups(bcx, &scope)
328 }
329
330 /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
331 /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
332 /// branching to a block generated by `normal_exit_block`.
333 fn pop_loop_cleanup_scope(&self,
334 cleanup_scope: ast::NodeId) {
335 debug!("pop_loop_cleanup_scope({})",
336 self.ccx.tcx().map.node_to_string(cleanup_scope));
337
338 assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));
339
340 let _ = self.pop_scope();
341 }
342
343 /// Removes the top cleanup scope from the stack without executing its cleanups. The top
344 /// cleanup scope must be the temporary scope `custom_scope`.
345 fn pop_custom_cleanup_scope(&self,
346 custom_scope: CustomScopeIndex) {
347 debug!("pop_custom_cleanup_scope({})", custom_scope.index);
348 assert!(self.is_valid_to_pop_custom_scope(custom_scope));
349 let _ = self.pop_scope();
350 }
351
352 /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
353 /// generates the code to do its cleanups for normal exit.
354 fn pop_and_trans_custom_cleanup_scope(&self,
355 bcx: Block<'blk, 'tcx>,
356 custom_scope: CustomScopeIndex)
357 -> Block<'blk, 'tcx> {
358 debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
359 assert!(self.is_valid_to_pop_custom_scope(custom_scope));
360
361 let scope = self.pop_scope();
362 self.trans_scope_cleanups(bcx, &scope)
363 }
364
365 /// Returns the id of the top-most loop scope
366 fn top_loop_scope(&self) -> ast::NodeId {
367 for scope in self.scopes.borrow().iter().rev() {
368 if let LoopScopeKind(id, _) = scope.kind {
369 return id;
370 }
371 }
372 self.ccx.sess().bug("no loop scope found");
373 }
374
375 /// Returns a block to branch to which will perform all pending cleanups and then
376 /// break/continue (depending on `exit`) out of the loop with id `cleanup_scope`
377 fn normal_exit_block(&'blk self,
378 cleanup_scope: ast::NodeId,
379 exit: usize) -> BasicBlockRef {
380 self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
381 }
382
383 /// Returns a block to branch to which will perform all pending cleanups and then return from
384 /// this function
385 fn return_exit_block(&'blk self) -> BasicBlockRef {
386 self.trans_cleanups_to_exit_scope(ReturnExit)
387 }
388
389 fn schedule_lifetime_end(&self,
390 cleanup_scope: ScopeId,
391 val: ValueRef) {
392 let drop = box LifetimeEnd {
393 ptr: val,
394 };
395
396 debug!("schedule_lifetime_end({:?}, val={})",
397 cleanup_scope,
398 self.ccx.tn().val_to_string(val));
399
400 self.schedule_clean(cleanup_scope, drop as CleanupObj);
401 }
402
403 /// Schedules a (deep) drop of `val`, which is a pointer to an instance of `ty`
404 fn schedule_drop_mem(&self,
405 cleanup_scope: ScopeId,
406 val: ValueRef,
407 ty: Ty<'tcx>,
408 drop_hint: Option<DropHintDatum<'tcx>>) {
409 if !self.type_needs_drop(ty) { return; }
410 let drop_hint = drop_hint.map(|hint|hint.to_value());
411 let drop = box DropValue {
412 is_immediate: false,
413 val: val,
414 ty: ty,
415 fill_on_drop: false,
416 skip_dtor: false,
417 drop_hint: drop_hint,
418 };
419
420 debug!("schedule_drop_mem({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
421 cleanup_scope,
422 self.ccx.tn().val_to_string(val),
423 ty,
424 drop.fill_on_drop,
425 drop.skip_dtor);
426
427 self.schedule_clean(cleanup_scope, drop as CleanupObj);
428 }
429
430 /// Schedules a (deep) drop and filling of `val`, which is a pointer to an instance of `ty`
431 fn schedule_drop_and_fill_mem(&self,
432 cleanup_scope: ScopeId,
433 val: ValueRef,
434 ty: Ty<'tcx>,
435 drop_hint: Option<DropHintDatum<'tcx>>) {
436 if !self.type_needs_drop(ty) { return; }
437
438 let drop_hint = drop_hint.map(|datum|datum.to_value());
439 let drop = box DropValue {
440 is_immediate: false,
441 val: val,
442 ty: ty,
443 fill_on_drop: true,
444 skip_dtor: false,
445 drop_hint: drop_hint,
446 };
447
448 debug!("schedule_drop_and_fill_mem({:?}, val={}, ty={:?}, \
449 fill_on_drop={}, skip_dtor={}, has_drop_hint={})",
450 cleanup_scope,
451 self.ccx.tn().val_to_string(val),
452 ty,
453 drop.fill_on_drop,
454 drop.skip_dtor,
455 drop_hint.is_some());
456
457 self.schedule_clean(cleanup_scope, drop as CleanupObj);
458 }
459
460 /// Issue #23611: Schedules a (deep) drop of the contents of
461 /// `val`, which is a pointer to an instance of struct/enum type
462 /// `ty`. The scheduled code handles extracting the discriminant
463 /// and dropping the contents associated with that variant
464 /// *without* executing any associated drop implementation.
465 fn schedule_drop_adt_contents(&self,
466 cleanup_scope: ScopeId,
467 val: ValueRef,
468 ty: Ty<'tcx>) {
469 // The `if` below could test `!contents_needs_drop`; skipping the
470 // drop is just an optimization, so it is sound to be conservative.
471 if !self.type_needs_drop(ty) { return; }
472
473 let drop = box DropValue {
474 is_immediate: false,
475 val: val,
476 ty: ty,
477 fill_on_drop: false,
478 skip_dtor: true,
479 drop_hint: None,
480 };
481
482 debug!("schedule_drop_adt_contents({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
483 cleanup_scope,
484 self.ccx.tn().val_to_string(val),
485 ty,
486 drop.fill_on_drop,
487 drop.skip_dtor);
488
489 self.schedule_clean(cleanup_scope, drop as CleanupObj);
490 }
491
492 /// Schedules a (deep) drop of `val`, which is an instance of `ty`
493 fn schedule_drop_immediate(&self,
494 cleanup_scope: ScopeId,
495 val: ValueRef,
496 ty: Ty<'tcx>) {
497
498 if !self.type_needs_drop(ty) { return; }
499 let drop = Box::new(DropValue {
500 is_immediate: true,
501 val: val,
502 ty: ty,
503 fill_on_drop: false,
504 skip_dtor: false,
505 drop_hint: None,
506 });
507
508 debug!("schedule_drop_immediate({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
509 cleanup_scope,
510 self.ccx.tn().val_to_string(val),
511 ty,
512 drop.fill_on_drop,
513 drop.skip_dtor);
514
515 self.schedule_clean(cleanup_scope, drop as CleanupObj);
516 }
517
518 /// Schedules a call to `free(val)`. Note that this is a shallow operation.
519 fn schedule_free_value(&self,
520 cleanup_scope: ScopeId,
521 val: ValueRef,
522 heap: Heap,
523 content_ty: Ty<'tcx>) {
524 let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };
525
526 debug!("schedule_free_value({:?}, val={}, heap={:?})",
527 cleanup_scope,
528 self.ccx.tn().val_to_string(val),
529 heap);
530
531 self.schedule_clean(cleanup_scope, drop as CleanupObj);
532 }
533
534 fn schedule_clean(&self,
535 cleanup_scope: ScopeId,
536 cleanup: CleanupObj<'tcx>) {
537 match cleanup_scope {
538 AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
539 CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
540 }
541 }
542
543 /// Schedules a cleanup to occur upon exit from the AST scope with id `cleanup_scope`,
544 /// which must be somewhere on the cleanup stack. Cached exit blocks for this scope and
545 /// any scopes nested within it are cleared, since those exits must now include the cleanup.
546 fn schedule_clean_in_ast_scope(&self,
547 cleanup_scope: ast::NodeId,
548 cleanup: CleanupObj<'tcx>) {
549 debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
550 cleanup_scope);
551
552 for scope in self.scopes.borrow_mut().iter_mut().rev() {
553 if scope.kind.is_ast_with_id(cleanup_scope) {
554 scope.cleanups.push(cleanup);
555 scope.clear_cached_exits();
556 return;
557 } else {
558 // will be adding a cleanup to some enclosing scope
559 scope.clear_cached_exits();
560 }
561 }
562
563 self.ccx.sess().bug(
564 &format!("no cleanup scope {} found",
565 self.ccx.tcx().map.node_to_string(cleanup_scope)));
566 }
567
568 /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope.
569 fn schedule_clean_in_custom_scope(&self,
570 custom_scope: CustomScopeIndex,
571 cleanup: CleanupObj<'tcx>) {
572 debug!("schedule_clean_in_custom_scope(custom_scope={})",
573 custom_scope.index);
574
575 assert!(self.is_valid_custom_scope(custom_scope));
576
577 let mut scopes = self.scopes.borrow_mut();
578 let scope = &mut (*scopes)[custom_scope.index];
579 scope.cleanups.push(cleanup);
580 scope.clear_cached_exits();
581 }
582
583 /// Returns true if there are pending cleanups that should execute on panic.
584 fn needs_invoke(&self) -> bool {
585 self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
586 }
587
588 /// Returns a basic block to branch to in the event of a panic. This block will run the panic
589 /// cleanups and eventually invoke the LLVM `Resume` instruction.
590 fn get_landing_pad(&'blk self) -> BasicBlockRef {
591 let _icx = base::push_ctxt("get_landing_pad");
592
593 debug!("get_landing_pad");
594
595 let orig_scopes_len = self.scopes_len();
596 assert!(orig_scopes_len > 0);
597
598 // Remove any scopes that do not have cleanups on panic:
599 let mut popped_scopes = vec!();
600 while !self.top_scope(|s| s.needs_invoke()) {
601 debug!("top scope does not need invoke");
602 popped_scopes.push(self.pop_scope());
603 }
604
605 // Check for an existing landing pad in the new topmost scope:
606 let llbb = self.get_or_create_landing_pad();
607
608 // Push the scopes we removed back on:
609 while let Some(scope) = popped_scopes.pop() {
610 self.push_scope(scope);
611 }
615
616 assert_eq!(self.scopes_len(), orig_scopes_len);
617
618 return llbb;
619 }
620 }
621
622 impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
623 /// Returns the id of the current top-most AST scope, if any.
624 fn top_ast_scope(&self) -> Option<ast::NodeId> {
625 for scope in self.scopes.borrow().iter().rev() {
626 match scope.kind {
627 CustomScopeKind | LoopScopeKind(..) => {}
628 AstScopeKind(i) => {
629 return Some(i);
630 }
631 }
632 }
633 None
634 }
635
636 fn top_nonempty_cleanup_scope(&self) -> Option<usize> {
637 self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
638 }
639
640 fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
641 self.is_valid_custom_scope(custom_scope) &&
642 custom_scope.index == self.scopes.borrow().len() - 1
643 }
644
645 fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
646 let scopes = self.scopes.borrow();
647 custom_scope.index < scopes.len() &&
648 (*scopes)[custom_scope.index].kind.is_temp()
649 }
650
651 /// Generates the cleanups for `scope` into `bcx`
652 fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
653 bcx: Block<'blk, 'tcx>,
654 scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {
655
656 let mut bcx = bcx;
657 if !bcx.unreachable.get() {
658 for cleanup in scope.cleanups.iter().rev() {
659 bcx = cleanup.trans(bcx, scope.debug_loc);
660 }
661 }
662 bcx
663 }
664
665 fn scopes_len(&self) -> usize {
666 self.scopes.borrow().len()
667 }
668
669 fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
670 self.scopes.borrow_mut().push(scope)
671 }
672
673 fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
674 debug!("popping cleanup scope {}, {} scopes remaining",
675 self.top_scope(|s| s.block_name("")),
676 self.scopes_len() - 1);
677
678 self.scopes.borrow_mut().pop().unwrap()
679 }
680
681 fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
682 f(self.scopes.borrow().last().unwrap())
683 }
684
685 /// Used when the caller wishes to jump to an early exit, such as a return, break, continue, or
686 /// unwind. This function will generate all cleanups between the top of the stack and the exit
687 /// `label` and return a basic block that the caller can branch to.
688 ///
689 /// For example, if the current stack of cleanups were as follows:
690 ///
691 /// AST 22
692 /// Custom 1
693 /// AST 23
694 /// Loop 23
695 /// Custom 2
696 /// AST 24
697 ///
698 /// and the `label` specifies a break from `Loop 23`, then this function would generate a
699 /// series of basic blocks as follows:
700 ///
701 /// Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
702 ///
703 /// where `break_blk` is the block specified in `Loop 23` as the target for breaks. The return
704 /// value would be the first basic block in that sequence (`Cleanup(AST 24)`). The caller could
705 /// then branch to `Cleanup(AST 24)` and it will perform all cleanups and finally branch to the
706 /// `break_blk`.
707 fn trans_cleanups_to_exit_scope(&'blk self,
708 label: EarlyExitLabel)
709 -> BasicBlockRef {
710 debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
711 label, self.scopes_len());
712
713 let orig_scopes_len = self.scopes_len();
714 let mut prev_llbb;
715 let mut popped_scopes = vec!();
716
717 // First we pop off all the cleanup stacks that are
718 // traversed until the exit is reached, pushing them
719 // onto the side vector `popped_scopes`. No code is
720 // generated at this time.
721 //
722 // So, continuing the example from above, we would wind up
723 // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
724 // (Presuming that there are no cached exits)
725 loop {
726 if self.scopes_len() == 0 {
727 match label {
728 UnwindExit => {
729 // Generate a block that will `Resume`.
730 let prev_bcx = self.new_block(true, "resume", None);
731 let personality = self.personality.get().expect(
732 "create_landing_pad() should have set this");
733 let lp = build::Load(prev_bcx, personality);
734 base::call_lifetime_end(prev_bcx, personality);
735 build::Resume(prev_bcx, lp);
736 prev_llbb = prev_bcx.llbb;
737 break;
738 }
739
740 ReturnExit => {
741 prev_llbb = self.get_llreturn();
742 break;
743 }
744
745 LoopExit(id, _) => {
746 self.ccx.sess().bug(&format!(
747 "cannot exit from scope {}, \
748 not in scope", id));
749 }
750 }
751 }
752
753 // Check if we have already cached the unwinding of this
754 // scope for this label. If so, we can stop popping scopes
755 // and branch to the cached label, since it contains the
756 // cleanups for any subsequent scopes.
757 match self.top_scope(|s| s.cached_early_exit(label)) {
758 Some(cleanup_block) => {
759 prev_llbb = cleanup_block;
760 break;
761 }
762 None => { }
763 }
764
765 // Pop off the scope, since we will be generating
766 // unwinding code for it. If we are searching for a loop exit,
767 // and this scope is that loop, then stop popping and set
768 // `prev_llbb` to the appropriate exit block from the loop.
769 popped_scopes.push(self.pop_scope());
770 let scope = popped_scopes.last().unwrap();
771 match label {
772 UnwindExit | ReturnExit => { }
773 LoopExit(id, exit) => {
774 match scope.kind.early_exit_block(id, exit) {
775 Some(exitllbb) => {
776 prev_llbb = exitllbb;
777 break;
778 }
779
780 None => { }
781 }
782 }
783 }
784 }
785
786 debug!("trans_cleanups_to_exit_scope: popped {} scopes",
787 popped_scopes.len());
788
789 // Now push the popped scopes back on. As we go,
790 // we track in `prev_llbb` the exit to which this scope
791 // should branch when it's done.
792 //
793 // So, continuing with our example, we will start out with
794 // `prev_llbb` being set to `break_blk` (or possibly a cached
795 // early exit). We will then pop the scopes from `popped_scopes`
796 // and generate a basic block for each one, prepending it in the
797 // series and updating `prev_llbb`. So we begin by popping `Custom 2`
798 // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
799 // branch to `prev_llbb == break_blk`, giving us a sequence like:
800 //
801 // Cleanup(Custom 2) -> prev_llbb
802 //
803 // We then pop `AST 24` and repeat the process, giving us the sequence:
804 //
805 // Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
806 //
807 // At this point, `popped_scopes` is empty, and so the final block
808 // that we return to the user is `Cleanup(AST 24)`.
809 while let Some(mut scope) = popped_scopes.pop() {
810 if !scope.cleanups.is_empty() {
811 let name = scope.block_name("clean");
812 debug!("generating cleanups for {}", name);
813 let bcx_in = self.new_block(label.is_unwind(),
814 &name[..],
815 None);
816 let mut bcx_out = bcx_in;
817 for cleanup in scope.cleanups.iter().rev() {
818 bcx_out = cleanup.trans(bcx_out,
819 scope.debug_loc);
820 }
821 build::Br(bcx_out, prev_llbb, DebugLoc::None);
822 prev_llbb = bcx_in.llbb;
823
824 scope.add_cached_early_exit(label, prev_llbb);
825 }
826 self.push_scope(scope);
827 }
828
829 debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);
830
831 assert_eq!(self.scopes_len(), orig_scopes_len);
832 prev_llbb
833 }
834
835 /// Creates a landing pad for the top scope, if one does not exist. The landing pad will
836 /// perform all cleanups necessary for an unwind and then `resume` to continue error
837 /// propagation:
838 ///
839 /// landing_pad -> ... cleanups ... -> [resume]
840 ///
841 /// (The cleanups and resume instruction are created by `trans_cleanups_to_exit_scope()`, not
842 /// in this function itself.)
843 fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
844 let pad_bcx;
845
846 debug!("get_or_create_landing_pad");
847
848 self.inject_unwind_resume_hook();
849
850 // Check if a landing pad block exists; if not, create one.
851 {
852 let mut scopes = self.scopes.borrow_mut();
853 let last_scope = scopes.last_mut().unwrap();
854 match last_scope.cached_landing_pad {
855 Some(llbb) => { return llbb; }
856 None => {
857 let name = last_scope.block_name("unwind");
858 pad_bcx = self.new_block(true, &name[..], None);
859 last_scope.cached_landing_pad = Some(pad_bcx.llbb);
860 }
861 }
862 }
863
864 // The landing pad return type (the type being propagated). Not sure what
865 // this represents but it's determined by the personality function and
866 // this is what the EH proposal example uses.
867 let llretty = Type::struct_(self.ccx,
868 &[Type::i8p(self.ccx), Type::i32(self.ccx)],
869 false);
870
871 let llpersonality = pad_bcx.fcx.eh_personality();
872
873 // The only landing pad clause will be 'cleanup'
874 let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);
875
876 // The landing pad block is a cleanup
877 build::SetCleanup(pad_bcx, llretval);
878
879 // We store the retval in a function-central alloca, so that calls to
880 // Resume can find it.
881 match self.personality.get() {
882 Some(addr) => {
883 build::Store(pad_bcx, llretval, addr);
884 }
885 None => {
886 let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
887 base::call_lifetime_start(pad_bcx, addr);
888 self.personality.set(Some(addr));
889 build::Store(pad_bcx, llretval, addr);
890 }
891 }
892

        // Generate the cleanup block and branch to it.
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(UnwindExit);
        build::Br(pad_bcx, cleanup_llbb, DebugLoc::None);

        return pad_bcx.llbb;
    }
}
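
// An illustrative sketch (pseudo-LLVM-IR; block and value names are
// hypothetical) of the control flow that `get_or_create_landing_pad`
// arranges for a call that may unwind. The cleanup blocks and the final
// `resume` are emitted by `trans_cleanups_to_exit_scope`, not here:
//
//         invoke void @callee(...) to label %normal unwind label %unwind
//
//     unwind:                              ; the cached landing pad block
//         %val = landingpad { i8*, i32 } personality ... cleanup
//         store { i8*, i32 } %val, { i8*, i32 }* %personality_slot
//         br label %cleanup0
//
//     cleanup0:                            ; scheduled drops run here
//         ...
//         resume { i8*, i32 } %resume_val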

impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<BasicBlockRef> {
        self.cached_early_exits.iter()
            .find(|e| e.label == label)
            .map(|e| e.cleanup_block)
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk });
    }

    /// True if this scope has cleanups that must run during unwinding (in
    /// which case calls within the scope need an invoke and a landing pad).
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
            self.cleanups.iter().any(|c| c.must_unwind())
    }

    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }

    /// Manipulate cleanup scope for call arguments. Conceptually, each
    /// argument to a call is an lvalue, and performing the call moves each
    /// of the arguments into a new rvalue (which gets cleaned up by the
    /// callee). As an optimization, instead of actually performing all of
    /// those moves, trans just manipulates the cleanup scope to obtain the
    /// same effect.
    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
        self.clear_cached_exits();
    }
}
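
// An illustrative (hypothetical) source fragment for the optimization
// above: for a call `f(a)`, the drop scheduled for the argument is
// discarded via `drop_non_lifetime_clean` once the call has been emitted,
// because ownership of `a` moved into `f` and the callee is now
// responsible for freeing it. Only `LifetimeEnd` cleanups are retained,
// so the `llvm.lifetime.end` marker for `a`'s stack slot is still emitted:
//
//     let a = String::new();   // drop of `a` scheduled in the scope
//     f(a);                    // move into `f`: trans calls
//                              // drop_non_lifetime_clean() instead of
//                              // emitting a second, spurious drop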

impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early-exit block at index `exit`; else `None`
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: usize) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}

impl EarlyExitLabel {
    fn is_unwind(&self) -> bool {
        match *self {
            UnwindExit => true,
            _ => false
        }
    }
}

///////////////////////////////////////////////////////////////////////////
// Cleanup types

#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    fill_on_drop: bool,
    skip_dtor: bool,
    drop_hint: Option<DropHintValue>,
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor, self.drop_hint)
        };
        if self.fill_on_drop {
            base::drop_done_fill_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}

#[derive(Copy, Clone, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy, Clone)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}

impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}

#[derive(Copy, Clone)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}

impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool {
        false
    }

    fn is_lifetime_end(&self) -> bool {
        true
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}

pub fn temporary_scope(tcx: &ty::ctxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id(&tcx.region_maps));
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            tcx.sess.bug(&format!("no temporary scope available for expr {}",
                                  id))
        }
    }
}

pub fn var_scope(tcx: &ty::ctxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id(&tcx.region_maps));
    debug!("var_scope({}) = {:?}", id, r);
    r
}
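
// An illustrative (hypothetical) contrast between the two helpers above:
// for `let x = &foo();`, `var_scope` maps the variable `x` to the AstScope
// of the enclosing block (variables live until their block exits), while
// `temporary_scope` maps the temporary produced by `foo()` to whatever
// scope region inference assigned it -- usually the enclosing statement,
// but extended to the enclosing block here because the temporary is
// borrowed by `x`.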

///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.

pub trait CleanupMethods<'blk, 'tcx> {
    fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]);
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex;
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx>;
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId);
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex);
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx>;
    fn top_loop_scope(&self) -> ast::NodeId;
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef;
    fn return_exit_block(&'blk self) -> BasicBlockRef;
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef);
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>,
                         drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>,
                                  drop_hint: Option<DropHintDatum<'tcx>>);
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>);
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>);
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>);
    fn needs_invoke(&self) -> bool;
    fn get_landing_pad(&'blk self) -> BasicBlockRef;
}

trait CleanupHelperMethods<'blk, 'tcx> {
    fn top_ast_scope(&self) -> Option<ast::NodeId>;
    fn top_nonempty_cleanup_scope(&self) -> Option<usize>;
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn trans_scope_cleanups(&self,
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef;
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
    fn scopes_len(&self) -> usize;
    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
}