//! Mono Item Collection
//! ====================
//!
//! This module is responsible for discovering all items that will contribute
//! to code generation of the crate. The important part here is that it not only
//! needs to find syntax-level items (functions, structs, etc.) but also all
//! their monomorphized instantiations. Every non-generic, non-const function
//! maps to one LLVM artifact. Every generic function can produce
//! from zero to N artifacts, depending on the sets of type arguments it
//! is instantiated with.
//! This also applies to generic items from other crates: A generic definition
//! in crate X might produce monomorphizations that are compiled into crate Y.
//! We also have to collect these here.
//!
//! The following kinds of "mono items" are handled here:
//!
//! - Functions
//! - Methods
//! - Closures
//! - Statics
//! - Drop glue
//!
//! The following things also result in LLVM artifacts, but are not collected
//! here, since we instantiate them locally on demand when needed in a given
//! codegen unit:
//!
//! - Constants
//! - `VTables`
//! - Object Shims
//!
//! General Algorithm
//! -----------------
//! Let's define some terms first:
//!
//! - A "mono item" is something that results in a function or global in
//!   the LLVM IR of a codegen unit. Mono items do not stand on their
//!   own, they can reference other mono items. For example, if function
//!   `foo()` calls function `bar()` then the mono item for `foo()`
//!   references the mono item for function `bar()`. In general, the
//!   definition for mono item A referencing a mono item B is that
//!   the LLVM artifact produced for A references the LLVM artifact produced
//!   for B.
//!
//! - Mono items and the references between them form a directed graph,
//!   where the mono items are the nodes and references form the edges.
//!   Let's call this graph the "mono item graph".
//!
//! - The mono item graph for a program contains all mono items
//!   that are needed in order to produce the complete LLVM IR of the program.
//!
//! The purpose of the algorithm implemented in this module is to build the
//! mono item graph for the current crate. It runs in two phases:
//!
//! 1. Discover the roots of the graph by traversing the HIR of the crate.
//! 2. Starting from the roots, find neighboring nodes by inspecting the MIR
//!    representation of the item corresponding to a given node, until no more
//!    new nodes are found.
//!
//! ### Discovering roots
//!
//! The roots of the mono item graph correspond to the public non-generic
//! syntactic items in the source code. We find them by walking the HIR of the
//! crate, and whenever we hit upon a public function, method, or static item,
//! we create a mono item consisting of the item's `DefId` and, since we only
//! consider non-generic items, an empty type-substitution set. (In eager
//! collection mode, during incremental compilation, all non-generic functions
//! are considered as roots, as well as when the `-Clink-dead-code` option is
//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
//! functions also always act as roots.)
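//!
//! For example, here is an illustrative sketch (not compiler code) of what
//! does and does not become a root in lazy collection mode:
//!
//! ```
//! pub fn entry() -> usize {
//!     // `entry` is a root: it is public and non-generic.
//!     helper::<i32>(&[1, 2, 3])
//! }
//!
//! // `helper` is not a root: it is generic, so it only becomes a mono item
//! // once an instantiation like `helper::<i32>` is reached from a root.
//! fn helper<T>(v: &[T]) -> usize {
//!     v.len()
//! }
//! ```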
//! ### Finding neighbor nodes
//! Given a mono item node, we can discover neighbors by inspecting its
//! MIR. We walk the MIR and any time we hit upon something that signifies a
//! reference to another mono item, we have found a neighbor. Since the
//! mono item we are currently at is always monomorphic, we also know the
//! concrete type arguments of its neighbors, and so all neighbors again will be
//! monomorphic. The specific forms a reference to a neighboring node can take
//! in MIR are quite diverse. Here is an overview:
//!
//! #### Calling Functions/Methods
//! The most obvious form of one mono item referencing another is a
//! function or method call (represented by a CALL terminator in MIR). But
//! calls are not the only thing that might introduce a reference between two
//! function mono items, and as we will see below, they are just a
//! specialization of the form described next, and consequently will not get any
//! special treatment in the algorithm.
//!
//! #### Taking a reference to a function or method
//! A function does not need to actually be called in order to be a neighbor of
//! another function. It suffices to just take a reference in order to introduce
//! an edge. Consider the following example:
//!
//! ```
//! # use core::fmt::Display;
//! fn print_val<T: Display>(x: T) {
//!     println!("{}", x);
//! }
//!
//! fn call_fn(f: &dyn Fn(i32), x: i32) {
//!     f(x);
//! }
//!
//! fn main() {
//!     let print_i32 = print_val::<i32>;
//!     call_fn(&print_i32, 0);
//! }
//! ```
//!
//! The MIR of none of these functions will contain an explicit call to
//! `print_val::<i32>`. Nonetheless, in order to mono this program, we need
//! an instance of this function. Thus, whenever we encounter a function or
//! method in operand position, we treat it as a neighbor of the current
//! mono item. Calls are just a special case of that.
//! #### Drop glue
//! Drop glue mono items are introduced by MIR drop-statements. The
//! generated mono item will again have drop-glue item neighbors if the
//! type to be dropped contains nested values that also need to be dropped. It
//! might also have a function item neighbor for the explicit `Drop::drop`
//! implementation of its type.
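//!
//! For example, in this illustrative sketch (not compiler code), dropping a
//! `Wrapper` needs drop glue for `Wrapper`, which in turn references both
//! `<Wrapper as Drop>::drop` and the drop glue for the nested `String`:
//!
//! ```
//! struct Wrapper {
//!     s: String,
//! }
//!
//! impl Drop for Wrapper {
//!     fn drop(&mut self) {}
//! }
//!
//! fn consume(w: Wrapper) {} // `w` is dropped here
//! ```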
//! #### Unsizing Casts
//! A subtle way of introducing neighbor edges is by casting to a trait object.
//! Since the resulting fat-pointer contains a reference to a vtable, we need to
//! instantiate all object-safe methods of the trait, as we need to store
//! pointers to these functions even if they never get called anywhere. This can
//! be seen as a special case of taking a function reference.
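//!
//! For example (an illustrative sketch, not compiler code), the function below
//! never calls `to_string`, yet the unsizing coercion creates a vtable for
//! `dyn ToString`, so `<u32 as ToString>::to_string` becomes a neighbor:
//!
//! ```
//! fn as_to_string(x: &u32) -> &dyn ToString {
//!     x // unsizing cast: `&u32` -> `&dyn ToString`
//! }
//! ```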
//! #### Boxes
//! Since `Box` expressions have special compiler support, no explicit calls to
//! `exchange_malloc()` and `box_free()` may show up in MIR, even if the
//! compiler will generate them. We have to observe `Rvalue::Box` expressions
//! and Box-typed drop-statements for that purpose.
//!
//! Interaction with Cross-Crate Inlining
//! -------------------------------------
//! The binary of a crate will not only contain machine code for the items
//! defined in the source code of that crate. It will also contain monomorphic
//! instantiations of any extern generic functions and of functions marked with
//! `#[inline]`.
//! The collection algorithm handles this more or less transparently. If it is
//! about to create a mono item for something with an external `DefId`,
//! it will take a look if the MIR for that item is available, and if so just
//! proceed normally. If the MIR is not available, it assumes that the item is
//! just linked to and no node is created; which is exactly what we want, since
//! no machine code should be generated in the current crate for such an item.
//!
//! Eager and Lazy Collection Mode
//! ------------------------------
//! Mono item collection can be performed in one of two modes:
//!
//! - Lazy mode means that items will only be instantiated when actually
//!   referenced. The goal is to produce the least amount of machine code
//!   possible.
//!
//! - Eager mode is meant to be used in conjunction with incremental compilation
//!   where a stable set of mono items is more important than a minimal
//!   one. Thus, eager mode will instantiate drop-glue for every drop-able type
//!   in the crate, even if no drop call for that type exists (yet). It will
//!   also instantiate default implementations of trait methods, something that
//!   otherwise is only done on demand.
//!
//! Open Issues
//! -----------
//! Some things are not yet fully implemented in the current version of this
//! module:
//!
//! ### Const Fns
//! Ideally, no mono item should be generated for const fns unless there
//! is a call to them that cannot be evaluated at compile time. At the moment
//! this is not implemented however: a mono item will be produced
//! regardless of whether it is actually needed or not.
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::{par_for_each_in, MTLock, MTRef};
use rustc_hir as hir;
use rustc_hir::def::DefKind;
use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
use rustc_hir::lang_items::LangItem;
use rustc_index::bit_set::GrowableBitSet;
use rustc_middle::mir::interpret::{AllocId, ConstValue};
use rustc_middle::mir::interpret::{ErrorHandled, GlobalAlloc, Scalar};
use rustc_middle::mir::mono::{InstantiationMode, MonoItem};
use rustc_middle::mir::visit::Visitor as MirVisitor;
use rustc_middle::mir::{self, Local, Location};
use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCast};
use rustc_middle::ty::print::with_no_trimmed_paths;
use rustc_middle::ty::query::TyCtxtAt;
use rustc_middle::ty::subst::{GenericArgKind, InternalSubsts};
use rustc_middle::ty::{
    self, GenericParamDefKind, Instance, Ty, TyCtxt, TypeFoldable, TypeVisitable, VtblEntry,
};
use rustc_middle::{middle::codegen_fn_attrs::CodegenFnAttrFlags, mir::visit::TyContext};
use rustc_session::config::EntryFnType;
use rustc_session::lint::builtin::LARGE_ASSIGNMENTS;
use rustc_session::Limit;
use rustc_span::source_map::{dummy_spanned, respan, Span, Spanned, DUMMY_SP};
use rustc_target::abi::Size;
use std::ops::Range;
use std::path::PathBuf;

use crate::errors::{LargeAssignmentsLint, RecursionLimit, TypeLengthLimit};
pub enum MonoItemCollectionMode {
    Eager,
    Lazy,
}
/// Maps every mono item to all mono items it references in its
/// body.
pub struct InliningMap<'tcx> {
    // Maps a source mono item to the range of mono items
    // accessed by it.
    // The range selects elements within the `targets` vec.
    index: FxHashMap<MonoItem<'tcx>, Range<usize>>,
    targets: Vec<MonoItem<'tcx>>,

    // Contains one bit per mono item in the `targets` field. That bit
    // is true if that mono item needs to be inlined into every CGU.
    inlines: GrowableBitSet<usize>,
}
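// For illustration, with hypothetical mono items A, B, C, D: if A references
// [B, C] and D references [C], the flattened encoding is
//
//   targets = [B, C, C]
//   index   = { A -> 0..2, D -> 2..3 }
//
// with `inlines` holding one bit per slot of `targets`.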
/// Struct to store the mono items collected and whether each should
/// be inlined. We call `instantiation_mode` to get their inlining
/// status when inserting new elements, which avoids calling it in
/// `inlining_map.lock_mut()`. See the `collect_items_rec` implementation
/// below.
struct MonoItems<'tcx> {
    // If this is false, we do not need to compute whether items
    // will need to be inlined.
    compute_inlining: bool,

    // The TyCtxt used to determine whether an item should
    // be inlined.
    tcx: TyCtxt<'tcx>,

    // The collected mono items. The bool field in each element
    // indicates whether this element should be inlined.
    items: Vec<(Spanned<MonoItem<'tcx>>, bool /* inlined */)>,
}

impl<'tcx> MonoItems<'tcx> {
    #[inline]
    fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
        self.extend([item]);
    }

    #[inline]
    fn extend<T: IntoIterator<Item = Spanned<MonoItem<'tcx>>>>(&mut self, iter: T) {
        self.items.extend(iter.into_iter().map(|mono_item| {
            let inlined = if !self.compute_inlining {
                false
            } else {
                mono_item.node.instantiation_mode(self.tcx) == InstantiationMode::LocalCopy
            };
            (mono_item, inlined)
        }))
    }
}
impl<'tcx> InliningMap<'tcx> {
    fn new() -> InliningMap<'tcx> {
        InliningMap {
            index: FxHashMap::default(),
            targets: Vec::new(),
            inlines: GrowableBitSet::with_capacity(1024),
        }
    }
    fn record_accesses<'a>(
        &mut self,
        source: MonoItem<'tcx>,
        new_targets: &'a [(Spanned<MonoItem<'tcx>>, bool)],
    ) {
        let start_index = self.targets.len();
        let new_items_count = new_targets.len();
        let new_items_count_total = new_items_count + self.targets.len();

        self.targets.reserve(new_items_count);
        self.inlines.ensure(new_items_count_total);

        for (i, (Spanned { node: mono_item, .. }, inlined)) in new_targets.into_iter().enumerate() {
            self.targets.push(*mono_item);
            if *inlined {
                self.inlines.insert(i + start_index);
            }
        }

        let end_index = self.targets.len();
        assert!(self.index.insert(source, start_index..end_index).is_none());
    }
    /// Internally iterate over all items referenced by `source` which will be
    /// made available for inlining.
    pub fn with_inlining_candidates<F>(&self, source: MonoItem<'tcx>, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>),
    {
        if let Some(range) = self.index.get(&source) {
            for (i, candidate) in self.targets[range.clone()].iter().enumerate() {
                if self.inlines.contains(range.start + i) {
                    f(*candidate);
                }
            }
        }
    }
    /// Internally iterate over all items and the things each accesses.
    pub fn iter_accesses<F>(&self, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>, &[MonoItem<'tcx>]),
    {
        for (&accessor, range) in &self.index {
            f(accessor, &self.targets[range.clone()])
        }
    }
}
#[instrument(skip(tcx, mode), level = "debug")]
pub fn collect_crate_mono_items(
    tcx: TyCtxt<'_>,
    mode: MonoItemCollectionMode,
) -> (FxHashSet<MonoItem<'_>>, InliningMap<'_>) {
    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");

    let roots =
        tcx.sess.time("monomorphization_collector_root_collections", || collect_roots(tcx, mode));

    debug!("building mono item graph, beginning at roots");

    let mut visited = MTLock::new(FxHashSet::default());
    let mut inlining_map = MTLock::new(InliningMap::new());
    let recursion_limit = tcx.recursion_limit();

    {
        let visited: MTRef<'_, _> = &mut visited;
        let inlining_map: MTRef<'_, _> = &mut inlining_map;

        tcx.sess.time("monomorphization_collector_graph_walk", || {
            par_for_each_in(roots, |root| {
                let mut recursion_depths = DefIdMap::default();
                collect_items_rec(
                    tcx,
                    dummy_spanned(root),
                    visited,
                    &mut recursion_depths,
                    recursion_limit,
                    inlining_map,
                );
            });
        });
    }

    (visited.into_inner(), inlining_map.into_inner())
}
// Find all non-generic items by walking the HIR. These items serve as roots to
// start monomorphizing from.
#[instrument(skip(tcx, mode), level = "debug")]
fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionMode) -> Vec<MonoItem<'_>> {
    debug!("collecting roots");
    let mut roots = MonoItems { compute_inlining: false, tcx, items: Vec::new() };

    {
        let entry_fn = tcx.entry_fn(());

        debug!("collect_roots: entry_fn = {:?}", entry_fn);

        let mut collector = RootCollector { tcx, mode, entry_fn, output: &mut roots };

        let crate_items = tcx.hir_crate_items(());

        for id in crate_items.items() {
            collector.process_item(id);
        }

        for id in crate_items.impl_items() {
            collector.process_impl_item(id);
        }

        collector.push_extra_entry_roots();
    }

    // We can only codegen items that are instantiable - items all of
    // whose predicates hold. Luckily, items that aren't instantiable
    // can't actually be used, so we can just skip codegenning them.
    roots
        .items
        .into_iter()
        .filter_map(|(Spanned { node: mono_item, .. }, _)| {
            mono_item.is_instantiable(tcx).then_some(mono_item)
        })
        .collect()
}
/// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
/// post-monomorphization error is encountered during a collection step.
#[instrument(skip(tcx, visited, recursion_depths, recursion_limit, inlining_map), level = "debug")]
fn collect_items_rec<'tcx>(
    tcx: TyCtxt<'tcx>,
    starting_point: Spanned<MonoItem<'tcx>>,
    visited: MTRef<'_, MTLock<FxHashSet<MonoItem<'tcx>>>>,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
    inlining_map: MTRef<'_, MTLock<InliningMap<'tcx>>>,
) {
    if !visited.lock_mut().insert(starting_point.node) {
        // We've been here already, no need to search again.
        return;
    }

    let mut neighbors = MonoItems { compute_inlining: true, tcx, items: Vec::new() };
    let recursion_depth_reset;
    //
    // Post-monomorphization errors MVP
    //
    // We can encounter errors while monomorphizing an item, but we don't have a good way of
    // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
    // (It's also currently unclear exactly which diagnostics and information would be interesting
    // to report in such cases)
    //
    // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
    // shown with just a spanned piece of code causing the error, without information on where
    // it was called from. This is especially obscure if the erroneous mono item is in a
    // dependency. See for example issue #85155, where, before minimization, a PME happened two
    // crates downstream from libcore's stdarch, without a way to know which dependency was the
    // cause.
    //
    // If such an error occurs in the current crate, its span will be enough to locate the
    // source. If the cause is in another crate, the goal here is to quickly locate which mono
    // item in the current crate is ultimately responsible for causing the error.
    //
    // To give at least _some_ context to the user: while collecting mono items, we check the
    // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
    // current step of mono items collection.
    //
    // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
    let error_count = tcx.sess.diagnostic().err_count();
    match starting_point.node {
        MonoItem::Static(def_id) => {
            let instance = Instance::mono(tcx, def_id);

            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            let ty = instance.ty(tcx, ty::ParamEnv::reveal_all());
            visit_drop_use(tcx, ty, true, starting_point.span, &mut neighbors);

            recursion_depth_reset = None;

            if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
                for &id in alloc.inner().provenance().ptrs().values() {
                    collect_miri(tcx, id, &mut neighbors);
                }
            }
        }
        MonoItem::Fn(instance) => {
            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            // Keep track of the monomorphization recursion depth
            recursion_depth_reset = Some(check_recursion_limit(
                tcx,
                instance,
                starting_point.span,
                recursion_depths,
                recursion_limit,
            ));
            check_type_length_limit(tcx, instance);

            rustc_data_structures::stack::ensure_sufficient_stack(|| {
                collect_neighbours(tcx, instance, &mut neighbors);
            });
        }
        MonoItem::GlobalAsm(item_id) => {
            recursion_depth_reset = None;

            let item = tcx.hir().item(item_id);
            if let hir::ItemKind::GlobalAsm(asm) = item.kind {
                for (op, op_sp) in asm.operands {
                    match op {
                        hir::InlineAsmOperand::Const { .. } => {
                            // Only constants which resolve to a plain integer
                            // are supported. Therefore the value should not
                            // depend on any other items.
                        }
                        hir::InlineAsmOperand::SymFn { anon_const } => {
                            let fn_ty =
                                tcx.typeck_body(anon_const.body).node_type(anon_const.hir_id);
                            visit_fn_use(tcx, fn_ty, false, *op_sp, &mut neighbors);
                        }
                        hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
                            let instance = Instance::mono(tcx, *def_id);
                            if should_codegen_locally(tcx, &instance) {
                                trace!("collecting static {:?}", def_id);
                                neighbors.push(dummy_spanned(MonoItem::Static(*def_id)));
                            }
                        }
                        hir::InlineAsmOperand::In { .. }
                        | hir::InlineAsmOperand::Out { .. }
                        | hir::InlineAsmOperand::InOut { .. }
                        | hir::InlineAsmOperand::SplitInOut { .. } => {
                            span_bug!(*op_sp, "invalid operand type for global_asm!")
                        }
                    }
                }
            } else {
                span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
            }
        }
    }
    // Check for PMEs and emit a diagnostic if one happened. To try to show relevant edges of the
    // mono item graph, we only note this for generic, user-defined starting points.
    if tcx.sess.diagnostic().err_count() > error_count
        && starting_point.node.is_generic_fn()
        && starting_point.node.is_user_defined()
    {
        let formatted_item = with_no_trimmed_paths!(starting_point.node.to_string());
        tcx.sess.span_note_without_error(
            starting_point.span,
            &format!("the above error was encountered while instantiating `{formatted_item}`"),
        );
    }
    inlining_map.lock_mut().record_accesses(starting_point.node, &neighbors.items);

    for (neighbour, _) in neighbors.items {
        collect_items_rec(tcx, neighbour, visited, recursion_depths, recursion_limit, inlining_map);
    }

    if let Some((def_id, depth)) = recursion_depth_reset {
        recursion_depths.insert(def_id, depth);
    }
}
/// Format instance name that is already known to be too long for rustc.
/// Show only the first 2 types if it is longer than 32 characters to avoid blasting
/// the user's terminal with thousands of lines of type-name.
///
/// If the type name is longer than before+after, it will be written to a file.
fn shrunk_instance_name<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: &Instance<'tcx>,
) -> (String, Option<PathBuf>) {
    let s = instance.to_string();

    // Only use the shrunk version if it's really shorter.
    // This also avoids the case where before and after slices overlap.
    if s.chars().nth(33).is_some() {
        let shrunk = format!("{}", ty::ShortInstance(instance, 4));
        if shrunk == s {
            return (s, None);
        }

        let path = tcx.output_filenames(()).temp_path_ext("long-type.txt", None);
        let written_to_path = std::fs::write(&path, s).ok().map(|_| path);

        (shrunk, written_to_path)
    } else {
        (s, None)
    }
}
fn check_recursion_limit<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    span: Span,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
) -> (DefId, usize) {
    let def_id = instance.def_id();
    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
    debug!(" => recursion depth={}", recursion_depth);

    let adjusted_recursion_depth = if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
        // HACK: drop_in_place creates tight monomorphization loops. Give
        // it more margin.
        recursion_depth / 4
    } else {
        recursion_depth
    };

    // Code that needs to instantiate the same function recursively
    // more than the recursion limit is assumed to be causing an
    // infinite expansion.
    if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
        let def_span = tcx.def_span(def_id);
        let def_path_str = tcx.def_path_str(def_id);
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
        let mut path = PathBuf::new();
        let was_written = if let Some(written_to_path) = written_to_path {
            path = written_to_path;
            Some(())
        } else {
            None
        };
        tcx.sess.emit_fatal(RecursionLimit {
            span,
            shrunk,
            def_span,
            def_path_str,
            was_written,
            path,
        });
    }

    recursion_depths.insert(def_id, recursion_depth + 1);

    (def_id, recursion_depth)
}
fn check_type_length_limit<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) {
    let type_length = instance
        .substs
        .iter()
        .flat_map(|arg| arg.walk())
        .filter(|arg| match arg.unpack() {
            GenericArgKind::Type(_) | GenericArgKind::Const(_) => true,
            GenericArgKind::Lifetime(_) => false,
        })
        .count();
    debug!(" => type length={}", type_length);

    // Rust code can easily create exponentially-long types using only a
    // polynomial recursion depth. Even with the default recursion
    // depth, you can easily get cases that take >2^60 steps to run,
    // which means that rustc basically hangs.
    //
    // Bail out in these cases to avoid that bad user experience.
    if !tcx.type_length_limit().value_within_limit(type_length) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
        let span = tcx.def_span(instance.def_id());
        let mut path = PathBuf::new();
        let was_written = if written_to_path.is_some() {
            path = written_to_path.unwrap();
            Some(())
        } else {
            None
        };
        tcx.sess.emit_fatal(TypeLengthLimit { span, shrunk, was_written, path, type_length });
    }
}
struct MirNeighborCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    body: &'a mir::Body<'tcx>,
    output: &'a mut MonoItems<'tcx>,
    instance: Instance<'tcx>,
}
impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
    pub fn monomorphize<T>(&self, value: T) -> T
    where
        T: TypeFoldable<'tcx>,
    {
        debug!("monomorphize: self.instance={:?}", self.instance);
        self.instance.subst_mir_and_normalize_erasing_regions(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            value,
        )
    }
}
impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
        debug!("visiting rvalue {:?}", *rvalue);

        let span = self.body.source_info(location).span;

        match *rvalue {
            // When doing a cast from a regular pointer to a fat pointer, we
            // have to instantiate all methods of the trait being cast to, so we
            // can build the appropriate vtable.
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::Unsize),
                ref operand,
                target_ty,
            )
            | mir::Rvalue::Cast(mir::CastKind::DynStar, ref operand, target_ty) => {
                let target_ty = self.monomorphize(target_ty);
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                let (source_ty, target_ty) =
                    find_vtable_types_for_unsizing(self.tcx.at(span), source_ty, target_ty);
                // This could also be a different Unsize instruction, like
                // from a fixed sized array to a slice. But we are only
                // interested in things that produce a vtable.
                if (target_ty.is_trait() && !source_ty.is_trait())
                    || (target_ty.is_dyn_star() && !source_ty.is_dyn_star())
                {
                    create_mono_items_for_vtable_methods(
                        self.tcx,
                        target_ty,
                        source_ty,
                        span,
                        self.output,
                    );
                }
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ReifyFnPointer),
                ref operand,
                _,
            ) => {
                let fn_ty = operand.ty(self.body, self.tcx);
                let fn_ty = self.monomorphize(fn_ty);
                visit_fn_use(self.tcx, fn_ty, false, span, &mut self.output);
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ClosureFnPointer(_)),
                ref operand,
                _,
            ) => {
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                match *source_ty.kind() {
                    ty::Closure(def_id, substs) => {
                        let instance = Instance::resolve_closure(
                            self.tcx,
                            def_id,
                            substs,
                            ty::ClosureKind::FnOnce,
                        )
                        .expect("failed to normalize and resolve closure during codegen");
                        if should_codegen_locally(self.tcx, &instance) {
                            self.output.push(create_fn_mono_item(self.tcx, instance, span));
                        }
                    }
                    _ => bug!(),
                }
            }
            mir::Rvalue::ThreadLocalRef(def_id) => {
                assert!(self.tcx.is_thread_local_static(def_id));
                let instance = Instance::mono(self.tcx, def_id);
                if should_codegen_locally(self.tcx, &instance) {
                    trace!("collecting thread-local static {:?}", def_id);
                    self.output.push(respan(span, MonoItem::Static(def_id)));
                }
            }
            _ => { /* not interesting */ }
        }

        self.super_rvalue(rvalue, location);
    }
    /// This does not walk the constant, as it has been handled entirely here and trying
    /// to walk it would attempt to evaluate the `ty::Const` inside, which doesn't necessarily
    /// work, as some constants cannot be represented in the type system.
    #[instrument(skip(self), level = "debug")]
    fn visit_constant(&mut self, constant: &mir::Constant<'tcx>, location: Location) {
        let literal = self.monomorphize(constant.literal);
        let val = match literal {
            mir::ConstantKind::Val(val, _) => val,
            mir::ConstantKind::Ty(ct) => match ct.kind() {
                ty::ConstKind::Value(val) => self.tcx.valtree_to_const_val((ct.ty(), val)),
                ty::ConstKind::Unevaluated(ct) => {
                    let param_env = ty::ParamEnv::reveal_all();
                    match self.tcx.const_eval_resolve(param_env, ct.expand(), None) {
                        // The `monomorphize` call should have evaluated that constant already.
                        Ok(val) => val,
                        Err(ErrorHandled::Reported(_)) => return,
                        Err(ErrorHandled::TooGeneric) => span_bug!(
                            self.body.source_info(location).span,
                            "collection encountered polymorphic constant: {:?}",
                            literal
                        ),
                    }
                }
                _ => return,
            },
            mir::ConstantKind::Unevaluated(uv, _) => {
                let param_env = ty::ParamEnv::reveal_all();
                match self.tcx.const_eval_resolve(param_env, uv, None) {
                    // The `monomorphize` call should have evaluated that constant already.
                    Ok(val) => val,
                    Err(ErrorHandled::Reported(_)) => return,
                    Err(ErrorHandled::TooGeneric) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered polymorphic constant: {:?}",
                        literal
                    ),
                }
            }
        };
        collect_const_value(self.tcx, val, self.output);
        MirVisitor::visit_ty(self, literal.ty(), TyContext::Location(location));
    }
    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
        debug!("visiting terminator {:?} @ {:?}", terminator, location);
        let source = self.body.source_info(location).span;

        let tcx = self.tcx;
        match terminator.kind {
            mir::TerminatorKind::Call { ref func, .. } => {
                let callee_ty = func.ty(self.body, tcx);
                let callee_ty = self.monomorphize(callee_ty);
                visit_fn_use(self.tcx, callee_ty, true, source, &mut self.output)
            }
            mir::TerminatorKind::Drop { ref place, .. }
            | mir::TerminatorKind::DropAndReplace { ref place, .. } => {
                let ty = place.ty(self.body, self.tcx).ty;
                let ty = self.monomorphize(ty);
                visit_drop_use(self.tcx, ty, true, source, self.output);
            }
            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
                for op in operands {
                    match *op {
                        mir::InlineAsmOperand::SymFn { ref value } => {
                            let fn_ty = self.monomorphize(value.literal.ty());
                            visit_fn_use(self.tcx, fn_ty, false, source, &mut self.output);
                        }
                        mir::InlineAsmOperand::SymStatic { def_id } => {
                            let instance = Instance::mono(self.tcx, def_id);
                            if should_codegen_locally(self.tcx, &instance) {
                                trace!("collecting asm sym static {:?}", def_id);
                                self.output.push(respan(source, MonoItem::Static(def_id)));
                            }
                        }
                        _ => {}
                    }
                }
            }
            mir::TerminatorKind::Assert { ref msg, .. } => {
                let lang_item = match msg {
                    mir::AssertKind::BoundsCheck { .. } => LangItem::PanicBoundsCheck,
                    _ => LangItem::Panic,
                };
                let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, Some(source)));
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Abort { .. } => {
                let instance = Instance::mono(
                    tcx,
                    tcx.require_lang_item(LangItem::PanicCannotUnwind, Some(source)),
                );
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Goto { .. }
            | mir::TerminatorKind::SwitchInt { .. }
            | mir::TerminatorKind::Resume
            | mir::TerminatorKind::Return
            | mir::TerminatorKind::Unreachable => {}
            mir::TerminatorKind::GeneratorDrop
            | mir::TerminatorKind::Yield { .. }
            | mir::TerminatorKind::FalseEdge { .. }
            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
        }

        self.super_terminator(terminator, location);
    }
    fn visit_operand(&mut self, operand: &mir::Operand<'tcx>, location: Location) {
        self.super_operand(operand, location);
        let limit = self.tcx.move_size_limit().0;
        if limit == 0 {
            return;
        }
        let limit = Size::from_bytes(limit);
        let ty = operand.ty(self.body, self.tcx);
        let ty = self.monomorphize(ty);
        let layout = self.tcx.layout_of(ty::ParamEnv::reveal_all().and(ty));
        if let Ok(layout) = layout {
            if layout.size > limit {
                let source_info = self.body.source_info(location);
                debug!(?source_info);
                let lint_root = source_info.scope.lint_root(&self.body.source_scopes);
                let Some(lint_root) = lint_root else {
                    // This happens when the issue is in a function from a foreign crate that
                    // we monomorphized in the current crate. We can't get a `HirId` for things
                    // from other crates.
                    // FIXME: Find out where to report the lint on. Maybe simply crate-level lint root
                    // but correct span? This would make the lint at least accept crate-level lint attributes.
                    return;
                };
                self.tcx.emit_spanned_lint(
                    LARGE_ASSIGNMENTS,
                    lint_root,
                    source_info.span,
                    LargeAssignmentsLint {
                        span: source_info.span,
                        size: layout.size.bytes(),
                        limit: limit.bytes(),
                    },
                );
            }
        }
    }

    fn visit_local(
        &mut self,
        _place_local: Local,
        _context: mir::visit::PlaceContext,
        _location: Location,
    ) {
    }
}
fn visit_drop_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    let instance = Instance::resolve_drop_in_place(tcx, ty);
    visit_instance_use(tcx, instance, is_direct_call, source, output);
}
fn visit_fn_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    if let ty::FnDef(def_id, substs) = *ty.kind() {
        let instance = if is_direct_call {
            ty::Instance::expect_resolve(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
        } else {
            match ty::Instance::resolve_for_fn_ptr(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
            {
                Some(instance) => instance,
                _ => bug!("failed to resolve instance for {ty}"),
            }
        };
        visit_instance_use(tcx, instance, is_direct_call, source, output);
    }
}
946 fn visit_instance_use
<'tcx
>(
948 instance
: ty
::Instance
<'tcx
>,
949 is_direct_call
: bool
,
951 output
: &mut MonoItems
<'tcx
>,
953 debug
!("visit_item_use({:?}, is_direct_call={:?})", instance
, is_direct_call
);
954 if !should_codegen_locally(tcx
, &instance
) {
959 ty
::InstanceDef
::Virtual(..) | ty
::InstanceDef
::Intrinsic(_
) => {
961 bug
!("{:?} being reified", instance
);
964 ty
::InstanceDef
::DropGlue(_
, None
) => {
965 // Don't need to emit noop drop glue if we are calling directly.
967 output
.push(create_fn_mono_item(tcx
, instance
, source
));
970 ty
::InstanceDef
::DropGlue(_
, Some(_
))
971 | ty
::InstanceDef
::VTableShim(..)
972 | ty
::InstanceDef
::ReifyShim(..)
973 | ty
::InstanceDef
::ClosureOnceShim { .. }
974 | ty
::InstanceDef
::Item(..)
975 | ty
::InstanceDef
::FnPtrShim(..)
976 | ty
::InstanceDef
::CloneShim(..) => {
977 output
.push(create_fn_mono_item(tcx
, instance
, source
));
/// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
/// can just link to the upstream crate and therefore don't need a mono item.
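///
/// For example (an illustrative sketch; `MyLocalType` is a hypothetical name):
/// a call to a generic function from an upstream crate only needs a local mono
/// item when no upstream monomorphization for the concrete type arguments can
/// exist, e.g. when a type argument is itself local:
///
/// ```rust,ignore (illustrative sketch)
/// struct MyLocalType;
/// let mut v: Vec<MyLocalType> = Vec::new();
/// v.push(MyLocalType); // `Vec::<MyLocalType>::push` cannot exist upstream,
///                      // so it must be codegened in the current crate.
/// ```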
fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: &Instance<'tcx>) -> bool {
    let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
        return true;
    };

    if tcx.is_foreign_item(def_id) {
        // Foreign items are always linked against, there's no way of instantiating them.
        return false;
    }

    if def_id.is_local() {
        // Local items cannot be referred to locally without monomorphizing them locally.
        return true;
    }

    if tcx.is_reachable_non_generic(def_id)
        || instance.polymorphize(tcx).upstream_monomorphization(tcx).is_some()
    {
        // We can link to the item in question, no instance needed in this crate.
        return false;
    }

    if let DefKind::Static(_) = tcx.def_kind(def_id) {
        // We cannot monomorphize statics from upstream crates.
        return false;
    }

    if !tcx.is_mir_available(def_id) {
        bug!("no MIR available for {:?}", def_id);
    }

    true
}
/// For a given pair of source and target type that occur in an unsizing coercion,
/// this function finds the pair of types that determines the vtable linking
/// them.
///
/// For example, the source type might be `&SomeStruct` and the target type
/// might be `&dyn SomeTrait` in a cast like:
///
/// ```rust,ignore (not real code)
/// let src: &SomeStruct = ...;
/// let target = src as &dyn SomeTrait;
/// ```
///
/// Then the output of this function would be (SomeStruct, SomeTrait) since for
/// constructing the `target` fat-pointer we need the vtable for that pair.
///
/// Things can get more complicated though because there's also the case where
/// the unsized type occurs as a field:
///
/// ```rust
/// struct ComplexStruct<T: ?Sized> {
///     a: u32,
///     b: f64,
///     c: T,
/// }
/// ```
///
/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
/// is unsized, `&ComplexStruct<T>` is a fat pointer, and the vtable it points
/// to is for the pair of `T` (which is a trait) and the concrete type that `T`
/// was originally coerced from:
///
/// ```rust,ignore (not real code)
/// let src: &ComplexStruct<SomeStruct> = ...;
/// let target = src as &ComplexStruct<dyn SomeTrait>;
/// ```
///
/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
/// `(SomeStruct, SomeTrait)`.
///
/// Finally, there is also the case of custom unsizing coercions, e.g., for
/// smart pointers such as `Rc` and `Arc`.
fn find_vtable_types_for_unsizing<'tcx>(
    tcx: TyCtxtAt<'tcx>,
    source_ty: Ty<'tcx>,
    target_ty: Ty<'tcx>,
) -> (Ty<'tcx>, Ty<'tcx>) {
    let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
        let param_env = ty::ParamEnv::reveal_all();
        let type_has_metadata = |ty: Ty<'tcx>| -> bool {
            if ty.is_sized(tcx.tcx, param_env) {
                return false;
            }
            let tail = tcx.struct_tail_erasing_lifetimes(ty, param_env);
            match tail.kind() {
                ty::Foreign(..) => false,
                ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
                _ => bug!("unexpected unsized tail: {:?}", tail),
            }
        };
        if type_has_metadata(inner_source) {
            (inner_source, inner_target)
        } else {
            tcx.struct_lockstep_tails_erasing_lifetimes(inner_source, inner_target, param_env)
        }
    };

    match (&source_ty.kind(), &target_ty.kind()) {
        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
        | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
            ptr_vtable(*a, *b)
        }
        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
            ptr_vtable(source_ty.boxed_ty(), target_ty.boxed_ty())
        }

        // T as dyn* Trait
        (_, &ty::Dynamic(_, _, ty::DynStar)) => ptr_vtable(source_ty, target_ty),

        (&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
            assert_eq!(source_adt_def, target_adt_def);

            let CustomCoerceUnsized::Struct(coerce_index) =
                crate::custom_coerce_unsize_info(tcx, source_ty, target_ty);

            let source_fields = &source_adt_def.non_enum_variant().fields;
            let target_fields = &target_adt_def.non_enum_variant().fields;

            assert!(
                coerce_index < source_fields.len() && source_fields.len() == target_fields.len()
            );

            find_vtable_types_for_unsizing(
                tcx,
                source_fields[coerce_index].ty(*tcx, source_substs),
                target_fields[coerce_index].ty(*tcx, target_substs),
            )
        }
        _ => bug!(
            "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
            source_ty,
            target_ty
        ),
    }
}
#[instrument(skip(tcx), level = "debug", ret)]
fn create_fn_mono_item<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    source: Span,
) -> Spanned<MonoItem<'tcx>> {
    let def_id = instance.def_id();
    if tcx.sess.opts.unstable_opts.profile_closures && def_id.is_local() && tcx.is_closure(def_id)
    {
        crate::util::dump_closure_profile(tcx, instance);
    }

    respan(source, MonoItem::Fn(instance.polymorphize(tcx)))
}
/// Creates a `MonoItem` for each method that is referenced by the vtable for
/// the given trait/impl pair.
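///
/// For example (an illustrative sketch; `Greet` and `Person` are hypothetical
/// names): creating a trait object requires the full vtable for the trait/impl
/// pair, so every method reachable through it becomes a mono item even if it
/// is never called directly:
///
/// ```rust,ignore (illustrative sketch)
/// trait Greet {
///     fn hello(&self);
///     fn bye(&self) {} // provided method, also in the vtable
/// }
/// struct Person;
/// impl Greet for Person {
///     fn hello(&self) {}
/// }
///
/// let p: &dyn Greet = &Person; // collects `<Person as Greet>::hello`,
///                              // `bye`, and drop glue for `Person`
/// ```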
fn create_mono_items_for_vtable_methods<'tcx>(
    tcx: TyCtxt<'tcx>,
    trait_ty: Ty<'tcx>,
    impl_ty: Ty<'tcx>,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());

    if let ty::Dynamic(ref trait_ty, ..) = trait_ty.kind() {
        if let Some(principal) = trait_ty.principal() {
            let poly_trait_ref = principal.with_self_ty(tcx, impl_ty);
            assert!(!poly_trait_ref.has_escaping_bound_vars());

            // Walk all methods of the trait, including those of its supertraits
            let entries = tcx.vtable_entries(poly_trait_ref);
            let methods = entries
                .iter()
                .filter_map(|entry| match entry {
                    VtblEntry::MetadataDropInPlace
                    | VtblEntry::MetadataSize
                    | VtblEntry::MetadataAlign
                    | VtblEntry::Vacant => None,
                    VtblEntry::TraitVPtr(_) => {
                        // all super trait items already covered, so skip them.
                        None
                    }
                    VtblEntry::Method(instance) => {
                        Some(*instance).filter(|instance| should_codegen_locally(tcx, instance))
                    }
                })
                .map(|item| create_fn_mono_item(tcx, item, source));
            output.extend(methods);
        }

        // Also add the destructor.
        visit_drop_use(tcx, impl_ty, false, source, output);
    }
}
//=-----------------------------------------------------------------------------
// Root Collection
//=-----------------------------------------------------------------------------
struct RootCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    mode: MonoItemCollectionMode,
    output: &'a mut MonoItems<'tcx>,
    entry_fn: Option<(DefId, EntryFnType)>,
}
impl<'v> RootCollector<'_, 'v> {
    fn process_item(&mut self, id: hir::ItemId) {
        match self.tcx.def_kind(id.owner_id) {
            DefKind::Enum | DefKind::Struct | DefKind::Union => {
                let item = self.tcx.hir().item(id);
                match item.kind {
                    hir::ItemKind::Enum(_, ref generics)
                    | hir::ItemKind::Struct(_, ref generics)
                    | hir::ItemKind::Union(_, ref generics) => {
                        if generics.params.is_empty() {
                            if self.mode == MonoItemCollectionMode::Eager {
                                debug!(
                                    "RootCollector: ADT drop-glue for {}",
                                    self.tcx.def_path_str(item.owner_id.to_def_id())
                                );

                                let ty = Instance::new(
                                    item.owner_id.to_def_id(),
                                    InternalSubsts::empty(),
                                )
                                .ty(self.tcx, ty::ParamEnv::reveal_all());
                                visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
                            }
                        }
                    }
                    _ => bug!(),
                }
            }
            DefKind::GlobalAsm => {
                debug!(
                    "RootCollector: ItemKind::GlobalAsm({})",
                    self.tcx.def_path_str(id.owner_id.to_def_id())
                );
                self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
            }
            DefKind::Static(..) => {
                debug!(
                    "RootCollector: ItemKind::Static({})",
                    self.tcx.def_path_str(id.owner_id.to_def_id())
                );
                self.output.push(dummy_spanned(MonoItem::Static(id.owner_id.to_def_id())));
            }
            DefKind::Const => {
                // const items only generate mono items if they are
                // actually used somewhere. Just declaring them is insufficient.

                // but even just declaring them must collect the items they refer to
                if let Ok(val) = self.tcx.const_eval_poly(id.owner_id.to_def_id()) {
                    collect_const_value(self.tcx, val, &mut self.output);
                }
            }
            DefKind::Impl => {
                if self.mode == MonoItemCollectionMode::Eager {
                    let item = self.tcx.hir().item(id);
                    create_mono_items_for_default_impls(self.tcx, item, self.output);
                }
            }
            DefKind::Fn => {
                self.push_if_root(id.owner_id.def_id);
            }
            _ => {}
        }
    }

    fn process_impl_item(&mut self, id: hir::ImplItemId) {
        if matches!(self.tcx.def_kind(id.owner_id), DefKind::AssocFn) {
            self.push_if_root(id.owner_id.def_id);
        }
    }

    fn is_root(&self, def_id: LocalDefId) -> bool {
        !item_requires_monomorphization(self.tcx, def_id)
            && match self.mode {
                MonoItemCollectionMode::Eager => true,
                MonoItemCollectionMode::Lazy => {
                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
                        || self.tcx.is_reachable_non_generic(def_id)
                        || self
                            .tcx
                            .codegen_fn_attrs(def_id)
                            .flags
                            .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
                }
            }
    }
    /// If `def_id` represents a root, pushes it onto the list of
    /// outputs. (Note that all roots must be monomorphic.)
    #[instrument(skip(self), level = "debug")]
    fn push_if_root(&mut self, def_id: LocalDefId) {
        if self.is_root(def_id) {
            debug!("found root");

            let instance = Instance::mono(self.tcx, def_id.to_def_id());
            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
        }
    }
    /// As a special case, when/if we encounter the
    /// `main()` function, we also have to generate a
    /// monomorphized copy of the start lang item based on
    /// the return type of `main`. This is not needed when
    /// the user writes their own `start` manually.
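    ///
    /// Conceptually (an illustrative, simplified sketch; not the real lang
    /// item signature), `fn main() -> T` forces a monomorphization of the
    /// generic `start` entry shim for that `T`:
    ///
    /// ```rust,ignore (illustrative sketch)
    /// // Simplified shape of the `start` lang item:
    /// fn start<T: Termination>(main: fn() -> T, argc: isize, argv: *const *const u8) -> isize;
    /// // For `fn main() -> ()`, collection adds the mono item `start::<()>`.
    /// ```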
    fn push_extra_entry_roots(&mut self) {
        let Some((main_def_id, EntryFnType::Main { .. })) = self.entry_fn else {
            return;
        };

        let start_def_id = self.tcx.require_lang_item(LangItem::Start, None);
        let main_ret_ty = self.tcx.fn_sig(main_def_id).output();

        // Given that `main()` has no arguments,
        // then its return type cannot have
        // late-bound regions, since late-bound
        // regions must appear in the argument
        // listing.
        let main_ret_ty = self.tcx.normalize_erasing_regions(
            ty::ParamEnv::reveal_all(),
            main_ret_ty.no_bound_vars().unwrap(),
        );

        let start_instance = Instance::resolve(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            start_def_id,
            self.tcx.intern_substs(&[main_ret_ty.into()]),
        )
        .unwrap()
        .unwrap();

        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
    }
}
fn item_requires_monomorphization(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
    let generics = tcx.generics_of(def_id);
    generics.requires_monomorphization(tcx)
}
fn create_mono_items_for_default_impls<'tcx>(
    tcx: TyCtxt<'tcx>,
    item: &'tcx hir::Item<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    match item.kind {
        hir::ItemKind::Impl(ref impl_) => {
            if matches!(impl_.polarity, hir::ImplPolarity::Negative(_)) {
                return;
            }

            for param in impl_.generics.params {
                match param.kind {
                    hir::GenericParamKind::Lifetime { .. } => {}
                    hir::GenericParamKind::Type { .. } | hir::GenericParamKind::Const { .. } => {
                        return;
                    }
                }
            }

            debug!(
                "create_mono_items_for_default_impls(item={})",
                tcx.def_path_str(item.owner_id.to_def_id())
            );

            if let Some(trait_ref) = tcx.impl_trait_ref(item.owner_id) {
                let trait_ref = trait_ref.subst_identity();

                let param_env = ty::ParamEnv::reveal_all();
                let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
                let overridden_methods = tcx.impl_item_implementor_ids(item.owner_id);
                for method in tcx.provided_trait_methods(trait_ref.def_id) {
                    if overridden_methods.contains_key(&method.def_id) {
                        continue;
                    }

                    if tcx.generics_of(method.def_id).own_requires_monomorphization() {
                        continue;
                    }

                    let substs =
                        InternalSubsts::for_item(tcx, method.def_id, |param, _| match param.kind {
                            GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
                            GenericParamDefKind::Type { .. }
                            | GenericParamDefKind::Const { .. } => {
                                trait_ref.substs[param.index as usize]
                            }
                        });

                    let instance =
                        ty::Instance::expect_resolve(tcx, param_env, method.def_id, substs);

                    let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
                    if mono_item.node.is_instantiable(tcx) && should_codegen_locally(tcx, &instance)
                    {
                        output.push(mono_item);
                    }
                }
            }
        }
        _ => bug!(),
    }
}
/// Scans the miri alloc in order to find function calls, closures, and drop-glue.
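///
/// For example (an illustrative sketch; `helper` and `HOOK` are hypothetical
/// names): the allocation backing a `static` can carry provenance that points
/// at functions or other statics, and those pointees must be collected too:
///
/// ```rust,ignore (illustrative sketch)
/// fn helper() {}
/// static HOOK: fn() = helper; // collecting `HOOK` also collects `helper`
/// ```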
fn collect_miri<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
    match tcx.global_alloc(alloc_id) {
        GlobalAlloc::Static(def_id) => {
            assert!(!tcx.is_thread_local_static(def_id));
            let instance = Instance::mono(tcx, def_id);
            if should_codegen_locally(tcx, &instance) {
                trace!("collecting static {:?}", def_id);
                output.push(dummy_spanned(MonoItem::Static(def_id)));
            }
        }
        GlobalAlloc::Memory(alloc) => {
            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
            for &inner in alloc.inner().provenance().ptrs().values() {
                rustc_data_structures::stack::ensure_sufficient_stack(|| {
                    collect_miri(tcx, inner, output);
                });
            }
        }
        GlobalAlloc::Function(fn_instance) => {
            if should_codegen_locally(tcx, &fn_instance) {
                trace!("collecting {:?} with {:#?}", alloc_id, fn_instance);
                output.push(create_fn_mono_item(tcx, fn_instance, DUMMY_SP));
            }
        }
        GlobalAlloc::VTable(ty, trait_ref) => {
            let alloc_id = tcx.vtable_allocation((ty, trait_ref));
            collect_miri(tcx, alloc_id, output)
        }
    }
}
/// Scans the MIR in order to find function calls, closures, and drop-glue.
#[instrument(skip(tcx, output), level = "debug")]
fn collect_neighbours<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    let body = tcx.instance_mir(instance.def);
    MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(&body);
}
#[instrument(skip(tcx, output), level = "debug")]
fn collect_const_value<'tcx>(
    tcx: TyCtxt<'tcx>,
    value: ConstValue<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    match value {
        ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => collect_miri(tcx, ptr.provenance, output),
        ConstValue::Slice { data: alloc, start: _, end: _ } | ConstValue::ByRef { alloc, .. } => {
            for &id in alloc.inner().provenance().ptrs().values() {
                collect_miri(tcx, id, output);
            }
        }
        _ => {}
    }
}