//! Mono Item Collection
//! ====================
//!
//! This module is responsible for discovering all items that will contribute
//! to code generation of the crate. The important part here is that it not only
//! needs to find syntax-level items (functions, structs, etc.) but also all
//! their monomorphized instantiations. Every non-generic, non-const function
//! maps to one LLVM artifact. Every generic function can produce
//! from zero to N artifacts, depending on the sets of type arguments it
//! is instantiated with.
//! This also applies to generic items from other crates: A generic definition
//! in crate X might produce monomorphizations that are compiled into crate Y.
//! We also have to collect these here.
//!
//! The following kinds of "mono items" are handled here:
//!
//! - Functions
//! - Methods
//! - Closures
//! - Statics
//! - Drop glue
//!
//! The following things also result in LLVM artifacts, but are not collected
//! here, since we instantiate them locally on demand when needed in a given
//! codegen unit:
//!
//! - Constants
//! - VTables
//!
//! General Algorithm
//! -----------------
//! Let's define some terms first:
//!
//! - A "mono item" is something that results in a function or global in
//!   the LLVM IR of a codegen unit. Mono items do not stand on their
//!   own, they can reference other mono items. For example, if function
//!   `foo()` calls function `bar()` then the mono item for `foo()`
//!   references the mono item for function `bar()`. In general, the
//!   definition for mono item A referencing a mono item B is that
//!   the LLVM artifact produced for A references the LLVM artifact produced
//!   for B.
//!
//! - Mono items and the references between them form a directed graph,
//!   where the mono items are the nodes and references form the edges.
//!   Let's call this graph the "mono item graph".
//!
//! - The mono item graph for a program contains all mono items
//!   that are needed in order to produce the complete LLVM IR of the program.
//!
//! The purpose of the algorithm implemented in this module is to build the
//! mono item graph for the current crate. It runs in two phases:
//!
//! 1. Discover the roots of the graph by traversing the HIR of the crate.
//! 2. Starting from the roots, find neighboring nodes by inspecting the MIR
//!    representation of the item corresponding to a given node, until no more
//!    new nodes are found.
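//!
//! The second phase is a standard worklist traversal. As a rough sketch only
//! (plain Rust over a toy graph; `neighbors_of` stands in for "inspect the
//! MIR of this item" and is not a real compiler API):
//!
//! ```rust
//! use std::collections::HashSet;
//!
//! // Toy stand-in for walking an item's MIR and returning its neighbors.
//! fn neighbors_of(item: u32) -> Vec<u32> {
//!     match item { 0 => vec![1, 2], 1 => vec![2], _ => vec![] }
//! }
//!
//! fn collect(roots: &[u32]) -> HashSet<u32> {
//!     let mut visited = HashSet::new();
//!     let mut worklist: Vec<u32> = roots.to_vec();
//!     while let Some(item) = worklist.pop() {
//!         // Only expand each node once; `insert` returns false on repeats.
//!         if visited.insert(item) {
//!             worklist.extend(neighbors_of(item));
//!         }
//!     }
//!     visited
//! }
//! ```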
//!
//! ### Discovering roots
//!
//! The roots of the mono item graph correspond to the public non-generic
//! syntactic items in the source code. We find them by walking the HIR of the
//! crate, and whenever we hit upon a public function, method, or static item,
//! we create a mono item consisting of the item's `DefId` and, since we only
//! consider non-generic items, an empty type-substitution set. (In eager
//! collection mode, during incremental compilation, all non-generic functions
//! are considered as roots, as well as when the `-Clink-dead-code` option is
//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
//! functions also always act as roots.)
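//!
//! For illustration only, in a crate like the following (lazy collection mode
//! assumed), only `exported` is a root; `helper` is reached later as a
//! neighbor, and `generic` only contributes mono items for instantiations that
//! are actually reached:
//!
//! ```rust
//! pub fn exported() -> u32 {
//!     // Root: public and non-generic.
//!     helper() + generic::<u32>(1)
//! }
//!
//! fn helper() -> u32 {
//!     // Not a root; collected as a neighbor of `exported`.
//!     40
//! }
//!
//! fn generic<T>(x: T) -> T {
//!     // Only `generic::<u32>` becomes a mono item here.
//!     x
//! }
//! ```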
//!
//! ### Finding neighbor nodes
//! Given a mono item node, we can discover neighbors by inspecting its
//! MIR. We walk the MIR and any time we hit upon something that signifies a
//! reference to another mono item, we have found a neighbor. Since the
//! mono item we are currently at is always monomorphic, we also know the
//! concrete type arguments of its neighbors, and so all neighbors again will be
//! monomorphic. The specific forms a reference to a neighboring node can take
//! in MIR are quite diverse. Here is an overview:
//!
//! #### Calling Functions/Methods
//! The most obvious form of one mono item referencing another is a
//! function or method call (represented by a CALL terminator in MIR). But
//! calls are not the only thing that might introduce a reference between two
//! function mono items, and as we will see below, they are just a
//! specialization of the form described next, and consequently will not get any
//! special treatment in the algorithm.
//!
//! #### Taking a reference to a function or method
//! A function does not need to actually be called in order to be a neighbor of
//! another function. It suffices to just take a reference in order to introduce
//! an edge. Consider the following example:
//!
//! ```
//! # use core::fmt::Display;
//! fn print_val<T: Display>(x: T) {
//!     println!("{}", x);
//! }
//!
//! fn call_fn(f: &dyn Fn(i32), x: i32) {
//!     f(x);
//! }
//!
//! fn main() {
//!     let print_i32 = print_val::<i32>;
//!     call_fn(&print_i32, 0);
//! }
//! ```
//!
//! The MIR of none of these functions will contain an explicit call to
//! `print_val::<i32>`. Nonetheless, in order to mono this program, we need
//! an instance of this function. Thus, whenever we encounter a function or
//! method in operand position, we treat it as a neighbor of the current
//! mono item. Calls are just a special case of that.
//!
//! #### Drop glue
//! Drop glue mono items are introduced by MIR drop-statements. The
//! generated mono item will again have drop-glue item neighbors if the
//! type to be dropped contains nested values that also need to be dropped. It
//! might also have a function item neighbor for the explicit `Drop::drop`
//! implementation of its type.
//!
//! #### Unsizing Casts
//! A subtle way of introducing neighbor edges is by casting to a trait object.
//! Since the resulting fat-pointer contains a reference to a vtable, we need to
//! instantiate all object-safe methods of the trait, as we need to store
//! pointers to these functions even if they never get called anywhere. This can
//! be seen as a special case of taking a function reference.
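//!
//! For illustration only: the cast below never calls `speak`, yet the
//! monomorphization of `<Dog as Speak>::speak` must still be collected,
//! because a pointer to it is stored in the vtable of `dyn Speak`:
//!
//! ```rust
//! trait Speak {
//!     fn speak(&self) -> &'static str;
//! }
//!
//! struct Dog;
//!
//! impl Speak for Dog {
//!     fn speak(&self) -> &'static str { "woof" }
//! }
//!
//! fn main() {
//!     // The unsizing cast `&Dog -> &dyn Speak` forces instantiation of
//!     // `<Dog as Speak>::speak`, even without any call through `s`.
//!     let s: &dyn Speak = &Dog;
//!     let _ = s;
//! }
//! ```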
//!
//! #### Boxes
//! Since `Box` expressions have special compiler support, no explicit calls to
//! `exchange_malloc()` and `box_free()` may show up in MIR, even if the
//! compiler will generate them. We have to observe `Rvalue::Box` expressions
//! and Box-typed drop-statements for that purpose.
//!
//! Interaction with Cross-Crate Inlining
//! -------------------------------------
//! The binary of a crate will not only contain machine code for the items
//! defined in the source code of that crate. It will also contain monomorphic
//! instantiations of any extern generic functions and of functions marked with
//! `#[inline]`.
//! The collection algorithm handles this more or less transparently. If it is
//! about to create a mono item for something with an external `DefId`,
//! it will take a look if the MIR for that item is available, and if so just
//! proceed normally. If the MIR is not available, it assumes that the item is
//! just linked to and no node is created; which is exactly what we want, since
//! no machine code should be generated in the current crate for such an item.
//!
//! Eager and Lazy Collection Mode
//! ------------------------------
//! Mono item collection can be performed in one of two modes:
//!
//! - Lazy mode means that items will only be instantiated when actually
//!   referenced. The goal is to produce the least amount of machine code
//!   possible.
//!
//! - Eager mode is meant to be used in conjunction with incremental compilation
//!   where a stable set of mono items is more important than a minimal
//!   one. Thus, eager mode will instantiate drop-glue for every drop-able type
//!   in the crate, even if no drop call for that type exists (yet). It will
//!   also instantiate default implementations of trait methods, something that
//!   otherwise is only done on demand.
//!
//! Open Issues
//! -----------
//! Some things are not yet fully implemented in the current version of this
//! module.
//!
//! ### Const Fns
//! Ideally, no mono item should be generated for const fns unless there
//! is a call to them that cannot be evaluated at compile time. At the moment
//! this is not implemented however: a mono item will be produced
//! regardless of whether it is actually needed or not.
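//!
//! For illustration only: under the ideal scheme described above, the
//! compile-time use below would need no mono item for `answer` at all, while
//! the runtime call still would:
//!
//! ```rust
//! const fn answer() -> u32 { 42 }
//!
//! // Fully evaluated at compile time; ideally no machine code needed.
//! const AT_COMPILE_TIME: u32 = answer();
//!
//! fn main() {
//!     // A runtime call would still require a mono item for `answer`.
//!     let at_runtime = answer();
//!     assert_eq!(AT_COMPILE_TIME, at_runtime);
//! }
//! ```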
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::{par_for_each_in, MTLock, MTLockRef};
use rustc_hir as hir;
use rustc_hir::def::DefKind;
use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
use rustc_hir::lang_items::LangItem;
use rustc_index::bit_set::GrowableBitSet;
use rustc_middle::mir::interpret::{AllocId, ConstValue};
use rustc_middle::mir::interpret::{ErrorHandled, GlobalAlloc, Scalar};
use rustc_middle::mir::mono::{InstantiationMode, MonoItem};
use rustc_middle::mir::visit::Visitor as MirVisitor;
use rustc_middle::mir::{self, Local, Location};
use rustc_middle::query::TyCtxtAt;
use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCast};
use rustc_middle::ty::print::with_no_trimmed_paths;
use rustc_middle::ty::subst::{GenericArgKind, InternalSubsts};
use rustc_middle::ty::{
    self, GenericParamDefKind, Instance, InstanceDef, Ty, TyCtxt, TypeFoldable, TypeVisitableExt,
};
use rustc_middle::{middle::codegen_fn_attrs::CodegenFnAttrFlags, mir::visit::TyContext};
use rustc_session::config::EntryFnType;
use rustc_session::lint::builtin::LARGE_ASSIGNMENTS;
use rustc_session::Limit;
use rustc_span::source_map::{dummy_spanned, respan, Span, Spanned, DUMMY_SP};
use rustc_target::abi::Size;

use std::ops::Range;
use std::path::PathBuf;
use crate::errors::{
    EncounteredErrorWhileInstantiating, LargeAssignmentsLint, RecursionLimit, TypeLengthLimit,
};
pub enum MonoItemCollectionMode {
    Eager,
    Lazy,
}
/// Maps every mono item to all mono items it references in its
/// body.
pub struct InliningMap<'tcx> {
    // Maps a source mono item to the range of mono items
    // accessed by it.
    // The range selects elements within the `targets` vecs.
    index: FxHashMap<MonoItem<'tcx>, Range<usize>>,
    targets: Vec<MonoItem<'tcx>>,

    // Contains one bit per mono item in the `targets` field. That bit
    // is true if that mono item needs to be inlined into every CGU.
    inlines: GrowableBitSet<usize>,
}
/// Struct to store mono items during each collection run, along with whether
/// they should be inlined. We call `instantiation_mode` to get their inlining
/// status when inserting new elements, which avoids calling it in
/// `inlining_map.lock_mut()`. See the `collect_items_rec` implementation
/// below.
struct MonoItems<'tcx> {
    // If this is false, we do not need to compute whether items
    // will need to be inlined.
    compute_inlining: bool,

    // The TyCtxt used to determine whether an item should
    // be inlined.
    tcx: TyCtxt<'tcx>,

    // The collected mono items. The bool field in each element
    // indicates whether this element should be inlined.
    items: Vec<(Spanned<MonoItem<'tcx>>, bool /*inlined*/)>,
}
impl<'tcx> MonoItems<'tcx> {
    #[inline]
    fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
        self.extend([item]);
    }

    #[inline]
    fn extend<T: IntoIterator<Item = Spanned<MonoItem<'tcx>>>>(&mut self, iter: T) {
        self.items.extend(iter.into_iter().map(|mono_item| {
            let inlined = if !self.compute_inlining {
                false
            } else {
                mono_item.node.instantiation_mode(self.tcx) == InstantiationMode::LocalCopy
            };
            (mono_item, inlined)
        }))
    }
}
impl<'tcx> InliningMap<'tcx> {
    fn new() -> InliningMap<'tcx> {
        InliningMap {
            index: FxHashMap::default(),
            targets: Vec::new(),
            inlines: GrowableBitSet::with_capacity(1024),
        }
    }

    fn record_accesses<'a>(
        &mut self,
        source: MonoItem<'tcx>,
        new_targets: &'a [(Spanned<MonoItem<'tcx>>, bool)],
    ) where
        'tcx: 'a,
    {
        let start_index = self.targets.len();
        let new_items_count = new_targets.len();
        let new_items_count_total = new_items_count + self.targets.len();

        self.targets.reserve(new_items_count);
        self.inlines.ensure(new_items_count_total);

        for (i, (Spanned { node: mono_item, .. }, inlined)) in new_targets.into_iter().enumerate() {
            self.targets.push(*mono_item);
            if *inlined {
                self.inlines.insert(i + start_index);
            }
        }

        let end_index = self.targets.len();
        assert!(self.index.insert(source, start_index..end_index).is_none());
    }
    /// Internally iterate over all items referenced by `source` which will be
    /// made available for inlining.
    pub fn with_inlining_candidates<F>(&self, source: MonoItem<'tcx>, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>),
    {
        if let Some(range) = self.index.get(&source) {
            for (i, candidate) in self.targets[range.clone()].iter().enumerate() {
                if self.inlines.contains(range.start + i) {
                    f(*candidate);
                }
            }
        }
    }

    /// Internally iterate over all items and the things each accesses.
    pub fn iter_accesses<F>(&self, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>, &[MonoItem<'tcx>]),
    {
        for (&accessor, range) in &self.index {
            f(accessor, &self.targets[range.clone()])
        }
    }
}
#[instrument(skip(tcx, mode), level = "debug")]
pub fn collect_crate_mono_items(
    tcx: TyCtxt<'_>,
    mode: MonoItemCollectionMode,
) -> (FxHashSet<MonoItem<'_>>, InliningMap<'_>) {
    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");

    let roots =
        tcx.sess.time("monomorphization_collector_root_collections", || collect_roots(tcx, mode));

    debug!("building mono item graph, beginning at roots");

    let mut visited = MTLock::new(FxHashSet::default());
    let mut inlining_map = MTLock::new(InliningMap::new());
    let recursion_limit = tcx.recursion_limit();

    {
        let visited: MTLockRef<'_, _> = &mut visited;
        let inlining_map: MTLockRef<'_, _> = &mut inlining_map;

        tcx.sess.time("monomorphization_collector_graph_walk", || {
            par_for_each_in(roots, |root| {
                let mut recursion_depths = DefIdMap::default();
                collect_items_rec(
                    tcx,
                    dummy_spanned(root),
                    visited,
                    &mut recursion_depths,
                    recursion_limit,
                    inlining_map,
                );
            });
        });
    }

    (visited.into_inner(), inlining_map.into_inner())
}
// Find all non-generic items by walking the HIR. These items serve as roots to
// start monomorphizing from.
#[instrument(skip(tcx, mode), level = "debug")]
fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionMode) -> Vec<MonoItem<'_>> {
    debug!("collecting roots");
    let mut roots = MonoItems { compute_inlining: false, tcx, items: Vec::new() };

    {
        let entry_fn = tcx.entry_fn(());

        debug!("collect_roots: entry_fn = {:?}", entry_fn);

        let mut collector = RootCollector { tcx, mode, entry_fn, output: &mut roots };

        let crate_items = tcx.hir_crate_items(());

        for id in crate_items.items() {
            collector.process_item(id);
        }

        for id in crate_items.impl_items() {
            collector.process_impl_item(id);
        }

        collector.push_extra_entry_roots();
    }

    // We can only codegen items that are instantiable - items all of
    // whose predicates hold. Luckily, items that aren't instantiable
    // can't actually be used, so we can just skip codegenning them.
    roots
        .items
        .into_iter()
        .filter_map(|(Spanned { node: mono_item, .. }, _)| {
            mono_item.is_instantiable(tcx).then_some(mono_item)
        })
        .collect()
}
/// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
/// post-monomorphization error is encountered during a collection step.
#[instrument(skip(tcx, visited, recursion_depths, recursion_limit, inlining_map), level = "debug")]
fn collect_items_rec<'tcx>(
    tcx: TyCtxt<'tcx>,
    starting_point: Spanned<MonoItem<'tcx>>,
    visited: MTLockRef<'_, FxHashSet<MonoItem<'tcx>>>,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
    inlining_map: MTLockRef<'_, InliningMap<'tcx>>,
) {
    if !visited.lock_mut().insert(starting_point.node) {
        // We've been here already, no need to search again.
        return;
    }

    let mut neighbors = MonoItems { compute_inlining: true, tcx, items: Vec::new() };
    let recursion_depth_reset;
    // Post-monomorphization errors MVP
    //
    // We can encounter errors while monomorphizing an item, but we don't have a good way of
    // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
    // (It's also currently unclear exactly which diagnostics and information would be interesting
    // to report in such cases)
    //
    // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
    // shown with just a spanned piece of code causing the error, without information on where
    // it was called from. This is especially obscure if the erroneous mono item is in a
    // dependency. See for example issue #85155, where, before minimization, a PME happened two
    // crates downstream from libcore's stdarch, without a way to know which dependency was the
    // cause.
    //
    // If such an error occurs in the current crate, its span will be enough to locate the
    // source. If the cause is in another crate, the goal here is to quickly locate which mono
    // item in the current crate is ultimately responsible for causing the error.
    //
    // To give at least _some_ context to the user: while collecting mono items, we check the
    // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
    // current step of mono items collection.
    //
    // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
    let error_count = tcx.sess.diagnostic().err_count();
    match starting_point.node {
        MonoItem::Static(def_id) => {
            let instance = Instance::mono(tcx, def_id);

            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            let ty = instance.ty(tcx, ty::ParamEnv::reveal_all());
            visit_drop_use(tcx, ty, true, starting_point.span, &mut neighbors);

            recursion_depth_reset = None;

            if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
                for &id in alloc.inner().provenance().ptrs().values() {
                    collect_miri(tcx, id, &mut neighbors);
                }
            }

            if tcx.needs_thread_local_shim(def_id) {
                neighbors.push(respan(
                    starting_point.span,
                    MonoItem::Fn(Instance {
                        def: InstanceDef::ThreadLocalShim(def_id),
                        substs: InternalSubsts::empty(),
                    }),
                ));
            }
        }
        MonoItem::Fn(instance) => {
            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            // Keep track of the monomorphization recursion depth
            recursion_depth_reset = Some(check_recursion_limit(
                tcx,
                instance,
                starting_point.span,
                recursion_depths,
                recursion_limit,
            ));
            check_type_length_limit(tcx, instance);

            rustc_data_structures::stack::ensure_sufficient_stack(|| {
                collect_neighbours(tcx, instance, &mut neighbors);
            });
        }
        MonoItem::GlobalAsm(item_id) => {
            recursion_depth_reset = None;

            let item = tcx.hir().item(item_id);
            if let hir::ItemKind::GlobalAsm(asm) = item.kind {
                for (op, op_sp) in asm.operands {
                    match op {
                        hir::InlineAsmOperand::Const { .. } => {
                            // Only constants which resolve to a plain integer
                            // are supported. Therefore the value should not
                            // depend on any other items.
                        }
                        hir::InlineAsmOperand::SymFn { anon_const } => {
                            let fn_ty =
                                tcx.typeck_body(anon_const.body).node_type(anon_const.hir_id);
                            visit_fn_use(tcx, fn_ty, false, *op_sp, &mut neighbors);
                        }
                        hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
                            let instance = Instance::mono(tcx, *def_id);
                            if should_codegen_locally(tcx, &instance) {
                                trace!("collecting static {:?}", def_id);
                                neighbors.push(dummy_spanned(MonoItem::Static(*def_id)));
                            }
                        }
                        hir::InlineAsmOperand::In { .. }
                        | hir::InlineAsmOperand::Out { .. }
                        | hir::InlineAsmOperand::InOut { .. }
                        | hir::InlineAsmOperand::SplitInOut { .. } => {
                            span_bug!(*op_sp, "invalid operand type for global_asm!")
                        }
                    }
                }
            } else {
                span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
            }
        }
    }
    // Check for PMEs and emit a diagnostic if one happened. To try to show relevant edges of the
    // mono item graph, we also emit a note pointing at the mono item that was being collected
    // when the error occurred.
    if tcx.sess.diagnostic().err_count() > error_count
        && starting_point.node.is_generic_fn()
        && starting_point.node.is_user_defined()
    {
        let formatted_item = with_no_trimmed_paths!(starting_point.node.to_string());
        tcx.sess.emit_note(EncounteredErrorWhileInstantiating {
            span: starting_point.span,
            formatted_item,
        });
    }
    inlining_map.lock_mut().record_accesses(starting_point.node, &neighbors.items);

    for (neighbour, _) in neighbors.items {
        collect_items_rec(tcx, neighbour, visited, recursion_depths, recursion_limit, inlining_map);
    }

    if let Some((def_id, depth)) = recursion_depth_reset {
        recursion_depths.insert(def_id, depth);
    }
}
/// Format instance name that is already known to be too long for rustc.
/// Show only the first 2 types if it is longer than 32 characters to avoid blasting
/// the user's terminal with thousands of lines of type-name.
///
/// If the type name is longer than before+after, it will be written to a file.
fn shrunk_instance_name<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: &Instance<'tcx>,
) -> (String, Option<PathBuf>) {
    let s = instance.to_string();

    // Only use the shrunk version if it's really shorter.
    // This also avoids the case where before and after slices overlap.
    if s.chars().nth(33).is_some() {
        let shrunk = format!("{}", ty::ShortInstance(instance, 4));
        if shrunk == s {
            return (s, None);
        }

        let path = tcx.output_filenames(()).temp_path_ext("long-type.txt", None);
        let written_to_path = std::fs::write(&path, s).ok().map(|_| path);

        (shrunk, written_to_path)
    } else {
        (s, None)
    }
}
fn check_recursion_limit<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    span: Span,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
) -> (DefId, usize) {
    let def_id = instance.def_id();
    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
    debug!(" => recursion depth={}", recursion_depth);

    let adjusted_recursion_depth = if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
        // HACK: drop_in_place creates tight monomorphization loops. Give
        // it more margin.
        recursion_depth / 4
    } else {
        recursion_depth
    };

    // Code that needs to instantiate the same function recursively
    // more than the recursion limit is assumed to be causing an
    // infinite expansion.
    if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
        let def_span = tcx.def_span(def_id);
        let def_path_str = tcx.def_path_str(def_id);
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
        let mut path = PathBuf::new();
        let was_written = if let Some(written_to_path) = written_to_path {
            path = written_to_path;
            Some(())
        } else {
            None
        };
        tcx.sess.emit_fatal(RecursionLimit {
            span,
            shrunk,
            def_span,
            def_path_str,
            was_written,
            path,
        });
    }

    recursion_depths.insert(def_id, recursion_depth + 1);

    (def_id, recursion_depth)
}
fn check_type_length_limit<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) {
    let type_length = instance
        .substs
        .iter()
        .flat_map(|arg| arg.walk())
        .filter(|arg| match arg.unpack() {
            GenericArgKind::Type(_) | GenericArgKind::Const(_) => true,
            GenericArgKind::Lifetime(_) => false,
        })
        .count();
    debug!(" => type length={}", type_length);

    // Rust code can easily create exponentially-long types using only a
    // polynomial recursion depth. Even with the default recursion
    // depth, you can easily get cases that take >2^60 steps to run,
    // which means that rustc basically hangs.
    //
    // Bail out in these cases to avoid that bad user experience.
    if !tcx.type_length_limit().value_within_limit(type_length) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
        let span = tcx.def_span(instance.def_id());
        let mut path = PathBuf::new();
        let was_written = if let Some(path2) = written_to_path {
            path = path2;
            Some(())
        } else {
            None
        };
        tcx.sess.emit_fatal(TypeLengthLimit { span, shrunk, was_written, path, type_length });
    }
}
struct MirNeighborCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    body: &'a mir::Body<'tcx>,
    output: &'a mut MonoItems<'tcx>,
    instance: Instance<'tcx>,
}

impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
    pub fn monomorphize<T>(&self, value: T) -> T
    where
        T: TypeFoldable<TyCtxt<'tcx>>,
    {
        debug!("monomorphize: self.instance={:?}", self.instance);
        self.instance.subst_mir_and_normalize_erasing_regions(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            ty::EarlyBinder(value),
        )
    }
}
impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
        debug!("visiting rvalue {:?}", *rvalue);

        let span = self.body.source_info(location).span;

        match *rvalue {
            // When doing a cast from a regular pointer to a fat pointer, we
            // have to instantiate all methods of the trait being cast to, so we
            // can build the appropriate vtable.
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::Unsize),
                ref operand,
                target_ty,
            )
            | mir::Rvalue::Cast(mir::CastKind::DynStar, ref operand, target_ty) => {
                let target_ty = self.monomorphize(target_ty);
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                let (source_ty, target_ty) =
                    find_vtable_types_for_unsizing(self.tcx.at(span), source_ty, target_ty);
                // This could also be a different Unsize instruction, like
                // from a fixed sized array to a slice. But we are only
                // interested in things that produce a vtable.
                if (target_ty.is_trait() && !source_ty.is_trait())
                    || (target_ty.is_dyn_star() && !source_ty.is_dyn_star())
                {
                    create_mono_items_for_vtable_methods(
                        self.tcx,
                        target_ty,
                        source_ty,
                        span,
                        self.output,
                    );
                }
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ReifyFnPointer),
                ref operand,
                _,
            ) => {
                let fn_ty = operand.ty(self.body, self.tcx);
                let fn_ty = self.monomorphize(fn_ty);
                visit_fn_use(self.tcx, fn_ty, false, span, &mut self.output);
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ClosureFnPointer(_)),
                ref operand,
                _,
            ) => {
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                match *source_ty.kind() {
                    ty::Closure(def_id, substs) => {
                        let instance = Instance::resolve_closure(
                            self.tcx,
                            def_id,
                            substs,
                            ty::ClosureKind::FnOnce,
                        )
                        .expect("failed to normalize and resolve closure during codegen");
                        if should_codegen_locally(self.tcx, &instance) {
                            self.output.push(create_fn_mono_item(self.tcx, instance, span));
                        }
                    }
                    _ => bug!(),
                }
            }
            mir::Rvalue::ThreadLocalRef(def_id) => {
                assert!(self.tcx.is_thread_local_static(def_id));
                let instance = Instance::mono(self.tcx, def_id);
                if should_codegen_locally(self.tcx, &instance) {
                    trace!("collecting thread-local static {:?}", def_id);
                    self.output.push(respan(span, MonoItem::Static(def_id)));
                }
            }
            _ => { /* not interesting */ }
        }

        self.super_rvalue(rvalue, location);
    }
    /// This does not walk the constant, as it has been handled entirely here and trying
    /// to walk it would attempt to evaluate the `ty::Const` inside, which doesn't necessarily
    /// work, as some constants cannot be represented in the type system.
    #[instrument(skip(self), level = "debug")]
    fn visit_constant(&mut self, constant: &mir::Constant<'tcx>, location: Location) {
        let literal = self.monomorphize(constant.literal);
        let val = match literal {
            mir::ConstantKind::Val(val, _) => val,
            mir::ConstantKind::Ty(ct) => match ct.kind() {
                ty::ConstKind::Value(val) => self.tcx.valtree_to_const_val((ct.ty(), val)),
                ty::ConstKind::Unevaluated(ct) => {
                    let param_env = ty::ParamEnv::reveal_all();
                    match self.tcx.const_eval_resolve(param_env, ct.expand(), None) {
                        // The `monomorphize` call should have evaluated that constant already.
                        Ok(val) => val,
                        Err(ErrorHandled::Reported(_)) => return,
                        Err(ErrorHandled::TooGeneric) => span_bug!(
                            self.body.source_info(location).span,
                            "collection encountered polymorphic constant: {:?}",
                            literal
                        ),
                    }
                }
                _ => return,
            },
            mir::ConstantKind::Unevaluated(uv, _) => {
                let param_env = ty::ParamEnv::reveal_all();
                match self.tcx.const_eval_resolve(param_env, uv, None) {
                    // The `monomorphize` call should have evaluated that constant already.
                    Ok(val) => val,
                    Err(ErrorHandled::Reported(_)) => return,
                    Err(ErrorHandled::TooGeneric) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered polymorphic constant: {:?}",
                        literal
                    ),
                }
            }
        };
        collect_const_value(self.tcx, val, self.output);
        MirVisitor::visit_ty(self, literal.ty(), TyContext::Location(location));
    }
    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
        debug!("visiting terminator {:?} @ {:?}", terminator, location);
        let source = self.body.source_info(location).span;

        let tcx = self.tcx;
        match terminator.kind {
            mir::TerminatorKind::Call { ref func, .. } => {
                let callee_ty = func.ty(self.body, tcx);
                let callee_ty = self.monomorphize(callee_ty);
                visit_fn_use(self.tcx, callee_ty, true, source, &mut self.output)
            }
            mir::TerminatorKind::Drop { ref place, .. } => {
                let ty = place.ty(self.body, self.tcx).ty;
                let ty = self.monomorphize(ty);
                visit_drop_use(self.tcx, ty, true, source, self.output);
            }
            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
                for op in operands {
                    match *op {
                        mir::InlineAsmOperand::SymFn { ref value } => {
                            let fn_ty = self.monomorphize(value.literal.ty());
                            visit_fn_use(self.tcx, fn_ty, false, source, &mut self.output);
                        }
                        mir::InlineAsmOperand::SymStatic { def_id } => {
                            let instance = Instance::mono(self.tcx, def_id);
                            if should_codegen_locally(self.tcx, &instance) {
                                trace!("collecting asm sym static {:?}", def_id);
                                self.output.push(respan(source, MonoItem::Static(def_id)));
                            }
                        }
                        _ => {}
                    }
                }
            }
            mir::TerminatorKind::Assert { ref msg, .. } => {
                let lang_item = match &**msg {
                    mir::AssertKind::BoundsCheck { .. } => LangItem::PanicBoundsCheck,
                    _ => LangItem::Panic,
                };
                let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, Some(source)));
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Terminate { .. } => {
                let instance = Instance::mono(
                    tcx,
                    tcx.require_lang_item(LangItem::PanicCannotUnwind, Some(source)),
                );
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Goto { .. }
            | mir::TerminatorKind::SwitchInt { .. }
            | mir::TerminatorKind::Resume
            | mir::TerminatorKind::Return
            | mir::TerminatorKind::Unreachable => {}
            mir::TerminatorKind::GeneratorDrop
            | mir::TerminatorKind::Yield { .. }
            | mir::TerminatorKind::FalseEdge { .. }
            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
        }

        if let Some(mir::UnwindAction::Terminate) = terminator.unwind() {
            let instance = Instance::mono(
                tcx,
                tcx.require_lang_item(LangItem::PanicCannotUnwind, Some(source)),
            );
            if should_codegen_locally(tcx, &instance) {
                self.output.push(create_fn_mono_item(tcx, instance, source));
            }
        }

        self.super_terminator(terminator, location);
    }
    fn visit_operand(&mut self, operand: &mir::Operand<'tcx>, location: Location) {
        self.super_operand(operand, location);
        let limit = self.tcx.move_size_limit().0;
        if limit == 0 {
            return;
        }
        let limit = Size::from_bytes(limit);
        let ty = operand.ty(self.body, self.tcx);
        let ty = self.monomorphize(ty);
        let layout = self.tcx.layout_of(ty::ParamEnv::reveal_all().and(ty));
        if let Ok(layout) = layout {
            if layout.size > limit {
                let source_info = self.body.source_info(location);
                debug!(?source_info);
                let lint_root = source_info.scope.lint_root(&self.body.source_scopes);
                let Some(lint_root) = lint_root else {
                    // This happens when the issue is in a function from a foreign crate that
                    // we monomorphized in the current crate. We can't get a `HirId` for things
                    // from other crates.
                    // FIXME: Find out where to report the lint on. Maybe simply crate-level lint root
                    // but correct span? This would make the lint at least accept crate-level lint attributes.
                    return;
                };
                self.tcx.emit_spanned_lint(
                    LARGE_ASSIGNMENTS,
                    lint_root,
                    source_info.span,
                    LargeAssignmentsLint {
                        span: source_info.span,
                        size: layout.size.bytes(),
                        limit: limit.bytes(),
                    },
                );
            }
        }
    }

    fn visit_local(
        &mut self,
        _place_local: Local,
        _context: mir::visit::PlaceContext,
        _location: Location,
    ) {
    }
}
fn visit_drop_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    let instance = Instance::resolve_drop_in_place(tcx, ty);
    visit_instance_use(tcx, instance, is_direct_call, source, output);
}
fn visit_fn_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    if let ty::FnDef(def_id, substs) = *ty.kind() {
        let instance = if is_direct_call {
            ty::Instance::expect_resolve(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
        } else {
            match ty::Instance::resolve_for_fn_ptr(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
            {
                Some(instance) => instance,
                _ => bug!("failed to resolve instance for {ty}"),
            }
        };
        visit_instance_use(tcx, instance, is_direct_call, source, output);
    }
}
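// For illustration (sketch, hypothetical names): a direct call resolves its
// callee eagerly via `expect_resolve`, while creating a function pointer goes
// through `resolve_for_fn_ptr`, which may introduce a `ReifyShim`:
//
// ```rust,ignore (illustrative)
// fn f() {}
// f();              // direct call
// let p: fn() = f;  // reified function pointer
// ```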
fn visit_instance_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: ty::Instance<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    debug!("visit_item_use({:?}, is_direct_call={:?})", instance, is_direct_call);
    if !should_codegen_locally(tcx, &instance) {
        return;
    }

    match instance.def {
        ty::InstanceDef::Virtual(..) | ty::InstanceDef::Intrinsic(_) => {
            if !is_direct_call {
                bug!("{:?} being reified", instance);
            }
        }
        ty::InstanceDef::ThreadLocalShim(..) => {
            bug!("{:?} being reified", instance);
        }
        ty::InstanceDef::DropGlue(_, None) => {
            // Don't need to emit noop drop glue if we are calling directly.
            if !is_direct_call {
                output.push(create_fn_mono_item(tcx, instance, source));
            }
        }
        ty::InstanceDef::DropGlue(_, Some(_))
        | ty::InstanceDef::VTableShim(..)
        | ty::InstanceDef::ReifyShim(..)
        | ty::InstanceDef::ClosureOnceShim { .. }
        | ty::InstanceDef::Item(..)
        | ty::InstanceDef::FnPtrShim(..)
        | ty::InstanceDef::CloneShim(..)
        | ty::InstanceDef::FnPtrAddrShim(..) => {
            output.push(create_fn_mono_item(tcx, instance, source));
        }
    }
}
/// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
/// can just link to the upstream crate and therefore don't need a mono item.
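/// For illustration (hypothetical item names): a non-generic function exported
/// by an upstream crate can usually be linked against directly, while a local
/// item, or a generic instantiation first requested here, needs local codegen:
///
/// ```rust,ignore (illustrative)
/// upstream::plain_fn();                 // linkable upstream symbol: no mono item
/// upstream::generic_fn::<LocalType>();  // instantiated here: local mono item
/// local_fn();                           // local item: local mono item
/// ```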
fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: &Instance<'tcx>) -> bool {
    let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
        return true;
    };

    if tcx.is_foreign_item(def_id) {
        // Foreign items are always linked against, there's no way of instantiating them.
        return false;
    }

    if def_id.is_local() {
        // Local items cannot be referred to locally without monomorphizing them locally.
        return true;
    }

    if tcx.is_reachable_non_generic(def_id)
        || instance.polymorphize(tcx).upstream_monomorphization(tcx).is_some()
    {
        // We can link to the item in question, no instance needed in this crate.
        return false;
    }

    if let DefKind::Static(_) = tcx.def_kind(def_id) {
        // We cannot monomorphize statics from upstream crates.
        return false;
    }

    if !tcx.is_mir_available(def_id) {
        bug!("no MIR available for {:?}", def_id);
    }

    true
}
/// For a given pair of source and target type that occur in an unsizing coercion,
/// this function finds the pair of types that determines the vtable linking
/// them.
///
/// For example, the source type might be `&SomeStruct` and the target type
/// might be `&dyn SomeTrait` in a cast like:
///
/// ```rust,ignore (not real code)
/// let src: &SomeStruct = ...;
/// let target = src as &dyn SomeTrait;
/// ```
///
/// Then the output of this function would be (SomeStruct, SomeTrait) since for
/// constructing the `target` fat-pointer we need the vtable for that pair.
///
/// Things can get more complicated though because there's also the case where
/// the unsized type occurs as a field:
///
/// ```rust,ignore (not real code)
/// struct ComplexStruct<T: ?Sized> {
///     some_field: u32,
///     end: T,
/// }
/// ```
///
/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
/// is unsized, `&ComplexStruct<T>` is a fat pointer, and the vtable it points to
/// is for the pair of `T` (which is a trait) and the concrete type that `T` was
/// originally coerced from:
///
/// ```rust,ignore (not real code)
/// let src: &ComplexStruct<SomeStruct> = ...;
/// let target = src as &ComplexStruct<dyn SomeTrait>;
/// ```
///
/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
/// `(SomeStruct, SomeTrait)`.
///
/// Finally, there is also the case of custom unsizing coercions, e.g., for
/// smart pointers such as `Rc` and `Arc`.
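/// As a sketch of that case (assuming `SomeStruct: SomeTrait`, with the same
/// illustrative names as above):
///
/// ```rust,ignore (not real code)
/// let src: Rc<SomeStruct> = ...;
/// let target = src as Rc<dyn SomeTrait>;
/// ```
///
/// Here, too, the relevant pair is `(SomeStruct, SomeTrait)`, which determines
/// the vtable embedded in the fat `Rc`.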
fn find_vtable_types_for_unsizing<'tcx>(
    tcx: TyCtxtAt<'tcx>,
    source_ty: Ty<'tcx>,
    target_ty: Ty<'tcx>,
) -> (Ty<'tcx>, Ty<'tcx>) {
    let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
        let param_env = ty::ParamEnv::reveal_all();
        let type_has_metadata = |ty: Ty<'tcx>| -> bool {
            if ty.is_sized(tcx.tcx, param_env) {
                return false;
            }
            let tail = tcx.struct_tail_erasing_lifetimes(ty, param_env);
            match tail.kind() {
                ty::Foreign(..) => false,
                ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
                _ => bug!("unexpected unsized tail: {:?}", tail),
            }
        };
        if type_has_metadata(inner_source) {
            (inner_source, inner_target)
        } else {
            tcx.struct_lockstep_tails_erasing_lifetimes(inner_source, inner_target, param_env)
        }
    };

    match (&source_ty.kind(), &target_ty.kind()) {
        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
        | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
            ptr_vtable(*a, *b)
        }
        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
            ptr_vtable(source_ty.boxed_ty(), target_ty.boxed_ty())
        }

        // T as dyn* Trait
        (_, &ty::Dynamic(_, _, ty::DynStar)) => ptr_vtable(source_ty, target_ty),

        (&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
            assert_eq!(source_adt_def, target_adt_def);

            let CustomCoerceUnsized::Struct(coerce_index) =
                crate::custom_coerce_unsize_info(tcx, source_ty, target_ty);

            let source_fields = &source_adt_def.non_enum_variant().fields;
            let target_fields = &target_adt_def.non_enum_variant().fields;

            assert!(
                coerce_index.index() < source_fields.len()
                    && source_fields.len() == target_fields.len()
            );

            find_vtable_types_for_unsizing(
                tcx,
                source_fields[coerce_index].ty(*tcx, source_substs),
                target_fields[coerce_index].ty(*tcx, target_substs),
            )
        }
        _ => bug!(
            "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
            source_ty,
            target_ty
        ),
    }
}
#[instrument(skip(tcx), level = "debug", ret)]
fn create_fn_mono_item<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    source: Span,
) -> Spanned<MonoItem<'tcx>> {
    let def_id = instance.def_id();
    if tcx.sess.opts.unstable_opts.profile_closures && def_id.is_local() && tcx.is_closure(def_id) {
        crate::util::dump_closure_profile(tcx, instance);
    }

    respan(source, MonoItem::Fn(instance.polymorphize(tcx)))
}
/// Creates a `MonoItem` for each method that is referenced by the vtable for
/// the given trait/impl pair.
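/// For illustration (hypothetical names): the unsizing coercion
///
/// ```rust,ignore (not real code)
/// let object: &dyn SomeTrait = &some_struct;
/// ```
///
/// requires the vtable for `(SomeStruct, SomeTrait)`, so every method filling a
/// slot of that vtable becomes a mono item via this function.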
fn create_mono_items_for_vtable_methods<'tcx>(
    tcx: TyCtxt<'tcx>,
    trait_ty: Ty<'tcx>,
    impl_ty: Ty<'tcx>,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());

    if let ty::Dynamic(ref trait_ty, ..) = trait_ty.kind() {
        if let Some(principal) = trait_ty.principal() {
            let poly_trait_ref = principal.with_self_ty(tcx, impl_ty);
            assert!(!poly_trait_ref.has_escaping_bound_vars());

            // Walk all methods of the trait, including those of its supertraits
            let entries = tcx.vtable_entries(poly_trait_ref);
            let methods = entries
                .iter()
                .filter_map(|entry| match entry {
                    VtblEntry::MetadataDropInPlace
                    | VtblEntry::MetadataSize
                    | VtblEntry::MetadataAlign
                    | VtblEntry::Vacant => None,
                    VtblEntry::TraitVPtr(_) => {
                        // all super trait items already covered, so skip them.
                        None
                    }
                    VtblEntry::Method(instance) => {
                        Some(*instance).filter(|instance| should_codegen_locally(tcx, instance))
                    }
                })
                .map(|item| create_fn_mono_item(tcx, item, source));
            output.extend(methods);
        }

        // Also add the destructor.
        visit_drop_use(tcx, impl_ty, false, source, output);
    }
}
//=-----------------------------------------------------------------------------
// Root Collection
//=-----------------------------------------------------------------------------
struct RootCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    mode: MonoItemCollectionMode,
    output: &'a mut MonoItems<'tcx>,
    entry_fn: Option<(DefId, EntryFnType)>,
}
impl<'v> RootCollector<'_, 'v> {
    fn process_item(&mut self, id: hir::ItemId) {
        match self.tcx.def_kind(id.owner_id) {
            DefKind::Enum | DefKind::Struct | DefKind::Union => {
                if self.mode == MonoItemCollectionMode::Eager
                    && self.tcx.generics_of(id.owner_id).count() == 0
                {
                    debug!("RootCollector: ADT drop-glue for `{id:?}`");

                    let ty = self.tcx.type_of(id.owner_id.to_def_id()).no_bound_vars().unwrap();
                    visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
                }
            }
            DefKind::GlobalAsm => {
                debug!(
                    "RootCollector: ItemKind::GlobalAsm({})",
                    self.tcx.def_path_str(id.owner_id)
                );
                self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
            }
            DefKind::Static(..) => {
                let def_id = id.owner_id.to_def_id();
                debug!("RootCollector: ItemKind::Static({})", self.tcx.def_path_str(def_id));
                self.output.push(dummy_spanned(MonoItem::Static(def_id)));
            }
            DefKind::Const => {
                // const items only generate mono items if they are
                // actually used somewhere. Just declaring them is insufficient.

                // but even just declaring them must collect the items they refer to
                if let Ok(val) = self.tcx.const_eval_poly(id.owner_id.to_def_id()) {
                    collect_const_value(self.tcx, val, &mut self.output);
                }
            }
            DefKind::Impl { .. } => {
                if self.mode == MonoItemCollectionMode::Eager {
                    create_mono_items_for_default_impls(self.tcx, id, self.output);
                }
            }
            DefKind::Fn => {
                self.push_if_root(id.owner_id.def_id);
            }
            _ => {}
        }
    }
    fn process_impl_item(&mut self, id: hir::ImplItemId) {
        if matches!(self.tcx.def_kind(id.owner_id), DefKind::AssocFn) {
            self.push_if_root(id.owner_id.def_id);
        }
    }
    fn is_root(&self, def_id: LocalDefId) -> bool {
        !item_requires_monomorphization(self.tcx, def_id)
            && match self.mode {
                MonoItemCollectionMode::Eager => true,
                MonoItemCollectionMode::Lazy => {
                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
                        || self.tcx.is_reachable_non_generic(def_id)
                        || self
                            .tcx
                            .codegen_fn_attrs(def_id)
                            .flags
                            .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
                }
            }
    }
    /// If `def_id` represents a root, pushes it onto the list of
    /// outputs. (Note that all roots must be monomorphic.)
    #[instrument(skip(self), level = "debug")]
    fn push_if_root(&mut self, def_id: LocalDefId) {
        if self.is_root(def_id) {
            debug!("found root");

            let instance = Instance::mono(self.tcx, def_id.to_def_id());
            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
        }
    }
    /// As a special case, when/if we encounter the
    /// `main()` function, we also have to generate a
    /// monomorphized copy of the start lang item based on
    /// the return type of `main`. This is not needed when
    /// the user writes their own `start` manually.
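    ///
    /// For example (sketch, with a hypothetical error type):
    ///
    /// ```rust,ignore (illustrative)
    /// fn main() -> Result<(), MyError> { ... }
    /// // additionally collected: the start lang item instantiated with
    /// // `Result<(), MyError>`, conceptually `start::<Result<(), MyError>>`
    /// ```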
    fn push_extra_entry_roots(&mut self) {
        let Some((main_def_id, EntryFnType::Main { .. })) = self.entry_fn else {
            return;
        };

        let start_def_id = self.tcx.require_lang_item(LangItem::Start, None);
        let main_ret_ty = self.tcx.fn_sig(main_def_id).no_bound_vars().unwrap().output();

        // Given that `main()` has no arguments, its return type cannot have
        // late-bound regions, since late-bound regions must appear in the
        // argument listing.
        let main_ret_ty = self.tcx.normalize_erasing_regions(
            ty::ParamEnv::reveal_all(),
            main_ret_ty.no_bound_vars().unwrap(),
        );

        let start_instance = Instance::resolve(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            start_def_id,
            self.tcx.mk_substs(&[main_ret_ty.into()]),
        )
        .unwrap()
        .unwrap();

        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
    }
}
fn item_requires_monomorphization(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
    let generics = tcx.generics_of(def_id);
    generics.requires_monomorphization(tcx)
}
#[instrument(level = "debug", skip(tcx, output))]
fn create_mono_items_for_default_impls<'tcx>(
    tcx: TyCtxt<'tcx>,
    item: hir::ItemId,
    output: &mut MonoItems<'tcx>,
) {
    let polarity = tcx.impl_polarity(item.owner_id);
    if matches!(polarity, ty::ImplPolarity::Negative) {
        return;
    }

    if tcx.generics_of(item.owner_id).own_requires_monomorphization() {
        return;
    }

    let Some(trait_ref) = tcx.impl_trait_ref(item.owner_id) else {
        return;
    };

    // Lifetimes never affect trait selection, so we are allowed to eagerly
    // instantiate an instance of an impl method if the impl (and method,
    // which we check below) is only parameterized over lifetimes. In that case,
    // we use the ReErased, which has no lifetime information associated with
    // it, to validate whether or not the impl is legal to instantiate at all.
    let only_region_params = |param: &ty::GenericParamDef, _: &_| match param.kind {
        GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
        GenericParamDefKind::Type { .. } | GenericParamDefKind::Const { .. } => {
            unreachable!(
                "`own_requires_monomorphization` check means that \
                we should have no type/const params"
            )
        }
    };
    let impl_substs = InternalSubsts::for_item(tcx, item.owner_id.to_def_id(), only_region_params);
    let trait_ref = trait_ref.subst(tcx, impl_substs);

    // Unlike 'lazy' monomorphization that begins by collecting items transitively
    // called by `main` or other global items, when eagerly monomorphizing impl
    // items, we never actually check that the predicates of this impl are satisfied
    // in an empty reveal-all param env (i.e. with no assumptions).
    //
    // Even though this impl has no type or const substitutions, we don't
    // consider higher-ranked predicates such as `for<'a> &'a mut [u8]: Copy` to
    // be trivially false. We must now check that the impl has no impossible-to-satisfy
    // predicates.
    if tcx.subst_and_check_impossible_predicates((item.owner_id.to_def_id(), impl_substs)) {
        return;
    }

    let param_env = ty::ParamEnv::reveal_all();
    let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
    let overridden_methods = tcx.impl_item_implementor_ids(item.owner_id);
    for method in tcx.provided_trait_methods(trait_ref.def_id) {
        if overridden_methods.contains_key(&method.def_id) {
            continue;
        }

        if tcx.generics_of(method.def_id).own_requires_monomorphization() {
            continue;
        }

        // As mentioned above, the method is legal to eagerly instantiate if it
        // only has lifetime substitutions. This is validated by the
        // `own_requires_monomorphization` checks on both the impl and the method.
        let substs = trait_ref.substs.extend_to(tcx, method.def_id, only_region_params);
        let instance = ty::Instance::expect_resolve(tcx, param_env, method.def_id, substs);

        let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
        if mono_item.node.is_instantiable(tcx) && should_codegen_locally(tcx, &instance) {
            output.push(mono_item);
        }
    }
}
/// Scans the miri alloc in order to find function calls, closures, and drop-glue.
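/// For example (illustrative names), a static holding a function pointer records
/// that function as provenance in its allocation, so scanning the allocation
/// discovers a function mono item:
///
/// ```rust,ignore (illustrative)
/// static HOOK: fn() = some_function; // scanning HOOK's alloc finds `some_function`
/// ```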
fn collect_miri<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
    match tcx.global_alloc(alloc_id) {
        GlobalAlloc::Static(def_id) => {
            assert!(!tcx.is_thread_local_static(def_id));
            let instance = Instance::mono(tcx, def_id);
            if should_codegen_locally(tcx, &instance) {
                trace!("collecting static {:?}", def_id);
                output.push(dummy_spanned(MonoItem::Static(def_id)));
            }
        }
        GlobalAlloc::Memory(alloc) => {
            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
            for &inner in alloc.inner().provenance().ptrs().values() {
                rustc_data_structures::stack::ensure_sufficient_stack(|| {
                    collect_miri(tcx, inner, output);
                });
            }
        }
        GlobalAlloc::Function(fn_instance) => {
            if should_codegen_locally(tcx, &fn_instance) {
                trace!("collecting {:?} with {:#?}", alloc_id, fn_instance);
                output.push(create_fn_mono_item(tcx, fn_instance, DUMMY_SP));
            }
        }
        GlobalAlloc::VTable(ty, trait_ref) => {
            let alloc_id = tcx.vtable_allocation((ty, trait_ref));
            collect_miri(tcx, alloc_id, output)
        }
    }
}
/// Scans the MIR in order to find function calls, closures, and drop-glue.
#[instrument(skip(tcx, output), level = "debug")]
fn collect_neighbours<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    let body = tcx.instance_mir(instance.def);
    MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(&body);
}
#[instrument(skip(tcx, output), level = "debug")]
fn collect_const_value<'tcx>(
    tcx: TyCtxt<'tcx>,
    value: ConstValue<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    match value {
        ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => collect_miri(tcx, ptr.provenance, output),
        ConstValue::Slice { data: alloc, start: _, end: _ } | ConstValue::ByRef { alloc, .. } => {
            for &id in alloc.inner().provenance().ptrs().values() {
                collect_miri(tcx, id, output);