1 //! Mono Item Collection
2 //! ====================
3 //!
4 //! This module is responsible for discovering all items that will contribute
5 //! to code generation of the crate. The important part here is that it not only
6 //! needs to find syntax-level items (functions, structs, etc) but also all
7 //! their monomorphized instantiations. Every non-generic, non-const function
8 //! maps to one LLVM artifact. Every generic function can produce
9 //! from zero to N artifacts, depending on the sets of type arguments it
10 //! is instantiated with.
11 //! This also applies to generic items from other crates: A generic definition
12 //! in crate X might produce monomorphizations that are compiled into crate Y.
13 //! We also have to collect these here.
14 //!
15 //! The following kinds of "mono items" are handled here:
16 //!
17 //! - Functions
18 //! - Methods
19 //! - Closures
20 //! - Statics
21 //! - Drop glue
22 //!
23 //! The following things also result in LLVM artifacts, but are not collected
24 //! here, since we instantiate them locally on demand when needed in a given
25 //! codegen unit:
26 //!
27 //! - Constants
28 //! - VTables
29 //! - Object Shims
30 //!
31 //!
32 //! General Algorithm
33 //! -----------------
34 //! Let's define some terms first:
35 //!
36 //! - A "mono item" is something that results in a function or global in
37 //! the LLVM IR of a codegen unit. Mono items do not stand on their
38 //! own; they can reference other mono items. For example, if function
39 //! `foo()` calls function `bar()` then the mono item for `foo()`
40 //! references the mono item for function `bar()`. In general, the
41 //! definition for mono item A referencing a mono item B is that
42 //! the LLVM artifact produced for A references the LLVM artifact produced
43 //! for B.
44 //!
45 //! - Mono items and the references between them form a directed graph,
46 //! where the mono items are the nodes and references form the edges.
47 //! Let's call this graph the "mono item graph".
48 //!
49 //! - The mono item graph for a program contains all mono items
50 //! that are needed in order to produce the complete LLVM IR of the program.
51 //!
52 //! The purpose of the algorithm implemented in this module is to build the
53 //! mono item graph for the current crate. It runs in two phases:
54 //!
55 //! 1. Discover the roots of the graph by traversing the HIR of the crate.
56 //! 2. Starting from the roots, find neighboring nodes by inspecting the MIR
57 //! representation of the item corresponding to a given node, until no more
58 //! new nodes are found. (A simplified model of this loop is sketched below.)
59 //!
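//! The two phases above amount to a plain worklist loop. The sketch below is a
//! standalone illustration only: the string item type, the `roots` argument,
//! and the `references` closure are invented for the example and are not real
//! compiler APIs.
//!
//! ```
//! use std::collections::{HashSet, VecDeque};
//!
//! // `references` stands in for "inspect the MIR of `item` and report the
//! // mono items it mentions".
//! fn collect(
//!     roots: Vec<&'static str>,
//!     references: impl Fn(&str) -> Vec<&'static str>,
//! ) -> HashSet<&'static str> {
//!     let mut visited = HashSet::new();
//!     let mut worklist: VecDeque<&'static str> = roots.into_iter().collect();
//!     while let Some(item) = worklist.pop_front() {
//!         if visited.insert(item) {
//!             // Phase 2: enqueue the neighbors of every newly discovered node.
//!             worklist.extend(references(item));
//!         }
//!     }
//!     visited
//! }
//!
//! // Phase 1 would have produced the roots, e.g. `main`.
//! let graph = collect(vec!["main"], |item: &str| match item {
//!     "main" => vec!["print_val::<i32>", "drop_in_place::<String>"],
//!     _ => vec![],
//! });
//! assert!(graph.contains("print_val::<i32>"));
//! ```
//!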
60 //! ### Discovering roots
61 //!
62 //! The roots of the mono item graph correspond to the public non-generic
63 //! syntactic items in the source code. We find them by walking the HIR of the
64 //! crate, and whenever we hit upon a public function, method, or static item,
65 //! we create a mono item consisting of the item's `DefId` and, since we only
66 //! consider non-generic items, an empty type-substitution set. (In eager
67 //! collection mode, during incremental compilation, all non-generic functions
68 //! are considered as roots, as well as when the `-Clink-dead-code` option is
69 //! specified. Functions marked `#[no_mangle]` and functions called by inlinable
70 //! functions also always act as roots.)
71 //!
72 //! ### Finding neighbor nodes
73 //! Given a mono item node, we can discover neighbors by inspecting its
74 //! MIR. We walk the MIR and any time we hit upon something that signifies a
75 //! reference to another mono item, we have found a neighbor. Since the
76 //! mono item we are currently at is always monomorphic, we also know the
77 //! concrete type arguments of its neighbors, and so all neighbors again will be
78 //! monomorphic. The specific forms a reference to a neighboring node can take
79 //! in MIR are quite diverse. Here is an overview:
80 //!
81 //! #### Calling Functions/Methods
82 //! The most obvious form of one mono item referencing another is a
83 //! function or method call (represented by a CALL terminator in MIR). But
84 //! calls are not the only thing that might introduce a reference between two
85 //! function mono items, and as we will see below, they are just a
86 //! specialization of the form described next, and consequently will not get any
87 //! special treatment in the algorithm.
88 //!
89 //! #### Taking a reference to a function or method
90 //! A function does not need to actually be called in order to be a neighbor of
91 //! another function. It suffices to just take a reference in order to introduce
92 //! an edge. Consider the following example:
93 //!
94 //! ```
95 //! # use core::fmt::Display;
96 //! fn print_val<T: Display>(x: T) {
97 //!     println!("{}", x);
98 //! }
99 //!
100 //! fn call_fn(f: &dyn Fn(i32), x: i32) {
101 //!     f(x);
102 //! }
103 //!
104 //! fn main() {
105 //!     let print_i32 = print_val::<i32>;
106 //!     call_fn(&print_i32, 0);
107 //! }
108 //! ```
109 //! The MIR of none of these functions will contain an explicit call to
110 //! `print_val::<i32>`. Nonetheless, in order to monomorphize this program, we need
111 //! an instance of this function. Thus, whenever we encounter a function or
112 //! method in operand position, we treat it as a neighbor of the current
113 //! mono item. Calls are just a special case of that.
114 //!
115 //! #### Drop glue
116 //! Drop glue mono items are introduced by MIR drop-statements. The
117 //! generated mono item will again have drop-glue item neighbors if the
118 //! type to be dropped contains nested values that also need to be dropped. It
119 //! might also have a function item neighbor for the explicit `Drop::drop`
120 //! implementation of its type.
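//!
//! A minimal illustration (the type names below are invented for the example):
//! the drop glue for `Outer` references the drop glue of its `String` field as
//! well as the explicit `Drop::drop` impl of `Logger`.
//!
//! ```
//! struct Logger;
//!
//! impl Drop for Logger {
//!     fn drop(&mut self) {
//!         println!("logger dropped");
//!     }
//! }
//!
//! struct Outer {
//!     name: String,
//!     logger: Logger,
//! }
//!
//! // Dropping `_outer` at the end of its scope needs drop glue for `Outer`.
//! let _outer = Outer { name: String::from("x"), logger: Logger };
//! ```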
121 //!
122 //! #### Unsizing Casts
123 //! A subtle way of introducing neighbor edges is by casting to a trait object.
124 //! Since the resulting fat-pointer contains a reference to a vtable, we need to
125 //! instantiate all object-safe methods of the trait, as we need to store
126 //! pointers to these functions even if they never get called anywhere. This can
127 //! be seen as a special case of taking a function reference.
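//!
//! For example (trait and type names invented for the example), the cast below
//! needs a vtable for the pair `(English, dyn Greet)`, so both
//! `<English as Greet>::hello` and `<English as Greet>::bye` become mono items
//! even though only `hello` is ever called:
//!
//! ```
//! trait Greet {
//!     fn hello(&self) -> String {
//!         String::from("hello")
//!     }
//!     fn bye(&self) -> String {
//!         String::from("bye")
//!     }
//! }
//!
//! struct English;
//! impl Greet for English {}
//!
//! // Unsizing cast: `&English` -> `&dyn Greet`.
//! let greeter: &dyn Greet = &English;
//! assert_eq!(greeter.hello(), "hello");
//! ```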
128 //!
129 //! #### Boxes
130 //! Since `Box` expressions have special compiler support, no explicit calls to
131 //! `exchange_malloc()` and `box_free()` may show up in MIR, even if the
132 //! compiler will generate them. We have to observe `Rvalue::Box` expressions
133 //! and Box-typed drop-statements for that purpose.
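//!
//! For instance, nothing in the source of the function below mentions
//! `exchange_malloc()` or `box_free()`, yet allocating the box and later
//! dropping it depend on both:
//!
//! ```
//! fn boxed_zero() -> Box<u32> {
//!     Box::new(0)
//! }
//! assert_eq!(*boxed_zero(), 0);
//! ```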
134 //!
135 //!
136 //! Interaction with Cross-Crate Inlining
137 //! -------------------------------------
138 //! The binary of a crate will not only contain machine code for the items
139 //! defined in the source code of that crate. It will also contain monomorphic
140 //! instantiations of any extern generic functions and of functions marked with
141 //! `#[inline]`.
142 //! The collection algorithm handles this more or less transparently. If it is
143 //! about to create a mono item for something with an external `DefId`,
144 //! it will check whether the MIR for that item is available, and if so just
145 //! proceed normally. If the MIR is not available, it assumes that the item is
146 //! just linked to and no node is created, which is exactly what we want, since
147 //! no machine code should be generated in the current crate for such an item.
148 //!
149 //! Eager and Lazy Collection Mode
150 //! ------------------------------
151 //! Mono item collection can be performed in one of two modes:
152 //!
153 //! - Lazy mode means that items will only be instantiated when actually
154 //! referenced. The goal is to produce the least amount of machine code
155 //! possible.
156 //!
157 //! - Eager mode is meant to be used in conjunction with incremental compilation
158 //! where a stable set of mono items is more important than a minimal
159 //! one. Thus, eager mode will instantiate drop-glue for every drop-able type
160 //! in the crate, even if no drop call for that type exists (yet). It will
161 //! also instantiate default implementations of trait methods, something that
162 //! otherwise is only done on demand.
163 //!
164 //!
165 //! Open Issues
166 //! -----------
167 //! Some things are not yet fully implemented in the current version of this
168 //! module.
169 //!
170 //! ### Const Fns
171 //! Ideally, no mono item should be generated for const fns unless there
172 //! is a call to them that cannot be evaluated at compile time. At the moment
173 //! this is not implemented, however: a mono item will be produced
174 //! regardless of whether it is actually needed.
175
176 use rustc_data_structures::fx::{FxHashMap, FxHashSet};
177 use rustc_data_structures::sync::{par_for_each_in, MTLock, MTRef};
178 use rustc_hir as hir;
179 use rustc_hir::def::DefKind;
180 use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
181 use rustc_hir::lang_items::LangItem;
182 use rustc_index::bit_set::GrowableBitSet;
183 use rustc_middle::mir::interpret::{AllocId, ConstValue};
184 use rustc_middle::mir::interpret::{ErrorHandled, GlobalAlloc, Scalar};
185 use rustc_middle::mir::mono::{InstantiationMode, MonoItem};
186 use rustc_middle::mir::visit::Visitor as MirVisitor;
187 use rustc_middle::mir::{self, Local, Location};
188 use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCast};
189 use rustc_middle::ty::print::with_no_trimmed_paths;
190 use rustc_middle::ty::query::TyCtxtAt;
191 use rustc_middle::ty::subst::{GenericArgKind, InternalSubsts};
192 use rustc_middle::ty::{
193 self, GenericParamDefKind, Instance, Ty, TyCtxt, TypeFoldable, TypeVisitable, VtblEntry,
194 };
195 use rustc_middle::{middle::codegen_fn_attrs::CodegenFnAttrFlags, mir::visit::TyContext};
196 use rustc_session::config::EntryFnType;
197 use rustc_session::lint::builtin::LARGE_ASSIGNMENTS;
198 use rustc_session::Limit;
199 use rustc_span::source_map::{dummy_spanned, respan, Span, Spanned, DUMMY_SP};
200 use rustc_target::abi::Size;
201 use std::ops::Range;
202 use std::path::PathBuf;
203
204 use crate::errors::{LargeAssignmentsLint, RecursionLimit, TypeLengthLimit};
205
206 #[derive(PartialEq)]
207 pub enum MonoItemCollectionMode {
208 Eager,
209 Lazy,
210 }
211
212 /// Maps every mono item to all mono items it references in its
213 /// body.
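///
/// The map is stored flat: for a source item `s`, `index[&s]` is a range into
/// `targets`, and the bit at a target's position in `inlines` records whether
/// that target has to be instantiated in every codegen unit that references it.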
214 pub struct InliningMap<'tcx> {
215 // Maps a source mono item to the range of mono items
216 // accessed by it.
217 // The range selects elements within the `targets` vecs.
218 index: FxHashMap<MonoItem<'tcx>, Range<usize>>,
219 targets: Vec<MonoItem<'tcx>>,
220
221 // Contains one bit per mono item in the `targets` field. That bit
222 // is true if that mono item needs to be inlined into every CGU.
223 inlines: GrowableBitSet<usize>,
224 }
225
226 /// Struct to store the mono items collected in each step and whether they
227 /// should be inlined. We call `instantiation_mode` to get their inlining
228 /// status when inserting new elements, which avoids calling it while
229 /// holding the lock from `inlining_map.lock_mut()`. See the
230 /// `collect_items_rec` implementation below.
231 struct MonoItems<'tcx> {
232 // If this is false, we do not need to compute whether items
233 // will need to be inlined.
234 compute_inlining: bool,
235
236 // The TyCtxt used to determine whether an item should
237 // be inlined.
238 tcx: TyCtxt<'tcx>,
239
240 // The collected mono items. The bool field in each element
241 // indicates whether this element should be inlined.
242 items: Vec<(Spanned<MonoItem<'tcx>>, bool /*inlined*/)>,
243 }
244
245 impl<'tcx> MonoItems<'tcx> {
246 #[inline]
247 fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
248 self.extend([item]);
249 }
250
251 #[inline]
252 fn extend<T: IntoIterator<Item = Spanned<MonoItem<'tcx>>>>(&mut self, iter: T) {
253 self.items.extend(iter.into_iter().map(|mono_item| {
254 let inlined = if !self.compute_inlining {
255 false
256 } else {
257 mono_item.node.instantiation_mode(self.tcx) == InstantiationMode::LocalCopy
258 };
259 (mono_item, inlined)
260 }))
261 }
262 }
263
264 impl<'tcx> InliningMap<'tcx> {
265 fn new() -> InliningMap<'tcx> {
266 InliningMap {
267 index: FxHashMap::default(),
268 targets: Vec::new(),
269 inlines: GrowableBitSet::with_capacity(1024),
270 }
271 }
272
273 fn record_accesses<'a>(
274 &mut self,
275 source: MonoItem<'tcx>,
276 new_targets: &'a [(Spanned<MonoItem<'tcx>>, bool)],
277 ) where
278 'tcx: 'a,
279 {
280 let start_index = self.targets.len();
281 let new_items_count = new_targets.len();
282 let new_items_count_total = new_items_count + self.targets.len();
283
284 self.targets.reserve(new_items_count);
285 self.inlines.ensure(new_items_count_total);
286
287 for (i, (Spanned { node: mono_item, .. }, inlined)) in new_targets.into_iter().enumerate() {
288 self.targets.push(*mono_item);
289 if *inlined {
290 self.inlines.insert(i + start_index);
291 }
292 }
293
294 let end_index = self.targets.len();
295 assert!(self.index.insert(source, start_index..end_index).is_none());
296 }
297
298 /// Internally iterate over all items referenced by `source` which will be
299 /// made available for inlining.
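///
/// A hypothetical usage sketch (not compiled; `inlining_map`, `root` and
/// `place_in_same_cgu` are placeholders assumed to exist at the call site):
///
/// ```rust,ignore (illustrative)
/// inlining_map.with_inlining_candidates(root, |candidate| {
///     // `candidate` is referenced by `root` and uses
///     // `InstantiationMode::LocalCopy`, so every CGU that contains `root`
///     // also needs its own copy of `candidate`.
///     place_in_same_cgu(root, candidate);
/// });
/// ```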
300 pub fn with_inlining_candidates<F>(&self, source: MonoItem<'tcx>, mut f: F)
301 where
302 F: FnMut(MonoItem<'tcx>),
303 {
304 if let Some(range) = self.index.get(&source) {
305 for (i, candidate) in self.targets[range.clone()].iter().enumerate() {
306 if self.inlines.contains(range.start + i) {
307 f(*candidate);
308 }
309 }
310 }
311 }
312
313 /// Internally iterate over all items and the things each accesses.
314 pub fn iter_accesses<F>(&self, mut f: F)
315 where
316 F: FnMut(MonoItem<'tcx>, &[MonoItem<'tcx>]),
317 {
318 for (&accessor, range) in &self.index {
319 f(accessor, &self.targets[range.clone()])
320 }
321 }
322 }
323
324 #[instrument(skip(tcx, mode), level = "debug")]
325 pub fn collect_crate_mono_items(
326 tcx: TyCtxt<'_>,
327 mode: MonoItemCollectionMode,
328 ) -> (FxHashSet<MonoItem<'_>>, InliningMap<'_>) {
329 let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");
330
331 let roots =
332 tcx.sess.time("monomorphization_collector_root_collections", || collect_roots(tcx, mode));
333
334 debug!("building mono item graph, beginning at roots");
335
336 let mut visited = MTLock::new(FxHashSet::default());
337 let mut inlining_map = MTLock::new(InliningMap::new());
338 let recursion_limit = tcx.recursion_limit();
339
340 {
341 let visited: MTRef<'_, _> = &mut visited;
342 let inlining_map: MTRef<'_, _> = &mut inlining_map;
343
344 tcx.sess.time("monomorphization_collector_graph_walk", || {
345 par_for_each_in(roots, |root| {
346 let mut recursion_depths = DefIdMap::default();
347 collect_items_rec(
348 tcx,
349 dummy_spanned(root),
350 visited,
351 &mut recursion_depths,
352 recursion_limit,
353 inlining_map,
354 );
355 });
356 });
357 }
358
359 (visited.into_inner(), inlining_map.into_inner())
360 }
361
362 // Find all non-generic items by walking the HIR. These items serve as roots to
363 // start monomorphizing from.
364 #[instrument(skip(tcx, mode), level = "debug")]
365 fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionMode) -> Vec<MonoItem<'_>> {
366 debug!("collecting roots");
367 let mut roots = MonoItems { compute_inlining: false, tcx, items: Vec::new() };
368
369 {
370 let entry_fn = tcx.entry_fn(());
371
372 debug!("collect_roots: entry_fn = {:?}", entry_fn);
373
374 let mut collector = RootCollector { tcx, mode, entry_fn, output: &mut roots };
375
376 let crate_items = tcx.hir_crate_items(());
377
378 for id in crate_items.items() {
379 collector.process_item(id);
380 }
381
382 for id in crate_items.impl_items() {
383 collector.process_impl_item(id);
384 }
385
386 collector.push_extra_entry_roots();
387 }
388
389 // We can only codegen items that are instantiable - items all of
390 // whose predicates hold. Luckily, items that aren't instantiable
391 // can't actually be used, so we can just skip codegenning them.
392 roots
393 .items
394 .into_iter()
395 .filter_map(|(Spanned { node: mono_item, .. }, _)| {
396 mono_item.is_instantiable(tcx).then_some(mono_item)
397 })
398 .collect()
399 }
400
401 /// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
402 /// post-monomorphization error is encountered during a collection step.
403 #[instrument(skip(tcx, visited, recursion_depths, recursion_limit, inlining_map), level = "debug")]
404 fn collect_items_rec<'tcx>(
405 tcx: TyCtxt<'tcx>,
406 starting_point: Spanned<MonoItem<'tcx>>,
407 visited: MTRef<'_, MTLock<FxHashSet<MonoItem<'tcx>>>>,
408 recursion_depths: &mut DefIdMap<usize>,
409 recursion_limit: Limit,
410 inlining_map: MTRef<'_, MTLock<InliningMap<'tcx>>>,
411 ) {
412 if !visited.lock_mut().insert(starting_point.node) {
413 // We've been here already, no need to search again.
414 return;
415 }
416
417 let mut neighbors = MonoItems { compute_inlining: true, tcx, items: Vec::new() };
418 let recursion_depth_reset;
419
420 //
421 // Post-monomorphization errors MVP
422 //
423 // We can encounter errors while monomorphizing an item, but we don't have a good way of
424 // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
425 // (It's also currently unclear exactly which diagnostics and information would be interesting
426 // to report in such cases)
427 //
428 // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
429 // shown with just a spanned piece of code causing the error, without information on where
430 // it was called from. This is especially obscure if the erroneous mono item is in a
431 // dependency. See for example issue #85155, where, before minimization, a PME happened two
432 // crates downstream from libcore's stdarch, without a way to know which dependency was the
433 // cause.
434 //
435 // If such an error occurs in the current crate, its span will be enough to locate the
436 // source. If the cause is in another crate, the goal here is to quickly locate which mono
437 // item in the current crate is ultimately responsible for causing the error.
438 //
439 // To give at least _some_ context to the user: while collecting mono items, we check the
440 // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
441 // current step of mono items collection.
442 //
443 // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
444 let error_count = tcx.sess.diagnostic().err_count();
445
446 match starting_point.node {
447 MonoItem::Static(def_id) => {
448 let instance = Instance::mono(tcx, def_id);
449
450 // Sanity check whether this ended up being collected accidentally
451 debug_assert!(should_codegen_locally(tcx, &instance));
452
453 let ty = instance.ty(tcx, ty::ParamEnv::reveal_all());
454 visit_drop_use(tcx, ty, true, starting_point.span, &mut neighbors);
455
456 recursion_depth_reset = None;
457
458 if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
459 for &id in alloc.inner().provenance().ptrs().values() {
460 collect_miri(tcx, id, &mut neighbors);
461 }
462 }
463 }
464 MonoItem::Fn(instance) => {
465 // Sanity check whether this ended up being collected accidentally
466 debug_assert!(should_codegen_locally(tcx, &instance));
467
468 // Keep track of the monomorphization recursion depth
469 recursion_depth_reset = Some(check_recursion_limit(
470 tcx,
471 instance,
472 starting_point.span,
473 recursion_depths,
474 recursion_limit,
475 ));
476 check_type_length_limit(tcx, instance);
477
478 rustc_data_structures::stack::ensure_sufficient_stack(|| {
479 collect_neighbours(tcx, instance, &mut neighbors);
480 });
481 }
482 MonoItem::GlobalAsm(item_id) => {
483 recursion_depth_reset = None;
484
485 let item = tcx.hir().item(item_id);
486 if let hir::ItemKind::GlobalAsm(asm) = item.kind {
487 for (op, op_sp) in asm.operands {
488 match op {
489 hir::InlineAsmOperand::Const { .. } => {
490 // Only constants which resolve to a plain integer
491 // are supported. Therefore the value should not
492 // depend on any other items.
493 }
494 hir::InlineAsmOperand::SymFn { anon_const } => {
495 let fn_ty =
496 tcx.typeck_body(anon_const.body).node_type(anon_const.hir_id);
497 visit_fn_use(tcx, fn_ty, false, *op_sp, &mut neighbors);
498 }
499 hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
500 let instance = Instance::mono(tcx, *def_id);
501 if should_codegen_locally(tcx, &instance) {
502 trace!("collecting static {:?}", def_id);
503 neighbors.push(dummy_spanned(MonoItem::Static(*def_id)));
504 }
505 }
506 hir::InlineAsmOperand::In { .. }
507 | hir::InlineAsmOperand::Out { .. }
508 | hir::InlineAsmOperand::InOut { .. }
509 | hir::InlineAsmOperand::SplitInOut { .. } => {
510 span_bug!(*op_sp, "invalid operand type for global_asm!")
511 }
512 }
513 }
514 } else {
515 span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
516 }
517 }
518 }
519
520 // Check for PMEs and emit a diagnostic if one happened, to try to show relevant edges of the
521 // mono item graph.
522 if tcx.sess.diagnostic().err_count() > error_count
523 && starting_point.node.is_generic_fn()
524 && starting_point.node.is_user_defined()
525 {
526 let formatted_item = with_no_trimmed_paths!(starting_point.node.to_string());
527 tcx.sess.span_note_without_error(
528 starting_point.span,
529 &format!("the above error was encountered while instantiating `{formatted_item}`"),
530 );
531 }
532 inlining_map.lock_mut().record_accesses(starting_point.node, &neighbors.items);
533
534 for (neighbour, _) in neighbors.items {
535 collect_items_rec(tcx, neighbour, visited, recursion_depths, recursion_limit, inlining_map);
536 }
537
538 if let Some((def_id, depth)) = recursion_depth_reset {
539 recursion_depths.insert(def_id, depth);
540 }
541 }
542
543 /// Format instance name that is already known to be too long for rustc.
544 /// Show only the first 2 types if it is longer than 32 characters to avoid blasting
545 /// the user's terminal with thousands of lines of type-name.
546 ///
547 /// If the name was shortened, the full type name will be written to a file.
548 fn shrunk_instance_name<'tcx>(
549 tcx: TyCtxt<'tcx>,
550 instance: &Instance<'tcx>,
551 ) -> (String, Option<PathBuf>) {
552 let s = instance.to_string();
553
554 // Only use the shrunk version if it's really shorter.
555 // This also avoids the case where before and after slices overlap.
556 if s.chars().nth(33).is_some() {
557 let shrunk = format!("{}", ty::ShortInstance(instance, 4));
558 if shrunk == s {
559 return (s, None);
560 }
561
562 let path = tcx.output_filenames(()).temp_path_ext("long-type.txt", None);
563 let written_to_path = std::fs::write(&path, s).ok().map(|_| path);
564
565 (shrunk, written_to_path)
566 } else {
567 (s, None)
568 }
569 }
570
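/// Tracks the monomorphization recursion depth for `instance`'s `DefId` and
/// aborts with a fatal error once the recursion limit is exceeded. A minimal
/// (invented) example of code that runs into this check, because every call
/// instantiates the function again with a strictly larger type:
///
/// ```rust,ignore (reaches the recursion limit)
/// fn rec<T>(t: T) {
///     rec((t,)); // instantiates rec::<(T,)>, rec::<((T,),)>, ...
/// }
///
/// fn main() {
///     rec(0u8);
/// }
/// ```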
571 fn check_recursion_limit<'tcx>(
572 tcx: TyCtxt<'tcx>,
573 instance: Instance<'tcx>,
574 span: Span,
575 recursion_depths: &mut DefIdMap<usize>,
576 recursion_limit: Limit,
577 ) -> (DefId, usize) {
578 let def_id = instance.def_id();
579 let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
580 debug!(" => recursion depth={}", recursion_depth);
581
582 let adjusted_recursion_depth = if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
583 // HACK: drop_in_place creates tight monomorphization loops. Give
584 // it more margin.
585 recursion_depth / 4
586 } else {
587 recursion_depth
588 };
589
590 // Code that needs to instantiate the same function recursively
591 // more than the recursion limit is assumed to be causing an
592 // infinite expansion.
593 if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
594 let def_span = tcx.def_span(def_id);
595 let def_path_str = tcx.def_path_str(def_id);
596 let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
597 let mut path = PathBuf::new();
598 let was_written = if let Some(written_to_path) = written_to_path {
599 path = written_to_path;
600 Some(())
601 } else {
602 None
603 };
604 tcx.sess.emit_fatal(RecursionLimit {
605 span,
606 shrunk,
607 def_span,
608 def_path_str,
609 was_written,
610 path,
611 });
612 }
613
614 recursion_depths.insert(def_id, recursion_depth + 1);
615
616 (def_id, recursion_depth)
617 }
618
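/// Aborts compilation if the number of types and consts in `instance`'s
/// substitutions exceeds the type-length limit. A minimal (invented) example
/// of how such substitutions arise: every nesting level below doubles the size
/// of the resulting type, so the flattened type length grows exponentially
/// with the nesting depth.
///
/// ```rust,ignore (illustrative)
/// fn pair<T: Clone>(t: T) -> (T, T) {
///     (t.clone(), t)
/// }
/// // u8, (u8, u8), ((u8, u8), (u8, u8)), ...
/// let deep = pair(pair(pair(pair(0u8))));
/// ```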
619 fn check_type_length_limit<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) {
620 let type_length = instance
621 .substs
622 .iter()
623 .flat_map(|arg| arg.walk())
624 .filter(|arg| match arg.unpack() {
625 GenericArgKind::Type(_) | GenericArgKind::Const(_) => true,
626 GenericArgKind::Lifetime(_) => false,
627 })
628 .count();
629 debug!(" => type length={}", type_length);
630
631 // Rust code can easily create exponentially-long types using only a
632 // polynomial recursion depth. Even with the default recursion
633 // depth, you can easily get cases that take >2^60 steps to run,
634 // which means that rustc basically hangs.
635 //
636 // Bail out in these cases to avoid that bad user experience.
637 if !tcx.type_length_limit().value_within_limit(type_length) {
638 let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance);
639 let span = tcx.def_span(instance.def_id());
640 let mut path = PathBuf::new();
641 let was_written = if written_to_path.is_some() {
642 path = written_to_path.unwrap();
643 Some(())
644 } else {
645 None
646 };
647 tcx.sess.emit_fatal(TypeLengthLimit { span, shrunk, was_written, path, type_length });
648 }
649 }
650
651 struct MirNeighborCollector<'a, 'tcx> {
652 tcx: TyCtxt<'tcx>,
653 body: &'a mir::Body<'tcx>,
654 output: &'a mut MonoItems<'tcx>,
655 instance: Instance<'tcx>,
656 }
657
658 impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
659 pub fn monomorphize<T>(&self, value: T) -> T
660 where
661 T: TypeFoldable<'tcx>,
662 {
663 debug!("monomorphize: self.instance={:?}", self.instance);
664 self.instance.subst_mir_and_normalize_erasing_regions(
665 self.tcx,
666 ty::ParamEnv::reveal_all(),
667 value,
668 )
669 }
670 }
671
672 impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
673 fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
674 debug!("visiting rvalue {:?}", *rvalue);
675
676 let span = self.body.source_info(location).span;
677
678 match *rvalue {
679 // When doing a cast from a regular pointer to a fat pointer, we
680 // have to instantiate all methods of the trait being cast to, so we
681 // can build the appropriate vtable.
682 mir::Rvalue::Cast(
683 mir::CastKind::Pointer(PointerCast::Unsize),
684 ref operand,
685 target_ty,
686 )
687 | mir::Rvalue::Cast(mir::CastKind::DynStar, ref operand, target_ty) => {
688 let target_ty = self.monomorphize(target_ty);
689 let source_ty = operand.ty(self.body, self.tcx);
690 let source_ty = self.monomorphize(source_ty);
691 let (source_ty, target_ty) =
692 find_vtable_types_for_unsizing(self.tcx.at(span), source_ty, target_ty);
693 // This could also be a different Unsize instruction, like
694 // from a fixed-size array to a slice. But we are only
695 // interested in things that produce a vtable.
696 if (target_ty.is_trait() && !source_ty.is_trait())
697 || (target_ty.is_dyn_star() && !source_ty.is_dyn_star())
698 {
699 create_mono_items_for_vtable_methods(
700 self.tcx,
701 target_ty,
702 source_ty,
703 span,
704 self.output,
705 );
706 }
707 }
708 mir::Rvalue::Cast(
709 mir::CastKind::Pointer(PointerCast::ReifyFnPointer),
710 ref operand,
711 _,
712 ) => {
713 let fn_ty = operand.ty(self.body, self.tcx);
714 let fn_ty = self.monomorphize(fn_ty);
715 visit_fn_use(self.tcx, fn_ty, false, span, &mut self.output);
716 }
717 mir::Rvalue::Cast(
718 mir::CastKind::Pointer(PointerCast::ClosureFnPointer(_)),
719 ref operand,
720 _,
721 ) => {
722 let source_ty = operand.ty(self.body, self.tcx);
723 let source_ty = self.monomorphize(source_ty);
724 match *source_ty.kind() {
725 ty::Closure(def_id, substs) => {
726 let instance = Instance::resolve_closure(
727 self.tcx,
728 def_id,
729 substs,
730 ty::ClosureKind::FnOnce,
731 )
732 .expect("failed to normalize and resolve closure during codegen");
733 if should_codegen_locally(self.tcx, &instance) {
734 self.output.push(create_fn_mono_item(self.tcx, instance, span));
735 }
736 }
737 _ => bug!(),
738 }
739 }
740 mir::Rvalue::ThreadLocalRef(def_id) => {
741 assert!(self.tcx.is_thread_local_static(def_id));
742 let instance = Instance::mono(self.tcx, def_id);
743 if should_codegen_locally(self.tcx, &instance) {
744 trace!("collecting thread-local static {:?}", def_id);
745 self.output.push(respan(span, MonoItem::Static(def_id)));
746 }
747 }
748 _ => { /* not interesting */ }
749 }
750
751 self.super_rvalue(rvalue, location);
752 }
753
754 /// This does not walk the constant, as it has been handled entirely here and trying
755 /// to walk it would attempt to evaluate the `ty::Const` inside, which doesn't necessarily
756 /// work, as some constants cannot be represented in the type system.
757 #[instrument(skip(self), level = "debug")]
758 fn visit_constant(&mut self, constant: &mir::Constant<'tcx>, location: Location) {
759 let literal = self.monomorphize(constant.literal);
760 let val = match literal {
761 mir::ConstantKind::Val(val, _) => val,
762 mir::ConstantKind::Ty(ct) => match ct.kind() {
763 ty::ConstKind::Value(val) => self.tcx.valtree_to_const_val((ct.ty(), val)),
764 ty::ConstKind::Unevaluated(ct) => {
765 debug!(?ct);
766 let param_env = ty::ParamEnv::reveal_all();
767 match self.tcx.const_eval_resolve(param_env, ct.expand(), None) {
768 // The `monomorphize` call should have evaluated that constant already.
769 Ok(val) => val,
770 Err(ErrorHandled::Reported(_)) => return,
771 Err(ErrorHandled::TooGeneric) => span_bug!(
772 self.body.source_info(location).span,
773 "collection encountered polymorphic constant: {:?}",
774 literal
775 ),
776 }
777 }
778 _ => return,
779 },
780 mir::ConstantKind::Unevaluated(uv, _) => {
781 let param_env = ty::ParamEnv::reveal_all();
782 match self.tcx.const_eval_resolve(param_env, uv, None) {
783 // The `monomorphize` call should have evaluated that constant already.
784 Ok(val) => val,
785 Err(ErrorHandled::Reported(_)) => return,
786 Err(ErrorHandled::TooGeneric) => span_bug!(
787 self.body.source_info(location).span,
788 "collection encountered polymorphic constant: {:?}",
789 literal
790 ),
791 }
792 }
793 };
794 collect_const_value(self.tcx, val, self.output);
795 MirVisitor::visit_ty(self, literal.ty(), TyContext::Location(location));
796 }
797
798 fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
799 debug!("visiting terminator {:?} @ {:?}", terminator, location);
800 let source = self.body.source_info(location).span;
801
802 let tcx = self.tcx;
803 match terminator.kind {
804 mir::TerminatorKind::Call { ref func, .. } => {
805 let callee_ty = func.ty(self.body, tcx);
806 let callee_ty = self.monomorphize(callee_ty);
807 visit_fn_use(self.tcx, callee_ty, true, source, &mut self.output)
808 }
809 mir::TerminatorKind::Drop { ref place, .. }
810 | mir::TerminatorKind::DropAndReplace { ref place, .. } => {
811 let ty = place.ty(self.body, self.tcx).ty;
812 let ty = self.monomorphize(ty);
813 visit_drop_use(self.tcx, ty, true, source, self.output);
814 }
815 mir::TerminatorKind::InlineAsm { ref operands, .. } => {
816 for op in operands {
817 match *op {
818 mir::InlineAsmOperand::SymFn { ref value } => {
819 let fn_ty = self.monomorphize(value.literal.ty());
820 visit_fn_use(self.tcx, fn_ty, false, source, &mut self.output);
821 }
822 mir::InlineAsmOperand::SymStatic { def_id } => {
823 let instance = Instance::mono(self.tcx, def_id);
824 if should_codegen_locally(self.tcx, &instance) {
825 trace!("collecting asm sym static {:?}", def_id);
826 self.output.push(respan(source, MonoItem::Static(def_id)));
827 }
828 }
829 _ => {}
830 }
831 }
832 }
833 mir::TerminatorKind::Assert { ref msg, .. } => {
834 let lang_item = match msg {
835 mir::AssertKind::BoundsCheck { .. } => LangItem::PanicBoundsCheck,
836 _ => LangItem::Panic,
837 };
838 let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, Some(source)));
839 if should_codegen_locally(tcx, &instance) {
840 self.output.push(create_fn_mono_item(tcx, instance, source));
841 }
842 }
843 mir::TerminatorKind::Abort { .. } => {
844 let instance = Instance::mono(
845 tcx,
846 tcx.require_lang_item(LangItem::PanicCannotUnwind, Some(source)),
847 );
848 if should_codegen_locally(tcx, &instance) {
849 self.output.push(create_fn_mono_item(tcx, instance, source));
850 }
851 }
852 mir::TerminatorKind::Goto { .. }
853 | mir::TerminatorKind::SwitchInt { .. }
854 | mir::TerminatorKind::Resume
855 | mir::TerminatorKind::Return
856 | mir::TerminatorKind::Unreachable => {}
857 mir::TerminatorKind::GeneratorDrop
858 | mir::TerminatorKind::Yield { .. }
859 | mir::TerminatorKind::FalseEdge { .. }
860 | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
861 }
862
863 self.super_terminator(terminator, location);
864 }
865
866 fn visit_operand(&mut self, operand: &mir::Operand<'tcx>, location: Location) {
867 self.super_operand(operand, location);
868 let limit = self.tcx.move_size_limit().0;
869 if limit == 0 {
870 return;
871 }
872 let limit = Size::from_bytes(limit);
873 let ty = operand.ty(self.body, self.tcx);
874 let ty = self.monomorphize(ty);
875 let layout = self.tcx.layout_of(ty::ParamEnv::reveal_all().and(ty));
876 if let Ok(layout) = layout {
877 if layout.size > limit {
878 debug!(?layout);
879 let source_info = self.body.source_info(location);
880 debug!(?source_info);
881 let lint_root = source_info.scope.lint_root(&self.body.source_scopes);
882 debug!(?lint_root);
883 let Some(lint_root) = lint_root else {
884 // This happens when the issue is in a function from a foreign crate that
885 // we monomorphized in the current crate. We can't get a `HirId` for things
886 // in other crates.
887 // FIXME: Find out where to report the lint on. Maybe simply crate-level lint root
888 // but correct span? This would make the lint at least accept crate-level lint attributes.
889 return;
890 };
891 self.tcx.emit_spanned_lint(
892 LARGE_ASSIGNMENTS,
893 lint_root,
894 source_info.span,
895 LargeAssignmentsLint {
896 span: source_info.span,
897 size: layout.size.bytes(),
898 limit: limit.bytes(),
899 },
900 )
901 }
902 }
903 }
904
905 fn visit_local(
906 &mut self,
907 _place_local: Local,
908 _context: mir::visit::PlaceContext,
909 _location: Location,
910 ) {
911 }
912 }
913
914 fn visit_drop_use<'tcx>(
915 tcx: TyCtxt<'tcx>,
916 ty: Ty<'tcx>,
917 is_direct_call: bool,
918 source: Span,
919 output: &mut MonoItems<'tcx>,
920 ) {
921 let instance = Instance::resolve_drop_in_place(tcx, ty);
922 visit_instance_use(tcx, instance, is_direct_call, source, output);
923 }
924
925 fn visit_fn_use<'tcx>(
926 tcx: TyCtxt<'tcx>,
927 ty: Ty<'tcx>,
928 is_direct_call: bool,
929 source: Span,
930 output: &mut MonoItems<'tcx>,
931 ) {
932 if let ty::FnDef(def_id, substs) = *ty.kind() {
933 let instance = if is_direct_call {
934 ty::Instance::expect_resolve(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
935 } else {
936 match ty::Instance::resolve_for_fn_ptr(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
937 {
938 Some(instance) => instance,
939 _ => bug!("failed to resolve instance for {ty}"),
940 }
941 };
942 visit_instance_use(tcx, instance, is_direct_call, source, output);
943 }
944 }
945
946 fn visit_instance_use<'tcx>(
947 tcx: TyCtxt<'tcx>,
948 instance: ty::Instance<'tcx>,
949 is_direct_call: bool,
950 source: Span,
951 output: &mut MonoItems<'tcx>,
952 ) {
953 debug!("visit_item_use({:?}, is_direct_call={:?})", instance, is_direct_call);
954 if !should_codegen_locally(tcx, &instance) {
955 return;
956 }
957
958 match instance.def {
959 ty::InstanceDef::Virtual(..) | ty::InstanceDef::Intrinsic(_) => {
960 if !is_direct_call {
961 bug!("{:?} being reified", instance);
962 }
963 }
964 ty::InstanceDef::DropGlue(_, None) => {
965 // Don't need to emit noop drop glue if we are calling directly.
966 if !is_direct_call {
967 output.push(create_fn_mono_item(tcx, instance, source));
968 }
969 }
970 ty::InstanceDef::DropGlue(_, Some(_))
971 | ty::InstanceDef::VTableShim(..)
972 | ty::InstanceDef::ReifyShim(..)
973 | ty::InstanceDef::ClosureOnceShim { .. }
974 | ty::InstanceDef::Item(..)
975 | ty::InstanceDef::FnPtrShim(..)
976 | ty::InstanceDef::CloneShim(..) => {
977 output.push(create_fn_mono_item(tcx, instance, source));
978 }
979 }
980 }
981
982 /// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
983 /// can just link to the upstream crate and therefore don't need a mono item.
984 fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: &Instance<'tcx>) -> bool {
985 let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
986 return true;
987 };
988
989 if tcx.is_foreign_item(def_id) {
990 // Foreign items are always linked against; there's no way of instantiating them.
991 return false;
992 }
993
994 if def_id.is_local() {
995 // Local items cannot be referred to locally without monomorphizing them locally.
996 return true;
997 }
998
999 if tcx.is_reachable_non_generic(def_id)
1000 || instance.polymorphize(tcx).upstream_monomorphization(tcx).is_some()
1001 {
1002 // We can link to the item in question, no instance needed in this crate.
1003 return false;
1004 }
1005
1006 if let DefKind::Static(_) = tcx.def_kind(def_id) {
1007 // We cannot monomorphize statics from upstream crates.
1008 return false;
1009 }
1010
1011 if !tcx.is_mir_available(def_id) {
1012 bug!("no MIR available for {:?}", def_id);
1013 }
1014
1015 true
1016 }
1017
1018 /// For a given pair of source and target type that occur in an unsizing coercion,
1019 /// this function finds the pair of types that determines the vtable linking
1020 /// them.
1021 ///
1022 /// For example, the source type might be `&SomeStruct` and the target type
1023 /// might be `&dyn SomeTrait` in a cast like:
1024 ///
1025 /// ```rust,ignore (not real code)
1026 /// let src: &SomeStruct = ...;
1027 /// let target = src as &dyn SomeTrait;
1028 /// ```
1029 ///
1030 /// Then the output of this function would be (SomeStruct, SomeTrait) since for
1031 /// constructing the `target` fat-pointer we need the vtable for that pair.
1032 ///
1033 /// Things can get more complicated though because there's also the case where
1034 /// the unsized type occurs as a field:
1035 ///
1036 /// ```rust
1037 /// struct ComplexStruct<T: ?Sized> {
1038 /// a: u32,
1039 /// b: f64,
1040 /// c: T
1041 /// }
1042 /// ```
1043 ///
1044 /// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
1045 /// is unsized, `&ComplexStruct<T>` is a fat pointer, and the vtable it points to is
1046 /// for the pair of `T` (which is a trait) and the concrete type that `T` was
1047 /// originally coerced from:
1048 ///
1049 /// ```rust,ignore (not real code)
1050 /// let src: &ComplexStruct<SomeStruct> = ...;
1051 /// let target = src as &ComplexStruct<dyn SomeTrait>;
1052 /// ```
1053 ///
1054 /// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
1055 /// `(SomeStruct, SomeTrait)`.
1056 ///
1057 /// Finally, there is also the case of custom unsizing coercions, e.g., for
1058 /// smart pointers such as `Rc` and `Arc`.
1059 fn find_vtable_types_for_unsizing<'tcx>(
1060 tcx: TyCtxtAt<'tcx>,
1061 source_ty: Ty<'tcx>,
1062 target_ty: Ty<'tcx>,
1063 ) -> (Ty<'tcx>, Ty<'tcx>) {
1064 let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
1065 let param_env = ty::ParamEnv::reveal_all();
1066 let type_has_metadata = |ty: Ty<'tcx>| -> bool {
1067 if ty.is_sized(tcx.tcx, param_env) {
1068 return false;
1069 }
1070 let tail = tcx.struct_tail_erasing_lifetimes(ty, param_env);
1071 match tail.kind() {
1072 ty::Foreign(..) => false,
1073 ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
1074 _ => bug!("unexpected unsized tail: {:?}", tail),
1075 }
1076 };
1077 if type_has_metadata(inner_source) {
1078 (inner_source, inner_target)
1079 } else {
1080 tcx.struct_lockstep_tails_erasing_lifetimes(inner_source, inner_target, param_env)
1081 }
1082 };
1083
1084 match (&source_ty.kind(), &target_ty.kind()) {
1085 (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
1086 | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
1087 ptr_vtable(*a, *b)
1088 }
1089 (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
1090 ptr_vtable(source_ty.boxed_ty(), target_ty.boxed_ty())
1091 }
1092
1093 // T as dyn* Trait
1094 (_, &ty::Dynamic(_, _, ty::DynStar)) => ptr_vtable(source_ty, target_ty),
1095
1096 (&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
1097 assert_eq!(source_adt_def, target_adt_def);
1098
1099 let CustomCoerceUnsized::Struct(coerce_index) =
1100 crate::custom_coerce_unsize_info(tcx, source_ty, target_ty);
1101
1102 let source_fields = &source_adt_def.non_enum_variant().fields;
1103 let target_fields = &target_adt_def.non_enum_variant().fields;
1104
1105 assert!(
1106 coerce_index < source_fields.len() && source_fields.len() == target_fields.len()
1107 );
1108
1109 find_vtable_types_for_unsizing(
1110 tcx,
1111 source_fields[coerce_index].ty(*tcx, source_substs),
1112 target_fields[coerce_index].ty(*tcx, target_substs),
1113 )
1114 }
1115 _ => bug!(
1116 "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
1117 source_ty,
1118 target_ty
1119 ),
1120 }
1121 }
1122
1123 #[instrument(skip(tcx), level = "debug", ret)]
1124 fn create_fn_mono_item<'tcx>(
1125 tcx: TyCtxt<'tcx>,
1126 instance: Instance<'tcx>,
1127 source: Span,
1128 ) -> Spanned<MonoItem<'tcx>> {
1129 let def_id = instance.def_id();
1130 if tcx.sess.opts.unstable_opts.profile_closures && def_id.is_local() && tcx.is_closure(def_id) {
1131 crate::util::dump_closure_profile(tcx, instance);
1132 }
1133
1134 respan(source, MonoItem::Fn(instance.polymorphize(tcx)))
1135 }
1136
1137 /// Creates a `MonoItem` for each method that is referenced by the vtable for
1138 /// the given trait/impl pair.
1139 fn create_mono_items_for_vtable_methods<'tcx>(
1140 tcx: TyCtxt<'tcx>,
1141 trait_ty: Ty<'tcx>,
1142 impl_ty: Ty<'tcx>,
1143 source: Span,
1144 output: &mut MonoItems<'tcx>,
1145 ) {
1146 assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());
1147
1148 if let ty::Dynamic(ref trait_ty, ..) = trait_ty.kind() {
1149 if let Some(principal) = trait_ty.principal() {
1150 let poly_trait_ref = principal.with_self_ty(tcx, impl_ty);
1151 assert!(!poly_trait_ref.has_escaping_bound_vars());
1152
1153 // Walk all methods of the trait, including those of its supertraits
1154 let entries = tcx.vtable_entries(poly_trait_ref);
1155 let methods = entries
1156 .iter()
1157 .filter_map(|entry| match entry {
1158 VtblEntry::MetadataDropInPlace
1159 | VtblEntry::MetadataSize
1160 | VtblEntry::MetadataAlign
1161 | VtblEntry::Vacant => None,
1162 VtblEntry::TraitVPtr(_) => {
1163 // all super trait items already covered, so skip them.
1164 None
1165 }
1166 VtblEntry::Method(instance) => {
1167 Some(*instance).filter(|instance| should_codegen_locally(tcx, instance))
1168 }
1169 })
1170 .map(|item| create_fn_mono_item(tcx, item, source));
1171 output.extend(methods);
1172 }
1173
1174 // Also add the destructor.
1175 visit_drop_use(tcx, impl_ty, false, source, output);
1176 }
1177 }
1178
1179 //=-----------------------------------------------------------------------------
1180 // Root Collection
1181 //=-----------------------------------------------------------------------------
1182
1183 struct RootCollector<'a, 'tcx> {
1184 tcx: TyCtxt<'tcx>,
1185 mode: MonoItemCollectionMode,
1186 output: &'a mut MonoItems<'tcx>,
1187 entry_fn: Option<(DefId, EntryFnType)>,
1188 }
1189
1190 impl<'v> RootCollector<'_, 'v> {
1191 fn process_item(&mut self, id: hir::ItemId) {
1192 match self.tcx.def_kind(id.owner_id) {
1193 DefKind::Enum | DefKind::Struct | DefKind::Union => {
1194 let item = self.tcx.hir().item(id);
1195 match item.kind {
1196 hir::ItemKind::Enum(_, ref generics)
1197 | hir::ItemKind::Struct(_, ref generics)
1198 | hir::ItemKind::Union(_, ref generics) => {
1199 if generics.params.is_empty() {
1200 if self.mode == MonoItemCollectionMode::Eager {
1201 debug!(
1202 "RootCollector: ADT drop-glue for {}",
1203 self.tcx.def_path_str(item.owner_id.to_def_id())
1204 );
1205
1206 let ty = Instance::new(
1207 item.owner_id.to_def_id(),
1208 InternalSubsts::empty(),
1209 )
1210 .ty(self.tcx, ty::ParamEnv::reveal_all());
1211 visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
1212 }
1213 }
1214 }
1215 _ => bug!(),
1216 }
1217 }
1218 DefKind::GlobalAsm => {
1219 debug!(
1220 "RootCollector: ItemKind::GlobalAsm({})",
1221 self.tcx.def_path_str(id.owner_id.to_def_id())
1222 );
1223 self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
1224 }
1225 DefKind::Static(..) => {
1226 debug!(
1227 "RootCollector: ItemKind::Static({})",
1228 self.tcx.def_path_str(id.owner_id.to_def_id())
1229 );
1230 self.output.push(dummy_spanned(MonoItem::Static(id.owner_id.to_def_id())));
1231 }
1232 DefKind::Const => {
1233 // Const items only generate mono items if they are
1234 // actually used somewhere. Just declaring them is insufficient.
1235
1236 // But even an unused const item must have the items its value refers to collected.
1237 if let Ok(val) = self.tcx.const_eval_poly(id.owner_id.to_def_id()) {
1238 collect_const_value(self.tcx, val, &mut self.output);
1239 }
1240 }
1241 DefKind::Impl => {
1242 if self.mode == MonoItemCollectionMode::Eager {
1243 let item = self.tcx.hir().item(id);
1244 create_mono_items_for_default_impls(self.tcx, item, self.output);
1245 }
1246 }
1247 DefKind::Fn => {
1248 self.push_if_root(id.owner_id.def_id);
1249 }
1250 _ => {}
1251 }
1252 }
1253
1254 fn process_impl_item(&mut self, id: hir::ImplItemId) {
1255 if matches!(self.tcx.def_kind(id.owner_id), DefKind::AssocFn) {
1256 self.push_if_root(id.owner_id.def_id);
1257 }
1258 }
1259
1260 fn is_root(&self, def_id: LocalDefId) -> bool {
1261 !item_requires_monomorphization(self.tcx, def_id)
1262 && match self.mode {
1263 MonoItemCollectionMode::Eager => true,
1264 MonoItemCollectionMode::Lazy => {
1265 self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
1266 || self.tcx.is_reachable_non_generic(def_id)
1267 || self
1268 .tcx
1269 .codegen_fn_attrs(def_id)
1270 .flags
1271 .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
1272 }
1273 }
1274 }
1275
1276 /// If `def_id` represents a root, pushes it onto the list of
1277 /// outputs. (Note that all roots must be monomorphic.)
1278 #[instrument(skip(self), level = "debug")]
1279 fn push_if_root(&mut self, def_id: LocalDefId) {
1280 if self.is_root(def_id) {
1281 debug!("found root");
1282
1283 let instance = Instance::mono(self.tcx, def_id.to_def_id());
1284 self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
1285 }
1286 }
1287
1288 /// As a special case, when/if we encounter the
1289 /// `main()` function, we also have to generate a
1290 /// monomorphized copy of the start lang item based on
1291 /// the return type of `main`. This is not needed when
1292 /// the user writes their own `start` manually.
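///
/// For example, for `fn main() -> Result<(), std::io::Error>` this pushes a
/// mono item for the start lang item instantiated with that return type (in
/// `std` the start lang item is `std::rt::lang_start`).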
1293 fn push_extra_entry_roots(&mut self) {
1294 let Some((main_def_id, EntryFnType::Main { .. })) = self.entry_fn else {
1295 return;
1296 };
1297
1298 let start_def_id = self.tcx.require_lang_item(LangItem::Start, None);
1299 let main_ret_ty = self.tcx.fn_sig(main_def_id).output();
1300
1301 // Given that `main()` has no arguments,
1302 // its return type cannot have
1303 // late-bound regions, since late-bound
1304 // regions must appear in the argument
1305 // listing.
1306 let main_ret_ty = self.tcx.normalize_erasing_regions(
1307 ty::ParamEnv::reveal_all(),
1308 main_ret_ty.no_bound_vars().unwrap(),
1309 );
1310
1311 let start_instance = Instance::resolve(
1312 self.tcx,
1313 ty::ParamEnv::reveal_all(),
1314 start_def_id,
1315 self.tcx.intern_substs(&[main_ret_ty.into()]),
1316 )
1317 .unwrap()
1318 .unwrap();
1319
1320 self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
1321 }
1322 }
1323
1324 fn item_requires_monomorphization(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
1325 let generics = tcx.generics_of(def_id);
1326 generics.requires_monomorphization(tcx)
1327 }
1328
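/// In eager mode, provided trait methods that a non-generic impl does not
/// override are instantiated up front. A made-up example of what this picks
/// up: the provided `Greet::greet` below is instantiated for `Cat` even though
/// nothing calls it yet.
///
/// ```rust,ignore (illustrative)
/// trait Greet {
///     fn name(&self) -> String;
///     fn greet(&self) -> String {
///         format!("hello, {}", self.name())
///     }
/// }
///
/// struct Cat;
///
/// impl Greet for Cat {
///     fn name(&self) -> String {
///         String::from("cat")
///     }
///     // `greet` is not overridden, so eager collection creates a mono item
///     // for `<Cat as Greet>::greet`.
/// }
/// ```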
1329 fn create_mono_items_for_default_impls<'tcx>(
1330 tcx: TyCtxt<'tcx>,
1331 item: &'tcx hir::Item<'tcx>,
1332 output: &mut MonoItems<'tcx>,
1333 ) {
1334 match item.kind {
1335 hir::ItemKind::Impl(ref impl_) => {
1336 if matches!(impl_.polarity, hir::ImplPolarity::Negative(_)) {
1337 return;
1338 }
1339
1340 for param in impl_.generics.params {
1341 match param.kind {
1342 hir::GenericParamKind::Lifetime { .. } => {}
1343 hir::GenericParamKind::Type { .. } | hir::GenericParamKind::Const { .. } => {
1344 return;
1345 }
1346 }
1347 }
1348
1349 debug!(
1350 "create_mono_items_for_default_impls(item={})",
1351 tcx.def_path_str(item.owner_id.to_def_id())
1352 );
1353
1354 if let Some(trait_ref) = tcx.impl_trait_ref(item.owner_id) {
1355 let trait_ref = trait_ref.subst_identity();
1356
1357 let param_env = ty::ParamEnv::reveal_all();
1358 let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
1359 let overridden_methods = tcx.impl_item_implementor_ids(item.owner_id);
1360 for method in tcx.provided_trait_methods(trait_ref.def_id) {
1361 if overridden_methods.contains_key(&method.def_id) {
1362 continue;
1363 }
1364
1365 if tcx.generics_of(method.def_id).own_requires_monomorphization() {
1366 continue;
1367 }
1368
1369 let substs =
1370 InternalSubsts::for_item(tcx, method.def_id, |param, _| match param.kind {
1371 GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
1372 GenericParamDefKind::Type { .. }
1373 | GenericParamDefKind::Const { .. } => {
1374 trait_ref.substs[param.index as usize]
1375 }
1376 });
1377 let instance =
1378 ty::Instance::expect_resolve(tcx, param_env, method.def_id, substs);
1379
1380 let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
1381 if mono_item.node.is_instantiable(tcx) && should_codegen_locally(tcx, &instance)
1382 {
1383 output.push(mono_item);
1384 }
1385 }
1386 }
1387 }
1388 _ => bug!(),
1389 }
1390 }
1391
1392 /// Scans the miri alloc in order to find function calls, closures, and drop-glue.
1393 fn collect_miri<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
1394 match tcx.global_alloc(alloc_id) {
1395 GlobalAlloc::Static(def_id) => {
1396 assert!(!tcx.is_thread_local_static(def_id));
1397 let instance = Instance::mono(tcx, def_id);
1398 if should_codegen_locally(tcx, &instance) {
1399 trace!("collecting static {:?}", def_id);
1400 output.push(dummy_spanned(MonoItem::Static(def_id)));
1401 }
1402 }
1403 GlobalAlloc::Memory(alloc) => {
1404 trace!("collecting {:?} with {:#?}", alloc_id, alloc);
1405 for &inner in alloc.inner().provenance().ptrs().values() {
1406 rustc_data_structures::stack::ensure_sufficient_stack(|| {
1407 collect_miri(tcx, inner, output);
1408 });
1409 }
1410 }
1411 GlobalAlloc::Function(fn_instance) => {
1412 if should_codegen_locally(tcx, &fn_instance) {
1413 trace!("collecting {:?} with {:#?}", alloc_id, fn_instance);
1414 output.push(create_fn_mono_item(tcx, fn_instance, DUMMY_SP));
1415 }
1416 }
1417 GlobalAlloc::VTable(ty, trait_ref) => {
1418 let alloc_id = tcx.vtable_allocation((ty, trait_ref));
1419 collect_miri(tcx, alloc_id, output)
1420 }
1421 }
1422 }
1423
1424 /// Scans the MIR in order to find function calls, closures, and drop-glue.
1425 #[instrument(skip(tcx, output), level = "debug")]
1426 fn collect_neighbours<'tcx>(
1427 tcx: TyCtxt<'tcx>,
1428 instance: Instance<'tcx>,
1429 output: &mut MonoItems<'tcx>,
1430 ) {
1431 let body = tcx.instance_mir(instance.def);
1432 MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(&body);
1433 }
1434
1435 #[instrument(skip(tcx, output), level = "debug")]
1436 fn collect_const_value<'tcx>(
1437 tcx: TyCtxt<'tcx>,
1438 value: ConstValue<'tcx>,
1439 output: &mut MonoItems<'tcx>,
1440 ) {
1441 match value {
1442 ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => collect_miri(tcx, ptr.provenance, output),
1443 ConstValue::Slice { data: alloc, start: _, end: _ } | ConstValue::ByRef { alloc, .. } => {
1444 for &id in alloc.inner().provenance().ptrs().values() {
1445 collect_miri(tcx, id, output);
1446 }
1447 }
1448 _ => {}
1449 }
1450 }