//! Partitioning Codegen Units for Incremental Compilation
//! ======================================================
//!
//! The task of this module is to take the complete set of monomorphizations of
//! a crate and produce a set of codegen units from it, where a codegen unit
//! is a named set of (mono-item, linkage) pairs. That is, this module
//! decides which monomorphization appears in which codegen units with which
//! linkage. The following paragraphs describe some of the background on the
//! partitioning scheme.
//!
//! The most important opportunity for saving on compilation time with
//! incremental compilation is to avoid re-codegenning and re-optimizing code.
//! Since the unit of codegen and optimization for LLVM is "modules" or, as
//! we call them, "codegen units", the particulars of how much time can be saved
//! by incremental compilation are tightly linked to how the output program is
//! partitioned into these codegen units prior to passing it to LLVM --
//! especially because we have to treat codegen units as opaque entities once
//! they are created: There is no way for us to incrementally update an existing
//! LLVM module and so we have to build any such module from scratch if it was
//! affected by some change in the source code.
//!
//! From that point of view it would make sense to maximize the number of
//! codegen units by, for example, putting each function into its own module.
//! That way only the modules actually affected by some change would have to be
//! re-compiled, minimizing the number of functions that could have been re-used
//! but just happened to be located in a module that is re-compiled.
//!
//! However, since LLVM optimization does not work across module boundaries,
//! using such a highly granular partitioning would lead to very slow runtime
//! code since it would effectively prohibit inlining and other inter-procedure
//! optimizations. We want to avoid that as much as possible.
//!
//! Thus we end up with a trade-off: The bigger the codegen units, the better
//! LLVM's optimizer can do its work, but also the smaller the compilation time
//! reduction we get from incremental compilation.
//!
//! Ideally, we would create a partitioning such that there are few big codegen
//! units with few interdependencies between them. For now though, we use the
//! following heuristic to determine the partitioning (illustrated below):
//!
//! - There are two codegen units for every source-level module:
//!     - One for "stable", that is non-generic, code
//!     - One for more "volatile" code, i.e., monomorphized instances of functions
//!       defined in that module
//!
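//! For illustration, the scheme roughly looks as follows (the names here are
//! made up; the actual codegen unit names are chosen by the partitioner and
//! are an implementation detail):
//!
//! ```text
//! mod foo {
//!     fn stable_fn() { ... }          // non-generic: "stable" CGU of `foo`
//!     fn generic_fn<T>(t: T) { ... }  // instances like `generic_fn::<u32>`
//! }                                   // go to the "volatile" CGU of `foo`
//! ```
//!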
//! In order to see why this heuristic makes sense, let's take a look at when a
//! codegen unit can get invalidated:
//!
//! 1. The most straightforward case is when the BODY of a function or global
//! changes. Then any codegen unit containing the code for that item has to be
//! re-compiled. Note that this includes all codegen units where the function
//! has been inlined.
//!
//! 2. The next case is when the SIGNATURE of a function or global changes. In
//! this case, all codegen units containing a REFERENCE to that item have to be
//! re-compiled. This is a superset of case 1.
//!
//! 3. The final and most subtle case is when a REFERENCE to a generic function
//! is added or removed somewhere. Even though the definition of the function
//! might be unchanged, a new REFERENCE might introduce a new monomorphized
//! instance of this function which has to be placed and compiled somewhere.
//! Conversely, when removing a REFERENCE, it might have been the last one with
//! that particular set of generic arguments and thus we have to remove it.
//!
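//! For example (hypothetical code, for illustration only):
//!
//! ```ignore (illustrative)
//! // Module `a`, unchanged:
//! pub fn generic<T>(t: T) { /* body unchanged */ }
//!
//! // Module `b`, where a call site was just added:
//! a::generic(0u8); // introduces the new mono item `a::generic::<u8>`
//! ```
//!
//! Even though nothing in `a` changed, the new instance `a::generic::<u8>`
//! has to be placed and compiled in some codegen unit.
//!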
//! From the above we see that just using one codegen unit per source-level
//! module is not such a good idea, since just adding a REFERENCE to some
//! generic item somewhere else would invalidate everything within the module
//! containing the generic item. The heuristic above reduces this detrimental
//! side-effect of references a little by at least not touching the non-generic
//! code of the module.
//!
//! A Note on Inlining
//! ------------------
//! As briefly mentioned above, in order for LLVM to be able to inline a
//! function call, the body of the function has to be available in the LLVM
//! module where the call is made. This has a few consequences for partitioning:
//!
//! - The partitioning algorithm has to take care of placing functions into all
//!   codegen units where they should be available for inlining. It also has to
//!   decide on the correct linkage for these functions.
//!
//! - The partitioning algorithm has to know which functions are likely to get
//!   inlined, so it can distribute function instantiations accordingly. Since
//!   there is no way of knowing for sure which functions LLVM will decide to
//!   inline in the end, we apply a heuristic here: Only functions marked with
//!   `#[inline]` are considered for inlining by the partitioner. The current
//!   implementation will not try to determine if a function is likely to be
//!   inlined by looking at the function's definition.
//!
//! Note though that as a side-effect of creating a codegen unit per
//! source-level module, functions from the same module will be available for
//! inlining, even when they are not marked `#[inline]`.
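//!
//! As a rough illustration (hypothetical code): a function marked `#[inline]`
//! is given a local copy in every codegen unit that references it, so that
//! LLVM has its body available for inlining there.
//!
//! ```ignore (illustrative)
//! #[inline]
//! pub fn tiny_helper(x: u32) -> u32 { x + 1 }
//!
//! // Every CGU containing a caller of `tiny_helper` gets its own copy of the
//! // function body instead of just an external declaration.
//! ```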

mod default;
mod merging;

use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync;
use rustc_hir::def_id::DefIdSet;
use rustc_middle::mir;
use rustc_middle::mir::mono::MonoItem;
use rustc_middle::mir::mono::{CodegenUnit, Linkage};
use rustc_middle::ty::print::with_no_trimmed_paths;
use rustc_middle::ty::query::Providers;
use rustc_middle::ty::TyCtxt;
use rustc_span::symbol::Symbol;

use crate::collector::InliningMap;
use crate::collector::{self, MonoItemCollectionMode};

pub struct PartitioningCx<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    target_cgu_count: usize,
    inlining_map: &'a InliningMap<'tcx>,
}

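/// A partitioning strategy. `partition` below drives it in four phases: place
/// the root mono items, merge codegen units down to the target count,
/// distribute inlined copies, and finally internalize symbols where possible.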
trait Partitioner<'tcx> {
    fn place_root_mono_items(
        &mut self,
        cx: &PartitioningCx<'_, 'tcx>,
        mono_items: &mut dyn Iterator<Item = MonoItem<'tcx>>,
    ) -> PreInliningPartitioning<'tcx>;

    fn merge_codegen_units(
        &mut self,
        cx: &PartitioningCx<'_, 'tcx>,
        initial_partitioning: &mut PreInliningPartitioning<'tcx>,
    );

    fn place_inlined_mono_items(
        &mut self,
        cx: &PartitioningCx<'_, 'tcx>,
        initial_partitioning: PreInliningPartitioning<'tcx>,
    ) -> PostInliningPartitioning<'tcx>;

    fn internalize_symbols(
        &mut self,
        cx: &PartitioningCx<'_, 'tcx>,
        partitioning: &mut PostInliningPartitioning<'tcx>,
    );
}

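/// Selects the partitioning strategy requested via the unstable
/// `cgu_partitioning_strategy` option (presumably spelled
/// `-Z cgu-partitioning-strategy` on the command line, following the usual
/// option-name mapping); only "default" is currently implemented.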
fn get_partitioner<'tcx>(tcx: TyCtxt<'tcx>) -> Box<dyn Partitioner<'tcx>> {
    let strategy = match &tcx.sess.opts.unstable_opts.cgu_partitioning_strategy {
        None => "default",
        Some(s) => &s[..],
    };

    match strategy {
        "default" => Box::new(default::DefaultPartitioning),
        _ => tcx.sess.fatal("unknown partitioning strategy"),
    }
}

pub fn partition<'tcx>(
    tcx: TyCtxt<'tcx>,
    mono_items: &mut dyn Iterator<Item = MonoItem<'tcx>>,
    max_cgu_count: usize,
    inlining_map: &InliningMap<'tcx>,
) -> Vec<CodegenUnit<'tcx>> {
    let _prof_timer = tcx.prof.generic_activity("cgu_partitioning");

    let mut partitioner = get_partitioner(tcx);
    let cx = &PartitioningCx { tcx, target_cgu_count: max_cgu_count, inlining_map };
    // In the first step, we place all regular monomorphizations into their
    // respective 'home' codegen unit. Regular monomorphizations are all
    // functions and statics defined in the local crate.
    let mut initial_partitioning = {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_place_roots");
        partitioner.place_root_mono_items(cx, mono_items)
    };

    initial_partitioning.codegen_units.iter_mut().for_each(|cgu| cgu.estimate_size(tcx));

    debug_dump(tcx, "INITIAL PARTITIONING:", initial_partitioning.codegen_units.iter());

    // Merge until we have at most `max_cgu_count` codegen units.
    {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_merge_cgus");
        partitioner.merge_codegen_units(cx, &mut initial_partitioning);
        debug_dump(tcx, "POST MERGING:", initial_partitioning.codegen_units.iter());
    }

    // In the next step, we use the inlining map to determine which additional
    // monomorphizations have to go into each codegen unit. These additional
    // monomorphizations can be drop-glue, functions from external crates, and
    // local functions whose definitions are marked with `#[inline]`.
    let mut post_inlining = {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_place_inline_items");
        partitioner.place_inlined_mono_items(cx, initial_partitioning)
    };

    post_inlining.codegen_units.iter_mut().for_each(|cgu| cgu.estimate_size(tcx));

    debug_dump(tcx, "POST INLINING:", post_inlining.codegen_units.iter());

    // Next we try to make as many symbols "internal" as possible, so LLVM has
    // more freedom to optimize.
    if !tcx.sess.link_dead_code() {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_internalize_symbols");
        partitioner.internalize_symbols(cx, &mut post_inlining);
    }

    let instrument_dead_code =
        tcx.sess.instrument_coverage() && !tcx.sess.instrument_coverage_except_unused_functions();

    if instrument_dead_code {
        assert!(
            post_inlining.codegen_units.len() > 0,
            "There must be at least one CGU that code coverage data can be generated in."
        );

        // Find the smallest CGU that has exported symbols and put the dead
        // function stubs in that CGU. We look for exported symbols to increase
        // the likelihood the linker won't throw away the dead functions.
        // FIXME(#92165): In order to truly resolve this, we need to make sure
        // the object file (CGU) containing the dead function stubs is included
        // in the final binary. This will probably require forcing these
        // function symbols to be included via `-u` or `/include` linker args.
        let mut cgus: Vec<_> = post_inlining.codegen_units.iter_mut().collect();
        cgus.sort_by_key(|cgu| cgu.size_estimate());

        let dead_code_cgu =
            if let Some(cgu) = cgus.into_iter().rev().find(|cgu| {
                cgu.items().iter().any(|(_, (linkage, _))| *linkage == Linkage::External)
            }) {
                cgu
            } else {
                // If there are no CGUs that have externally linked items,
                // then we just pick the first CGU as a fallback.
                &mut post_inlining.codegen_units[0]
            };
        dead_code_cgu.make_code_coverage_dead_code_cgu();
    }

    // Finally, sort by codegen unit name, so that we get deterministic results.
    let PostInliningPartitioning {
        codegen_units: mut result,
        mono_item_placements: _,
        internalization_candidates: _,
    } = post_inlining;

    result.sort_by(|a, b| a.name().as_str().partial_cmp(b.name().as_str()).unwrap());

    result
}

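/// Intermediate result of partitioning: the codegen units after root placement
/// and merging, before inlined copies have been distributed and symbols
/// internalized.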
pub struct PreInliningPartitioning<'tcx> {
    codegen_units: Vec<CodegenUnit<'tcx>>,
    roots: FxHashSet<MonoItem<'tcx>>,
    internalization_candidates: FxHashSet<MonoItem<'tcx>>,
}

/// For symbol internalization, we need to know whether a symbol/mono-item is
/// accessed from outside the codegen unit it is defined in. This type is used
/// to keep track of that.
#[derive(Clone, PartialEq, Eq, Debug)]
enum MonoItemPlacement {
    SingleCgu { cgu_name: Symbol },
    MultipleCgus,
}

struct PostInliningPartitioning<'tcx> {
    codegen_units: Vec<CodegenUnit<'tcx>>,
    mono_item_placements: FxHashMap<MonoItem<'tcx>, MonoItemPlacement>,
    internalization_candidates: FxHashSet<MonoItem<'tcx>>,
}

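/// Writes a human-readable dump of the given codegen units (name, size
/// estimate, and each item with its linkage and symbol hash) to the debug log.
/// With a debug-logging-enabled build of rustc this can typically be seen via
/// `RUSTC_LOG`, e.g. `RUSTC_LOG=rustc_monomorphize=debug`.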
fn debug_dump<'a, 'tcx, I>(tcx: TyCtxt<'tcx>, label: &str, cgus: I)
where
    I: Iterator<Item = &'a CodegenUnit<'tcx>>,
    'tcx: 'a,
{
    let dump = move || {
        use std::fmt::Write;

        let s = &mut String::new();
        let _ = writeln!(s, "{}", label);
        for cgu in cgus {
            let _ =
                writeln!(s, "CodegenUnit {} estimated size {} :", cgu.name(), cgu.size_estimate());

            for (mono_item, linkage) in cgu.items() {
                let symbol_name = mono_item.symbol_name(tcx).name;
                let symbol_hash_start = symbol_name.rfind('h');
                let symbol_hash = symbol_hash_start.map_or("<no hash>", |i| &symbol_name[i..]);

                let _ = writeln!(
                    s,
                    " - {} [{:?}] [{}] estimated size {}",
                    mono_item,
                    linkage,
                    symbol_hash,
                    mono_item.size_estimate(tcx)
                );
            }

            let _ = writeln!(s);
        }

        std::mem::take(s)
    };

    debug!("{}", dump());
}

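/// Reports a fatal error if two distinct mono items map to the same symbol
/// name, which would otherwise surface as a much more confusing error later
/// during codegen or linking.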
#[inline(never)] // give this a place in the profiler
fn assert_symbols_are_distinct<'a, 'tcx, I>(tcx: TyCtxt<'tcx>, mono_items: I)
where
    I: Iterator<Item = &'a MonoItem<'tcx>>,
    'tcx: 'a,
{
    let _prof_timer = tcx.prof.generic_activity("assert_symbols_are_distinct");

    let mut symbols: Vec<_> =
        mono_items.map(|mono_item| (mono_item, mono_item.symbol_name(tcx))).collect();

    symbols.sort_by_key(|sym| sym.1);

    for &[(mono_item1, ref sym1), (mono_item2, ref sym2)] in symbols.array_windows() {
        if sym1 == sym2 {
            let span1 = mono_item1.local_span(tcx);
            let span2 = mono_item2.local_span(tcx);

            // Deterministically select one of the spans for error reporting
            let span = match (span1, span2) {
                (Some(span1), Some(span2)) => {
                    Some(if span1.lo().0 > span2.lo().0 { span1 } else { span2 })
                }
                (span1, span2) => span1.or(span2),
            };

            let error_message = format!("symbol `{}` is already defined", sym1);

            if let Some(span) = span {
                tcx.sess.span_fatal(span, &error_message)
            } else {
                tcx.sess.fatal(&error_message)
            }
        }
    }
}

fn collect_and_partition_mono_items<'tcx>(
    tcx: TyCtxt<'tcx>,
    (): (),
) -> (&'tcx DefIdSet, &'tcx [CodegenUnit<'tcx>]) {
    let collection_mode = match tcx.sess.opts.unstable_opts.print_mono_items {
        Some(ref s) => {
            let mode_string = s.to_lowercase();
            let mode_string = mode_string.trim();
            if mode_string == "eager" {
                MonoItemCollectionMode::Eager
            } else {
                if mode_string != "lazy" {
                    let message = format!(
                        "Unknown codegen-item collection mode '{}'. \
                         Falling back to 'lazy' mode.",
                        mode_string
                    );
                    tcx.sess.warn(&message);
                }

                MonoItemCollectionMode::Lazy
            }
        }
        None => {
            if tcx.sess.link_dead_code() {
                MonoItemCollectionMode::Eager
            } else {
                MonoItemCollectionMode::Lazy
            }
        }
    };

    let (items, inlining_map) = collector::collect_crate_mono_items(tcx, collection_mode);

    tcx.sess.abort_if_errors();

    let (codegen_units, _) = tcx.sess.time("partition_and_assert_distinct_symbols", || {
        sync::join(
            || {
                let mut codegen_units = partition(
                    tcx,
                    &mut items.iter().cloned(),
                    tcx.sess.codegen_units(),
                    &inlining_map,
                );
                codegen_units[0].make_primary();
                &*tcx.arena.alloc_from_iter(codegen_units)
            },
            || assert_symbols_are_distinct(tcx, items.iter()),
        )
    });

    if tcx.prof.enabled() {
        // Record CGU size estimates for self-profiling.
        for cgu in codegen_units {
            tcx.prof.artifact_size(
                "codegen_unit_size_estimate",
                cgu.name().as_str(),
                cgu.size_estimate() as u64,
            );
        }
    }

    let mono_items: DefIdSet = items
        .iter()
        .filter_map(|mono_item| match *mono_item {
            MonoItem::Fn(ref instance) => Some(instance.def_id()),
            MonoItem::Static(def_id) => Some(def_id),
            _ => None,
        })
        .collect();

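    // With `-Z print-mono-items` set, emit one line per collected item in the
    // form `MONO_ITEM <item> @@ <cgu-name>[<linkage>] ...`, listing every CGU
    // the item was placed in (the codegen-units test suite relies on this
    // output).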
    if tcx.sess.opts.unstable_opts.print_mono_items.is_some() {
        let mut item_to_cgus: FxHashMap<_, Vec<_>> = Default::default();

        for cgu in codegen_units {
            for (&mono_item, &linkage) in cgu.items() {
                item_to_cgus.entry(mono_item).or_default().push((cgu.name(), linkage));
            }
        }

        let mut item_keys: Vec<_> = items
            .iter()
            .map(|i| {
                let mut output = with_no_trimmed_paths!(i.to_string());
                output.push_str(" @@");
                let mut empty = Vec::new();
                let cgus = item_to_cgus.get_mut(i).unwrap_or(&mut empty);
                cgus.sort_by_key(|(name, _)| *name);
                cgus.dedup();
                for &(ref cgu_name, (linkage, _)) in cgus.iter() {
                    output.push(' ');
                    output.push_str(cgu_name.as_str());

                    let linkage_abbrev = match linkage {
                        Linkage::External => "External",
                        Linkage::AvailableExternally => "Available",
                        Linkage::LinkOnceAny => "OnceAny",
                        Linkage::LinkOnceODR => "OnceODR",
                        Linkage::WeakAny => "WeakAny",
                        Linkage::WeakODR => "WeakODR",
                        Linkage::Appending => "Appending",
                        Linkage::Internal => "Internal",
                        Linkage::Private => "Private",
                        Linkage::ExternalWeak => "ExternalWeak",
                        Linkage::Common => "Common",
                    };

                    output.push('[');
                    output.push_str(linkage_abbrev);
                    output.push(']');
                }
                output
            })
            .collect();

        item_keys.sort();

        for item in item_keys {
            println!("MONO_ITEM {}", item);
        }
    }

    (tcx.arena.alloc(mono_items), codegen_units)
}

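/// Provider for the `codegened_and_inlined_items` query: the set of items that
/// were codegened, plus any instances that were inlined into them (recovered
/// here from the source scopes of MIR `Coverage` statements).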
fn codegened_and_inlined_items<'tcx>(tcx: TyCtxt<'tcx>, (): ()) -> &'tcx DefIdSet {
    let (items, cgus) = tcx.collect_and_partition_mono_items(());
    let mut visited = DefIdSet::default();
    let mut result = items.clone();

    for cgu in cgus {
        for (item, _) in cgu.items() {
            if let MonoItem::Fn(ref instance) = item {
                let did = instance.def_id();
                if !visited.insert(did) {
                    continue;
                }
                let body = tcx.instance_mir(instance.def);
                for block in body.basic_blocks() {
                    for statement in &block.statements {
                        let mir::StatementKind::Coverage(_) = statement.kind else { continue };
                        let scope = statement.source_info.scope;
                        if let Some(inlined) = scope.inlined_instance(&body.source_scopes) {
                            result.insert(inlined.def_id());
                        }
                    }
                }
            }
        }
    }

    tcx.arena.alloc(result)
}

pub fn provide(providers: &mut Providers) {
    providers.collect_and_partition_mono_items = collect_and_partition_mono_items;
    providers.codegened_and_inlined_items = codegened_and_inlined_items;

    providers.is_codegened_item = |tcx, def_id| {
        let (all_mono_items, _) = tcx.collect_and_partition_mono_items(());
        all_mono_items.contains(&def_id)
    };

    providers.codegen_unit = |tcx, name| {
        let (_, all) = tcx.collect_and_partition_mono_items(());
        all.iter()
            .find(|cgu| cgu.name() == name)
            .unwrap_or_else(|| panic!("failed to find cgu with name {:?}", name))
    };
}