# Cargo Benchmarking

This directory contains some benchmarks for cargo itself. This uses
[Criterion] for running benchmarks. It is recommended to read the Criterion
book to get familiar with how to use it. A basic usage would be:

```sh
cd benches/benchsuite
cargo bench
```

The tests involve downloading the index and benchmarking against some
real-world and artificial workspaces located in the [`workspaces`](workspaces)
directory.
**Beware** that the initial download can take a long time (at least 10
minutes, even on a very fast network) and requires significant disk space
(around 4.5GB). The benchsuite caches the index and downloaded crates in the
`target/tmp/bench` directory, so subsequent runs should be faster. You can
(and probably should) specify individual benchmarks to narrow the run down to
a more reasonable set, for example:

```sh
cargo bench -- resolve_ws/rust
```

This will only download what's necessary for the rust-lang/rust workspace
(which is about 330MB) and run the benchmarks against it (which should take
about a minute). To get a list of all the benchmarks, run:

```sh
cargo bench -- --list
```

## Viewing reports

The benchmarks display some basic information on the command-line while they
run. A more complete HTML report can be found at
`target/criterion/report/index.html` which contains links to all the
benchmarks and summaries. Check out the Criterion book for more information on
the extensive reporting capabilities.
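Once at least one benchmark run has generated the report, you can open it
directly from the `benches/benchsuite` directory. This is just a convenience
sketch; the opener command depends on your platform:

```sh
# Open the generated HTML report in the default browser.
# Assumes a prior benchmark run has created target/criterion/.
xdg-open target/criterion/report/index.html   # Linux; use `open` on macOS
```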

## Comparing implementations

Knowing the raw numbers can be useful, but what you're probably most
interested in is checking whether your changes help or hurt performance. To do
that, you need to run the benchmarks multiple times.

First, run the benchmarks from the master branch of cargo without any changes.
To make it easier to compare, Criterion supports naming the baseline so that
you can iterate on your code and compare against it multiple times:

```sh
cargo bench -- --save-baseline master
```

Now you can switch to the branch with your changes and re-run the benchmarks
against the baseline:

```sh
cargo bench -- --baseline master
```

You can repeat the last command as you make changes to re-compare against the
master baseline.

Without the baseline arguments, it will compare against the last run, which
can be helpful for comparing incremental changes.
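Put together, a typical comparison session might look like the following
sketch; the branch name `my-feature` is a placeholder, and the benchmark
filter is optional:

```sh
# Sketch of a full baseline comparison; `my-feature` is a hypothetical branch.
cd benches/benchsuite

git checkout master
cargo bench -- resolve_ws/rust --save-baseline master

git checkout my-feature
cargo bench -- resolve_ws/rust --baseline master   # compares against the saved baseline
```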

## Capturing workspaces

The [`workspaces`](workspaces) directory contains a variety of workspaces
intended to give the benchmarks a good exercise. Some of these are shadow
copies of real-world workspaces, created with the tool in the
[`capture`](capture) directory. The tool copies `Cargo.lock` and all of the
`Cargo.toml` files of the workspace members. It also adds an empty `lib.rs`
so Cargo won't error, and sanitizes the `Cargo.toml` to some degree, removing
unwanted elements. Finally, it compresses everything into a `tgz`.

To run it, do:

```sh
cd benches/capture
cargo run -- /path/to/workspace/foo
```

The resolver benchmarks also support the `CARGO_BENCH_WORKSPACES` environment
variable, which you can point to a Cargo workspace if you want to try
different workspaces. For example:

```sh
CARGO_BENCH_WORKSPACES=/path/to/some/workspace cargo bench
```

## TODO

This is just a start for establishing a benchmarking suite for Cargo. There's
a lot that can be added. Some ideas:

* Fix the benchmarks so that the resolver setup doesn't run every iteration.
* Benchmark [this section of
  code](https://github.com/rust-lang/cargo/blob/a821e2cb24d7b6013433f069ab3bad53d160e100/src/cargo/ops/cargo_compile.rs#L470-L549)
  which builds the unit graph. The performance there isn't great, and it would
  be good to keep an eye on it. Unfortunately that would mean doing a bit of
  work to make `generate_targets` publicly visible, and there is a bunch of
  setup code that may need to be duplicated.
* Benchmark the fingerprinting code.
* Benchmark running the `cargo` executable. Running something like `cargo
  build` or `cargo check` with everything "Fresh" would be a good end-to-end
  exercise to measure the overall overhead of Cargo.
* Benchmark pathological resolver scenarios. There might be some cases where
  the resolver can spend a significant amount of time. It would be good to
  identify if these exist, and create benchmarks for them. This may require
  creating an artificial index, similar to the `resolver-tests`. This should
  also consider scenarios where the resolver ultimately fails.
* Benchmark without `Cargo.lock`. I'm not sure if this is particularly
  valuable, since we are mostly concerned with incremental builds which will
  always have a lock file.
* Benchmark just
  [`resolve::resolve`](https://github.com/rust-lang/cargo/blob/a821e2cb24d7b6013433f069ab3bad53d160e100/src/cargo/core/resolver/mod.rs#L122)
  without anything else. This can help focus on just the resolver.
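The end-to-end idea above (timing a fully "Fresh" `cargo build`) can be
prototyped today with an external timing tool. The sketch below assumes the
[`hyperfine`](https://github.com/sharkdp/hyperfine) tool is installed; it is
not part of the benchsuite:

```sh
# Rough end-to-end sketch: measure Cargo's overhead on a fully "Fresh" build.
# Assumes the external `hyperfine` tool is installed.
cd /path/to/some/workspace
cargo build                          # warm the build so everything is "Fresh"
hyperfine --warmup 3 'cargo build'   # repeatedly time the no-op rebuild
```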

[Criterion]: https://bheisler.github.io/criterion.rs/book/