Auto merge of #5921 - Eh2406:proptest, r=alexcrichton
use proptest to fuzz the resolver
This has been a long time goal. This uses proptest to generate random registry indexes and throws them at the resolver.
It would be simple to generate a registry by:
1. making a list of name and version-number pairs, each picked at random;
2. for each, picking a list of dependencies by generating name and version-requirement pairs at random.
Unfortunately, it would be extremely unlikely to generate any interesting cases, as the chance that the random name you depend on was also generated as the name of a crate is vanishingly small. So this implementation works very hard to ensure that it only generates valid dependency requirements.
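The fix can be sketched roughly as follows (a toy model, not the actual proptest strategy in this PR; `Lcg`, `gen_registry`, and the `crateN` naming are all made up for illustration): dependency targets are drawn only from crates that have already been generated, so every requirement names a crate that really exists in the registry.

```rust
// A tiny linear congruential generator stands in for proptest's value sources.
struct Lcg(u64);
impl Lcg {
    fn next(&mut self, bound: usize) -> usize {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        ((self.0 >> 33) as usize) % bound.max(1)
    }
}

#[derive(Debug)]
struct Crate {
    name: String,
    version: u32,             // versions are just 1.0.0, 2.0.0, ...
    deps: Vec<(String, u32)>, // (name, exact version required)
}

fn gen_registry(n: usize, seed: u64) -> Vec<Crate> {
    let mut rng = Lcg(seed);
    let mut registry: Vec<Crate> = Vec::new();
    for i in 0..n {
        let mut deps = Vec::new();
        if !registry.is_empty() {
            // Pick dependencies only among crates generated so far, so every
            // requirement is satisfiable by name and version.
            let n_deps = rng.next(3);
            for _ in 0..n_deps {
                let target = &registry[rng.next(registry.len())];
                deps.push((target.name.clone(), target.version));
            }
        }
        registry.push(Crate {
            name: format!("crate{}", i),
            version: 1 + rng.next(3) as u32,
            deps,
        });
    }
    registry
}
```

Generating names first and then wiring dependencies to existing entries is what makes the resulting indexes interesting to the resolver, instead of being a pile of unresolvable requirements.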
This is still a WIP as it has many problems:
- [x] The current strategy is very convoluted. It is hard to see that it is correct, and harder to see how it can be expanded. Thanks to @centril for working with me on IRC to get this far. Do you have advice for improving it?
- [x] It is slow as molasses when run without `--release`. I looked with a profiler and we seem to spend 2/3 of the time in `to_url`. Maybe we can special-case `example.com` for tests, like we do for `crates.io` or something? Edit: Done. `lazy_static` did its magic.
- [x] `proptest` does not yet work with `minimal-versions`, a taste of my own medicine.
- [x] I have not verified that, if I remove the fixes for other tests, this regenerates them.
The current strategy does not:
- [x] generate interesting version numbers; it just does 1.0.0, 2.0.0 ...
- [x] guarantee that the version requirements are possible to meet by the crate named.
- [ ] generate features.
- [ ] generate dev-dependencies.
- [x] build deep dependency trees; it seems to prefer to generate crates with 0 or 1 dependents, so that on average the tree is only 1 or 2 layers deep.
And last but not least, there are no interesting properties being tested. Like:
- [ ] If resolution was successful, then all the transitive requirements are met.
- [x] If resolution was successful, then unpublishing a version of a crate that was not selected should not change that.
- [x] If resolution was unsuccessful, then it should stay unsuccessful even if any version of a crate is unpublished.
- [ ] @maurer suggested testing for consistency. Same registry, same cargo version, same lockfile, every time.
- [ ] @maurer suggested a pareto optimality property (if all else stays the same, but new package versions are released, we don't get a new lockfile where every version is <= the old one, and at least one is < the old one)
Auto merge of #6086 - alexcrichton:build-plan-dev-deps, r=ehuss
Fix `--build-plan` with dev-dependencies
This regressed in #6005; it looks like the build plan requires all packages to
be downloaded rather than just those coming out of `unit_dependencies`,
so let's make sure to download everything!
Auto merge of #6085 - alexcrichton:progress, r=ehuss
Print a line per downloaded crate
This commit brings back the old one-line-per-crate output that Cargo's
had since the beginning of time but was removed in #6005. This was
requested on our [users forum][1]
Alex Crichton [Mon, 24 Sep 2018 16:24:20 +0000 (09:24 -0700)]
Fix `--build-plan` with dev-dependencies
This regressed in #6005; it looks like the build plan requires all packages to
be downloaded rather than just those coming out of `unit_dependencies`,
so let's make sure to download everything!
Fix missing messages when --message-format=json is deeply nested
This commit switches from `serde_json::Value` to [`RawValue`](https://docs.rs/serde_json/1.0/serde_json/value/struct.RawValue.html), which can process arbitrarily deeply nested JSON content without recursion.
Alex Crichton [Mon, 24 Sep 2018 15:59:25 +0000 (08:59 -0700)]
Print a line per downloaded crate
This commit brings back the old one-line-per-crate output that Cargo's
had since the beginning of time but was removed in #6005. This was
requested on our [users forum][1]
Auto merge of #6066 - zachlute:project-to-package-docs, r=alexcrichton
Replaced 'project' with 'package' in Cargo documentation.
Partial fix for #6056.
I tried to make a distinction between places that were talking about Cargo 'projects' vs. the generic concept of a 'project' or a GitHub 'project'. It's entirely possible I was overzealous, so please tell me if some of these changes look dumb.
Auto merge of #6026 - alexcrichton:install-config, r=ehuss
Only load `~/.cargo/config` for `cargo install`
This commit tweaks how configuration is loaded for `cargo install`, ensuring
that we only load configuration from `$HOME` instead of the current working
directory. This should make installations a little more consistent in that they
probably shouldn't cover project-local configuration but should respect global
configuration!
Alex Crichton [Fri, 14 Sep 2018 17:42:27 +0000 (10:42 -0700)]
Only load `~/.cargo/config` for `cargo install`
This commit tweaks how configuration is loaded for `cargo install`, ensuring
that we only load configuration from `$HOME` instead of the current working
directory. This should make installations a little more consistent in that they
probably shouldn't cover project-local configuration but should respect global
configuration!
Auto merge of #6068 - eddyb:oopsie-daisy-stabilize, r=alexcrichton
Remove `fix::local_paths_no_fix`, as `crate_in_paths` is getting stabilized.
Needed for rust-lang/rust#54403 (blocking RC1).
Ideally we'd also clean up the tests, e.g. removing `#![feature(rust_2018_preview)]` and `is_nightly` checks, but it's easier to just remove the only failing test (because it tests the feature gate is needed).
Auto merge of #6005 - alexcrichton:download-parallel, r=ehuss
Download crates in parallel with HTTP/2
This PR revives some of the work of https://github.com/rust-lang/cargo/pull/5161 by refactoring Cargo to make it much easier to add parallel downloads, and then it does so with the `curl` crate's new `http2` feature, using `nghttp2` as a backend.
The primary refactoring done here is to remove the concept of "download this one package" deep within a `Source`. Instead a `Source` still has a `download` method but it's considered to be largely non-blocking. If a crate needs to be downloaded it immediately returns information as to such. The `PackageSet` abstraction is now a central location for all parallel downloads, and all users of it have been refactored to be amenable to parallel downloads, when added.
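The non-blocking `download` shape described above might look something like this (an illustrative sketch with made-up types; `MaybePackage`, the `cache` field, and the URL are not Cargo's actual API): `download` either hands back a package that is already on disk, or returns the information needed to fetch it, leaving the network work to a central manager.

```rust
use std::collections::HashMap;

// What a non-blocking `download` can return: either the package itself,
// or instructions for the caller to go fetch some data.
#[derive(Debug, PartialEq)]
enum MaybePackage {
    Ready(String),             // package contents already available locally
    Download { url: String },  // caller must fetch this and feed the bytes back
}

struct Source {
    cache: HashMap<String, String>, // id -> package contents
}

impl Source {
    // Largely non-blocking: never performs network I/O itself.
    fn download(&self, id: &str) -> MaybePackage {
        match self.cache.get(id) {
            Some(contents) => MaybePackage::Ready(contents.clone()),
            None => MaybePackage::Download {
                url: format!("https://example.com/crates/{}", id),
            },
        }
    }

    // Called once the download manager has fetched the bytes.
    fn finish_download(&mut self, id: &str, body: String) -> String {
        self.cache.insert(id.to_string(), body.clone());
        body
    }
}
```

Centralizing the actual fetches in one place (the `PackageSet` in the PR) is what makes it possible to arbitrate many in-flight downloads at once.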
Alex Crichton [Mon, 17 Sep 2018 18:59:37 +0000 (11:59 -0700)]
Avoid a debug overflow from curl
Curl doesn't guarantee that we'll get the total/current progress numbers in an "as
we expect" fashion, so throw out weird data points (such as having downloaded
more bytes than curl thinks we'll be downloading in total).
Alex Crichton [Mon, 17 Sep 2018 18:42:45 +0000 (11:42 -0700)]
Tweaks to retry logic
* Don't use `chain_err` internally inside `try` because then the spurious error
printed internally is bland and doesn't contain the right error information.
Instead place the `chain_err` on the outside so it propagates up to the user
still but doesn't get printed for spurious errors.
* Handle `pending_ids` management in the case of an error on the connection.
Alex Crichton [Thu, 13 Sep 2018 03:57:01 +0000 (20:57 -0700)]
Update the progress bar for parallel downloads
This is actually a super tricky problem. We don't really have the capacity for
more than one line of update-able information in Cargo right now, so we need to
squeeze a lot of information into one line of output for Cargo. The main
constraints this tries to satisfy are:
* At all times it should be clear what's happening. Cargo shouldn't just hang
with no output when downloading a crate for a long time, a counter ideally
needs to be decreasing while the download progresses.
* If a progress bar is shown, it shouldn't jump around. This ends up just being
a surprising user experience for most. Progress bars should only ever
increase, but they may increase at different speeds.
* Cargo has, currently, at most one line of output (as mentioned above) to pack
information into. We haven't delved into fancier terminal features that
involve multiple lines of update-able output.
* When downloading crates as part of `cargo build` (the norm) we don't actually
know ahead of time how many crates are being downloaded. We rely on the
calculation of unit dependencies to naturally feed into downloading more
crates.
* Furthermore, once we decide to download a crate, we don't actually know how
big it is! We have to wait for the server to tell us how big it is.
There doesn't really seem to be a great solution that satisfies all of these
constraints unfortunately. As a result this commit implements a relatively
conservative solution which should hopefully get us most of the way there. There
isn't actually a progress bar but rather Cargo prints that it's got N crates
left to download, and if it takes a while it prints out that there are M bytes
remaining.
Unfortunately the progress is pretty choppy and jerky, not providing a smooth
UI. This appears to largely be because Cargo will synchronously extract
tarballs, which for large crates can cause a noticeable pause. Cargo's not
really prepared internally to perform this work on helper threads, but ideally
if it could do so it would improve the output quite a bit! (making it much
smoother and also able to account for the time tarball extraction takes).
Alex Crichton [Thu, 13 Sep 2018 02:24:33 +0000 (19:24 -0700)]
Only enable downloads through an RAII structure
This should help us scope downloads to ensure that we only ever have one
download session at once and it's explicitly scoped so we can handle the
progress bar and everything.
Alex Crichton [Tue, 11 Sep 2018 00:28:18 +0000 (17:28 -0700)]
Parallelize downloads with HTTP/2
This commit implements parallel downloads using `libcurl` powered by
`libnghttp2` over HTTP/2. Using all of the previous refactorings this actually
implements usage of `Multi` to download crates in parallel. This achieves some
large wins locally, taking download times from 30s to 2s in the best case.
The standard output of Cargo is also changed as a result of this commit. It's
no longer useful for Cargo to print "Downloading ..." for each crate, as
they all start instantaneously. Instead Cargo now no longer prints `Downloading`
by default (unless attached to a pipe) and instead only has one progress bar for
all downloads. Currently this progress bar is discrete and based on the total
number of downloads, no longer specifying how much of one particular download
has happened. This provides a less granular view into what Cargo is doing but
it's hoped that it looks reasonable from an outside perspective as there's
still a progress bar indicating what's happening.
Alex Crichton [Mon, 10 Sep 2018 23:50:29 +0000 (16:50 -0700)]
Refactor `build_unit_dependencies` for parallel downloads
This commit refactors the `build_unit_dependencies` function for leveraging
parallel downloads. This function is the primary location that crates are
download as part of `cargo build`, which is one of the most critical locations
to take advantage of parallel downloads!
The problem here is somewhat unusual in that we don't want to download the entire
crate graph but rather only the precise slice that we're compiling. This means
that we're letting the calculation of unit dependencies entirely drive the
decision of what crates to download. While we're calculating dependencies,
though, we may need to download more crates!
The strategy implemented here is to attempt to build the unit dependency graph.
Any missing packages are skipped during this construction and enqueued for
download. If any package was enqueued, we throw away the entire graph, wait for
a download to finish, and then attempt to construct the unit graph again.
The hope here is that we'll be able to incrementally make progress and
incrementally add more and more crates to be downloaded in parallel. The worry
is that this is a relatively inefficient strategy as we could reconstruct the
unit dependency graph N times (where you have N total transitive dependencies).
To help alleviate this concern we wait for only a small handful of crates to
actively be downloading before we continue to try to recreate the dependency
graph. It's hoped that this will strike a good balance between ensuring we
always have a full pipeline of downloads without re-downloading too much.
Alex Crichton [Mon, 10 Sep 2018 20:44:11 +0000 (13:44 -0700)]
Rename PackageSet::get to `get_one`
This commit renames the old `PackageSet::get` method (used for downloading
crates) to `PackageSet::get_one`. The new method explicitly indicates that only
one package is being fetched. Additionally this commit also adds support for an
API that allows supporting downloading multiple packages in parallel. Existing
callers are all updated to use the parallel API where appropriate except for
`BuildContext`, which will be updated in the next commit.
Also note that no parallel downloads are done yet; they're still performed
synchronously one at a time. A later commit will add support for truly downloading
in parallel.
Alex Crichton [Fri, 7 Sep 2018 22:13:57 +0000 (15:13 -0700)]
Move downloading crates higher up in Cargo
This commit is intended to lay the groundwork for downloading crates in parallel
from crates.io. It starts out by lifting up the download operation from deep
within the `Source` trait to the `PackageSet`. This should allow us to maintain
a queue of downloads specifically in a `PackageSet` and arbitrate access through
that one type, making it easier for us to implement parallel downloads.
The `Source` trait's `download` method may now fail saying "go download this
data", and then the data is fed back into the trait object once it's complete to
actually get the `Package` out.
Auto merge of #6048 - ehuss:no-master-ci, r=alexcrichton
Disable master in CI.
There's no need to build and test the exact same commit twice between
bors's auto branch and master. Hopefully this will help reduce the
bors timeouts due to waiting on appveyor.
Eric Huss [Tue, 18 Sep 2018 13:09:27 +0000 (06:09 -0700)]
Disable master in CI.
There's no need to build and test the exact same commit twice between
bors's auto branch and master. Hopefully this will help reduce the
bors timeouts due to waiting on appveyor.
Auto merge of #5993 - alexcrichton:publish-renames, r=ehuss
Fix publishing renamed dependencies to crates.io
This commit fixes publishing crates which contain locally renamed dependencies
to crates.io. Previously the registry lacked information about these renames,
which meant that although we could resolve the crate graph correctly it
wouldn't work well with respect to optional
features and optional dependencies. The fix here is to persist this information
into the registry about the crate being renamed in `Cargo.toml`, allowing Cargo
to correctly deduce feature names as it does when it has `Cargo.toml` locally.
A dual side of this commit is to publish this information to crates.io. We'll
want to merge the associated PR (link to come soon) on crates.io first and make
sure that's deployed as well before we stabilize the crate renaming feature.
The index format is updated as well as part of this change. The `name` key for
dependencies is now unconditionally what was written in `Cargo.toml` as the
left-hand-side of the dependency specification. In other words this is the raw
crate name, but only for the local crate. A new key, `package`, is added to
dependencies (and it can be `None`). This key indicates the actual crates.io
package being linked against, and represents the `package` key in `Cargo.toml`.
It's important to consider the interaction with older Cargo implementations
which don't support the `package` key in the index. In these situations older
Cargo binaries will likely fail to resolve entirely as the renamed name is
unlikely to exist on crates.io. For example the `futures` crate now has an
optional dependency with the name `futures01` which depends on an older version
of `futures` on crates.io. The string `futures01` will be listed in the index
under the `"name"` key, but no `futures01` crate exists on crates.io so older
Cargo will generate an error. If the crate does exist on crates.io, then even
weirder error messages will likely result.
Auto merge of #6039 - ehuss:fix-all-targets, r=alexcrichton
--all-targets fixes
- Fix: `cargo test --all-targets` was running lib tests three times.
- `--all-targets` help strings were wrong or misleading.
- Minor cleanup to add `Proposal` type to maybe make the code more readable.
Auto merge of #5988 - Eh2406:explore_the_bug, r=alexcrichton
BUG fuzzing found a bug in the resolver, we need a complete set of conflicts to do backjumping
As mentioned in https://github.com/rust-lang/cargo/pull/5921#issuecomment-418890269, the new proptest found a live bug! This PR so far tracks my attempts to minimize the problematic input.
The problem turned out to be that we were backjumping on an incomplete set of conflicts.
Eric Huss [Mon, 17 Sep 2018 02:41:57 +0000 (19:41 -0700)]
--all-targets fixes
- Fix: `cargo test --all-targets` was running lib tests three times.
- `--all-targets` help strings were wrong or misleading.
- Minor cleanup to add `Proposal` type to maybe make the code more readable.
Auto merge of #6004 - zachlute:fix-ctrlc-handling2, r=alexcrichton
Add empty ctrlc handler on Windows.
Fixes #6000.
This is a 'better' version of PR #6002 that accomplishes the same thing using only `winapi` calls without the dependency on the `ctrlc` crate or spawning an additional thread.
----
When exec-replacing the cargo process, we want the new process to get any signals. This already works fine on Unix.
On Windows, pressing ctrl-c kills the cargo process and doesn't pass the signal on to the child process. By adding an empty handler, we allow the child process to handle the signal instead.