bors [Fri, 25 Mar 2016 16:26:34 +0000 (09:26 -0700)]
Auto merge of #2519 - alexcrichton:update-curl, r=alexcrichton
Update curl-sys
Picks up a fix that should correctly configure OpenSSL to be enabled in
cross-compiled situations where OpenSSL comes from a different location
(currently specified by the `OPENSSL_ROOT_DIR` environment variable, which
libssh2 also reads).
Alex Crichton [Fri, 25 Mar 2016 16:24:35 +0000 (09:24 -0700)]
Update curl-sys
Picks up a fix that should correctly configure OpenSSL to be enabled in
cross-compiled situations where OpenSSL comes from a different location
(currently specified by the `OPENSSL_ROOT_DIR` environment variable, which
libssh2 also reads).
bors [Thu, 24 Mar 2016 16:15:04 +0000 (09:15 -0700)]
Auto merge of #2513 - alexcrichton:xcompile, r=alexcrichton
Fix nightly dist builds
* When downloading rustc, also download a number of cross-std libraries so we
can cross compile with that compiler.
* Only build OpenSSL on some --enable-nightly builds, not all. For example,
Windows and OSX don't want to link statically to OpenSSL.
Alex Crichton [Thu, 24 Mar 2016 01:07:19 +0000 (18:07 -0700)]
Fix nightly dist builds
* When downloading rustc, also download a number of cross-std libraries so we
can cross compile with that compiler.
* Only build OpenSSL on some --enable-nightly builds, not all. For example,
Windows and OSX don't want to link statically to OpenSSL.
bors [Wed, 23 Mar 2016 16:36:36 +0000 (09:36 -0700)]
Auto merge of #2510 - alexcrichton:xcompile, r=brson
Prepare for ARM/FreeBSD/NetBSD nightlies
This commit beefs up Cargo's makefiles to support nightly builds of Cargo for
multiple platforms. This primarily involves vendoring the logic of how to build
OpenSSL for statically linking against Cargo into the Makefiles directly. We'll
have to update the version of OpenSSL as releases are made, but we essentially
already do that with the normal docker container.
The Linux nightlies will still run in the normal dist docker container (a really
old CentOS build) and builds for new platforms will happen in the standard
linux-cross container we use for other cross builds. The nightly versions of
these will produce Cargo tarballs for a whole bunch of platforms to get
uploaded.
This has been tested in the `alexcrichton/rust-slave-linux-cross:2016-03-17b`
docker container for the 3 ARM targets and FreeBSD target. NetBSD will come once
rust-lang/rust#32407 lands.
Alex Crichton [Tue, 22 Mar 2016 21:40:00 +0000 (14:40 -0700)]
Prepare for ARM/FreeBSD/NetBSD nightlies
This commit beefs up Cargo's makefiles to support nightly builds of Cargo for
multiple platforms. This primarily involves vendoring the logic of how to build
OpenSSL for statically linking against Cargo into the Makefiles directly. We'll
have to update the version of OpenSSL as releases are made, but we essentially
already do that with the normal docker container.
The Linux nightlies will still run in the normal dist docker container (a really
old CentOS build) and builds for new platforms will happen in the standard
linux-cross container we use for other cross builds. The nightly versions of
these will produce Cargo tarballs for a whole bunch of platforms to get
uploaded.
This has been tested in the `alexcrichton/rust-slave-linux-cross:2016-03-17b`
docker container for the 3 ARM targets and FreeBSD target. NetBSD will come once
rust-lang/rust#32407 lands.
bors [Sat, 19 Mar 2016 18:39:35 +0000 (11:39 -0700)]
Auto merge of #2502 - IvanUkhov:doc, r=alexcrichton
doc: make the pages’ titles consistent
The titles of some of the pages end with “Cargo Documentation” (_e.g._, [Frequently Asked Questions](http://doc.crates.io/faq.html)) whereas the titles of some other pages do not (_e.g._, [Environment Variables](http://doc.crates.io/environment-variables.html)), which is a bit inconsistent. Perhaps one should either add that ending to all the titles or eliminate it from all of them. This pull request does the latter, which can be changed if needed. I personally think that such long titles are reasonable for the `title` HTML tag but a bit too verbose when displayed on the page.
bors [Fri, 18 Mar 2016 17:49:04 +0000 (10:49 -0700)]
Auto merge of #2482 - ryanq:issue2266, r=alexcrichton
Suggest the best matching target for cargo run
Targets passed to cargo compile are validated against the package. If
the target exists, it is compiled. If not, cargo will bail and offer a
suggested target name if there is a close match.
The tests create and build/run binaries and examples using filenames
that are close (or not so close) to the target names to verify that
close matching names are suggested to the user.
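As a rough illustration of the idea (not the actual implementation; the edit-distance function, the threshold, and the target names below are assumptions), a suggestion could be computed like this:

```rust
// Suggest the closest known target name for a misspelled `cargo run --bin` argument.
fn edit_distance(a: &str, b: &str) -> usize {
    let b_chars: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b_chars.len()).collect();
    for (i, ca) in a.chars().enumerate() {
        let mut cur = vec![i + 1];
        for (j, &cb) in b_chars.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    *prev.last().unwrap()
}

/// The closest target name, if it's close enough to be a plausible typo.
fn suggest<'a>(requested: &str, targets: &[&'a str]) -> Option<&'a str> {
    targets
        .iter()
        .map(|t| (edit_distance(requested, t), *t))
        .filter(|&(d, _)| d <= 3) // arbitrary "close enough" threshold
        .min_by_key(|&(d, _)| d)
        .map(|(_, t)| t)
}

fn main() {
    let targets = ["server", "client", "worker"];
    if let Some(best) = suggest("servre", &targets) {
        println!("error: no bin target named `servre`");
        println!("Did you mean `{}`?", best);
    }
}
```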
bors [Fri, 18 Mar 2016 16:47:42 +0000 (09:47 -0700)]
Auto merge of #2493 - IvanUkhov:typography, r=alexcrichton
doc/manifest: polish typography
The pull request makes a number of stylistic adjustments to The Manifest Format page. There are two commits. The first is a preliminary reflowing of paragraphs, and the second bears the actual changes. If any of the changes seems dubious, please let me know; I’ll try to motivate (and revert if needed).
bors [Thu, 17 Mar 2016 05:14:36 +0000 (22:14 -0700)]
Auto merge of #2486 - alexcrichton:flock, r=brson
Fix running Cargo concurrently
Cargo has historically had no protections against running it concurrently. This
is pretty unfortunate, however, as it essentially just means that you can only
run one instance of Cargo at a time **globally on a system**.
An "easy solution" to this would be the use of file locks, except they need to
be applied judiciously. It'd be a pretty bad experience to just lock the entire
system globally for Cargo (although it would work), but otherwise Cargo must be
principled in how it accesses the filesystem to ensure that locks are properly
held. This commit intends to solve all of these problems.
A new utility module is added to Cargo, `util::flock`, which contains two types (sketched below):
* `FileLock` - a locked version of a `File`. This RAII guard will unlock the
lock on `Drop` and I/O can be performed through this object. The actual
underlying `Path` can be read from this object as well.
* `Filesystem` - an unlocked representation of a `Path`. There is no "safe"
method to access the underlying path without locking a file on the filesystem
first.
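To make the shape of these types concrete, here is a rough sketch (illustrative only; method names and the placeholder locking functions are assumptions, not Cargo's actual API):

```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::path::{Path, PathBuf};

/// A locked file; the lock is released when this RAII guard is dropped.
pub struct FileLock {
    file: File,
    path: PathBuf,
}

impl FileLock {
    /// I/O is performed through the locked `File`.
    pub fn file(&self) -> &File { &self.file }
    /// The underlying `Path` can still be read from the guard.
    pub fn path(&self) -> &Path { &self.path }
}

impl Drop for FileLock {
    fn drop(&mut self) {
        // With the fs2 crate this would be `self.file.unlock()`.
        let _ = unlock(&self.file);
    }
}

/// An unlocked path; the contents can only be reached by locking a file first.
pub struct Filesystem {
    root: PathBuf,
}

impl Filesystem {
    pub fn new(root: PathBuf) -> Filesystem { Filesystem { root } }

    /// Create/open `name` under the root and take an exclusive lock on it.
    pub fn open_rw(&self, name: &str) -> io::Result<FileLock> {
        let path = self.root.join(name);
        let file = OpenOptions::new().read(true).write(true).create(true).open(&path)?;
        lock_exclusive(&file)?; // fs2: `file.lock_exclusive()`
        Ok(FileLock { file, path })
    }
}

// Stand-ins for fs2's flock/LockFileEx wrappers.
fn lock_exclusive(_f: &File) -> io::Result<()> { Ok(()) }
fn unlock(_f: &File) -> io::Result<()> { Ok(()) }

fn main() -> io::Result<()> {
    let fs = Filesystem::new(std::env::temp_dir());
    let lock = fs.open_rw("cargo-flock-example")?;
    println!("holding lock on {}", lock.path().display());
    Ok(())
}
```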
Built on the [fs2] library, these locks use the `flock` system call on Unix and
`LockFileEx` on Windows. File locking on Unix is [documented as not so
great][unix-bad], largely because of NFS, because the locks are only advisory,
and because there's no byte-range locking, but those issues don't necessarily
plague Cargo, so we should be able to leverage file locks anyway. On both
Windows and Unix the file
locks are released when the underlying OS handle is closed, which means that
if the process dies the locks are released.
Cargo has a number of global resources which it now needs to lock, and the
strategy is done in a fairly straightforward way:
* Each registry's index contains one lock (a dotfile in the index). Updating the
index requires a read/write lock while reading the index requires a shared
lock. This should allow each process to ensure a registry update happens while
not blocking out others for an unnecessarily long time. Additionally any
number of processes can read the index.
* When downloading crates, each downloaded crate is individually locked. A lock
for the downloaded crate implies a lock on the output directory as well.
Because downloaded crates are immutable, once the downloaded directory exists
the lock is no longer needed as it won't be modified, so it can be released.
This granularity of locking allows multiple Cargo instances to download
dependencies in parallel.
* Git repositories have separate locks for the database and for the project
checkout. The database and checkout are locked for read/write access when an
update is performed, and the lock of the checkout is held for the entire
lifetime of the git source. This is done to ensure that any other Cargo
processes must wait while we use the git repository. Unfortunately there's
just not that much parallelism here.
* Binaries managed by `cargo install` are locked by the local metadata file that
Cargo manages. This is relatively straightforward.
* The actual artifact output directory is just globally locked for the entire
build. It's hypothesized that running Cargo concurrently in *one directory* is
less of a needed feature than running multiple instances of Cargo globally (for
now at least). It would be possible to have finer-grained
locking here, but that can likely be deferred to a future PR.
So with all of this infrastructure in place, Cargo is now ready to grab some
locks and ensure that you can call it concurrently anywhere at any time and
everything always works out as one might expect.
One interesting question, however, is what does Cargo do on contention? On one
hand Cargo could immediately abort, but this would lead to a pretty poor UI as
any Cargo process on the system could kick out any other. Instead this PR takes
a more nuanced approach (sketched after the list below).
* First, Cargo tries to acquire each lock without blocking (a "try lock"). If this
succeeds, we're done.
* Next, Cargo prints a message to the console that it's going to block waiting
for a lock. This is done because it's indeterminate how long Cargo will wait
for the lock to become available, and most long-lasting operations in Cargo
have a message printed for them.
* Finally, a blocking acquisition of the lock is issued and we wait for it to
become available.
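As a rough sketch of that acquisition pattern (using the fs2 crate's `try_lock_exclusive`/`lock_exclusive`; the helper function and its message are illustrative, not Cargo's actual code):

```rust
use std::fs::File;
use std::io;

use fs2::FileExt; // requires the fs2 crate as a dependency

/// Try the lock first; only fall back to a blocking acquire after telling the
/// user why Cargo appears to be waiting.
fn acquire(file: &File, what: &str) -> io::Result<()> {
    // 1. Attempt a non-blocking "try lock".
    if file.try_lock_exclusive().is_ok() {
        return Ok(());
    }
    // 2. The wait is indeterminate, so print a status message first.
    println!("    Blocking waiting for file lock on {}", what);
    // 3. Block until the lock becomes available.
    file.lock_exclusive()
}

fn main() -> io::Result<()> {
    let file = File::create(std::env::temp_dir().join("example.lock"))?;
    acquire(&file, "the registry index")?;
    println!("lock acquired");
    Ok(())
}
```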
So all in all this should help Cargo fix any future concurrency bugs with file
locking in a principled fashion while also allowing concurrent Cargo processes
to proceed reasonably across the system.
Alex Crichton [Sat, 12 Mar 2016 17:58:53 +0000 (09:58 -0800)]
Fix running Cargo concurrently
Cargo has historically had no protections against running it concurrently. This
is pretty unfortunate, however, as it essentially just means that you can only
run one instance of Cargo at a time **globally on a system**.
An "easy solution" to this would be the use of file locks, except they need to
be applied judiciously. It'd be a pretty bad experience to just lock the entire
system globally for Cargo (although it would work), but otherwise Cargo must be
principled in how it accesses the filesystem to ensure that locks are properly
held. This commit intends to solve all of these problems.
A new utility module is added to cargo, `util::flock`, which contains two types:
* `FileLock` - a locked version of a `File`. This RAII guard will unlock the
lock on `Drop` and I/O can be performed through this object. The actual
underlying `Path` can be read from this object as well.
* `Filesystem` - an unlocked representation of a `Path`. There is no "safe"
method to access the underlying path without locking a file on the filesystem
first.
Built on the [fs2] library, these locks use the `flock` system call on Unix and
`LockFileEx` on Windows. File locking on Unix is [documented as not so
great][unix-bad], largely because of NFS, because the locks are only advisory,
and because there's no byte-range locking, but those issues don't necessarily
plague Cargo, so we should be able to leverage file locks anyway. On both
Windows and Unix the file
locks are released when the underlying OS handle is closed, which means that
if the process dies the locks are released.
Cargo has a number of global resources which it now needs to lock, and the
strategy is done in a fairly straightforward way:
* Each registry's index contains one lock (a dotfile in the index). Updating the
index requires a read/write lock while reading the index requires a shared
lock. This should allow each process to ensure a registry update happens while
not blocking out others for an unnecessarily long time. Additionally any
number of processes can read the index.
* When downloading crates, each downloaded crate is individually locked. A lock
for the downloaded crate implies a lock on the output directory as well.
Because downloaded crates are immutable, once the downloaded directory exists
the lock is no longer needed as it won't be modified, so it can be released.
This granularity of locking allows multiple Cargo instances to download
dependencies in parallel.
* Git repositories have separate locks for the database and for the project
checkout. The database and checkout are locked for read/write access when an
update is performed, and the lock of the checkout is held for the entire
lifetime of the git source. This is done to ensure that any other Cargo
processes must wait while we use the git repository. Unfortunately there's
just not that much parallelism here.
* Binaries managed by `cargo install` are locked by the local metadata file that
Cargo manages. This is relatively straightforward.
* The actual artifact output directory is just globally locked for the entire
build. It's hypothesized that running Cargo concurrently in *one directory* is
less of a needed feature than running multiple instances of Cargo globally (for
now at least). It would be possible to have finer-grained
locking here, but that can likely be deferred to a future PR.
So with all of this infrastructure in place, Cargo is now ready to grab some
locks and ensure that you can call it concurrently anywhere at any time and
everything always works out as one might expect.
One interesting question, however, is what does Cargo do on contention? On one
hand Cargo could immediately abort, but this would lead to a pretty poor UI as
any Cargo process on the system could kick out any other. Instead this PR takes
a more nuanced approach.
* First, Cargo tries to acquire each lock without blocking (a "try lock"). If this
succeeds, we're done.
* Next, Cargo prints a message to the console that it's going to block waiting
for a lock. This is done because it's indeterminate how long Cargo will wait
for the lock to become available, and most long-lasting operations in Cargo
have a message printed for them.
* Finally, a blocking acquisition of the lock is issued and we wait for it to
become available.
So all in all this should help Cargo fix any future concurrency bugs with file
locking in a principled fashion while also allowing concurrent Cargo processes
to proceed reasonably across the system.
bors [Thu, 17 Mar 2016 00:50:37 +0000 (17:50 -0700)]
Auto merge of #2484 - alexcrichton:fix-bad-backtrack, r=brson
Fix caching features across backtracking
In the local loop during resolution all variables need to be reset whenever we
backtrack up a frame, but currently the `method` and `features` set are
accidentally left unchanged. Calculate the `method` later and cache `features`
in each frame so we can properly backtrack.
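A toy illustration of the fix (the `Frame` type and its fields are invented for the example, not the resolver's real data structures): any state that must be restored on backtracking lives in the frame itself rather than in loop-local variables.

```rust
#[derive(Debug)]
struct Frame {
    remaining_candidates: Vec<&'static str>, // next candidate to try is at the end
    features: Vec<&'static str>,             // cached per frame, restored on backtrack
}

fn main() {
    let mut stack = vec![Frame {
        remaining_candidates: vec!["1.0.0", "1.0.1"],
        features: vec!["default", "serde"],
    }];

    // Suppose activating "1.0.1" failed somewhere deeper; backtrack to this frame.
    if let Some(frame) = stack.last_mut() {
        frame.remaining_candidates.pop(); // discard the candidate that failed
        // The features to retry with come from the frame itself, not from loop
        // locals that were clobbered while exploring the failed branch.
        println!("retrying {:?} with features {:?}",
                 frame.remaining_candidates.last(), frame.features);
    }
}
```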
bors [Thu, 17 Mar 2016 00:18:53 +0000 (17:18 -0700)]
Auto merge of #2241 - brson:rustflags, r=alexcrichton
Apply RUSTFLAGS arguments to rustc builds
Cargo will use RUSTFLAGS for building everything that is not a build script
or plugin. It does not apply to these targets because they may be for
a different platform than 'normal' builds.
Alex Crichton [Mon, 14 Mar 2016 22:45:05 +0000 (15:45 -0700)]
Fix caching features across backtracking
In the local loop during resolution all variables need to be reset whenever we
backtrack up a frame, but currently the `method` and `features` set are
accidentally left unchanged. Calculate the `method` later and cache `features`
in each frame so we can properly backtrack.
Brian Anderson [Wed, 17 Feb 2016 00:48:03 +0000 (00:48 +0000)]
Apply RUSTFLAGS env var to rustc builds
This passes RUSTFLAGS to rustc builds for the target architecture.
We don't want to pass the RUSTFLAGS args to multiple architectures because
they may contain architecture-specific flags. Ideally, the scheme
we would use would treat plugins and build scripts - which may not
be for the target architecture - consistently. Unfortunately it's
quite difficult in the current Cargo architecture to separately
identify build scripts, plugins and their dependencies from
code used by the target.
So the scheme here is very simple (sketched below):
1) If --target is not specified, RUSTFLAGS applies to all builds.
2) If --target is specified, RUSTFLAGS only applies to builds
with the Kind::Target target kind, which indicates build units
derived from the requested --target.
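A rough sketch of that rule (the function is illustrative; only `Kind::Target` is taken from the description above):

```rust
#[derive(PartialEq)]
enum Kind { Host, Target }

/// Should the contents of RUSTFLAGS be passed to this rustc invocation?
fn rustflags_apply(requested_target: Option<&str>, kind: Kind) -> bool {
    match requested_target {
        // No --target: everything is built for the host, so RUSTFLAGS applies to all builds.
        None => true,
        // --target given: only units built for that target get RUSTFLAGS, so
        // build scripts and plugins (host builds) are left alone.
        Some(_) => kind == Kind::Target,
    }
}

fn main() {
    assert!(rustflags_apply(None, Kind::Host));
    assert!(rustflags_apply(Some("arm-unknown-linux-gnueabihf"), Kind::Target));
    assert!(!rustflags_apply(Some("arm-unknown-linux-gnueabihf"), Kind::Host));
}
```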
Ryan Quattlebaum [Mon, 14 Mar 2016 17:22:17 +0000 (13:22 -0400)]
Suggest the best matching target for cargo run
Targets passed to cargo compile are validated against the package. If
the target exists, it is compiled. If not, cargo will bail and offer a
suggested target name if there is a close match.
The tests create and build/run binaries and examples using filenames
that are close (or not so close) to the target names to verify that
close matching names are suggested to the user.
bors [Mon, 14 Mar 2016 17:08:12 +0000 (10:08 -0700)]
Auto merge of #2468 - TheNeikos:add-warning_if_no_browser, r=alexcrichton
Add warning if no browser
Closes #2371
I am unsure if `println!` is the correct way to print warnings at this stage; since it is not a hard error, returning `Err` seems a bit too strong.
bors [Sat, 12 Mar 2016 20:34:58 +0000 (12:34 -0800)]
Auto merge of #2474 - sbeckeriv:add-some-flair, r=alexcrichton
Add build flair
Dearest Reviewer
I have added the travis-ci build badge to the README. I pushed the image down towards the bottom. I was thinking that most people reading the readme do not care about the build status. I have seen the badge done in different locations and with different titles. I am easy on where it goes. I added it because I found that I was looking for the status when my branch was failing. I have since learned the travis interface.
bors [Sat, 12 Mar 2016 20:08:12 +0000 (12:08 -0800)]
Auto merge of #2421 - sbeckeriv:decolor-messages-426, r=alexcrichton
Dull the errors
This resolves #426
Dearest Reviewer,
I have updated the error messages to use say_status at the shell level. I have also changed say_status to print the message in bold. I do think it looks nice but it does have the side effect of making some seemingly unrelated text bold. I do think it looks better bold but it is also very easy to revert. I have included examples of both.
Thank you,
Becker
Bold: Note the usage is bold.
<img width="1072" alt="screen shot 2016-02-27 at 10 49 05 am" src="https://cloud.githubusercontent.com/assets/12170/13374778/0efd54ec-dd43-11e5-9f02-f0224608132a.png">
No bold:
<img width="885" alt="screen shot 2016-02-27 at 10 46 35 am" src="https://cloud.githubusercontent.com/assets/12170/13374775/fa3a6612-dd42-11e5-9c09-8f23506f5f0c.png">
I updated the error states to use say_status.
Add text to the empty error
The empty error looked odd with the say_status change.
Update all stderr messages
Switch them to format statements and create a helper for the error
status.
bors [Sat, 12 Mar 2016 00:35:41 +0000 (16:35 -0800)]
Auto merge of #2454 - alexcrichton:less-recurse, r=brson
Globally optimize traversal in resolve
Currently when we're attempting to resolve a dependency graph we locally
optimize the order in which we visit candidates for a resolution (most
constrained first). Once a version is activated, however, it will add a whole
mess of new dependencies that need to be activated to the global list, currently
appended at the end.
This unfortunately can lead to pathological behavior. By always popping from the
back and appending to the back of pending dependencies, super constrained
dependencies in the front end up not getting visited for quite a while. This in
turn can cause Cargo to appear to hang for quite a while as it's so aggressively
backtracking.
This commit switches the list of dependencies-to-activate from a `Vec` to a
`BinaryHeap`. The heap is sorted by the number of candidates for each
dependency, with the least candidates first. This ends up massively cutting down
on resolution times in practice whenever `=` dependencies are encountered
because they are resolved almost immediately instead of way near the end if
they're at the wrong place in the graph.
This alteration in traversal order ended up messing up the existing cycle
detection, so that was just removed entirely from resolution and moved to its
own dedicated pass.
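A minimal sketch of that ordering (illustrative only; the dependency names and candidate counts are made up, and this is not the resolver's actual data structure):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // (dependency name, number of candidate versions matching its requirement)
    let pending = [("libc", 3), ("serde", 7), ("winapi", 1)]; // e.g. `winapi = "=0.2.6"` has one
    let mut queue = BinaryHeap::new();
    for (name, candidates) in pending {
        // Reverse turns std's max-heap into a min-heap on the candidate count,
        // so the most constrained dependency is activated first.
        queue.push((Reverse(candidates), name));
    }
    while let Some((Reverse(n), name)) = queue.pop() {
        println!("activate {} ({} candidate(s))", name, n);
    }
}
```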
bors [Wed, 9 Mar 2016 21:05:23 +0000 (13:05 -0800)]
Auto merge of #2420 - alexcrichton:different-metadata, r=brson
Ensure metadata for libs/bins are distinct
It may be the case in the future that the compiler will require that the "salt"
(the `-C metadata` flag) be distinct for all crates with the same name. Right
now a Cargo project with a library and a binary, however, will have the same
salt with the same crate name.
This commit mixes in some extra data to the library's salt to ensure that its
symbols don't clash with the binary's.
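A toy example of the kind of mixing described (the hasher and the extra "lib" marker are assumptions for illustration, not Cargo's actual scheme):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Compute a `-C metadata` salt, mixing in an extra marker for the library.
fn metadata(pkg: &str, version: &str, is_lib: bool) -> String {
    let mut h = DefaultHasher::new();
    pkg.hash(&mut h);
    version.hash(&mut h);
    if is_lib {
        "lib".hash(&mut h); // the extra data mixed into the library's salt
    }
    format!("{:016x}", h.finish())
}

fn main() {
    // Same crate name, different salts, so symbols no longer clash.
    assert_ne!(metadata("foo", "0.1.0", true), metadata("foo", "0.1.0", false));
    println!("-C metadata={}", metadata("foo", "0.1.0", true));
}
```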
bors [Wed, 9 Mar 2016 17:20:11 +0000 (09:20 -0800)]
Auto merge of #2455 - jespino:remove-completed-todo, r=alexcrichton
Removing finished TODO
I think this TODO is finished (the only public field in the whole file is `Layout::path`). If this is part of the TODO, I can set it as private and create a getter.
Alex Crichton [Wed, 9 Mar 2016 00:37:00 +0000 (16:37 -0800)]
Globally optimize traversal in resolve
Currently when we're attempting to resolve a dependency graph we locally
optimize the order in which we visit candidates for a resolution (most
constrained first). Once a version is activated, however, it will add a whole
mess of new dependencies that need to be activated to the global list, currently
appended at the end.
This unfortunately can lead to pathological behavior. By always popping from the
back and appending to the back of pending dependencies, super constrained
dependencies in the front end up not getting visited for quite a while. This in
turn can cause Cargo to appear to hang for quite a while as it's so aggressively
backtracking.
This commit switches the list of dependencies-to-activate from a `Vec` to a
`BinaryHeap`. The heap is sorted by the number of candidates for each
dependency, with the least candidates first. This ends up massively cutting down
on resolution times in practice whenever `=` dependencies are encountered
because they are resolved almost immediately instead of way near the end if
they're at the wrong place in the graph.
This alteration in traversal order ended up messing up the existing cycle
detection, so that was just removed entirely from resolution and moved to its
own dedicated pass.
bors [Fri, 4 Mar 2016 18:22:25 +0000 (18:22 +0000)]
Auto merge of #2438 - jseyfried:subcommands, r=alexcrichton
This PR moves the subcommands in `src/bin` into their own directory and ensures future compatibility with the corrected search paths for non-inline modules (see [Rust PR #32006](https://github.com/rust-lang/rust/pull/32006)).
r? @alexcrichton
bors [Fri, 4 Mar 2016 00:56:13 +0000 (00:56 +0000)]
Auto merge of #2423 - alexcrichton:fix-pkgid-hash, r=brson
All crates being compiled by Cargo are identified by a unique `PackageId` instance. This ID incorporates information such as the name, version, and source the crate came from. Package ids are allowed to have path sources to depend on local crates on the filesystem. The package id itself encodes the path the crate came from.
Historically, however, the "path source" from which these packages are learned had some interesting logic. Specifically, a single source could return many packages within it. In other words, a path source would recursively walk crate definitions and the filesystem attempting to find crates. Each crate returned from a source has the same source id, so all packages under one source path would share the same source id.
This in turn leads to confusing and surprising behavior, for example:
* When crates are compiled the status message indicates the path of the crate root, not the crate being compiled
* When viewed from two different locations (e.g. two different crate roots) the same package would have two different source ids because the id is based on the root location.
This hash mismatch has been [papered over](https://github.com/rust-lang/cargo/pull/1697) in the past to try to fix some spurious recompiles, but it unfortunately [leaked back in](https://github.com/rust-lang/cargo/pull/2279). This is clearly indicative of the "hack" being inappropriate so instead these commits fix the root of the problem.
---
In short, these commits ensure that the package id for a package defined locally has a path that points precisely at that package. This was a relatively invasive change and had ramifications on a few specific portions which now require a bit of extra code to support.
The fundamental change here was to change `PathSource` to be non-recursive by default in terms of what packages it thinks it contains. There are still two recursive use cases, git repositories and path overrides, which are used for backwards compatibility. This meant, however, that the packaging step for a crate no longer has knowledge of other crates in a repository to filter out files from. Some specific logic was added to assist in discovering a git repository as well as filtering out sibling packages.
Another ramification of this patch, however, is that special care needs to be taken when decoding a lockfile. We now need all path dependencies in the lockfile to point precisely at where the path dependency came from, and this information is not encoded in the lock file. The decoding support was altered to do a simple probe of the filesystem to recursively walk path dependencies to ensure that we can match up packages in a lock file to where they're found on the filesystem.
Overall, however, this commit closes #1697 and also addresses servo/servo#9794 where this issue was originally reported.
Alex Crichton [Tue, 1 Mar 2016 16:24:43 +0000 (08:24 -0800)]
Fix all tests with recent changes
The package id for path dependencies now has another path component pointing
precisely to the package being compiled, so lots of tests need their output
matches to get updated.
Alex Crichton [Tue, 1 Mar 2016 16:20:16 +0000 (08:20 -0800)]
Fix some packaging logic in path sources
Currently the packaging logic depends on the old recursive nature of path
sources for a few points:
* Discovery of a git repository of a package.
* Filtering out sibling packages so that only the right set of files is included.
For a non-recursive path source (now essentially the default) we can no longer
assume that we have a listing of all packages. Subsequently this logic was
tweaked to allow:
* Instead of looking for packages at the root of a repo, we look for a
Cargo.toml at the root of a git repository.
* We keep track of all Cargo.toml files found in a repository and prune out all
files which appear to be ancestors of that package.
Alex Crichton [Tue, 1 Mar 2016 06:20:47 +0000 (22:20 -0800)]
Fix decoding lock files with path dependencies
With the previous changes a path dependency must have the precise path to it
listed in its package id. Currently when decoding a lockfile, however, all path
dependencies have the same package id, which unfortunately causes a mismatch.
This commit alters the decoding of a lockfile to perform some simple path
traversals to probe the filesystem to understand where path dependencies are and
set the right package id for the found packages.
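A hypothetical sketch of that probing (the helper, its arguments, and the directory layout are invented for illustration; the real decoding parses the manifests it finds and recurses):

```rust
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};

/// For each path dependency named in a decoded lock file, probe the filesystem
/// for its manifest so its package id can record the precise directory.
fn locate(root: &Path, path_deps: &[(&str, &str)], found: &mut BTreeMap<String, PathBuf>) {
    for &(name, rel) in path_deps {
        let dir = root.join(rel);
        if dir.join("Cargo.toml").is_file() && !found.contains_key(name) {
            found.insert(name.to_string(), dir);
            // A fuller version would parse that manifest here and walk into its
            // own path dependencies as well.
        }
    }
}

fn main() {
    let mut found = BTreeMap::new();
    // e.g. the root package declares `child = { path = "crates/child" }`:
    locate(Path::new("."), &[("child", "crates/child")], &mut found);
    for (name, dir) in &found {
        println!("{} -> {}", name, dir.display());
    }
}
```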
Alex Crichton [Tue, 1 Mar 2016 06:19:24 +0000 (22:19 -0800)]
Ensure overrides use recursive path sources
This mirrors the behavior that they have today. The `load` method for path
sources will by default return a non-recursive `PathSource` which unfortunately
isn't what we want here.
Alex Crichton [Tue, 1 Mar 2016 06:17:28 +0000 (22:17 -0800)]
Remove hacks when hashing package ids
Right now there are a few hacks here and there to "correctly" hash package ids by
taking a package's root path into account instead of the path stored in the
package id. The purpose of this was to solve issues where the same package
referenced from two locations ended up having two different hashes.
This hack leaked, however, into the implementation of fingerprints, which in
turn ended up causing spurious rebuilds. Fix this problem once and for all by
just defining hashing on package ids the natural and expected way.
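As a rough illustration (the field names and hasher are assumptions, not Cargo's actual definitions), the natural approach is simply to hash the identifying fields, path included, with no special cases:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

#[derive(Hash)]
struct PackageId {
    name: String,
    version: String,
    source_path: PathBuf, // for path sources: the package's own directory
}

fn hash_of(id: &PackageId) -> u64 {
    let mut h = DefaultHasher::new();
    id.hash(&mut h); // no special-casing of workspace roots or parent paths
    h.finish()
}

fn main() {
    let id = PackageId {
        name: "bar".to_string(),
        version: "0.5.0".to_string(),
        source_path: PathBuf::from("/work/foo/crates/bar"),
    };
    // The same package referenced from anywhere now hashes the same way.
    println!("{:x}", hash_of(&id));
}
```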
bors [Thu, 3 Mar 2016 18:20:20 +0000 (18:20 +0000)]
Auto merge of #2433 - alexcrichton:fix-lines-match, r=alexcrichton
Right now we only match a suffix of the line, assuming all lines start with
`[..]`. Instead this ensures that the first match is anchored at the start.
Alex Crichton [Thu, 3 Mar 2016 18:18:02 +0000 (10:18 -0800)]
Fix output matching in tests
Right now we only match a suffix of the line, assuming all lines start with
`[..]`. Instead this ensures that the first match is anchored at the start.
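A minimal sketch of a matcher with that behavior (not the actual test-support code; the function and examples are illustrative): split the pattern on `[..]` and require the first literal fragment to match at position zero.

```rust
fn lines_match(pattern: &str, actual: &str) -> bool {
    let mut rest = actual;
    for (i, part) in pattern.split("[..]").enumerate() {
        match rest.find(part) {
            // The first fragment must be anchored at the start of the line,
            // not merely found somewhere inside it.
            Some(j) if i > 0 || j == 0 => rest = &rest[j + part.len()..],
            _ => return false,
        }
    }
    // Unless the pattern ends with `[..]`, the whole line must be consumed.
    pattern.ends_with("[..]") || rest.is_empty()
}

fn main() {
    assert!(lines_match("[..]warning: unused[..]", "src/lib.rs:1 warning: unused import"));
    assert!(lines_match("Compiling foo[..]", "Compiling foo v0.1.0"));
    // Without anchoring the first fragment this would (wrongly) have matched:
    assert!(!lines_match("Compiling foo[..]", "   Compiling foo v0.1.0"));
}
```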