Edward Thomson [Mon, 12 Jun 2017 11:01:10 +0000 (12:01 +0100)]
repository_item_path: return ENOTFOUND when appropriate
Disambiguate error values: return `GIT_ENOTFOUND` when the item cannot
exist in the repository (perhaps because the repository is in-memory or
otherwise not backed by a filesystem), return `-1` when there is a hard
failure.
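A minimal sketch of how a caller might act on the disambiguated values
(assuming an open `git_repository *repo`; the item constant is just an
example):

    git_buf path = GIT_BUF_INIT;
    int error = git_repository_item_path(&path, repo, GIT_REPOSITORY_ITEM_HOOKS);

    if (error == GIT_ENOTFOUND) {
        /* the item cannot exist here, e.g. an in-memory repository */
    } else if (error < 0) {
        /* hard failure: allocation error, invalid input, ... */
    }

    git_buf_free(&path);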
Edward Thomson [Thu, 8 Jun 2017 21:23:53 +0000 (22:23 +0100)]
checkout: cope with untracked files in directory deletion
When deleting a directory during checkout, do not simply delete the
directory, since there may be untracked files. Instead, go into
the iterator and examine each file.
In the original code (the code with the faulty assumption), we look to
see if there's an index entry beneath the directory that we want to
remove, e.g. whether we have a workdir entry foo and an
index entry foo/bar.txt. If this is not the case, then the working
directory must have precious files in that directory. This part is okay.
The part that's not okay is if there is an index entry foo/bar.txt. It
just blows away the whole damned directory.
That's not cool.
Instead, by simply pushing the directory itself onto the stack and
iterating each entry, we will deal with the files one by one - whether
they're in the index (and can be force removed) or not (and are
precious).
The original code was a bad optimization, assuming that we didn't need
to git_iterator_advance_into if there was any index entry in the folder.
That's wrong - we could have optimized this iff all folder entries are
in the index.
Instead, we need to simply dig into the directory and analyze its
entries.
checkout: do not delete directories with untracked entries
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
Initially, the setting was solely used to enable the use of
`fsync()` when creating objects. Since then, its use has been extended
to also cover references and index files. As the option is not yet part
of any release, we can still correct this by renaming the option to
something more sensible, indicating that it is not only related to objects.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to repository source code.
The `git_buf_init` function has an optional length parameter, which will
cause the buffer to be initialized and allocated in one step. This can
be used instead of static initialization with `GIT_BUF_INIT` followed by
a `git_buf_grow`. This patch does so for two functions where it is
applicable.
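A minimal sketch of the two equivalent initializations with this
internal API:

    git_buf a = GIT_BUF_INIT;   /* static initialization ... */
    git_buf_grow(&a, 128);      /* ... followed by an explicit grow */

    git_buf b;
    git_buf_init(&b, 128);      /* initialize and allocate in one step */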
buffer: return errors for `git_buf_init` and `git_buf_attach`
Both the `git_buf_init` and `git_buf_attach` functions may call
`git_buf_grow` in case they were given an allocation length as
parameter. As such, it is possible for these functions to fail when we
run out of memory. While it probably won't be used anytime soon, it does
indeed make sense to also record this fact by returning an error code
from both functions. As they belong to the internal API only, this
change does not break our interface.
buffer: consistently use `ENSURE_SIZE` to grow buffers on-demand
The `ENSURE_SIZE` macro can be used to grow a buffer if its currently
allocated size does not suffice for a required target size. While most of
the code already uses this macro, the `git_buf_join` and `git_buf_join3`
functions do not yet use it. As the macro first checks whether the
buffer has to be grown at all, it has the benefit of saving a
function call when no growth is needed. While this is nice to have, it will
probably not matter at all performance-wise -- instead, this only serves
for consistency across the code.
While the `ENSURE_SIZE` macro gets a reference to both the buffer that
is to be resized and a new size, we were not consistently referencing
the passed buffer, but instead a variable `buf`, which is not passed in.
Funnily enough, we never noticed because our buffers seem to always be
named `buf` whenever the macro was being used.
Fix the macro by always using the passed-in buffer. While at it, add
braces around all mentions of passed-in variables as should be done with
macros to avoid subtle errors.
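A sketch of the corrected macro, assuming libgit2's internal buffer
layout (`ptr`, `asize`) and the `git_buf__oom` sentinel:

    #define ENSURE_SIZE(b, d) \
        if ((b)->ptr == git_buf__oom || \
            ((d) > (b)->asize && git_buf_grow((b), (d)) < 0)) \
            return -1;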
buffer: rely on `GITERR_OOM` set by `git_buf_try_grow`
The function `git_buf_try_grow` consistently calls `giterr_set_oom`
whenever growing the buffer fails due to insufficient memory being
available. So in fact, we do not have to do this ourselves when a call
to any buffer-growing function has failed due to an OOM situation. But
we still do so in two functions, which this patch cleans up.
The updated SHA1DC library allows us to use custom includes instead of
using standard includes. Due to cross-platform requirements, we
provide some custom system include files, for example the
"stdint.h" file on Win32. Because of this, we want to make sure to avoid
breaking cross-platform compatibility when SHA1DC is enabled.
To use the new mechanism, we can simply define
`SHA1DC_NO_STANDARD_INCLUDES`. Furthermore, we can specify custom
include files via two defines, which we now use to include our
"common.h" header.
OpenSSL v1.1 has introduced a new way of initializing the library
without having to call various functions of different subsystems. In
libgit2, we have been adapting to that change with 88520151f
(openssl_stream: use new initialization function on OpenSSL version
>=1.1, 2017-04-07), where we added an #ifdef depending on the OpenSSL
version. This change broke building with LibreSSL, though, which has not
changed its API in the same way.
Fix the issue by expanding the #ifdef condition to use the old way of
initializing with LibreSSL.
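A sketch of the expanded condition; LibreSSL defines
`LIBRESSL_VERSION_NUMBER` while also reporting a high
`OPENSSL_VERSION_NUMBER`, so it has to be excluded explicitly:

    #if OPENSSL_VERSION_NUMBER >= 0x10100000L && \
        !defined(LIBRESSL_VERSION_NUMBER)
        OPENSSL_init_ssl(0, NULL);       /* new combined initializer */
    #else
        SSL_load_error_strings();        /* old per-subsystem calls */
        SSL_library_init();
    #endif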
tests: index::version: improve write test for index v4
The current write test does not trigger some edge-cases in the index
version 4 path compression code. Rewrite the test to start from an
empty standard repository, creating index entries with interesting paths
itself. This allows for more fine-grained control over checked paths.
Furthermore, we now also verify that entry paths are actually
reconstructed correctly.
tests: index::version: verify we write compressed index entries
While we do have a test which checks whether a written index of version
4 has the correct version set, we do not check whether this actually
enables path compression for index entries. This commit adds a new test
by adding a number of index entries with equal path prefixes to the
index and subsequently flushing that to disk. With prefix compression
enabled by index version 4, only the last few bytes of these paths will
actually have to be written to the index, saving a lot of disk space.
For the test, differences are about an order of magnitude, allowing us
to easily verify without taking a deeper look at actual on-disk
contents.
tests: index::version: add test to read index version v4
While we have a simple test to determine whether we can write an index
of version 4, we never verified that we are able to read this kind of
index (and in fact, we were not able to do so). Add a new repository
which has an index of version 4. This repository is then read from a new
test.
The init and cleanup functions for test suites are usually prepended to
our actual tests. The index::version test suite does not adhere to this
style. Fix this.
index: verify we have enough space left when writing index entries
In our code writing index entries, we carry around a `disk_size`
representing how much memory we have in total and pass this value to
`git_encode_varint` to do bounds checks. This does not make much sense,
as the value is already out of date by the time it is passed on.
Fix this by subtracting used memory from `disk_size` as we go along.
Furthermore, assert we've actually got enough space left to do the final
path memcpy.
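A sketch of the bookkeeping, with illustrative variable names:

    written = git_encode_varint(path, disk_size, value);
    if (written < 0)
        return -1;

    path += written;
    disk_size -= written;   /* later bounds checks see the true remainder */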
index: fix shared prefix computation when writing index entry
When using compressed index entries, each entry's path is preceded by a
varint encoding how long the shared prefix with the previous index entry
actually is. We currently encode a length of `(path_len - same_len)`,
which is doubly wrong. First, `path_len` is already set to `path_len -
same_len` previously. Second, we want to encode the shared prefix rather
than the un-shared suffix length.
Fix this by using `same_len` as the varint value instead.
index: also sanity check entry size with compressed entries
We have a check in place whether the index has enough data left for the
required footer after reading an index entry, but this was only used for
uncompressed entries. Move the check down a bit so that it is executed
for both compressed and uncompressed index entries.
All index entry size computations are now performed in
`index_entry_size`. As such, we do not need the file-scope macros for
computing these sizes anymore. Remove them and move the `entry_size`
macro into the `index_entry_size` function.
index: don't right-pad paths when writing compressed entries
Our code to write index entries to disk does not check whether the
entry that is to be written should use prefix compression for the path.
As such, we were overallocating memory and added bogus right-padding
into the resulting index entries. As there is no padding allowed in the
index version 4 format, this should actually result in an invalid index.
Fix this by re-using the newly extracted `index_entry_size` function.
index: move index entry size computation into its own function
Create a new function `index_entry_size` which encapsulates the logic to
calculate how much space is needed for an index entry, whether it is
simple/extended or compressed/uncompressed. This can later be re-used by
our code writing index entries.
index: set last written index entry in foreach-entry-loop
The last written disk entry is currently being set inside of the
function `write_disk_entry`. Make the behavior a bit more obvious by
instead setting it inside of `write_entries` while iterating all
entries.
index: set last entry when reading compressed entries
To calculate the path of a compressed index entry, we need to know the
preceding entry's path. While we do actually set the first predecessor
correctly to "", we fail to update this while reading the entries.
Fix the issue by updating `last` inside of the loop. Previously, we've
been passing a double-pointer to `read_entry`, which it didn't update.
As it is more obvious to update the pointer inside the loop itself,
though, we can simply convert it to a normal pointer.
index: fix confusion with shared prefix in compressed path names
The index version 4 introduced compressed path names for the entries.
From the git.git index-format documentation:
    At the beginning of an entry, an integer N in the variable width
    encoding [...] is stored, followed by a NUL-terminated string S.
    Removing N bytes from the end of the path name for the previous
    entry, and replacing it with the string S yields the path name for
    this entry.
But instead of stripping N bytes from the previous path's string and
using the remaining prefix, we were instead simply concatenating the
previous path with the current entry path, which is obviously wrong.
Fix the issue by correctly copying the first N bytes of the previous
entry only and concatenating the result with our current entry's path.
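A self-contained sketch of the corrected reconstruction, using made-up
example paths:

    #include <stdio.h>
    #include <string.h>

    /* Rebuild an entry path from the previous path, the varint N
     * (bytes to strip from the end of `prev`) and the suffix S. */
    static void reconstruct_path(char *out, const char *prev,
                                 size_t strip, const char *suffix)
    {
        size_t keep = strlen(prev) - strip;  /* shared prefix length */

        memcpy(out, prev, keep);
        strcpy(out + keep, suffix);
    }

    int main(void)
    {
        char path[64];

        /* prev = "foo/first.txt", N = 9 strips "first.txt",
         * S = "second.txt" gives "foo/second.txt" */
        reconstruct_path(path, "foo/first.txt", 9, "second.txt");
        puts(path);

        return 0;
    }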
varint: fix computation for remaining buffer space
When encoding varints to a buffer, we want to remain sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it does not
honor that our `pos` variable counts the position down instead of up. As
such, we will require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
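A sketch modeled on git.git's varint encoder: bytes are produced
back-to-front into a scratch buffer, so `pos` counts down and the
encoded length is `sizeof(varint) - pos`.

    #include <stdint.h>
    #include <string.h>

    int git_encode_varint(unsigned char *buf, size_t bufsize, uintmax_t value)
    {
        unsigned char varint[16];
        unsigned pos = sizeof(varint) - 1;

        /* emit the least significant group last; continuation
         * bytes get their high bit set */
        varint[pos] = value & 127;
        while (value >>= 7)
            varint[--pos] = 128 | (--value & 127);

        if (buf) {
            /* required space is what we used of the scratch area */
            if (bufsize < (sizeof(varint) - pos))
                return -1;
            memcpy(buf, varint + pos, sizeof(varint) - pos);
        }

        return (int)(sizeof(varint) - pos);
    }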
Chris Hescock [Wed, 11 Jan 2017 15:39:59 +0000 (10:39 -0500)]
indexer: name pack files after trailer hash
Upstream git.git has changed the way packfiles are named.
Previously, they were using a hash of the contained objects' OIDs, which
has then been changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
repository: make check if repo is a worktree more strict
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where there exists a gitdir file
inside of a gitmodule. This will cause us to misidentify the submodule
as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree iff its common
directory and dotgit directory are different. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
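A minimal sketch of the stricter check (function and parameter names
are illustrative):

    #include <string.h>

    static int is_worktree(const char *gitdir, const char *commondir)
    {
        /* a plain repository (and a submodule) has identical paths;
         * only a linked worktree has a distinct common directory */
        return strcmp(gitdir, commondir) != 0;
    }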
The check whether a repository is a worktree or not is currently done
inside of `git_repository_open_ext`. As we want to extend this function
later on, pull it out into its own function `repo_is_worktree` to ease
working on it.
repository: improve parameter names for `find_repo`
The out-parameters of `find_repo` containing found paths of a repository
are a tad confusing, as they are not as obvious as they could be. Rename
them to make the code easier to follow.
repository: clear out-parameter instead of freeing it
The `path` out-parameter of `find_repo` is being sanitized initially
such that we do not try to append to existing content. The sanitization
is done via `git_buf_free`, though, which forces us to needlessly
reallocate the buffer later in the function. Fix this by using
`git_buf_clear` instead.
The fields `declared_size` and `received_bytes` of the `git_odb_stream`
are both of type `git_off_t` which is defined as a signed integer. When
passing these values to a printf-style format string in
`git_odb_stream__invalid_length`, though, we format them as PRIuZ,
which is unsigned.
Fix the issue by using PRIdZ instead, silencing warnings on macOS.
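A sketch of the corrected format string (the message text is
illustrative):

    giterr_set(GITERR_ODB,
        "cannot write object: declared size %"PRIdZ", received %"PRIdZ,
        stream->declared_size, stream->received_bytes);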
The credentials callback reads the username and password via scanf into
fixed-length arrays. While these are simply examples and as such not as
interesting, the unchecked return value of scanf causes GCC to emit
warnings. So while we're busy shutting up GCC, we also fix the possible
overflow of scanf by using getline instead.
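A sketch of the getline-based replacement (the helper name is made up;
getline() is POSIX.1-2008):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* read one line of arbitrary length; the caller frees *out */
    static int prompt_line(char **out, const char *prompt)
    {
        char *line = NULL;
        size_t alloc = 0;
        ssize_t len;

        printf("%s", prompt);
        if ((len = getline(&line, &alloc, stdin)) < 0) {
            free(line);
            return -1;   /* EOF or read error is now detected */
        }

        if (len > 0 && line[len - 1] == '\n')
            line[len - 1] = '\0';   /* strip the trailing newline */

        *out = line;
        return 0;
    }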
odb: shut up gcc warnings regarding uninitialized variables
The `error` variable is used as a return value in the out-section of
both `odb_read_1` and `read_prefix_1`. While the value will actually
always be initialized inside of this section, GCC fails to realize this
due to interactions with the `found` variable: if `found` is set, the
error will always be initialized. If it is not, we return early without
reaching the out-statements.
Shut up the warnings by initializing the error variable, even though it
is unnecessary.
worktree: switch over worktree pruning to an opts structure
The current signature of `git_worktree_prune` accepts a flags field to
alter its behavior. This is not as flexible as we'd like it to be when
we want to enable passing additional options in the future. As the
function has not been part of any release yet, we are still free to
alter its current signature. This commit does so by using our usual
pattern of an options structure, which is easily extendable without
breaking the API.
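A sketch of the resulting pattern, following libgit2's usual
versioned-options convention (the exact field set may differ):

    typedef struct git_worktree_prune_options {
        unsigned int version;   /* struct version, for ABI stability */
        uint32_t flags;         /* GIT_WORKTREE_PRUNE_* flags */
    } git_worktree_prune_options;

    #define GIT_WORKTREE_PRUNE_OPTIONS_VERSION 1
    #define GIT_WORKTREE_PRUNE_OPTIONS_INIT \
        {GIT_WORKTREE_PRUNE_OPTIONS_VERSION, 0}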
When creating a new worktree, there is a potential race between us
creating the worktree and another process trying to delete it while it
is still being created. As such, the upstream git project has
introduced the flag `git worktree add --locked`, which will cause the
newly created worktree to be locked immediately after its creation. This
mitigates the race condition.
We want to be able to mirror the same behavior. As such, a new flag
`locked` is added to the options structure of `git_worktree_add` which
allows the user to enable this behavior.
tests: repo: fix repo discovery tests on overlayfs
Debian and Ubuntu often use schroot to build their DEB packages in a
controlled environment. Depending on how schroot is configured, our
tests regarding repository discovery break due to not being able to find
the repositories anymore. It turns out that these errors occur when the
schroot is configured to use an overlayfs on the directory structures.
The reason for this failure is that we usually refrain from discovering
repositories across devices. But unfortunately, overlayfs does not have
consistent device identifiers for all its files but will instead use the
device number of the filesystem the file stems from. So whenever we
cross boundaries between the upper and lower layer of the overlay, we
will fail to properly detect the repository and bail out.
This commit fixes the issue by enabling cross-device discovery in our
tests. While it would be preferable to have this turned off, it probably
won't do much harm anyway as we set up our tests in a temporary location
outside of the parent repository.
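For illustration, the public discovery API exposes this switch
directly (the paths here are made up):

    git_buf found = GIT_BUF_INIT;

    /* the third parameter (`across_fs`) allows discovery to cross
     * device boundaries, which overlayfs setups require */
    error = git_repository_discover(&found, "/tmp/sandbox/repo/subdir",
                                    1 /* across_fs */, NULL);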
After calling `libssh2_init`, we need to clean up after the library by
executing `libssh2_exit` on shutdown. Register a shutdown handler
to do so which simply calls `libssh2_exit`. This fixes several memory
leaks.
We unconditionally return success when initializing libssh2, regardless
of whether `libssh2_init` signals success or an error. Fix this by
checking its return code.
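A sketch of the combined fix; `git__on_shutdown` is libgit2's internal
shutdown hook (assumed here):

    #include <libssh2.h>

    static void shutdown_ssh(void)
    {
        libssh2_exit();
    }

    int git_transport_ssh_global_init(void)
    {
        /* libssh2_init returns 0 on success, negative on error */
        if (libssh2_init(0) < 0) {
            giterr_set(GITERR_SSH, "unable to initialize libssh2");
            return -1;
        }

        git__on_shutdown(shutdown_ssh);   /* run libssh2_exit at shutdown */
        return 0;
    }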
The `git_worktree_add` function currently accepts only a path and name
for the new work tree. As we may want to expand these parameters in
future versions without adding additional parameters to the function for
every option, this commit introduces our typical pattern of an options
struct. Right now, this structure is still empty, which will change with
the next commit.
While the function reading an object from the complete OID already
verifies OIDs, we do not yet do so for reading objects from a partial
OID. Do so when strict OID verification is enabled.
The read_prefix_1 function has several return statements sprinkled
throughout the code. As we have to free memory upon getting an error,
the free code has to be repeated at every single return -- which it is
not, so we have a memory leak here.
Refactor the code to use the typical `goto out` pattern, which will free
data when an error has occurred. While we're at it, we can also improve
the error message thrown when multiple ambiguous prefixes are found. It
will now include the colliding prefixes.
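A generic sketch of the pattern (the two helper calls are
placeholders):

    #include <stdlib.h>

    static int read_prefix_example(void **out)
    {
        void *data = NULL;
        int error;

        if ((error = backend_lookup(&data)) < 0)    /* placeholder */
            goto out;
        if ((error = validate_object(data)) < 0)    /* placeholder */
            goto out;

        *out = data;
        data = NULL;   /* ownership transferred to the caller */

    out:
        free(data);    /* single cleanup site replaces per-return frees */
        return error;
    }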
Verifying hashsums of objects we are reading from the ODB may be costly
as we have to perform an additional hashsum calculation on the object.
Especially when reading large objects, the penalty can be as high as
35%, as can be seen when executing the equivalent of `git cat-file` with
and without verification enabled. To mitigate this, we add a global
option for libgit2 which enables the developer to turn off the
verification, e.g. when they can be reasonably sure that the objects on
disk won't be corrupted.
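A sketch of flipping the switch via `git_libgit2_opts`, assuming the
option name introduced by this series:

    /* disable hashsum verification of objects read from the ODB */
    git_libgit2_opts(GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION, 0);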
The upstream git.git project verifies objects when looking them up from
disk. This avoids scenarios where objects have somehow become corrupt on
disk, e.g. due to hardware failures or bit flips. While our mantra is
usually to follow upstream behavior, we do not do so in this case, as we
never check hashes of objects we have just read from disk.
To fix this, we create a new error class `GIT_EMISMATCH` which denotes
that we have looked up an object with a hashsum mismatch. `odb_read_1`
will then, after having read the object from its backend, hash the
object and compare the resulting hash to the expected hash. If hashes do
not match, it will return an error.
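A sketch of that check, with variable names assumed from the
surrounding code:

    git_oid actual;

    /* hash what the backend handed back and compare it against the
     * OID the caller asked for */
    if ((error = git_odb_hash(&actual, raw.data, raw.len, raw.type)) < 0)
        goto out;

    if (git_oid_cmp(id, &actual) != 0) {
        error = GIT_EMISMATCH;   /* looked-up object is corrupt */
        goto out;
    }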
This obviously introduces another computation of checksums and could
potentially impact performance. Note though that we usually perform I/O
operations directly before doing this computation, and as such the
actual overhead should be drowned out by I/O. Running our test suite
seems to confirm this guess. On a Linux system with best-of-five
timings, we had 21.592s with the check enabled and 21.590s with the
check disabled. Note though that our test suite mostly contains very
small blobs only. It is expected that repositories with bigger blobs may
notice an increased hit by this check.
In addition to a new test, we also had to change the
odb::backend::nonrefreshing test suite, which now triggers a hashsum
mismatch when looking up the commit "deadbeef...". This is expected, as
the fake backend allocated inside of the test will return an empty
object for the OID "deadbeef...", which will obviously not hash back to
"deadbeef..." again. We can simply adjust the hash to equal the hash of
the empty object here to fix this test.
We currently have no tests which check whether we fail reading corrupted
objects. Add one which modifies contents of an object stored on disk and
then tries to read the object.
The object::lookup tests do use the "testrepo.git" repository in a
read-only way, so we do not set up the repository as a sandbox but
simply open it. But in a future commit, we will want to test looking up
objects which are corrupted in some way, which requires us to modify the
on-disk data. Doing this in a repository without creating the sandbox
will modify contents of our libgit2 repository, though.