Stefan Behrens [Fri, 27 Apr 2012 16:41:46 +0000 (12:41 -0400)]
Btrfs: fix block_rsv and space_info lock ordering
may_commit_transaction() calls
	spin_lock(&space_info->lock);
	spin_lock(&delayed_rsv->lock);
and update_global_block_rsv() calls
	spin_lock(&block_rsv->lock);
	spin_lock(&sinfo->lock);
Lockdep complains about this at run time.
Everywhere except in update_global_block_rsv(), the space_info lock is
the outer lock; therefore the locking order in update_global_block_rsv()
is changed.
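For illustration, a minimal user-space sketch of the ordering rule being
enforced (pthread mutexes standing in for the btrfs spinlocks; all names
here are made up, this is not the actual btrfs code):

	#include <pthread.h>

	/* Rule: the space_info lock is always the outer lock,
	 * the block_rsv lock the inner one. */
	static pthread_mutex_t space_info_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t block_rsv_lock  = PTHREAD_MUTEX_INITIALIZER;

	static void update_reserve(long *rsv_size, long *bytes_used, long delta)
	{
		pthread_mutex_lock(&space_info_lock);	/* outer lock first */
		pthread_mutex_lock(&block_rsv_lock);	/* inner lock second */
		*rsv_size   += delta;
		*bytes_used += delta;
		pthread_mutex_unlock(&block_rsv_lock);
		pthread_mutex_unlock(&space_info_lock);
	}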
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Jan Schmidt [Fri, 27 Apr 2012 16:41:45 +0000 (12:41 -0400)]
Btrfs: fix repair code for RAID10
btrfs_map_block sets mirror_num, so that the repair code knows eventually
which device gave us the read error. For RAID10, mirror_num must be 1 or 2.
Before this fix mirror_num was incorrectly related to our stripe index.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Josef Bacik [Tue, 24 Apr 2012 00:35:03 +0000 (20:35 -0400)]
Btrfs: do not start delalloc inodes during sync
btrfs_start_delalloc_inodes will just walk the list of delalloc inodes and
start writing them out, but it doesn't splice the list or anything, so as
long as somebody is doing work on the box you could end up in this section
_forever_. So just remove it; it's not needed, since sync will start
writeback on all inodes anyway. All we need to do is wait for ordered
extents and then we can commit the transaction. In my horrible torture test
sync goes from taking 4 minutes to about 1.5 minutes. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Stefan Behrens [Fri, 30 Mar 2012 11:58:32 +0000 (13:58 +0200)]
Btrfs: fix that check_int_data mount option was ignored
The bitfield member mount_opt was one bit too small to hold the mount
option that enables including data extents in the integrity checker.
Since the same issue happened when the BTRFS_MOUNT_PANIC_ON_FATAL_ERROR
option was added (a git rebase silently merges such changes so that the
increase in the size of the bitfield member is lost), the bit limit was
removed entirely.
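An illustrative user-space sketch of the failure mode (the 22-bit width
and the flag value are hypothetical, not the actual btrfs definitions):

	#include <stdio.h>

	#define OPT_CHECK_INTEGRITY_DATA (1U << 22)	/* hypothetical new option */

	struct opts {
		unsigned int mount_opt:22;	/* one bit too narrow for the new flag */
	};

	int main(void)
	{
		struct opts o = { 0 };

		o.mount_opt |= OPT_CHECK_INTEGRITY_DATA;	/* bit 22 is silently truncated */
		printf("option set? %s\n",
		       (o.mount_opt & OPT_CHECK_INTEGRITY_DATA) ? "yes" : "no");
		return 0;
	}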
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Stefan Behrens [Mon, 19 Mar 2012 15:17:22 +0000 (16:17 +0100)]
Btrfs: fix btrfs_ioctl_dev_info() crash on missing device
When a filesystem is mounted with the degraded option, it is
possible that some of the devices are not there.
btrfs_ioctl_dev_info() crashes in this case because the device
name is a NULL pointer. This ioctl was only used for scrub.
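A hedged sketch of the kind of guard involved (the buffer and parameter
names are invented, not the exact ioctl code):

	#include <stddef.h>
	#include <string.h>

	/* Copy the device path into the reply buffer only when the device is
	 * actually present; with -o degraded the name may be NULL. */
	static void fill_dev_path(char *dst, size_t dst_len, const char *dev_name)
	{
		if (dev_name) {
			strncpy(dst, dev_name, dst_len - 1);
			dst[dst_len - 1] = '\0';
		} else {
			dst[0] = '\0';
		}
	}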
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Arne Jansen [Wed, 18 Apr 2012 08:27:16 +0000 (10:27 +0200)]
btrfs: don't return EINTR
It is basically a good thing if we are interruptible when waiting for
free space, but the generality in which it is currently implemented
leads to system calls being interruptible that are not documented as
such. For example, git can't handle an interrupted unlink(), leading to
corrupt repos under space pressure.
Instead we raise the bar to only be interruptible by SIGKILL.
Thanks to David Sterba for suggesting this.
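A sketch of the kernel idiom this moves toward (illustrative only: the
structure and helper names are invented; only the wait and signal
primitives are real kernel APIs):

	/* Wait for free space, waking up only for a fatal signal (SIGKILL). */
	static int wait_for_free_space(struct space_ctl *ctl, u64 bytes)
	{
		DEFINE_WAIT(wait);
		int ret = 0;

		for (;;) {
			prepare_to_wait(&ctl->wait, &wait, TASK_KILLABLE);
			if (space_available(ctl, bytes))
				break;
			if (fatal_signal_pending(current)) {	/* SIGKILL only */
				ret = -EINTR;
				break;
			}
			schedule();
		}
		finish_wait(&ctl->wait, &wait);
		return ret;
	}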
Josef Bacik [Mon, 16 Apr 2012 13:42:26 +0000 (09:42 -0400)]
Btrfs: always store the mirror we read the eb from
A user reported a panic where we were trying to fix a bad mirror but the
mirror number we were giving was 0, which is invalid. This is because we
don't do the transid verification until after the read, so as far as the
read code is concerned the read was a success. So instead store the mirror
we read from so that if there is some failure post read we know which mirror
to try next and which mirror needs to be fixed if we find a good copy of the
block. Thanks,
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Josef Bacik <josef@redhat.com>
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Btrfs: fix max chunk size check in chunk allocator
Fix a bug where, in the case that we need to adjust stripe_size so that
the length of the resulting chunk is less than or equal to max_chunk_size,
DUP chunks turn out to be only half as big as they could be.
Cc: Arne Jansen <sensille@gmx.net>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Jan Schmidt [Fri, 13 Apr 2012 10:28:08 +0000 (12:28 +0200)]
Btrfs: add missing read locks in backref.c
iref_to_path and iterate_irefs both increment the eb's refcount to use it
after releasing the path. Both depend on consistent data remaining in the
extent buffer and need a read lock to protect it.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Jan Schmidt [Fri, 13 Apr 2012 10:28:00 +0000 (12:28 +0200)]
Btrfs: don't call free_extent_buffer twice in iterate_irefs
Avoid calling free_extent_buffer more than once when the iterator function
returns non-zero. The only code that uses this is scrub repair for corrupted
nodatasum blocks.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Btrfs: Make free_ipath() deal gracefully with NULL pointers
Make free_ipath() behave like most other freeing functions in the
kernel and gracefully do nothing when passed a NULL pointer.
Besides making the behaviour consistent with functions such as
kfree(), vfree(), btrfs_free_path() etc., it also fixes a real NULL
deref issue in fs/btrfs/ioctl.c::btrfs_ioctl_ino_to_path(). In that
function the error handling looks roughly like this (a reconstructed
sketch, not the verbatim source):
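	/* reconstructed sketch of the error path, not verbatim */
	ipath = init_ipath(size, root, path);
	if (IS_ERR(ipath)) {
		ret = PTR_ERR(ipath);
		ipath = NULL;
		goto out;
	}
	...
out:
	btrfs_free_path(path);
	free_ipath(ipath);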
If we ever take the true branch of that 'if' statement we'll end up
passing a NULL pointer to free_ipath() which will subsequently
dereference it and we'll go "Boom" :-(
This patch will avoid that.
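For reference, the NULL-tolerant pattern in a minimal user-space form
(simplified types, not the actual btrfs definitions):

	#include <stdlib.h>

	struct inode_fs_paths { char **paths; };

	static void free_ipath(struct inode_fs_paths *ipath)
	{
		if (!ipath)	/* behave like kfree(NULL): silently do nothing */
			return;
		free(ipath->paths);
		free(ipath);
	}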
Li Zefan [Mon, 12 Mar 2012 08:39:48 +0000 (16:39 +0800)]
Btrfs: avoid possible use-after-free in clear_extent_bit()
clear_extent_bit()
{
	next_node = rb_next(&state->rb_node);
	...
	clear_state_bit(state);   <-- this may free next_node
	if (next_node) {
		state = rb_entry(next_node);
		...
	}
}
clear_state_bit() calls merge_state() which may free the next node
of the passing extent_state, so clear_extent_bit() may end up
referencing freed memory.
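A minimal user-space analogue of the hazard and the safe pattern (a plain
linked list standing in for the rb-tree; all names are invented):

	#include <stdlib.h>

	struct node {
		struct node *next;
		int val;
	};

	/* May merge cur with its successor and free that successor,
	 * analogous to what merge_state() can do. */
	static void process(struct node *cur)
	{
		struct node *nxt = cur->next;

		if (nxt && nxt->val == cur->val) {
			cur->next = nxt->next;
			free(nxt);
		}
	}

	static void walk(struct node *head)
	{
		struct node *state = head;

		while (state) {
			process(state);
			state = state->next;	/* re-read the link only after
						 * process(); never cache it before */
		}
	}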
Arne Jansen [Sat, 25 Feb 2012 08:09:47 +0000 (09:09 +0100)]
btrfs: don't add both copies of DUP to reada extent tree
Normally when there are 2 copies of a block, we add both to the
reada extent tree and prefetch only the one that is easier to reach.
This way we can better utilize multiple devices.
In case of DUP this makes no sense as both copies reside on the
same device.
Arne Jansen [Sat, 25 Feb 2012 08:09:30 +0000 (09:09 +0100)]
btrfs: fix race in reada
When inserting into the radix tree returns EEXIST, get the existing
entry without giving up the spinlock in between.
There was a race for both the zones trees and the extent tree.
Li Zefan [Tue, 21 Feb 2012 09:04:28 +0000 (17:04 +0800)]
Btrfs: avoid setting ->d_op twice
Follow these steps and you'll trigger a warning at the beginning of
d_set_d_op():
# mkfs.btrfs /dev/loop3
# mount /dev/loop3 /mnt
# btrfs sub create /mnt/sub
# btrfs sub snap /mnt /mnt/snap
# touch /mnt/snap/sub
touch: cannot touch `tmp': Permission denied
__d_alloc() set d_op to sb->s_d_op (btrfs_dentry_operations), and
then simple_lookup() reset it to simple_dentry_operations, which
triggered the warning.
Josef Bacik [Thu, 12 Apr 2012 20:03:57 +0000 (16:03 -0400)]
Btrfs: use commit root when loading free space cache
A user reported that booting his box with a btrfs root on 3.4 was way
slower than on 3.3 because I removed the ideal caching code. It turns out
that we don't load the free space cache if we're in a commit, for deadlock
reasons, but since we're reading the cache and it hasn't changed yet we are
safe reading the inode and free space item from the commit root. So do that,
and remove all of the deadlock checks so we don't unnecessarily skip loading
the free space cache. The user reported this fixed the slowness. Thanks,
Tested-by: Calvin Walton <calvin.walton@kepstin.ca>
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Cc: David Sterba <dsterba@suse.cz>
Cc: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Btrfs: remove lock assert from get_restripe_target()
This fixes a regression introduced by fc67c450. spin_is_locked() always
returns 0 on UP kernels, which caused the assert in get_restripe_target()
to fire on every call from btrfs_reduce_alloc_profile() on UP systems.
Remove it completely for now, it's not clear if it's going to be needed
in future.
Reported-by: Bobby Powers <bobbypowers@gmail.com>
Reported-by: Mitch Harder <mitch.harder@sabayonlinux.org>
Tested-by: Mitch Harder <mitch.harder@sabayonlinux.org>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Chris Mason [Thu, 29 Mar 2012 21:02:47 +0000 (17:02 -0400)]
Btrfs: update the checks for mixed block groups with big metadata blocks
Dave Sterba had put in patches to look for mixed data/metadata groups
with metadata bigger than 4KB. But these ended up in the wrong place
and weren't testing the feature flag correctly.
This updates the tests to make sure our sizes match.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Liu Bo [Thu, 29 Mar 2012 13:57:45 +0000 (09:57 -0400)]
Btrfs: update to the right index of defragment
When we use autodefrag, we forget to update the index which indicates
the last page we've dirtied, so we set dirty flags on the same set of
pages again and again.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Liu Bo [Thu, 29 Mar 2012 13:57:45 +0000 (09:57 -0400)]
Btrfs: fix recursive defragment with autodefrag option
$ mkfs.btrfs disk
$ mount disk /mnt -o autodefrag
$ dd if=/dev/zero of=/mnt/foobar bs=4k count=10 2>/dev/null && sync
$ for i in `seq 9 -2 0`; do dd if=/dev/zero of=/mnt/foobar bs=4k count=1 \
seek=$i conv=notrunc 2> /dev/null; done && sync
then we'll end up defragging "foobar" again and again. The same happens
with the "-o autodefrag,compress" option.
Reasons:
When the cleaner kthread gets to fetch inodes from the defrag tree and defrag
them, it dirties pages and submits them. This leads to another data COW,
where the inode being processed is inserted into the defrag tree again.
This patch sets a rule for the COW code: only insert an inode when we're
really going to defragment it.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Liu Bo [Thu, 29 Mar 2012 13:57:44 +0000 (09:57 -0400)]
Btrfs: fix the mismatch of page->mapping
commit 600a45e1d5e376f679ff9ecc4ce9452710a6d27c
(Btrfs: fix deadlock on page lock when doing auto-defragment)
fixes the deadlock on page, but it also introduces another bug.
A page may have been truncated after unlock & lock.
So we need to find it again to get the right one.
And since we've held i_mutex lock, inode size remains unchanged and
we can drop isize overflow checks.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Liu Bo [Thu, 29 Mar 2012 13:57:44 +0000 (09:57 -0400)]
Btrfs: fix race between direct io and autodefrag
The bug is from running xfstests 209 with autodefrag.
The race is as follows:
    t1                                  t2(autodefrag)
    direct IO
      invalidate pagecache
      dio(old data)                     add_inode_defrag
      invalidate pagecache
    endio
    direct IO
      invalidate pagecache
                                        run_defrag
                                          readpage(old data)
                                          set page dirty (old data)
      dio(new data, rewrite)
      invalidate pagecache (*)
    endio
t2 (autodefrag) will get old data into the pagecache via readpage and set
the pagecache dirty. Meanwhile, the invalidate pagecache (*) step will fail
because of the dirty flags on the pages, so the old data may be flushed to
disk by the flush thread, which leads to data loss.
The same applies to user-space defragment programs.
The patch fixes this race by holding i_mutex while we read the page and set
it dirty.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Liu Bo [Thu, 29 Mar 2012 13:57:44 +0000 (09:57 -0400)]
Btrfs: fix deadlock during allocating chunks
This deadlock comes from xfstests 251.
We hold the chunk_mutex throughout the whole of a chunk allocation.
But if we find that we've used up the system chunk space, we need to allocate
a new system chunk, and that leads to a recursive chunk allocation and a
deadlock on chunk_mutex.
So instead we need to allocate the system chunk first if we find we're about
to hit ENOSPC.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Chris Mason [Wed, 1 Feb 2012 01:19:02 +0000 (20:19 -0500)]
Btrfs: don't use crc items bigger than 4KB
With the big metadata blocks, we can have crc items
that are much bigger than a page. There are a few
places that we try to kmalloc memory to hold the
items during a split.
Items bigger than 4KB don't really have a huge benefit
in efficiency, but they do trigger larger order allocations.
This commit changes the csums to make sure they stay under
4KB. This is not a format change, just a #define to limit
huge items.
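Illustratively, the kind of cap such a #define imposes (the names and the
helper below are made up, not the actual macro):

	/* Never build a csum item larger than 4KB, even when the leaf
	 * could hold far more. */
	#define MAX_CSUM_ITEM_BYTES 4096u

	static inline unsigned int csum_item_limit(unsigned int leaf_space)
	{
		return leaf_space < MAX_CSUM_ITEM_BYTES ?
		       leaf_space : MAX_CSUM_ITEM_BYTES;
	}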
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Chris Mason [Tue, 27 Mar 2012 22:56:56 +0000 (18:56 -0400)]
Btrfs: flush out and clean up any block device pages during mount
Btrfs puts the filesystem metadata into its own address space, and
somehow the block device address space isn't getting onto disk properly
before a mount. The end result is that a loop of mkfs and mounting the
filesystem will sometimes find stale or incorrect data.
This commit should fix it by sprinkling fdatawrites and invalidate_bdev
calls around. This is a short term measure to make sure it is fixed.
The block devices really should be flushed and cleaned up higher in the
stack.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
David Sterba [Fri, 17 Feb 2012 11:26:09 +0000 (12:26 +0100)]
btrfs: disallow unequal data/metadata blocksize for mixed block groups
With support for bigger metadata blocks, we must avoid mounting a
filesystem with different data and metadata block sizes for mixed block
groups, as this causes corruption (found by xfstests/083).
Stefan Behrens [Tue, 27 Mar 2012 18:21:27 +0000 (14:21 -0400)]
Btrfs: change scrub to support big blocks
Scrub used to be coded for nodesize == leafsize == sectorsize == PAGE_SIZE.
This is now changed to support sizes for nodesize and leafsize which are
N * PAGE_SIZE.
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Stefan Behrens [Tue, 27 Mar 2012 18:21:26 +0000 (14:21 -0400)]
Btrfs: introduce common define for max number of mirrors
Readahead already has a define for the max number of mirrors. Scrub
needs such a define now, the rest of the code will need something
like this soon. Therefore the define was added to ctree.h and removed
from the readahead code.
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Ilya Dryomov [Tue, 27 Mar 2012 14:09:18 +0000 (17:09 +0300)]
Btrfs: fix infinite loop in btrfs_shrink_device()
If relocation of block group 0 fails with ENOSPC, we end up looping
forever because the key.offset -= 1 statement in that case brings us back
to where we started.
Ilya Dryomov [Tue, 27 Mar 2012 14:09:18 +0000 (17:09 +0300)]
Btrfs: fix memory leak in resolver code
init_ipath() allocates btrfs_data_container which is never freed. Free
it in free_ipath() and nuke the comment for init_data_container() - we
can safely free it with kfree().
Ilya Dryomov [Tue, 27 Mar 2012 14:09:17 +0000 (17:09 +0300)]
Btrfs: validate target profiles only if we are going to use them
Do not run sanity checks on all target profiles unless they all will be
used. This came up because alloc_profile_is_valid() is now more strict
than it used to be.
Ilya Dryomov [Tue, 27 Mar 2012 14:09:17 +0000 (17:09 +0300)]
Btrfs: improve the logic in btrfs_can_relocate()
Currently, if we don't have enough space allocated, we go ahead and loop
through devices in the hope of finding enough space for a chunk of the
*same* type as the one we are trying to relocate. The problem with that
is that if we are trying to restripe the chunk, its target type can be
more relaxed than the current one (e.g. require fewer devices or less
space). So, when restriping, run the checks against the target profile
instead of the current one.
Ilya Dryomov [Tue, 27 Mar 2012 14:09:17 +0000 (17:09 +0300)]
Btrfs: add __get_block_group_index() helper
Add the __get_block_group_index() helper to be able to derive a block group
index from an arbitrary set of flags. Implement get_block_group_index()
in terms of it.
Ilya Dryomov [Tue, 27 Mar 2012 14:09:17 +0000 (17:09 +0300)]
Btrfs: move alloc_profile_is_valid() to volumes.c
Header file is not a good place to define functions. This also moves a
call to alloc_profile_is_valid() down the stack and removes a redundant
check from __btrfs_alloc_chunk() - alloc_profile_is_valid() takes it
into account.
Ilya Dryomov [Tue, 27 Mar 2012 14:09:16 +0000 (17:09 +0300)]
Btrfs: stop silently switching single chunks to raid0 on balance
This has been causing a lot of confusion for quite a while now and a lot
of users were surprised by this (some of them were even stuck in a
ENOSPC situation which they couldn't easily get out of). The addition
of restriper gives users a clear choice between raid0 and drive concat
setup so there's absolutely no excuse for us to keep doing this.
Jan Schmidt [Fri, 23 Mar 2012 16:32:28 +0000 (17:32 +0100)]
Btrfs: fix regression in scrub path resolving
In commit 4692cf58 we introduced new backref walking code for btrfs. This
assumes we're searching live roots, which requires a transaction context.
While scrubbing, however, we must not join a transaction because this could
deadlock with the commit path. Additionally, what scrub really wants to do
is resolving a logical address in the commit root it's currently checking.
This patch adds support for logical to path resolving on commit roots and
makes scrub use that.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Jan Schmidt [Fri, 23 Mar 2012 16:24:19 +0000 (17:24 +0100)]
Btrfs: check return value of btrfs_cow_block()
The two helper functions commit_cowonly_roots() and
create_pending_snapshot() failed to check the return value from
btrfs_cow_block(), which could at least in theory fail with -ENOSPC from
btrfs_alloc_free_block(). This commit adds the missing checks.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Jan Schmidt [Fri, 23 Mar 2012 16:14:20 +0000 (17:14 +0100)]
Btrfs: actually call btrfs_init_lockdep
btrfs_init_lockdep only makes our lockdep class names look prettier, so
it never hurt that we forgot to actually call it. This turns our lockdep
identifier strings from the auto-set #[id] into really pretty
"btrfs-fs-01" or "btrfs-csum-03".
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Josef Bacik [Tue, 27 Mar 2012 01:57:36 +0000 (21:57 -0400)]
Btrfs: deal with read errors on extent buffers differently
Since we need to read and write extent buffers in their entirety, we can't
use the normal bio_readpage_error stuff, because it only works on a per-page
basis. So instead, make it so that if we see an io error in endio we just
mark the eb as having an IO error, and then in btree_read_extent_buffer_pages
we will manually try other mirrors and overwrite the bad mirror if we find a
good copy. This works with larger than page size blocks. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Chris Mason [Mon, 19 Mar 2012 19:54:38 +0000 (15:54 -0400)]
Btrfs: adjust the write_lock_level as we unlock
btrfs_search_slot sometimes needs write locks on high levels of
the tree. It remembers the highest level that needs a write lock
and will use that for all future searches through the tree in a given
call.
But, very often we'll just cow the top level or the level below and we
won't really need write locks on the root again after that. This patch
changes things to adjust the write lock requirement as it unlocks
levels.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Josef Bacik [Tue, 13 Mar 2012 13:38:00 +0000 (09:38 -0400)]
Btrfs: ensure an entire eb is written at once
This patch simplifies how we track our extent buffers. Previously we could
exit writepages having written only half of an extent buffer, which meant we
had to track the state of the pages and the state of the extent buffers
separately. Now we only read in entire extent buffers and write out entire
extent buffers; this allows us to simply set bits in our bflags to indicate
the state of the eb, and we no longer have to do things like track uptodate
with our iotree. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Josef Bacik [Thu, 15 Mar 2012 22:24:42 +0000 (18:24 -0400)]
Btrfs: introduce mark_extent_buffer_accessed
Because an eb can have multiple pages, we need to make sure that all pages
within the eb are marked as accessed, since releasepage can be called against
any page in the eb. This will keep us from possibly evicting hot ebs when
we're doing larger than pagesize ebs. Thanks,
Josef Bacik [Fri, 9 Mar 2012 21:01:49 +0000 (16:01 -0500)]
Btrfs: introduce free_extent_buffer_stale
Because btrfs COWs, we can end up with extent buffers that are no longer
necessary just sitting around in memory. Instead of those stale pages being
evicted, we could end up evicting things we actually care about. Thus we have
free_extent_buffer_stale for use when we are freeing tree blocks. This will
make it so that the ref for the eb being in the radix tree is dropped as soon
as possible, and the eb is then freed when the refcount hits 0 instead of
waiting to be released by releasepage. Thanks,
Josef Bacik [Fri, 9 Mar 2012 14:51:43 +0000 (09:51 -0500)]
Btrfs: only use the existing eb if its count isn't 0
We can run into a problem where we find an eb for our existing page already
in the radix tree, but it has a ref count of 0. It hasn't been removed by RCU
yet, so this can cause issues where we will use the eb after free. So do
atomic_inc_not_zero on exists->refs, and if it is zero just do
synchronize_rcu() and try again. We won't have to worry about new allocators
coming in since they will block on the page lock at this point. Thanks,
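A user-space sketch of the increment-only-if-not-zero idiom (C11 atomics
standing in for the kernel's atomic_inc_not_zero(); names invented):

	#include <stdatomic.h>
	#include <stdbool.h>

	static bool inc_not_zero(atomic_int *refs)
	{
		int old = atomic_load(refs);

		while (old != 0) {
			/* On failure, old is reloaded with the current value. */
			if (atomic_compare_exchange_weak(refs, &old, old + 1))
				return true;	/* took a reference */
		}
		return false;	/* refcount already hit zero, object is dying */
	}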
Josef Bacik [Wed, 7 Mar 2012 21:20:05 +0000 (16:20 -0500)]
Btrfs: set page->private to the eb
We spend a lot of time looking up extent buffers from pages when we could just
store the pointer to the eb the page is associated with in page->private. This
patch does just that, and it makes things a little simpler and reduces a bit of
CPU overhead involved with doing metadata IO. Thanks,
Chris Mason [Fri, 6 Aug 2010 17:21:20 +0000 (13:21 -0400)]
Btrfs: allow metadata blocks larger than the page size
A few years ago the btrfs code to support blocks larger than
the page size was disabled to fix a few corner cases in the
page cache handling. This fixes the code to properly support
large metadata blocks again.
Since current kernels will crash early and often with larger
metadata blocks, this adds an incompat bit so that older kernels
can't mount it.
This also does away with different blocksizes for nodes and leaves.
You get a single block size for all tree blocks.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Josef Bacik [Wed, 18 Jan 2012 15:56:06 +0000 (10:56 -0500)]
Btrfs: remove search_start and search_end from find_free_extent and callers
We have been passing nothing but (u64)-1 to find_free_extent for search_end in
all of the callers, so it's completely useless, and we've always been passing 0
in as search_start, so just remove them as function arguments and move
search_start into find_free_extent. Thanks,
Josef Bacik [Fri, 13 Jan 2012 20:27:45 +0000 (15:27 -0500)]
Btrfs: remove the ideal caching code
This is a relic from before we had the disk space cache, and it was there to
make bootup times when you had btrfs as root not be so damned slow. Now that
we have the disk space cache this isn't a problem anymore, and really having
this code causes unneeded fragmentation and complexity, so just remove it.
Thanks,
Jan Kara [Mon, 12 Mar 2012 15:05:50 +0000 (16:05 +0100)]
btrfs: Fix busyloop in transaction_kthread()
When a filesystem gets aborted due to an error, transaction_kthread() will
busyloop. Fix it by going to sleep in that case as well. Maybe we should
just stop transaction_kthread() when the filesystem is aborted, but that
would be more complex.
Jeff Mahoney [Mon, 12 Mar 2012 15:03:00 +0000 (16:03 +0100)]
btrfs: replace many BUG_ONs with proper error handling
btrfs currently handles most errors with BUG_ON. This patch is a work-in-
progress but aims to handle most errors other than internal logic
errors and ENOMEM more gracefully.
This iteration prevents most crashes but can run into lockups with
the page lock on occasion when the timing "works out."
Jeff Mahoney [Thu, 1 Mar 2012 13:57:30 +0000 (14:57 +0100)]
btrfs: add varargs to btrfs_error
btrfs currently handles most errors with BUG_ON. This patch is a work-in-
progress but aims to handle most errors other than internal logic
errors and ENOMEM more gracefully.
This iteration prevents most crashes but can run into lockups with
the page lock on occasion when the timing "works out."
Mark Fasheh [Fri, 9 Sep 2011 00:40:01 +0000 (17:40 -0700)]
btrfs: Remove BUG_ON from __finish_chunk_alloc()
btrfs_alloc_chunk() unconditionally BUGs on any error returned from
__finish_chunk_alloc() so there's no need for two BUG_ON lines. Remove the
one from __finish_chunk_alloc().
Mark Fasheh [Fri, 9 Sep 2011 00:29:00 +0000 (17:29 -0700)]
btrfs: Remove BUG_ON from __btrfs_alloc_chunk()
We BUG_ON() error from add_extent_mapping(), but that error looks pretty
easy to bubble back up - as far as I can tell there have not been any
permanent modifications to fs state at that point.
Mark Fasheh [Fri, 9 Sep 2011 00:14:32 +0000 (17:14 -0700)]
btrfs: Don't BUG_ON insert errors in btrfs_alloc_dev_extent()
The only caller of btrfs_alloc_dev_extent() is __btrfs_alloc_chunk() which
already bugs on any error returned. We can remove the BUG_ON's in
btrfs_alloc_dev_extent() then since __btrfs_alloc_chunk() will "catch" them
anyway.
Mark Fasheh [Thu, 1 Sep 2011 18:27:57 +0000 (11:27 -0700)]
btrfs: Go readonly on tree errors in balance_level
balance_level() seems to deal with missing tree nodes by BUG_ON(). Instead,
we can easily just set the file system readonly and bubble -EROFS back up
the stack.
Mark Fasheh [Mon, 29 Aug 2011 21:30:39 +0000 (14:30 -0700)]
btrfs: Don't BUG_ON errors from update_ref_for_cow()
__btrfs_cow_block(), the only caller of update_ref_for_cow(), will BUG_ON()
any error return. Instead, we can go read-only, as update_ref_for_cow()
manipulates disk data in a way which doesn't look like it's easily rolled
back.
Mark Fasheh [Mon, 29 Aug 2011 21:17:04 +0000 (14:17 -0700)]
btrfs: Go readonly on bad extent refs in update_ref_for_cow()
update_ref_for_cow() will BUG_ON() after its call to
btrfs_lookup_extent_info() if no existing references are found. Since refs
are computed directly from disk, this should be treated as a corruption
instead of a logic error.
Mark Fasheh [Wed, 10 Aug 2011 19:32:10 +0000 (12:32 -0700)]
btrfs: Don't BUG_ON errors in __finish_chunk_alloc()
All callers of __finish_chunk_alloc() BUG_ON() its return value, so it's
trivial to always bubble up any errors from __finish_chunk_alloc() and let
them be caught there.
Mark Fasheh [Fri, 5 Aug 2011 22:46:16 +0000 (15:46 -0700)]
btrfs: Don't BUG_ON kzalloc error in btrfs_lookup_csums_range()
Unfortunately it isn't enough to just exit here - the kzalloc() happens in a
loop and the allocated items are added to a linked list whose head is passed
in from the caller.
To fix the BUG_ON() and also provide the semantic that the list passed in is
only modified on success, I create a function-local temporary list that we
add items to. If no error is met, that list is spliced onto the caller's list
at the end of the function. Otherwise the list is walked and all items freed
before the error value is returned.
I did a simple test on this patch by forcing an error at the kzalloc() point
and verifying that when this hits (git clone seemed to exercise this), the
function throws the proper error. Unfortunately but predictably, we later
hit a BUG_ON(ret) type line that still hasn't been fixed up ;)
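A small user-space sketch of the "build on a temporary list, splice to the
caller only on success" pattern described above (all names invented, not the
actual btrfs code):

	#include <stdlib.h>

	struct item { struct item *next; };

	/* Appends 'count' new items to *head, or leaves *head untouched on error. */
	static int collect_items(struct item **head, int count)
	{
		struct item *tmp = NULL, *it;
		int i;

		for (i = 0; i < count; i++) {
			it = calloc(1, sizeof(*it));
			if (!it) {
				while (tmp) {		/* error: free the local list */
					it = tmp;
					tmp = tmp->next;
					free(it);
				}
				return -1;
			}
			it->next = tmp;
			tmp = it;
		}

		/* success: splice the local list onto the caller's list */
		while (tmp) {
			it = tmp;
			tmp = tmp->next;
			it->next = *head;
			*head = it;
		}
		return 0;
	}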
Mark Fasheh [Mon, 8 Aug 2011 20:20:18 +0000 (13:20 -0700)]
btrfs: Don't BUG_ON() errors in update_ref_for_cow()
The only caller of update_ref_for_cow() is __btrfs_cow_block() which was
originally ignoring any return values. update_ref_for_cow() however doesn't
look like a candidate to become a void function - there are a few places
where errors can occur.
So instead I changed update_ref_for_cow() to bubble all errors up (instead
of BUG_ON). __btrfs_cow_block() was then updated to catch and BUG_ON() any
errors from update_ref_for_cow(). The end effect is that we have no change
in behavior, but about 8 different places where a BUG_ON(ret) was removed.
Obviously a future patch will have to address the BUG_ON() in
__btrfs_cow_block().
Mark Fasheh [Tue, 26 Jul 2011 18:32:23 +0000 (11:32 -0700)]
btrfs: Don't BUG_ON errors from btrfs_create_subvol_root()
This is called from only one place - create_subvol() - which passes errors
safely back out to its caller, btrfs_mksubvol, where they are handled.
Additionally, btrfs_create_subvol_root() itself BUGs needlessly on an error
return from btrfs_update_inode(). Since create_subvol() was fixed to catch
errors we can bubble this one up too.
Jeff Mahoney [Tue, 4 Oct 2011 03:22:41 +0000 (23:22 -0400)]
btrfs: btrfs_drop_snapshot should return int
Commit cb1b69f4 (Btrfs: forced readonly when btrfs_drop_snapshot() fails)
made btrfs_drop_snapshot return void because there were no callers checking
the return value. That is the wrong way to handle error propagation, since
the caller will have no idea that an error has occurred and will continue on
as if nothing went wrong.
Jeff Mahoney [Tue, 4 Oct 2011 03:23:14 +0000 (23:23 -0400)]
btrfs: ->submit_bio_hook error push-up
This pushes failures from the submit_bio_hook callbacks,
btrfs_submit_bio_hook and btree_submit_bio_hook into the callers, including
callers of submit_one_bio where it catches the failures with BUG_ON.
It also pushes up through the ->readpage_io_failed_hook to
end_bio_extent_writepage where the error is already caught with BUG_ON.
Jeff Mahoney [Tue, 4 Oct 2011 03:23:13 +0000 (23:23 -0400)]
btrfs: Factor out tree->ops->merge_bio_hook call
In submit_extent_page, there's a visually noisy if statement that, in
the midst of other conditions, checks for tree->ops and
tree->ops->merge_bio_hook before calling it, and then has another
condition afterwards. If an error is returned from merge_bio_hook,
there's no way to catch it; it's considered a routine "1" return
value instead of a failure.
This patch factors out the dependency check into a new local merge_bio
routine and BUGs on an error. The if statement is less noisy as a
side-effect.
Jeff Mahoney [Tue, 4 Oct 2011 03:23:12 +0000 (23:23 -0400)]
btrfs: Simplify btrfs_submit_bio_hook
btrfs_submit_bio_hook currently calls btrfs_bio_wq_end_io in both branches
of an if statement that only determines one of the arguments.
This patch moves the function call outside of the if statement and uses the
branch only to determine the differing argument. This allows us to catch an
error in one place in a more visually obvious way.