f2fs: optimize fs_lock for better performance
author Yu Chao <chao2.yu@samsung.com>
Thu, 12 Sep 2013 03:17:51 +0000 (11:17 +0800)
committer Jaegeuk Kim <jaegeuk.kim@samsung.com>
Tue, 24 Sep 2013 08:45:48 +0000 (17:45 +0900)
There is a performance problem: when all sbi->fs_lock mutexes are held,
all subsequent threads may read the same next_lock value from
sbi->next_lock_num in mutex_lock_op() and then wait on the same lock
(fs_lock[next_lock]), which degrades performance.

So we move the sbi->next_lock_num++ increment before taking the lock; this
spreads waiting threads evenly across the locks when all sbi->fs_lock
mutexes are held.

v1-->v2:
Drop the needless spin_lock as Jaegeuk suggested.

Suggested-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Signed-off-by: Yu Chao <chao2.yu@samsung.com>
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
fs/f2fs/f2fs.h

index 608f0df5b9190f8e8b301dd61e085fa70c66c5db..7fd99d8bd2ffac3a1ce9d88154e84af74383c1e1 100644 (file)
@@ -544,15 +544,15 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
 
 static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
 {
-       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
+       unsigned char next_lock;
        int i = 0;
 
        for (; i < NR_GLOBAL_LOCKS; i++)
                if (mutex_trylock(&sbi->fs_lock[i]))
                        return i;
 
+       next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);
-       sbi->next_lock_num++;
        return next_lock;
 }