Pull x86 fixes from Ingo Molnar.
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/smp: Fix topology checks on AMD MCM CPUs
x86/mm: Fix some kernel-doc warnings
x86, um: Correct syscall table type attributes breaking gcc 4.8
(i.e. 7xxx/5xxx SoCs), SPEAr (arm), Loongson1B (mips) and XILINX XC2V3000
FF1152AMT0221 D1215994A VIRTEX FPGA board.
-DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether MAC 10/100
-Universal version 4.0 have been used for developing this driver.
+DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether
+MAC 10/100 Universal version 4.0 have been used for developing this driver.
This driver supports both the platform bus and PCI.
When one or more packets are received, an interrupt is raised. The interrupts
are not queued, so the driver has to scan all the descriptors in the ring during
the receive process.
-This is based on NAPI so the interrupt handler signals only if there is work to be
-done, and it exits.
+This is based on NAPI, so the interrupt handler signals only when there is
+work to be done, and then exits.
Then the poll method will be scheduled at some future point.
The incoming packets are stored by the DMA in a list of pre-allocated socket
buffers, in order to avoid the memcpy (zero-copy).
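As a reference for the pattern described above, here is a minimal NAPI receive
skeleton; the xxx_* helpers and the xxx_priv layout are hypothetical, not the
actual stmmac code:

 /* The IRQ handler only masks the RX interrupt and schedules the poll;
  * the poll method then walks the RX ring for at most "budget" packets. */
 static irqreturn_t xxx_interrupt(int irq, void *dev_id)
 {
	struct xxx_priv *priv = dev_id;

	xxx_disable_rx_irq(priv);		/* hypothetical helper */
	napi_schedule(&priv->napi);
	return IRQ_HANDLED;
 }

 static int xxx_poll(struct napi_struct *napi, int budget)
 {
	struct xxx_priv *priv = container_of(napi, struct xxx_priv, napi);
	int work_done = xxx_rx(priv, budget);	/* hypothetical ring scan */

	if (work_done < budget) {
		napi_complete(napi);
		xxx_enable_rx_irq(priv);	/* hypothetical helper */
	}
	return work_done;
 }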
4.3) Timer-Driven Interrupt
-Instead of having the device that asynchronously notifies the frame receptions, the
-driver configures a timer to generate an interrupt at regular intervals.
-Based on the granularity of the timer, the frames that are received by the device
-will experience different levels of latency. Some NICs have dedicated timer
-device to perform this task. STMMAC can use either the RTC device or the TMU
-channel 2 on STLinux platforms.
+Instead of having the device asynchronously notify frame receptions, the
+driver configures a timer to generate an interrupt at regular intervals.
+Based on the granularity of the timer, the frames that are received by the
+device will experience different levels of latency. Some NICs have a dedicated
+timer device to perform this task. STMMAC can use either the RTC device or the
+TMU channel 2 on STLinux platforms.
The timer's frequency can be passed to the driver as a parameter; when changing
it, take care of both the hardware capability and the network
stability/performance impact.
-Several performance tests on STM platforms showed this optimisation allows to spare
-the CPU while having the maximum throughput.
+Several performance tests on STM platforms showed that this optimisation
+spares the CPU while keeping the maximum throughput.
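For illustration, the timer-driven scheme above boils down to something like
the following; the period handling and the xxx_* names are hypothetical:

 /* Fire at a fixed period and defer the RX work to NAPI, instead of
  * taking one interrupt per received frame. */
 static void xxx_rx_timer_fn(unsigned long data)
 {
	struct xxx_priv *priv = (struct xxx_priv *)data;

	napi_schedule(&priv->napi);
	mod_timer(&priv->rx_timer, jiffies + usecs_to_jiffies(priv->period_us));
 }

 setup_timer(&priv->rx_timer, xxx_rx_timer_fn, (unsigned long)priv);
 mod_timer(&priv->rx_timer, jiffies + usecs_to_jiffies(priv->period_us));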
4.4) WOL
-Wake up on Lan feature through Magic and Unicast frames are supported for the GMAC
-core.
+Wake-up on LAN through Magic and Unicast frames is supported by the
+GMAC core.
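By way of example, a driver exposing these WOL modes typically reports them
through the standard ethtool hooks; the sketch below uses the generic
struct ethtool_wolinfo fields, while the xxx_* glue is hypothetical:

 static void xxx_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 {
	struct xxx_priv *priv = netdev_priv(dev);

	wol->supported = WAKE_MAGIC | WAKE_UCAST;	/* per the text above */
	wol->wolopts = priv->wolopts;			/* modes currently armed */
 }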
4.5) DMA descriptors
Driver handles both normal and enhanced descriptors. The latter has been only
These are included in the include/linux/stmmac.h header file
and detailed below as well:
- struct plat_stmmacenet_data {
+struct plat_stmmacenet_data {
+ char *phy_bus_name;
int bus_id;
int phy_addr;
int interface;
void (*bus_setup)(void __iomem *ioaddr);
int (*init)(struct platform_device *pdev);
void (*exit)(struct platform_device *pdev);
+ void *custom_cfg;
+ void *custom_data;
void *bsp_priv;
};
Where:
+ o phy_bus_name: phy bus name to attach to the stmmac.
o bus_id: bus identifier.
o phy_addr: the physical address can be passed from the platform.
If it is set to -1 the driver will automatically
detect it at run-time by probing all the 32 addresses.
o interface: PHY device's interface.
o mdio_bus_data: specific platform fields for the MDIO bus.
- o pbl: the Programmable Burst Length is maximum number of beats to
+ o dma_cfg: internal DMA parameters
+ o pbl: the Programmable Burst Length is the maximum number of beats to
be transferred in one DMA transaction.
GMAC also enables the 4xPBL by default.
+ o fixed_burst/mixed_burst/burst_len
o clk_csr: fixed CSR Clock range selection.
o has_gmac: uses the GMAC core.
 o enh_desc: if set, the MAC will use the enhanced descriptor structure.
			this is sometimes necessary on some platforms (e.g. ST boxes)
			where the HW needs some PIO lines or system cfg
			registers to be set.
- o custom_cfg: this is a custom configuration that can be passed while
- initialising the resources.
+ o custom_cfg/custom_data: these are custom configurations that can be
+ passed while initialising the resources.
+ o bsp_priv: another private pointer.
For the MDIO bus we have:
o irqs: list of IRQs, one per PHY.
o probed_phy_irq: if irqs is NULL, use this for probed PHY.
-
For the DMA engine we have the following internal fields that should be
tuned according to the HW capabilities.
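As a sketch of how a board file could fill in the fields documented above
(the values are purely illustrative, not taken from a real board):

 static struct plat_stmmacenet_data board_gmac_data = {
	.phy_bus_name	= "stmmac",
	.bus_id		= 1,
	.phy_addr	= -1,	/* probe all 32 addresses at run-time */
	.interface	= PHY_INTERFACE_MODE_GMII,
	.has_gmac	= 1,
	.enh_desc	= 1,
 };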
F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
-M: Chris Mason <chris.mason@oracle.com>
+M: Chris Mason <chris.mason@fusionio.com>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
S: Maintained
F: Documentation/filesystems/btrfs.txt
F: fs/btrfs/
CFG80211 and NL80211
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
+W: http://wireless.kernel.org/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: include/linux/nl80211.h
F: include/net/cfg80211.h
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: Documentation/networking/mac80211-injection.txt
F: include/net/mac80211.h
M: Mattias Nissler <mattias.nissler@gmx.de>
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/en/developers/Documentation/mac80211/RateControl/PID
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: net/mac80211/rc80211_pid*
RFKILL
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
+W: http://wireless.kernel.org/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: Documentation/rfkill.txt
F: net/rfkill/
VERSION = 3
PATCHLEVEL = 5
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
NAME = Saber-toothed Squirrel
# *DOCUMENTATION*
goto err;
}
- r = omap_device_register(pdev);
+ r = platform_device_add(pdev);
if (r) {
- pr_err("Could not register omap_device for %s\n", pdev_name);
+ pr_err("Could not register platform_device for %s\n", pdev_name);
goto err;
}
select GENERIC_IRQ_SHOW
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select GENERIC_CPU_DEVICES
+ select GENERIC_STRNCPY_FROM_USER if MMU
+ select GENERIC_STRNLEN_USER if MMU
select FPU if MMU
select ARCH_USES_GETTIMEOFFSET if MMU && !COLDFIRE
include include/asm-generic/Kbuild.asm
header-y += cachectl.h
+
+generic-y += word-at-a-time.h
/*
* QSPI module.
*/
-#define MCFQSPI_IOBASE (MCF_IPSBAR + 0x340)
+#define MCFQSPI_BASE (MCF_IPSBAR + 0x340)
#define MCFQSPI_SIZE 0x40
#define MCFQSPI_CS0 147
#define copy_from_user(to, from, n) __copy_from_user(to, from, n)
#define copy_to_user(to, from, n) __copy_to_user(to, from, n)
-long strncpy_from_user(char *dst, const char __user *src, long count);
-long strnlen_user(const char __user *src, long n);
+#define user_addr_max() \
+ (segment_eq(get_fs(), USER_DS) ? TASK_SIZE : ~0UL)
+
+extern long strncpy_from_user(char *dst, const char __user *src, long count);
+extern __must_check long strlen_user(const char __user *str);
+extern __must_check long strnlen_user(const char __user *str, long n);
+
unsigned long __clear_user(void __user *to, unsigned long n);
#define clear_user __clear_user
-#define strlen_user(str) strnlen_user(str, 32767)
-
#endif /* _M68K_UACCESS_H */
}
}
-#ifdef CONFIG_COLDFIRE
+#if defined(CONFIG_COLDFIRE) || !defined(CONFIG_MMU)
asmlinkage int syscall_trace_enter(void)
{
int ret = 0;
mach_sched_init(timer_interrupt);
}
-#ifdef CONFIG_M68KCLASSIC
+#ifdef CONFIG_ARCH_USES_GETTIMEOFFSET
u32 arch_gettimeoffset(void)
{
module_init(rtc_init);
-#endif /* CONFIG_M68KCLASSIC */
+#endif /* CONFIG_ARCH_USES_GETTIMEOFFSET */
}
EXPORT_SYMBOL(__generic_copy_to_user);
-/*
- * Copy a null terminated string from userspace.
- */
-long strncpy_from_user(char *dst, const char __user *src, long count)
-{
- long res;
- char c;
-
- if (count <= 0)
- return count;
-
- asm volatile ("\n"
- "1: "MOVES".b (%2)+,%4\n"
- " move.b %4,(%1)+\n"
- " jeq 2f\n"
- " subq.l #1,%3\n"
- " jne 1b\n"
- "2: sub.l %3,%0\n"
- "3:\n"
- " .section .fixup,\"ax\"\n"
- " .even\n"
- "10: move.l %5,%0\n"
- " jra 3b\n"
- " .previous\n"
- "\n"
- " .section __ex_table,\"a\"\n"
- " .align 4\n"
- " .long 1b,10b\n"
- " .previous"
- : "=d" (res), "+a" (dst), "+a" (src), "+r" (count), "=&d" (c)
- : "i" (-EFAULT), "0" (count));
-
- return res;
-}
-EXPORT_SYMBOL(strncpy_from_user);
-
-/*
- * Return the size of a string (including the ending 0)
- *
- * Return 0 on exception, a value greater than N if too long
- */
-long strnlen_user(const char __user *src, long n)
-{
- char c;
- long res;
-
- asm volatile ("\n"
- "1: subq.l #1,%1\n"
- " jmi 3f\n"
- "2: "MOVES".b (%0)+,%2\n"
- " tst.b %2\n"
- " jne 1b\n"
- " jra 4f\n"
- "\n"
- "3: addq.l #1,%0\n"
- "4: sub.l %4,%0\n"
- "5:\n"
- " .section .fixup,\"ax\"\n"
- " .even\n"
- "20: sub.l %0,%0\n"
- " jra 5b\n"
- " .previous\n"
- "\n"
- " .section __ex_table,\"a\"\n"
- " .align 4\n"
- " .long 2b,20b\n"
- " .previous\n"
- : "=&a" (res), "+d" (n), "=&d" (c)
- : "0" (src), "r" (src));
-
- return res;
-}
-EXPORT_SYMBOL(strnlen_user);
-
/*
* Zero Userspace
*/
#endif
static u32 m68328_tick_cnt;
+static irq_handler_t timer_interrupt;
/***************************************************************************/
TSTAT &= 0;
m68328_tick_cnt += TICKS_PER_JIFFY;
- return arch_timer_interrupt(irq, dummy);
+ return timer_interrupt(irq, dummy);
}
/***************************************************************************/
/***************************************************************************/
-void hw_timer_init(void)
+void hw_timer_init(irq_handler_t handler)
{
/* disable timer 1 */
TCTL = 0;
/* Enable timer 1 */
TCTL |= TCTL_TEN;
clocksource_register_hz(&m68328_clk, TICKS_PER_JIFFY*HZ);
+ timer_interrupt = handler;
}
/***************************************************************************/
#define OSCILLATOR (unsigned long int)33000000
#endif
+static irq_handler_t timer_interrupt;
unsigned long int system_clock;
extern QUICC *pquicc;
pquicc->timer_ter1 = 0x0002; /* clear timer event */
- return arch_timer_interrupt(irq, dummy);
+ return timer_interrupt(irq, dummy);
}
static struct irqaction m68360_timer_irq = {
.handler = hw_tick,
};
-void hw_timer_init(void)
+void hw_timer_init(irq_handler_t handler)
{
unsigned char prescaler;
unsigned short tgcr_save;
pquicc->timer_ter1 = 0x0003; /* clear timer events */
+ timer_interrupt = handler;
+
/* enable timer 1 interrupt in CIMR */
setup_irq(CPMVEC_TIMER1, &m68360_timer_irq);
select GENERIC_SMP_IDLE_THREAD
select GENERIC_CLOCKEVENTS
select GENERIC_CMOS_UPDATE if SH_SH03 || SH_DREAMCAST
+ select GENERIC_STRNCPY_FROM_USER
+ select GENERIC_STRNLEN_USER
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
+ifneq ($(SUBARCH),$(ARCH))
+ ifeq ($(CROSS_COMPILE),)
+ CROSS_COMPILE := $(call cc-cross-prefix, $(UTS_MACHINE)-linux- $(UTS_MACHINE)-linux-gnu- $(UTS_MACHINE)-unknown-linux-gnu-)
+ endif
+endif
+
isa-y := any
isa-$(CONFIG_SH_DSP) := sh
isa-$(CONFIG_CPU_SH2) := sh2
KBUILD_DEFCONFIG := cayman_defconfig
endif
-ifneq ($(SUBARCH),$(ARCH))
- ifeq ($(CROSS_COMPILE),)
- CROSS_COMPILE := $(call cc-cross-prefix, $(UTS_MACHINE)-linux- $(UTS_MACHINE)-linux-gnu- $(UTS_MACHINE)-unknown-linux-gnu-)
- endif
-endif
-
ifdef CONFIG_CPU_LITTLE_ENDIAN
ld-bfd := elf32-$(UTS_MACHINE)-linux
-LDFLAGS_vmlinux += --defsym 'jiffies=jiffies_64' --oformat $(ld-bfd)
+LDFLAGS_vmlinux += --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
LDFLAGS += -EL
else
ld-bfd := elf32-$(UTS_MACHINE)big-linux
-LDFLAGS_vmlinux += --defsym 'jiffies=jiffies_64+4' --oformat $(ld-bfd)
+LDFLAGS_vmlinux += --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
LDFLAGS += -EB
endif
include include/asm-generic/Kbuild.asm
+generic-y += bitsperlong.h
+generic-y += cputime.h
+generic-y += current.h
+generic-y += delay.h
+generic-y += div64.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += fcntl.h
+generic-y += ioctl.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += local64.h
+generic-y += param.h
+generic-y += parport.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += mman.h
+generic-y += msgbuf.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sembuf.h
+generic-y += serial.h
+generic-y += shmbuf.h
+generic-y += siginfo.h
+generic-y += sizes.h
+generic-y += socket.h
+generic-y += statfs.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += ucontext.h
+generic-y += xor.h
+
header-y += cachectl.h
header-y += cpu-features.h
header-y += hw_breakpoint.h
+++ /dev/null
-#include <asm-generic/bitsperlong.h>
+++ /dev/null
-#ifndef __SH_CPUTIME_H
-#define __SH_CPUTIME_H
-
-#include <asm-generic/cputime.h>
-
-#endif /* __SH_CPUTIME_H */
+++ /dev/null
-#include <asm-generic/current.h>
+++ /dev/null
-#include <asm-generic/delay.h>
+++ /dev/null
-#include <asm-generic/div64.h>
+++ /dev/null
-#ifndef _ASM_EMERGENCY_RESTART_H
-#define _ASM_EMERGENCY_RESTART_H
-
-#include <asm-generic/emergency-restart.h>
-
-#endif /* _ASM_EMERGENCY_RESTART_H */
+++ /dev/null
-#ifndef __ASM_SH_ERRNO_H
-#define __ASM_SH_ERRNO_H
-
-#include <asm-generic/errno.h>
-
-#endif /* __ASM_SH_ERRNO_H */
+++ /dev/null
-#include <asm-generic/fcntl.h>
+++ /dev/null
-#include <asm-generic/ioctl.h>
+++ /dev/null
-#include <asm-generic/ipcbuf.h>
+++ /dev/null
-#include <asm-generic/irq_regs.h>
+++ /dev/null
-#include <asm-generic/kvm_para.h>
+++ /dev/null
-#ifndef __ASM_SH_LOCAL_H
-#define __ASM_SH_LOCAL_H
-
-#include <asm-generic/local.h>
-
-#endif /* __ASM_SH_LOCAL_H */
-
+++ /dev/null
-#include <asm-generic/local64.h>
+++ /dev/null
-#include <asm-generic/mman.h>
+++ /dev/null
-#include <asm-generic/msgbuf.h>
+++ /dev/null
-#include <asm-generic/param.h>
+++ /dev/null
-#include <asm-generic/parport.h>
+++ /dev/null
-#ifndef __ARCH_SH_PERCPU
-#define __ARCH_SH_PERCPU
-
-#include <asm-generic/percpu.h>
-
-#endif /* __ARCH_SH_PERCPU */
+++ /dev/null
-#include <asm-generic/poll.h>
+++ /dev/null
-#ifndef __ASM_SH_RESOURCE_H
-#define __ASM_SH_RESOURCE_H
-
-#include <asm-generic/resource.h>
-
-#endif /* __ASM_SH_RESOURCE_H */
+++ /dev/null
-#ifndef __ASM_SH_SCATTERLIST_H
-#define __ASM_SH_SCATTERLIST_H
-
-#include <asm-generic/scatterlist.h>
-
-#endif /* __ASM_SH_SCATTERLIST_H */
+++ /dev/null
-#include <asm-generic/sembuf.h>
+++ /dev/null
-#include <asm-generic/serial.h>
+++ /dev/null
-#include <asm-generic/shmbuf.h>
+++ /dev/null
-#ifndef __ASM_SH_SIGINFO_H
-#define __ASM_SH_SIGINFO_H
-
-#include <asm-generic/siginfo.h>
-
-#endif /* __ASM_SH_SIGINFO_H */
+++ /dev/null
-#include <asm-generic/sizes.h>
+++ /dev/null
-#include <asm-generic/socket.h>
+++ /dev/null
-#ifndef __ASM_SH_STATFS_H
-#define __ASM_SH_STATFS_H
-
-#include <asm-generic/statfs.h>
-
-#endif /* __ASM_SH_STATFS_H */
+++ /dev/null
-#include <asm-generic/termbits.h>
+++ /dev/null
-#include <asm-generic/termios.h>
(__chk_user_ptr(addr), \
__access_ok((unsigned long __force)(addr), (size)))
+#define user_addr_max() (current_thread_info()->addr_limit.seg)
+
/*
* Uh, these should become the main single-value transfer routines ...
* They automatically use the right size if we just have the right
# include "uaccess_64.h"
#endif
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern __must_check long strlen_user(const char __user *str);
+extern __must_check long strnlen_user(const char __user *str, long n);
+
/* Generic arbitrary sized copy. */
/* Return the number of bytes NOT copied */
__kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
__cl_size; \
})
-/**
- * strncpy_from_user: - Copy a NUL terminated string from userspace.
- * @dst: Destination address, in kernel space. This buffer must be at
- * least @count bytes long.
- * @src: Source address, in user space.
- * @count: Maximum number of bytes to copy, including the trailing NUL.
- *
- * Copies a NUL-terminated string from userspace to kernel space.
- *
- * On success, returns the length of the string (not including the trailing
- * NUL).
- *
- * If access to userspace fails, returns -EFAULT (some data may have been
- * copied).
- *
- * If @count is smaller than the length of the string, copies @count bytes
- * and returns @count.
- */
-#define strncpy_from_user(dest,src,count) \
-({ \
- unsigned long __sfu_src = (unsigned long)(src); \
- int __sfu_count = (int)(count); \
- long __sfu_res = -EFAULT; \
- \
- if (__access_ok(__sfu_src, __sfu_count)) \
- __sfu_res = __strncpy_from_user((unsigned long)(dest), \
- __sfu_src, __sfu_count); \
- \
- __sfu_res; \
-})
-
static inline unsigned long
copy_from_user(void *to, const void __user *from, unsigned long n)
{
return __copy_size;
}
-/**
- * strnlen_user: - Get the size of a string in user space.
- * @s: The string to measure.
- * @n: The maximum valid length
- *
- * Context: User context only. This function may sleep.
- *
- * Get the size of a NUL-terminated string in user space.
- *
- * Returns the size of the string INCLUDING the terminating NUL.
- * On exception, returns 0.
- * If the string is too long, returns a value greater than @n.
- */
-static inline long strnlen_user(const char __user *s, long n)
-{
- if (!__addr_ok(s))
- return 0;
- else
- return __strnlen_user(s, n);
-}
-
-/**
- * strlen_user: - Get the size of a string in user space.
- * @str: The string to measure.
- *
- * Context: User context only. This function may sleep.
- *
- * Get the size of a NUL-terminated string in user space.
- *
- * Returns the size of the string INCLUDING the terminating NUL.
- * On exception, returns 0.
- *
- * If there is a limit on the length of a valid string, you may wish to
- * consider using strnlen_user() instead.
- */
-#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
-
/*
* The exception table consists of pairs of addresses: the first is the
* address of an instruction that is allowed to fault, and the second is
extern void __put_user_unknown(void);
-static inline int
-__strncpy_from_user(unsigned long __dest, unsigned long __user __src, int __count)
-{
- __kernel_size_t res;
- unsigned long __dummy, _d, _s, _c;
-
- __asm__ __volatile__(
- "9:\n"
- "mov.b @%2+, %1\n\t"
- "cmp/eq #0, %1\n\t"
- "bt/s 2f\n"
- "1:\n"
- "mov.b %1, @%3\n\t"
- "dt %4\n\t"
- "bf/s 9b\n\t"
- " add #1, %3\n\t"
- "2:\n\t"
- "sub %4, %0\n"
- "3:\n"
- ".section .fixup,\"ax\"\n"
- "4:\n\t"
- "mov.l 5f, %1\n\t"
- "jmp @%1\n\t"
- " mov %9, %0\n\t"
- ".balign 4\n"
- "5: .long 3b\n"
- ".previous\n"
- ".section __ex_table,\"a\"\n"
- " .balign 4\n"
- " .long 9b,4b\n"
- ".previous"
- : "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d), "=r"(_c)
- : "0" (__count), "2" (__src), "3" (__dest), "4" (__count),
- "i" (-EFAULT)
- : "memory", "t");
-
- return res;
-}
-
-/*
- * Return the size of a string (including the ending 0 even when we have
- * exceeded the maximum string length).
- */
-static inline long __strnlen_user(const char __user *__s, long __n)
-{
- unsigned long res;
- unsigned long __dummy;
-
- __asm__ __volatile__(
- "1:\t"
- "mov.b @(%0,%3), %1\n\t"
- "cmp/eq %4, %0\n\t"
- "bt/s 2f\n\t"
- " add #1, %0\n\t"
- "tst %1, %1\n\t"
- "bf 1b\n\t"
- "2:\n"
- ".section .fixup,\"ax\"\n"
- "3:\n\t"
- "mov.l 4f, %1\n\t"
- "jmp @%1\n\t"
- " mov #0, %0\n"
- ".balign 4\n"
- "4: .long 2b\n"
- ".previous\n"
- ".section __ex_table,\"a\"\n"
- " .balign 4\n"
- " .long 1b,3b\n"
- ".previous"
- : "=z" (res), "=&r" (__dummy)
- : "0" (0), "r" (__s), "r" (__n)
- : "t");
- return res;
-}
-
#endif /* __ASM_SH_UACCESS_32_H */
extern long __put_user_asm_q(void *, long);
extern void __put_user_unknown(void);
-extern long __strnlen_user(const char *__s, long __n);
-extern int __strncpy_from_user(unsigned long __dest,
- unsigned long __user __src, int __count);
-
#endif /* __ASM_SH_UACCESS_64_H */
+++ /dev/null
-#include <asm-generic/ucontext.h>
--- /dev/null
+#ifndef __ASM_SH_WORD_AT_A_TIME_H
+#define __ASM_SH_WORD_AT_A_TIME_H
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+# include <asm-generic/word-at-a-time.h>
+#else
+/*
+ * Little-endian version cribbed from x86.
+ */
+struct word_at_a_time {
+ const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+/* Carl Chatfield / Jan Achrenius G+ version for 32-bit */
+static inline long count_masked_bytes(long mask)
+{
+ /* (000000 0000ff 00ffff ffffff) -> ( 1 1 2 3 ) */
+ long a = (0x0ff0001+mask) >> 23;
+ /* Fix the 1 for 00 case */
+ return a & mask;
+}
+
+/* Return nonzero if it has a zero */
+static inline unsigned long has_zero(unsigned long a, unsigned long *bits, const struct word_at_a_time *c)
+{
+ unsigned long mask = ((a - c->one_bits) & ~a) & c->high_bits;
+ *bits = mask;
+ return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long a, unsigned long bits, const struct word_at_a_time *c)
+{
+ return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+ bits = (bits - 1) & ~bits;
+ return bits >> 7;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+ return count_masked_bytes(mask);
+}
+#endif
+
+#endif
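For reference, the generic string routines consume these helpers roughly as
follows; this illustrative loop is not the kernel's exact code:

 /* Return the offset of the first NUL byte in a word-aligned buffer. */
 static inline long xxx_find_nul(const unsigned long *p)
 {
	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
	unsigned long data, bits;
	long offset = 0;

	for (;;) {
		data = *p++;
		if (has_zero(data, &bits, &constants)) {
			bits = prep_zero_mask(data, bits, &constants);
			bits = create_zero_mask(bits);
			return offset + find_zero(bits);
		}
		offset += sizeof(unsigned long);
	}
 }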
+++ /dev/null
-#include <asm-generic/xor.h>
+++ /dev/null
-/*
- * SH-2A UBC definitions
- *
- * Copyright (C) 2008 Kieran Bingham
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#ifndef __ASM_CPU_SH2A_UBC_H
-#define __ASM_CPU_SH2A_UBC_H
-
-#define UBC_BARA 0xfffc0400
-#define UBC_BAMRA 0xfffc0404
-#define UBC_BBRA 0xfffc04a0 /* 16 bit access */
-#define UBC_BDRA 0xfffc0408
-#define UBC_BDMRA 0xfffc040c
-
-#define UBC_BARB 0xfffc0410
-#define UBC_BAMRB 0xfffc0414
-#define UBC_BBRB 0xfffc04b0 /* 16 bit access */
-#define UBC_BDRB 0xfffc0418
-#define UBC_BDMRB 0xfffc041c
-
-#define UBC_BRCR 0xfffc04c0
-
-#endif /* __ASM_CPU_SH2A_UBC_H */
#endif /* CONFIG_MMU */
-/*
- * int __strncpy_from_user(unsigned long __dest, unsigned long __src,
- * int __count)
- *
- * Inputs:
- * (r2) target address
- * (r3) source address
- * (r4) maximum size in bytes
- *
- * Ouputs:
- * (*r2) copied data
- * (r2) -EFAULT (in case of faulting)
- * copied data (otherwise)
- */
- .global __strncpy_from_user
-__strncpy_from_user:
- pta ___strncpy_from_user1, tr0
- pta ___strncpy_from_user_done, tr1
- or r4, ZERO, r5 /* r5 = original count */
- beq/u r4, r63, tr1 /* early exit if r4==0 */
- movi -(EFAULT), r6 /* r6 = reply, no real fixup */
- or ZERO, ZERO, r7 /* r7 = data, clear top byte of data */
-
-___strncpy_from_user1:
- ld.b r3, 0, r7 /* Fault address: only in reading */
- st.b r2, 0, r7
- addi r2, 1, r2
- addi r3, 1, r3
- beq/u ZERO, r7, tr1
- addi r4, -1, r4 /* return real number of copied bytes */
- bne/l ZERO, r4, tr0
-
-___strncpy_from_user_done:
- sub r5, r4, r6 /* If done, return copied */
-
-___strncpy_from_user_exit:
- or r6, ZERO, r2
- ptabs LINK, tr0
- blink tr0, ZERO
-
-/*
- * extern long __strnlen_user(const char *__s, long __n)
- *
- * Inputs:
- * (r2) source address
- * (r3) source size in bytes
- *
- * Ouputs:
- * (r2) -EFAULT (in case of faulting)
- * string length (otherwise)
- */
- .global __strnlen_user
-__strnlen_user:
- pta ___strnlen_user_set_reply, tr0
- pta ___strnlen_user1, tr1
- or ZERO, ZERO, r5 /* r5 = counter */
- movi -(EFAULT), r6 /* r6 = reply, no real fixup */
- or ZERO, ZERO, r7 /* r7 = data, clear top byte of data */
- beq r3, ZERO, tr0
-
-___strnlen_user1:
- ldx.b r2, r5, r7 /* Fault address: only in reading */
- addi r3, -1, r3 /* No real fixup */
- addi r5, 1, r5
- beq r3, ZERO, tr0
- bne r7, ZERO, tr1
-! The line below used to be active. This meant led to a junk byte lying between each pair
-! of entries in the argv & envp structures in memory. Whilst the program saw the right data
-! via the argv and envp arguments to main, it meant the 'flat' representation visible through
-! /proc/$pid/cmdline was corrupt, causing trouble with ps, for example.
-! addi r5, 1, r5 /* Include '\0' */
-
-___strnlen_user_set_reply:
- or r5, ZERO, r6 /* If done, return counter */
-
-___strnlen_user_exit:
- or r6, ZERO, r2
- ptabs LINK, tr0
- blink tr0, ZERO
-
/*
* extern long __get_user_asm_?(void *val, long addr)
*
.long ___copy_user2, ___copy_user_exit
.long ___clear_user1, ___clear_user_exit
#endif
- .long ___strncpy_from_user1, ___strncpy_from_user_exit
- .long ___strnlen_user1, ___strnlen_user_exit
.long ___get_user_asm_b1, ___get_user_asm_b_exit
.long ___get_user_asm_w1, ___get_user_asm_w_exit
.long ___get_user_asm_l1, ___get_user_asm_l_exit
#include <linux/sched.h>
#include <linux/export.h>
#include <linux/stackprotector.h>
+#include <asm/fpu.h>
struct kmem_cache *task_xstate_cachep = NULL;
unsigned int xstate_size;
#include <asm/switch_to.h>
struct task_struct *last_task_used_math = NULL;
+struct pt_regs fake_swapper_regs = { 0, };
void show_regs(struct pt_regs *regs)
{
EXPORT_SYMBOL(__get_user_asm_w);
EXPORT_SYMBOL(__get_user_asm_l);
EXPORT_SYMBOL(__get_user_asm_q);
-EXPORT_SYMBOL(__strnlen_user);
-EXPORT_SYMBOL(__strncpy_from_user);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(__copy_user);
pxor IN3, STATE4
movaps IN4, IV
#else
- pxor (INP), STATE2
- pxor 0x10(INP), STATE3
pxor IN1, STATE4
movaps IN2, IV
+ movups (INP), IN1
+ pxor IN1, STATE2
+ movups 0x10(INP), IN2
+ pxor IN2, STATE3
#endif
movups STATE1, (OUTP)
movups STATE2, 0x10(OUTP)
bool ret = false;
struct pvclock_vcpu_time_info *src;
- /*
- * per_cpu() is safe here because this function is only called from
- * timer functions where preemption is already disabled.
- */
- WARN_ON(!in_atomic());
src = &__get_cpu_var(hv_clock);
if ((src->flags & PVCLOCK_GUEST_STOPPED) != 0) {
__this_cpu_and(hv_clock.flags, ~PVCLOCK_GUEST_STOPPED);
if ((i == cpu) || (has_mc && match_llc(c, o)))
link_mask(llc_shared, cpu, i);
+ }
+
+ /*
+ * This needs a separate iteration over the cpus because we rely on all
+ * cpu_sibling_mask links to be set-up.
+ */
+ for_each_cpu(i, cpu_sibling_setup_mask) {
+ o = &cpu_data(i);
+
if ((i == cpu) || (has_mc && match_mc(c, o))) {
link_mask(core, cpu, i);
void *map;
int ret;
- if (__range_not_ok(from, n, TASK_SIZE) == 0)
+ if (__range_not_ok(from, n, TASK_SIZE))
return len;
do {
map->lock = regmap_lock_mutex;
map->unlock = regmap_unlock_mutex;
}
- map->format.buf_size = (config->reg_bits + config->val_bits) / 8;
map->format.reg_bytes = DIV_ROUND_UP(config->reg_bits, 8);
map->format.pad_bytes = config->pad_bits / 8;
map->format.val_bytes = DIV_ROUND_UP(config->val_bits, 8);
- map->format.buf_size += map->format.pad_bytes;
+ map->format.buf_size = DIV_ROUND_UP(config->reg_bits +
+ config->val_bits + config->pad_bits, 8);
map->reg_shift = config->pad_bits % 8;
if (config->reg_stride)
map->reg_stride = config->reg_stride;
ret = regcache_init(map, config);
if (ret < 0)
- goto err_free_workbuf;
+ goto err_debugfs;
/* Add a devres resource for dev_get_regmap() */
m = devres_alloc(dev_get_regmap_release, sizeof(*m), GFP_KERNEL);
err_cache:
regcache_exit(map);
-err_free_workbuf:
+err_debugfs:
+ regmap_debugfs_exit(map);
kfree(map->work_buf);
err_map:
kfree(map);
return ret;
}
+EXPORT_SYMBOL_GPL(regmap_reinit_cache);
/**
* regmap_exit(): Free a previously allocated register map
bcma_chipco_chipctl_maskset(cc, 0, ~0, 0x7);
break;
case 0x4331:
- /* BCM4331 workaround is SPROM-related, we put it in sprom.c */
+ case 43431:
+ /* Ext PA lines must be enabled for tx on BCM4331 */
+ bcma_chipco_bcm4331_ext_pa_lines_ctl(cc, true);
break;
case 43224:
if (bus->chipinfo.rev == 0) {
int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
bool enable)
{
- struct pci_dev *pdev = pc->core->bus->host_pci;
+ struct pci_dev *pdev;
u32 coremask, tmp;
int err = 0;
- if (core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
+ if (!pc || core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
/* This bcma device is not on a PCI host-bus. So the IRQs are
* not routed through the PCI core.
* So we must not enable routing through the PCI core. */
goto out;
}
+ pdev = pc->core->bus->host_pci;
+
err = pci_read_config_dword(pdev, BCMA_PCI_IRQMASK, &tmp);
if (err)
goto out;
if (!sprom)
return -ENOMEM;
- if (bus->chipinfo.id == 0x4331)
+ if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, false);
pr_debug("SPROM offset 0x%x\n", offset);
bcma_sprom_read(bus, offset, sprom);
- if (bus->chipinfo.id == 0x4331)
+ if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, true);
err = bcma_sprom_valid(sprom);
/* data ready? */
if (readl(trng->base + TRNG_ODATA) & 1) {
*data = readl(trng->base + TRNG_ODATA);
+		/*
+		 * Ensure "data ready" is only set again AFTER the next data
+		 * word is ready, in case it got set between checking ISR
+		 * and reading ODATA, so we don't risk re-reading the
+		 * same word.
+		 */
+ readl(trng->base + TRNG_ISR);
return 4;
} else
return 0;
unsigned long next_match_value;
unsigned long max_match_value;
unsigned long rate;
- spinlock_t lock;
+ raw_spinlock_t lock;
struct clock_event_device ced;
struct clocksource cs;
unsigned long total_cycles;
};
-static DEFINE_SPINLOCK(sh_cmt_lock);
+static DEFINE_RAW_SPINLOCK(sh_cmt_lock);
#define CMSTR -1 /* shared register */
#define CMCSR 0 /* channel register */
unsigned long flags, value;
/* start stop register shared by multiple timer channels */
- spin_lock_irqsave(&sh_cmt_lock, flags);
+ raw_spin_lock_irqsave(&sh_cmt_lock, flags);
value = sh_cmt_read(p, CMSTR);
if (start)
value &= ~(1 << cfg->timer_bit);
sh_cmt_write(p, CMSTR, value);
- spin_unlock_irqrestore(&sh_cmt_lock, flags);
+ raw_spin_unlock_irqrestore(&sh_cmt_lock, flags);
}
static int sh_cmt_enable(struct sh_cmt_priv *p, unsigned long *rate)
{
unsigned long flags;
- spin_lock_irqsave(&p->lock, flags);
+ raw_spin_lock_irqsave(&p->lock, flags);
__sh_cmt_set_next(p, delta);
- spin_unlock_irqrestore(&p->lock, flags);
+ raw_spin_unlock_irqrestore(&p->lock, flags);
}
static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
int ret = 0;
unsigned long flags;
- spin_lock_irqsave(&p->lock, flags);
+ raw_spin_lock_irqsave(&p->lock, flags);
if (!(p->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE)))
ret = sh_cmt_enable(p, &p->rate);
if ((flag == FLAG_CLOCKSOURCE) && (!(p->flags & FLAG_CLOCKEVENT)))
__sh_cmt_set_next(p, p->max_match_value);
out:
- spin_unlock_irqrestore(&p->lock, flags);
+ raw_spin_unlock_irqrestore(&p->lock, flags);
return ret;
}
unsigned long flags;
unsigned long f;
- spin_lock_irqsave(&p->lock, flags);
+ raw_spin_lock_irqsave(&p->lock, flags);
f = p->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE);
p->flags &= ~flag;
if ((flag == FLAG_CLOCKEVENT) && (p->flags & FLAG_CLOCKSOURCE))
__sh_cmt_set_next(p, p->max_match_value);
- spin_unlock_irqrestore(&p->lock, flags);
+ raw_spin_unlock_irqrestore(&p->lock, flags);
}
static struct sh_cmt_priv *cs_to_sh_cmt(struct clocksource *cs)
unsigned long value;
int has_wrapped;
- spin_lock_irqsave(&p->lock, flags);
+ raw_spin_lock_irqsave(&p->lock, flags);
value = p->total_cycles;
raw = sh_cmt_get_counter(p, &has_wrapped);
if (unlikely(has_wrapped))
raw += p->match_value + 1;
- spin_unlock_irqrestore(&p->lock, flags);
+ raw_spin_unlock_irqrestore(&p->lock, flags);
return value + raw;
}
p->max_match_value = (1 << p->width) - 1;
p->match_value = p->max_match_value;
- spin_lock_init(&p->lock);
+ raw_spin_lock_init(&p->lock);
if (clockevent_rating)
sh_cmt_register_clockevent(p, name, clockevent_rating);
struct clock_event_device ced;
};
-static DEFINE_SPINLOCK(sh_mtu2_lock);
+static DEFINE_RAW_SPINLOCK(sh_mtu2_lock);
#define TSTR -1 /* shared register */
#define TCR 0 /* channel register */
unsigned long flags, value;
/* start stop register shared by multiple timer channels */
- spin_lock_irqsave(&sh_mtu2_lock, flags);
+ raw_spin_lock_irqsave(&sh_mtu2_lock, flags);
value = sh_mtu2_read(p, TSTR);
if (start)
value &= ~(1 << cfg->timer_bit);
sh_mtu2_write(p, TSTR, value);
- spin_unlock_irqrestore(&sh_mtu2_lock, flags);
+ raw_spin_unlock_irqrestore(&sh_mtu2_lock, flags);
}
static int sh_mtu2_enable(struct sh_mtu2_priv *p)
struct clocksource cs;
};
-static DEFINE_SPINLOCK(sh_tmu_lock);
+static DEFINE_RAW_SPINLOCK(sh_tmu_lock);
#define TSTR -1 /* shared register */
#define TCOR 0 /* channel register */
unsigned long flags, value;
/* start stop register shared by multiple timer channels */
- spin_lock_irqsave(&sh_tmu_lock, flags);
+ raw_spin_lock_irqsave(&sh_tmu_lock, flags);
value = sh_tmu_read(p, TSTR);
if (start)
value &= ~(1 << cfg->timer_bit);
sh_tmu_write(p, TSTR, value);
- spin_unlock_irqrestore(&sh_tmu_lock, flags);
+ raw_spin_unlock_irqrestore(&sh_tmu_lock, flags);
}
static int sh_tmu_enable(struct sh_tmu_priv *p)
sh_tmu_enable(p);
- /* TODO: calculate good shift from rate and counter bit width */
-
- ced->shift = 32;
- ced->mult = div_sc(p->rate, NSEC_PER_SEC, ced->shift);
- ced->max_delta_ns = clockevent_delta2ns(0xffffffff, ced);
- ced->min_delta_ns = 5000;
+ clockevents_config(ced, p->rate);
if (periodic) {
p->periodic = (p->rate + HZ/2) / HZ;
ced->set_mode = sh_tmu_clock_event_mode;
dev_info(&p->pdev->dev, "used for clock events\n");
- clockevents_register_device(ced);
+
+ clockevents_config_and_register(ced, 1, 0x300, 0xffffffff);
ret = setup_irq(p->irqaction.irq, &p->irqaction);
if (ret) {
struct intel_load_detect_pipe tmp;
if (I915_HAS_HOTPLUG(dev)) {
- /* We can not rely on the HPD pin always being correctly wired
- * up, for example many KVM do not pass it through, and so
- * only trust an assertion that the monitor is connected.
- */
if (intel_crt_detect_hotplug(connector)) {
DRM_DEBUG_KMS("CRT detected via hotplug\n");
return connector_status_connected;
- } else
+ } else {
DRM_DEBUG_KMS("CRT not detected via hotplug\n");
+ return connector_status_disconnected;
+ }
}
if (intel_crt_detect_ddc(connector))
u32 cb_color_view[12];
u32 cb_color_pitch[12];
u32 cb_color_slice[12];
+ u32 cb_color_slice_idx[12];
u32 cb_color_attrib[12];
u32 cb_color_cmask_slice[8];/* unused */
u32 cb_color_fmask_slice[8];/* unused */
track->cb_color_info[i] = 0;
track->cb_color_view[i] = 0xFFFFFFFF;
track->cb_color_pitch[i] = 0;
- track->cb_color_slice[i] = 0;
+ track->cb_color_slice[i] = 0xfffffff;
+ track->cb_color_slice_idx[i] = 0;
}
track->cb_target_mask = 0xFFFFFFFF;
track->cb_shader_mask = 0xFFFFFFFF;
track->cb_dirty = true;
+ track->db_depth_slice = 0xffffffff;
track->db_depth_view = 0xFFFFC000;
track->db_depth_size = 0xFFFFFFFF;
track->db_depth_control = 0xFFFFFFFF;
{
struct evergreen_cs_track *track = p->track;
unsigned palign, halign, tileb, slice_pt;
+ unsigned mtile_pr, mtile_ps, mtileb;
tileb = 64 * surf->bpe * surf->nsamples;
- palign = track->group_size / (8 * surf->bpe * surf->nsamples);
- palign = MAX(8, palign);
slice_pt = 1;
if (tileb > surf->tsplit) {
slice_pt = tileb / surf->tsplit;
/* macro tile width & height */
palign = (8 * surf->bankw * track->npipes) * surf->mtilea;
halign = (8 * surf->bankh * surf->nbanks) / surf->mtilea;
- surf->layer_size = surf->nbx * surf->nby * surf->bpe * slice_pt;
+ mtileb = (palign / 8) * (halign / 8) * tileb;
+ mtile_pr = surf->nbx / palign;
+ mtile_ps = (mtile_pr * surf->nby) / halign;
+ surf->layer_size = mtile_ps * mtileb * slice_pt;
surf->base_align = (palign / 8) * (halign / 8) * tileb;
surf->palign = palign;
surf->halign = halign;
offset += surf.layer_size * mslice;
if (offset > radeon_bo_size(track->cb_color_bo[id])) {
+ /* old ddx are broken they allocate bo with w*h*bpp but
+ * program slice with ALIGN(h, 8), catch this and patch
+ * command stream.
+ */
+ if (!surf.mode) {
+ volatile u32 *ib = p->ib.ptr;
+ unsigned long tmp, nby, bsize, size, min = 0;
+
+ /* find the height the ddx wants */
+ if (surf.nby > 8) {
+ min = surf.nby - 8;
+ }
+ bsize = radeon_bo_size(track->cb_color_bo[id]);
+ tmp = track->cb_color_bo_offset[id] << 8;
+ for (nby = surf.nby; nby > min; nby--) {
+ size = nby * surf.nbx * surf.bpe * surf.nsamples;
+ if ((tmp + size * mslice) <= bsize) {
+ break;
+ }
+ }
+ if (nby > min) {
+ surf.nby = nby;
+ slice = ((nby * surf.nbx) / 64) - 1;
+ if (!evergreen_surface_check(p, &surf, "cb")) {
+ /* check if this one works */
+ tmp += surf.layer_size * mslice;
+ if (tmp <= bsize) {
+ ib[track->cb_color_slice_idx[id]] = slice;
+ goto old_ddx_ok;
+ }
+ }
+ }
+ }
dev_warn(p->dev, "%s:%d cb[%d] bo too small (layer size %d, "
"offset %d, max layer %d, bo size %ld, slice %d)\n",
__func__, __LINE__, id, surf.layer_size,
surf.tsplit, surf.mtilea);
return -EINVAL;
}
+old_ddx_ok:
return 0;
}
case CB_COLOR7_SLICE:
tmp = (reg - CB_COLOR0_SLICE) / 0x3c;
track->cb_color_slice[tmp] = radeon_get_ib_value(p, idx);
+ track->cb_color_slice_idx[tmp] = idx;
track->cb_dirty = true;
break;
case CB_COLOR8_SLICE:
case CB_COLOR11_SLICE:
tmp = ((reg - CB_COLOR8_SLICE) / 0x1c) + 8;
track->cb_color_slice[tmp] = radeon_get_ib_value(p, idx);
+ track->cb_color_slice_idx[tmp] = idx;
track->cb_dirty = true;
break;
case CB_COLOR0_ATTRIB:
* 2.13.0 - virtual memory support, streamout
* 2.14.0 - add evergreen tiling informations
* 2.15.0 - add max_pipes query
+ * 2.16.0 - fix evergreen 2D tiled surface calculation
*/
#define KMS_DRIVER_MAJOR 2
-#define KMS_DRIVER_MINOR 15
+#define KMS_DRIVER_MINOR 16
#define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev);
(*destroy)(bo);
else
kfree(bo);
+ ttm_mem_global_free(mem_glob, acc_size);
return -EINVAL;
}
bo->destroy = destroy;
struct ttm_buffer_object **p_bo)
{
struct ttm_buffer_object *bo;
- struct ttm_mem_global *mem_glob = bdev->glob->mem_glob;
size_t acc_size;
int ret;
- acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct ttm_buffer_object));
- ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false);
- if (unlikely(ret != 0))
- return ret;
-
bo = kzalloc(sizeof(*bo), GFP_KERNEL);
-
- if (unlikely(bo == NULL)) {
- ttm_mem_global_free(mem_glob, acc_size);
+ if (unlikely(bo == NULL))
return -ENOMEM;
- }
+ acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct ttm_buffer_object));
ret = ttm_bo_init(bdev, bo, size, type, placement, page_alignment,
buffer_start, interruptible,
persistent_swap_storage, acc_size, NULL, NULL);
return NULL;
}
+int vga_switcheroo_get_client_state(struct pci_dev *pdev)
+{
+ struct vga_switcheroo_client *client;
+
+ client = find_client_from_pci(&vgasr_priv.clients, pdev);
+ if (!client)
+ return VGA_SWITCHEROO_NOT_FOUND;
+ if (!vgasr_priv.active)
+ return VGA_SWITCHEROO_INIT;
+ return client->pwr_state;
+}
+EXPORT_SYMBOL(vga_switcheroo_get_client_state);
+
void vga_switcheroo_unregister_client(struct pci_dev *pdev)
{
struct vga_switcheroo_client *client;
vga_switchon(new_client);
vga_set_default_device(new_client->pdev);
- set_audio_state(new_client->id, VGA_SWITCHEROO_ON);
-
return 0;
}
active->active = false;
+ set_audio_state(active->id, VGA_SWITCHEROO_OFF);
+
if (new_client->fb_info) {
struct fb_event event;
event.info = new_client->fb_info;
if (new_client->ops->reprobe)
new_client->ops->reprobe(new_client->pdev);
- set_audio_state(active->id, VGA_SWITCHEROO_OFF);
-
if (active->pwr_state == VGA_SWITCHEROO_ON)
vga_switchoff(active);
+ set_audio_state(new_client->id, VGA_SWITCHEROO_ON);
+
new_client->active = true;
return 0;
}
/* pwr off the device not in use */
if (strncmp(usercmd, "OFF", 3) == 0) {
list_for_each_entry(client, &vgasr_priv.clients, list) {
- if (client->active)
+ if (client->active || client_is_audio(client))
continue;
+ set_audio_state(client->id, VGA_SWITCHEROO_OFF);
if (client->pwr_state == VGA_SWITCHEROO_ON)
vga_switchoff(client);
}
/* pwr on the device not in use */
if (strncmp(usercmd, "ON", 2) == 0) {
list_for_each_entry(client, &vgasr_priv.clients, list) {
- if (client->active)
+ if (client->active || client_is_audio(client))
continue;
if (client->pwr_state == VGA_SWITCHEROO_OFF)
vga_switchon(client);
+ set_audio_state(client->id, VGA_SWITCHEROO_ON);
}
goto out;
}
config LEDS_ASIC3
bool "LED support for the HTC ASIC3"
- depends on LEDS_CLASS
+ depends on LEDS_CLASS=y
depends on MFD_ASIC3
default y
help
config LEDS_RENESAS_TPU
bool "LED support for Renesas TPU"
- depends on LEDS_CLASS && HAVE_CLK && GENERIC_GPIO
+ depends on LEDS_CLASS=y && HAVE_CLK && GENERIC_GPIO
help
This option enables build of the LED TPU platform driver,
suitable to drive any TPU channel on newer Renesas SoCs.
led_cdev->brightness = led_cdev->brightness_get(led_cdev);
}
-static ssize_t led_brightness_show(struct device *dev,
+static ssize_t led_brightness_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct led_classdev *led_cdev = dev_get_drvdata(dev);
if (!led_cdev->blink_brightness)
led_cdev->blink_brightness = led_cdev->max_brightness;
- if (led_get_trigger_data(led_cdev) &&
- delay_on == led_cdev->blink_delay_on &&
- delay_off == led_cdev->blink_delay_off)
- return;
-
- led_stop_software_blink(led_cdev);
-
led_cdev->blink_delay_on = delay_on;
led_cdev->blink_delay_off = delay_off;
#include <net/route.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>
+#include <net/pkt_sched.h>
#include "bonding.h"
#include "bond_3ad.h"
#include "bond_alb.h"
return next;
}
-#define bond_queue_mapping(skb) (*(u16 *)((skb)->cb))
-
/**
* bond_dev_queue_xmit - Prepare skb for xmit.
*
{
skb->dev = slave_dev;
- skb->queue_mapping = bond_queue_mapping(skb);
+ BUILD_BUG_ON(sizeof(skb->queue_mapping) !=
+ sizeof(qdisc_skb_cb(skb)->bond_queue_mapping));
+ skb->queue_mapping = qdisc_skb_cb(skb)->bond_queue_mapping;
if (unlikely(netpoll_tx_running(slave_dev)))
bond_netpoll_send_skb(bond_get_slave_by_dev(bond, slave_dev), skb);
/*
* Save the original txq to restore before passing to the driver
*/
- bond_queue_mapping(skb) = skb->queue_mapping;
+ qdisc_skb_cb(skb)->bond_queue_mapping = skb->queue_mapping;
if (unlikely(txq >= dev->real_num_tx_queues)) {
do {
}
}
- pr_info("%s: Unable to set %.*s as primary slave.\n",
- bond->dev->name, (int)strlen(buf) - 1, buf);
+ strncpy(bond->params.primary, ifname, IFNAMSIZ);
+ bond->params.primary[IFNAMSIZ - 1] = 0;
+
+ pr_info("%s: Recording %s as primary, "
+ "but it has not been enslaved to %s yet.\n",
+ bond->dev->name, ifname, bond->dev->name);
out:
write_unlock_bh(&bond->curr_slave_lock);
read_unlock(&bond->lock);
*
* We iterate from priv->tx_echo to priv->tx_next and check if the
* packet has been transmitted, echo it back to the CAN framework.
- * If we discover a not yet transmitted package, stop looking for more.
+ * If we discover a not yet transmitted packet, stop looking for more.
*/
static void c_can_do_tx(struct net_device *dev)
{
for (/* nix */; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) {
msg_obj_no = get_tx_echo_msg_obj(priv);
val = c_can_read_reg32(priv, &priv->regs->txrqst1);
- if (!(val & (1 << msg_obj_no))) {
+ if (!(val & (1 << (msg_obj_no - 1)))) {
can_get_echo_skb(dev,
msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);
stats->tx_bytes += priv->read_reg(priv,
& IF_MCONT_DLC_MASK;
stats->tx_packets++;
c_can_inval_msg_object(dev, 0, msg_obj_no);
+ } else {
+ break;
}
}
struct net_device *dev = napi->dev;
struct c_can_priv *priv = netdev_priv(dev);
- irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
+ irqstatus = priv->irqstatus;
if (!irqstatus)
goto end;
static irqreturn_t c_can_isr(int irq, void *dev_id)
{
- u16 irqstatus;
struct net_device *dev = (struct net_device *)dev_id;
struct c_can_priv *priv = netdev_priv(dev);
- irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
- if (!irqstatus)
+ priv->irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
+ if (!priv->irqstatus)
return IRQ_NONE;
/* disable all interrupts and schedule the NAPI */
goto exit_irq_fail;
}
+ napi_enable(&priv->napi);
+
/* start the c_can controller */
c_can_start(dev);
- napi_enable(&priv->napi);
netif_start_queue(dev);
return 0;
unsigned int tx_next;
unsigned int tx_echo;
void *priv; /* for board-specific data */
+ u16 irqstatus;
};
struct net_device *alloc_c_can_dev(void);
struct cc770_platform_data *pdata = pdev->dev.platform_data;
priv->can.clock.freq = pdata->osc_freq;
- if (priv->cpu_interface | CPUIF_DSC)
+ if (priv->cpu_interface & CPUIF_DSC)
priv->can.clock.freq /= 2;
priv->clkout = pdata->cor;
priv->bus_config = pdata->bcr;
rtnl_lock();
err = __rtnl_link_register(&dummy_link_ops);
- for (i = 0; i < numdummies && !err; i++)
+ for (i = 0; i < numdummies && !err; i++) {
err = dummy_init_one();
+ cond_resched();
+ }
if (err < 0)
__rtnl_link_unregister(&dummy_link_ops);
rtnl_unlock();
#define ETH_RX_ERROR_FALGS ETH_FAST_PATH_RX_CQE_PHY_DECODE_ERR_FLG
-#define BNX2X_IP_CSUM_ERR(cqe) \
- (!((cqe)->fast_path_cqe.status_flags & \
- ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG) && \
- ((cqe)->fast_path_cqe.type_error_flags & \
- ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG))
-
-#define BNX2X_L4_CSUM_ERR(cqe) \
- (!((cqe)->fast_path_cqe.status_flags & \
- ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG) && \
- ((cqe)->fast_path_cqe.type_error_flags & \
- ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
-
-#define BNX2X_RX_CSUM_OK(cqe) \
- (!(BNX2X_L4_CSUM_ERR(cqe) || BNX2X_IP_CSUM_ERR(cqe)))
-
#define BNX2X_PRS_FLAG_OVERETH_IPV4(flags) \
(((le16_to_cpu(flags) & \
PARSING_FLAGS_OVER_ETHERNET_PROTOCOL) >> \
return 0;
}
+static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe,
+ struct bnx2x_fastpath *fp)
+{
+ /* Do nothing if no IP/L4 csum validation was done */
+
+ if (cqe->fast_path_cqe.status_flags &
+ (ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG |
+ ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG))
+ return;
+
+ /* If both IP/L4 validation were done, check if an error was found. */
+
+ if (cqe->fast_path_cqe.type_error_flags &
+ (ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG |
+ ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
+ fp->eth_q_stats.hw_csum_err++;
+ else
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
{
skb_checksum_none_assert(skb);
- if (bp->dev->features & NETIF_F_RXCSUM) {
+ if (bp->dev->features & NETIF_F_RXCSUM)
+ bnx2x_csum_validate(skb, cqe, fp);
- if (likely(BNX2X_RX_CSUM_OK(cqe)))
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- else
- fp->eth_q_stats.hw_csum_err++;
- }
skb_record_rx_queue(skb, fp->rx_queue);
}
}
- if (tg3_flag(tp, 5755_PLUS))
+ if (tg3_flag(tp, 5755_PLUS) ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
tg3_flag_set(tp, SHORT_DMA_BUG);
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
copied = make_tx_wrbs(adapter, txq, skb, wrb_cnt, dummy_wrb);
if (copied) {
+ int gso_segs = skb_shinfo(skb)->gso_segs;
+
/* record the sent skb in the sent_skb table */
BUG_ON(txo->sent_skb_list[start]);
txo->sent_skb_list[start] = skb;
be_txq_notify(adapter, txq->id, wrb_cnt);
- be_tx_stats_update(txo, wrb_cnt, copied,
- skb_shinfo(skb)->gso_segs, stopped);
+ be_tx_stats_update(txo, wrb_cnt, copied, gso_segs, stopped);
} else {
txq->head = start;
dev_kfree_skb_any(skb);
* When SoL/IDER sessions are active, autoneg/speed/duplex
* cannot be changed
*/
- if (hw->phy.ops.check_reset_block(hw)) {
+ if (hw->phy.ops.check_reset_block &&
+ hw->phy.ops.check_reset_block(hw)) {
e_err("Cannot change link characteristics when SoL/IDER is active.\n");
return -EINVAL;
}
* PHY loopback cannot be performed if SoL/IDER
* sessions are active
*/
- if (hw->phy.ops.check_reset_block(hw)) {
+ if (hw->phy.ops.check_reset_block &&
+ hw->phy.ops.check_reset_block(hw)) {
e_err("Cannot do PHY loopback test when SoL/IDER is active.\n");
*data = 0;
goto out;
* In the case of the phy reset being blocked, we already have a link.
* We do not need to set it up again.
*/
- if (hw->phy.ops.check_reset_block(hw))
+ if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
return 0;
/*
adapter->hw.phy.ms_type = e1000_ms_hw_default;
}
- if (hw->phy.ops.check_reset_block(hw))
+ if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
e_info("PHY reset is blocked due to SOL/IDER session.\n");
/* Set initial default active device features */
if (!(adapter->flags & FLAG_HAS_AMT))
e1000e_release_hw_control(adapter);
err_eeprom:
- if (!hw->phy.ops.check_reset_block(hw))
+ if (hw->phy.ops.check_reset_block && !hw->phy.ops.check_reset_block(hw))
e1000_phy_hw_reset(&adapter->hw);
err_hw_init:
kfree(adapter->tx_ring);
s32 ret_val;
u32 ctrl;
- ret_val = phy->ops.check_reset_block(hw);
- if (ret_val)
- return 0;
+ if (phy->ops.check_reset_block) {
+ ret_val = phy->ops.check_reset_block(hw);
+ if (ret_val)
+ return 0;
+ }
ret_val = phy->ops.acquire(hw);
if (ret_val)
union ixgbe_adv_rx_desc *rx_desc,
struct sk_buff *skb)
{
+ struct net_device *dev = rx_ring->netdev;
+
ixgbe_update_rsc_stats(rx_ring, skb);
ixgbe_rx_hash(rx_ring, rx_desc, skb);
ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector, skb);
#endif
- if (ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
+ if ((dev->features & NETIF_F_HW_VLAN_RX) &&
+ ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
u16 vid = le16_to_cpu(rx_desc->wb.upper.vlan);
__vlan_hwaccel_put_tag(skb, vid);
}
skb_record_rx_queue(skb, rx_ring->queue_index);
- skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ skb->protocol = eth_type_trans(skb, dev);
}
static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,
if (hw->mac.type == ixgbe_mac_82598EB)
netif_set_gso_max_size(adapter->netdev, 32768);
-
- /* Enable VLAN tag insert/strip */
- adapter->netdev->features |= NETIF_F_HW_VLAN_RX;
-
hw->mac.ops.set_vfta(&adapter->hw, 0, 0, true);
#ifdef IXGBE_FCOE
{
struct ixgbe_adapter *adapter = netdev_priv(netdev);
-#ifdef CONFIG_DCB
- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
- features &= ~NETIF_F_HW_VLAN_RX;
-#endif
-
/* return error if RXHASH is being enabled when RSS is not supported */
if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED))
features &= ~NETIF_F_RXHASH;
if (!(adapter->flags2 & IXGBE_FLAG2_RSC_CAPABLE))
features &= ~NETIF_F_LRO;
-
return features;
}
need_reset = true;
}
+ if (features & NETIF_F_HW_VLAN_RX)
+ ixgbe_vlan_strip_enable(adapter);
+ else
+ ixgbe_vlan_strip_disable(adapter);
+
if (changed & NETIF_F_RXALL)
need_reset = true;
/*
* Hardware-specific parameters.
*/
+#if defined(CONFIG_HAVE_CLK)
struct clk *clk;
+#endif
unsigned int t_clk;
};
mp->dev = dev;
/*
- * Get the clk rate, if there is one, otherwise use the default.
+ * Start with a default rate, and if there is a clock, allow
+ * it to override the default.
*/
+ mp->t_clk = 133000000;
+#if defined(CONFIG_HAVE_CLK)
mp->clk = clk_get(&pdev->dev, (pdev->id ? "1" : "0"));
if (!IS_ERR(mp->clk)) {
clk_prepare_enable(mp->clk);
mp->t_clk = clk_get_rate(mp->clk);
- } else {
- mp->t_clk = 133000000;
- printk(KERN_WARNING "Unable to get clock");
}
-
+#endif
set_params(mp, pd);
netif_set_real_num_tx_queues(dev, mp->txq_count);
netif_set_real_num_rx_queues(dev, mp->rxq_count);
phy_detach(mp->phy);
cancel_work_sync(&mp->tx_timeout_task);
+#if defined(CONFIG_HAVE_CLK)
if (!IS_ERR(mp->clk)) {
clk_disable_unprepare(mp->clk);
clk_put(mp->clk);
}
+#endif
+
free_netdev(mp->dev);
platform_set_drvdata(pdev, NULL);
struct sky2_port *sky2 = netdev_priv(dev);
netdev_features_t changed = dev->features ^ features;
- if (changed & NETIF_F_RXCSUM) {
- bool on = features & NETIF_F_RXCSUM;
- sky2_write32(sky2->hw, Q_ADDR(rxqaddr[sky2->port], Q_CSR),
- on ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
+ if ((changed & NETIF_F_RXCSUM) &&
+ !(sky2->hw->flags & SKY2_HW_NEW_LE)) {
+ sky2_write32(sky2->hw,
+ Q_ADDR(rxqaddr[sky2->port], Q_CSR),
+ (features & NETIF_F_RXCSUM)
+ ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
}
if (changed & NETIF_F_RXHASH)
/* Update stats */
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
-
- /* Free buffer */
- dev_kfree_skb_irq(skb);
}
+ dev_kfree_skb_irq(skb);
txcidx = readl(LPC_ENET_TXCONSUMEINDEX(pldat->net_base));
}
- if (netif_queue_stopped(ndev))
- netif_wake_queue(ndev);
+ if (pldat->num_used_tx_buffs <= ENET_TX_DESC/2) {
+ if (netif_queue_stopped(ndev))
+ netif_wake_queue(ndev);
+ }
}
static int __lpc_handle_recv(struct net_device *ndev, int budget)
.ndo_set_rx_mode = lpc_eth_set_multicast_list,
.ndo_do_ioctl = lpc_eth_ioctl,
.ndo_set_mac_address = lpc_set_mac_address,
+ .ndo_change_mtu = eth_change_mtu,
};
static int lpc_eth_drv_probe(struct platform_device *pdev)
if (status & LinkChg)
__rtl8169_check_link_status(dev, tp, tp->mmio_addr, true);
- napi_disable(&tp->napi);
- rtl_irq_disable(tp);
-
- napi_enable(&tp->napi);
- napi_schedule(&tp->napi);
+ rtl_irq_enable_all(tp);
}
static void rtl_task(struct work_struct *work)
if STMMAC_ETH
config STMMAC_PLATFORM
- tristate "STMMAC platform bus support"
+ bool "STMMAC Platform bus support"
depends on STMMAC_ETH
default y
---help---
If unsure, say N.
config STMMAC_PCI
- tristate "STMMAC support on PCI bus (EXPERIMENTAL)"
+ bool "STMMAC PCI bus support (EXPERIMENTAL)"
depends on STMMAC_ETH && PCI && EXPERIMENTAL
---help---
This is to select the Synopsys DWMAC available on PCI devices,
#include <linux/clk.h>
#include <linux/stmmac.h>
#include <linux/phy.h>
+#include <linux/pci.h>
#include "common.h"
#ifdef CONFIG_STMMAC_TIMER
#include "stmmac_timer.h"
extern void stmmac_set_ethtool_ops(struct net_device *netdev);
extern const struct stmmac_desc_ops enh_desc_ops;
extern const struct stmmac_desc_ops ndesc_ops;
-
int stmmac_freeze(struct net_device *ndev);
int stmmac_restore(struct net_device *ndev);
int stmmac_resume(struct net_device *ndev);
static inline int stmmac_clk_enable(struct stmmac_priv *priv)
{
if (!IS_ERR(priv->stmmac_clk))
- return clk_enable(priv->stmmac_clk);
+ return clk_prepare_enable(priv->stmmac_clk);
return 0;
}
if (IS_ERR(priv->stmmac_clk))
return;
- clk_disable(priv->stmmac_clk);
+ clk_disable_unprepare(priv->stmmac_clk);
}
static inline int stmmac_clk_get(struct stmmac_priv *priv)
{
return 0;
}
#endif /* CONFIG_HAVE_CLK */
+
+
+#ifdef CONFIG_STMMAC_PLATFORM
+extern struct platform_driver stmmac_pltfr_driver;
+static inline int stmmac_register_platform(void)
+{
+ int err;
+
+ err = platform_driver_register(&stmmac_pltfr_driver);
+ if (err)
+ pr_err("stmmac: failed to register the platform driver\n");
+
+ return err;
+}
+static inline void stmmac_unregister_platform(void)
+{
+ platform_driver_unregister(&stmmac_pltfr_driver);
+}
+#else
+static inline int stmmac_register_platform(void)
+{
+ pr_debug("stmmac: do not register the platf driver\n");
+
+ return -EINVAL;
+}
+static inline void stmmac_unregister_platform(void)
+{
+}
+#endif /* CONFIG_STMMAC_PLATFORM */
+
+#ifdef CONFIG_STMMAC_PCI
+extern struct pci_driver stmmac_pci_driver;
+static inline int stmmac_register_pci(void)
+{
+ int err;
+
+ err = pci_register_driver(&stmmac_pci_driver);
+ if (err)
+ pr_err("stmmac: failed to register the PCI driver\n");
+
+ return err;
+}
+static inline void stmmac_unregister_pci(void)
+{
+ pci_unregister_driver(&stmmac_pci_driver);
+}
+#else
+static inline int stmmac_register_pci(void)
+{
+ pr_debug("stmmac: do not register the PCI driver\n");
+
+ return -EINVAL;
+}
+static inline void stmmac_unregister_pci(void)
+{
+}
+#endif /* CONFIG_STMMAC_PCI */
/**
* stmmac_selec_desc_mode
- * @dev : device pointer
- * Description: select the Enhanced/Alternate or Normal descriptors */
+ * @priv : private structure
+ * Description: select the Enhanced/Alternate or Normal descriptors
+ */
static void stmmac_selec_desc_mode(struct stmmac_priv *priv)
{
if (priv->plat->enh_desc) {
/**
* stmmac_dvr_probe
* @device: device pointer
+ * @plat_dat: platform data pointer
+ * @addr: iobase memory address
* Description: this is the main probe function used to
* call the alloc_etherdev, allocate the priv structure.
*/
}
#endif /* CONFIG_PM */
+/* The driver can be configured w/ and w/o both the PCI and platform drivers,
+ * depending on the configuration selected.
+ */
+static int __init stmmac_init(void)
+{
+ int err_plt = 0;
+ int err_pci = 0;
+
+ err_plt = stmmac_register_platform();
+ err_pci = stmmac_register_pci();
+
+	if (err_pci && err_plt) {
+ pr_err("stmmac: driver registration failed\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void __exit stmmac_exit(void)
+{
+ stmmac_unregister_platform();
+ stmmac_unregister_pci();
+}
+
+module_init(stmmac_init);
+module_exit(stmmac_exit);
+
#ifndef MODULE
static int __init stmmac_cmdline_opt(char *str)
{
MODULE_DEVICE_TABLE(pci, stmmac_id_table);
-static struct pci_driver stmmac_driver = {
+struct pci_driver stmmac_pci_driver = {
.name = STMMAC_RESOURCE_NAME,
.id_table = stmmac_id_table,
.probe = stmmac_pci_probe,
#endif
};
-/**
- * stmmac_init_module - Entry point for the driver
- * Description: This function is the entry point for the driver.
- */
-static int __init stmmac_init_module(void)
-{
- int ret;
-
- ret = pci_register_driver(&stmmac_driver);
- if (ret < 0)
- pr_err("%s: ERROR: driver registration failed\n", __func__);
-
- return ret;
-}
-
-/**
- * stmmac_cleanup_module - Cleanup routine for the driver
- * Description: This function is the cleanup routine for the driver.
- */
-static void __exit stmmac_cleanup_module(void)
-{
- pci_unregister_driver(&stmmac_driver);
-}
-
-module_init(stmmac_init_module);
-module_exit(stmmac_cleanup_module);
-
MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet PCI driver");
MODULE_AUTHOR("Rayagond Kokatanur <rayagond.kokatanur@vayavyalabs.com>");
MODULE_AUTHOR("Giuseppe Cavallaro <peppe.cavallaro@st.com>");
};
MODULE_DEVICE_TABLE(of, stmmac_dt_ids);
-static struct platform_driver stmmac_driver = {
+struct platform_driver stmmac_pltfr_driver = {
.probe = stmmac_pltfr_probe,
.remove = stmmac_pltfr_remove,
.driver = {
},
};
-module_platform_driver(stmmac_driver);
-
MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet PLATFORM driver");
MODULE_AUTHOR("Giuseppe Cavallaro <peppe.cavallaro@st.com>");
MODULE_LICENSE("GPL");
static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
{
struct netdev_queue *txq;
- unsigned int tx_bytes;
u16 pkt_cnt, tmp;
int cons, index;
u64 cs;
netif_printk(np, tx_done, KERN_DEBUG, np->dev,
"%s() pkt_cnt[%u] cons[%d]\n", __func__, pkt_cnt, cons);
- tx_bytes = 0;
- tmp = pkt_cnt;
- while (tmp--) {
- tx_bytes += rp->tx_buffs[cons].skb->len;
+ while (pkt_cnt--)
cons = release_tx_packet(np, rp, cons);
- }
rp->cons = cons;
smp_mb();
- netdev_tx_completed_queue(txq, pkt_cnt, tx_bytes);
-
out:
if (unlikely(netif_tx_queue_stopped(txq) &&
(niu_tx_avail(rp) > NIU_TX_WAKEUP_THRESH(rp)))) {
struct tx_ring_info *rp = &np->tx_rings[i];
niu_free_tx_ring_info(np, rp);
- netdev_tx_reset_queue(netdev_get_tx_queue(np->dev, i));
}
kfree(np->tx_rings);
np->tx_rings = NULL;
prod = NEXT_TX(rp, prod);
}
- netdev_tx_sent_queue(txq, skb->len);
-
if (prod < rp->prod)
rp->wrap_bit ^= TX_RING_KICK_WRAP;
rp->prod = prod;
depends on TILE
default y
select CRC32
+ select TILE_GXIO_MPIPE if TILEGX
+ select HIGH_RES_TIMERS if TILEGX
---help---
This is a standard Linux network device driver for the
on-chip Tilera Gigabit Ethernet and XAUI interfaces.
obj-$(CONFIG_TILE_NET) += tile_net.o
ifdef CONFIG_TILEGX
-tile_net-objs := tilegx.o mpipe.o iorpc_mpipe.o dma_queue.o
+tile_net-y := tilegx.o
else
-tile_net-objs := tilepro.o
+tile_net-y := tilepro.o
endif
--- /dev/null
+/*
+ * Copyright 2012 Tilera Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/moduleparam.h>
+#include <linux/sched.h>
+#include <linux/kernel.h> /* printk() */
+#include <linux/slab.h> /* kmalloc() */
+#include <linux/errno.h> /* error codes */
+#include <linux/types.h> /* size_t */
+#include <linux/interrupt.h>
+#include <linux/in.h>
+#include <linux/irq.h>
+#include <linux/netdevice.h> /* struct device, and other headers */
+#include <linux/etherdevice.h> /* eth_type_trans */
+#include <linux/skbuff.h>
+#include <linux/ioctl.h>
+#include <linux/cdev.h>
+#include <linux/hugetlb.h>
+#include <linux/in6.h>
+#include <linux/timer.h>
+#include <linux/hrtimer.h>
+#include <linux/ktime.h>
+#include <linux/io.h>
+#include <linux/ctype.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+
+#include <asm/checksum.h>
+#include <asm/homecache.h>
+#include <gxio/mpipe.h>
+#include <arch/sim.h>
+
+/* Default transmit lockup timeout period, in jiffies. */
+#define TILE_NET_TIMEOUT (5 * HZ)
+
+/* The maximum number of distinct channels (idesc.channel is 5 bits). */
+#define TILE_NET_CHANNELS 32
+
+/* Maximum number of idescs to handle per "poll". */
+#define TILE_NET_BATCH 128
+
+/* Maximum number of packets to handle per "poll". */
+#define TILE_NET_WEIGHT 64
+
+/* Number of entries in each iqueue. */
+#define IQUEUE_ENTRIES 512
+
+/* Number of entries in each equeue. */
+#define EQUEUE_ENTRIES 2048
+
+/* Total header bytes per equeue slot. Must be big enough for 2 bytes
+ * of NET_IP_ALIGN alignment, plus a 14-byte Ethernet (L2) header, plus
+ * up to 60 bytes of TCP header (with options). We round up to 128 to
+ * align to cache lines.
+ */
+#define HEADER_BYTES 128
+
+/* Maximum completions per cpu per device (must be a power of two).
+ * ISSUE: What is the right number here? If this is too small, then
+ * egress might block waiting for free space in a completions array.
+ * ISSUE: At the least, allocate these only for initialized echannels.
+ */
+#define TILE_NET_MAX_COMPS 64
+
+#define MAX_FRAGS (MAX_SKB_FRAGS + 1)
+
+/* Size of completions data to allocate.
+ * ISSUE: Probably more than needed since we don't use all the channels.
+ */
+#define COMPS_SIZE (TILE_NET_CHANNELS * sizeof(struct tile_net_comps))
+
+/* Size of NotifRing data to allocate. */
+#define NOTIF_RING_SIZE (IQUEUE_ENTRIES * sizeof(gxio_mpipe_idesc_t))
+
+/* Timeout to wake the per-device TX timer after we stop the queue.
+ * We don't want the timeout too short (adds overhead, and might end
+ * up causing stop/wake/stop/wake cycles) or too long (affects performance).
+ * For the 10 Gb NIC, 30 usec means roughly 30+ 1500-byte packets.
+ */
+#define TX_TIMER_DELAY_USEC 30
+
+/* Timeout to wake the per-cpu egress timer to free completions. */
+#define EGRESS_TIMER_DELAY_USEC 1000
+
+MODULE_AUTHOR("Tilera Corporation");
+MODULE_LICENSE("GPL");
+
+/* A "packet fragment" (a chunk of memory). */
+struct frag {
+ void *buf;
+ size_t length;
+};
+
+/* A single completion. */
+struct tile_net_comp {
+ /* The "complete_count" when the completion will be complete. */
+ s64 when;
+ /* The buffer to be freed when the completion is complete. */
+ struct sk_buff *skb;
+};
+
+/* The completions for a given cpu and echannel. */
+struct tile_net_comps {
+ /* The completions. */
+ struct tile_net_comp comp_queue[TILE_NET_MAX_COMPS];
+ /* The number of completions used. */
+ unsigned long comp_next;
+ /* The number of completions freed. */
+ unsigned long comp_last;
+};
+
+/* The transmit wake timer for a given cpu and echannel. */
+struct tile_net_tx_wake {
+ struct hrtimer timer;
+ struct net_device *dev;
+};
+
+/* Info for a specific cpu. */
+struct tile_net_info {
+ /* The NAPI struct. */
+ struct napi_struct napi;
+ /* Packet queue. */
+ gxio_mpipe_iqueue_t iqueue;
+ /* Our cpu. */
+ int my_cpu;
+ /* True if iqueue is valid. */
+ bool has_iqueue;
+ /* NAPI flags. */
+ bool napi_added;
+ bool napi_enabled;
+ /* Number of small sk_buffs which must still be provided. */
+ unsigned int num_needed_small_buffers;
+ /* Number of large sk_buffs which must still be provided. */
+ unsigned int num_needed_large_buffers;
+ /* A timer for handling egress completions. */
+ struct hrtimer egress_timer;
+ /* True if "egress_timer" is scheduled. */
+ bool egress_timer_scheduled;
+ /* Comps for each egress channel. */
+ struct tile_net_comps *comps_for_echannel[TILE_NET_CHANNELS];
+ /* Transmit wake timer for each egress channel. */
+ struct tile_net_tx_wake tx_wake[TILE_NET_CHANNELS];
+};
+
+/* Info for egress on a particular egress channel. */
+struct tile_net_egress {
+ /* The "equeue". */
+ gxio_mpipe_equeue_t *equeue;
+ /* The headers for TSO. */
+ unsigned char *headers;
+};
+
+/* Info for a specific device. */
+struct tile_net_priv {
+ /* Our network device. */
+ struct net_device *dev;
+ /* The primary link. */
+ gxio_mpipe_link_t link;
+ /* The primary channel, if open, else -1. */
+ int channel;
+ /* The "loopify" egress link, if needed. */
+ gxio_mpipe_link_t loopify_link;
+ /* The "loopify" egress channel, if open, else -1. */
+ int loopify_channel;
+ /* The egress channel (channel or loopify_channel). */
+ int echannel;
+ /* Total stats. */
+ struct net_device_stats stats;
+};
+
+/* Egress info, indexed by "priv->echannel" (lazily created as needed). */
+static struct tile_net_egress egress_for_echannel[TILE_NET_CHANNELS];
+
+/* Devices currently associated with each channel.
+ * NOTE: The array entry can become NULL after ifconfig down, but
+ * we do not free the underlying net_device structures, so it is
+ * safe to use a pointer after reading it from this array.
+ */
+static struct net_device *tile_net_devs_for_channel[TILE_NET_CHANNELS];
+
+/* A mutex for "tile_net_devs_for_channel". */
+static DEFINE_MUTEX(tile_net_devs_for_channel_mutex);
+
+/* The per-cpu info. */
+static DEFINE_PER_CPU(struct tile_net_info, per_cpu_info);
+
+/* The "context" for all devices. */
+static gxio_mpipe_context_t context;
+
+/* Buffer sizes and mpipe enum codes for buffer stacks.
+ * See arch/tile/include/gxio/mpipe.h for the set of possible values.
+ */
+#define BUFFER_SIZE_SMALL_ENUM GXIO_MPIPE_BUFFER_SIZE_128
+#define BUFFER_SIZE_SMALL 128
+#define BUFFER_SIZE_LARGE_ENUM GXIO_MPIPE_BUFFER_SIZE_1664
+#define BUFFER_SIZE_LARGE 1664
+
+/* The small/large "buffer stacks". */
+static int small_buffer_stack = -1;
+static int large_buffer_stack = -1;
+
+/* Amount of memory allocated for each buffer stack. */
+static size_t buffer_stack_size;
+
+/* The actual memory allocated for the buffer stacks. */
+static void *small_buffer_stack_va;
+static void *large_buffer_stack_va;
+
+/* The buckets. */
+static int first_bucket = -1;
+static int num_buckets = 1;
+
+/* The ingress irq. */
+static int ingress_irq = -1;
+
+/* Text value of tile_net.cpus if passed as a module parameter. */
+static char *network_cpus_string;
+
+/* The actual cpus in "network_cpus". */
+static struct cpumask network_cpus_map;
+
+/* If "loopify=LINK" was specified, this is "LINK". */
+static char *loopify_link_name;
+
+/* If "tile_net.custom" was specified, this is non-NULL. */
+static char *custom_str;
+
+/* The "tile_net.cpus" argument specifies the cpus that are dedicated
+ * to handle ingress packets.
+ *
+ * The parameter should be in the form "tile_net.cpus=m-n[,x-y]", where
+ * "m-n" and "x-y" are inclusive ranges of cpu numbers to dedicate to
+ * ingress processing.
+ */
+static bool network_cpus_init(void)
+{
+ char buf[1024];
+ int rc;
+
+ if (network_cpus_string == NULL)
+ return false;
+
+ rc = cpulist_parse_crop(network_cpus_string, &network_cpus_map);
+ if (rc != 0) {
+ pr_warn("tile_net.cpus=%s: malformed cpu list\n",
+ network_cpus_string);
+ return false;
+ }
+
+	/* Restrict to the set of possible cpus. */
+ cpumask_and(&network_cpus_map, &network_cpus_map, cpu_possible_mask);
+
+ if (cpumask_empty(&network_cpus_map)) {
+ pr_warn("Ignoring empty tile_net.cpus='%s'.\n",
+ network_cpus_string);
+ return false;
+ }
+
+ cpulist_scnprintf(buf, sizeof(buf), &network_cpus_map);
+ pr_info("Linux network CPUs: %s\n", buf);
+ return true;
+}
+
+module_param_named(cpus, network_cpus_string, charp, 0444);
+MODULE_PARM_DESC(cpus, "cpulist of cores that handle network interrupts");
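+
+/* Example (hypothetical values): booting with "tile_net.cpus=1-3,5"
+ * dedicates cpus 1, 2, 3 and 5 to ingress processing.
+ */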
+
+/* The "tile_net.loopify=LINK" argument causes the named device to
+ * actually use "loop0" for ingress, and "loop1" for egress. This
+ * allows an app to sit between the actual link and linux, passing
+ * (some) packets along to linux, and forwarding (some) packets sent
+ * out by linux.
+ */
+module_param_named(loopify, loopify_link_name, charp, 0444);
+MODULE_PARM_DESC(loopify, "name the device to use loop0/1 for ingress/egress");
+
+/* The "tile_net.custom" argument causes us to ignore the "conventional"
+ * classifier metadata, in particular, the "l2_offset".
+ */
+module_param_named(custom, custom_str, charp, 0444);
+MODULE_PARM_DESC(custom, "indicates a (heavily) customized classifier");
+
+/* Atomically update a statistics field.
+ * Note that on TILE-Gx, this operation is fire-and-forget on the
+ * issuing core (single-cycle dispatch) and takes only a few cycles
+ * longer than a regular store when the request reaches the home cache.
+ * No expensive bus management overhead is required.
+ */
+static void tile_net_stats_add(unsigned long value, unsigned long *field)
+{
+ BUILD_BUG_ON(sizeof(atomic_long_t) != sizeof(unsigned long));
+ atomic_long_add(value, (atomic_long_t *)field);
+}
+
+/* Allocate and push a buffer. */
+static bool tile_net_provide_buffer(bool small)
+{
+ int stack = small ? small_buffer_stack : large_buffer_stack;
+ const unsigned long buffer_alignment = 128;
+ struct sk_buff *skb;
+ int len;
+
+ len = sizeof(struct sk_buff **) + buffer_alignment;
+ len += (small ? BUFFER_SIZE_SMALL : BUFFER_SIZE_LARGE);
+ skb = dev_alloc_skb(len);
+ if (skb == NULL)
+ return false;
+
+ /* Make room for a back-pointer to 'skb' and guarantee alignment. */
+ skb_reserve(skb, sizeof(struct sk_buff **));
+ skb_reserve(skb, -(long)skb->data & (buffer_alignment - 1));
+
+ /* Save a back-pointer to 'skb'. */
+ *(struct sk_buff **)(skb->data - sizeof(struct sk_buff **)) = skb;
+
+ /* Make sure "skb" and the back-pointer have been flushed. */
+ wmb();
+
+ gxio_mpipe_push_buffer(&context, stack,
+ (void *)va_to_tile_io_addr(skb->data));
+
+ return true;
+}
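+
+/* Layout sketch: the skb_reserve() calls push skb->data past the
+ * back-pointer and round it up to a 128-byte boundary, so mPIPE is
+ * handed a 128-aligned address, and mpipe_buf_to_skb() below can
+ * recover the skb from the pointer stored just before skb->data.
+ */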
+
+/* Convert a raw mpipe buffer to its matching skb pointer. */
+static struct sk_buff *mpipe_buf_to_skb(void *va)
+{
+ /* Acquire the associated "skb". */
+ struct sk_buff **skb_ptr = va - sizeof(*skb_ptr);
+ struct sk_buff *skb = *skb_ptr;
+
+ /* Paranoia. */
+ if (skb->data != va) {
+ /* Panic here since there's a reasonable chance
+		 * that corrupt buffers mean generic memory
+ * corruption, with unpredictable system effects.
+ */
+ panic("Corrupt linux buffer! va=%p, skb=%p, skb->data=%p",
+ va, skb, skb->data);
+ }
+
+ return skb;
+}
+
+static void tile_net_pop_all_buffers(int stack)
+{
+ for (;;) {
+ tile_io_addr_t addr =
+ (tile_io_addr_t)gxio_mpipe_pop_buffer(&context, stack);
+ if (addr == 0)
+ break;
+ dev_kfree_skb_irq(mpipe_buf_to_skb(tile_io_addr_to_va(addr)));
+ }
+}
+
+/* Provide linux buffers to mPIPE. */
+static void tile_net_provide_needed_buffers(void)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+
+ while (info->num_needed_small_buffers != 0) {
+ if (!tile_net_provide_buffer(true))
+ goto oops;
+ info->num_needed_small_buffers--;
+ }
+
+ while (info->num_needed_large_buffers != 0) {
+ if (!tile_net_provide_buffer(false))
+ goto oops;
+ info->num_needed_large_buffers--;
+ }
+
+ return;
+
+oops:
+ /* Add a description to the page allocation failure dump. */
+ pr_notice("Tile %d still needs some buffers\n", info->my_cpu);
+}
+
+static inline bool filter_packet(struct net_device *dev, void *buf)
+{
+ /* Filter packets received before we're up. */
+ if (dev == NULL || !(dev->flags & IFF_UP))
+ return true;
+
+ /* Filter out packets that aren't for us. */
+ if (!(dev->flags & IFF_PROMISC) &&
+ !is_multicast_ether_addr(buf) &&
+ compare_ether_addr(dev->dev_addr, buf) != 0)
+ return true;
+
+ return false;
+}
+
+static void tile_net_receive_skb(struct net_device *dev, struct sk_buff *skb,
+ gxio_mpipe_idesc_t *idesc, unsigned long len)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct tile_net_priv *priv = netdev_priv(dev);
+
+ /* Encode the actual packet length. */
+ skb_put(skb, len);
+
+ skb->protocol = eth_type_trans(skb, dev);
+
+ /* Acknowledge "good" hardware checksums. */
+ if (idesc->cs && idesc->csum_seed_val == 0xFFFF)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ netif_receive_skb(skb);
+
+ /* Update stats. */
+ tile_net_stats_add(1, &priv->stats.rx_packets);
+ tile_net_stats_add(len, &priv->stats.rx_bytes);
+
+ /* Need a new buffer. */
+ if (idesc->size == BUFFER_SIZE_SMALL_ENUM)
+ info->num_needed_small_buffers++;
+ else
+ info->num_needed_large_buffers++;
+}
+
+/* Handle a packet. Return true if "processed", false if "filtered". */
+static bool tile_net_handle_packet(gxio_mpipe_idesc_t *idesc)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct net_device *dev = tile_net_devs_for_channel[idesc->channel];
+ uint8_t l2_offset;
+ void *va;
+ void *buf;
+ unsigned long len;
+ bool filter;
+
+ /* Drop packets for which no buffer was available.
+ * NOTE: This happens under heavy load.
+ */
+ if (idesc->be) {
+ struct tile_net_priv *priv = netdev_priv(dev);
+ tile_net_stats_add(1, &priv->stats.rx_dropped);
+ gxio_mpipe_iqueue_consume(&info->iqueue, idesc);
+ if (net_ratelimit())
+ pr_info("Dropping packet (insufficient buffers).\n");
+ return false;
+ }
+
+ /* Get the "l2_offset", if allowed. */
+ l2_offset = custom_str ? 0 : gxio_mpipe_idesc_get_l2_offset(idesc);
+
+ /* Get the raw buffer VA (includes "headroom"). */
+ va = tile_io_addr_to_va((unsigned long)(long)idesc->va);
+
+ /* Get the actual packet start/length. */
+ buf = va + l2_offset;
+ len = idesc->l2_size - l2_offset;
+
+ /* Point "va" at the raw buffer. */
+ va -= NET_IP_ALIGN;
+
+ filter = filter_packet(dev, buf);
+ if (filter) {
+ gxio_mpipe_iqueue_drop(&info->iqueue, idesc);
+ } else {
+ struct sk_buff *skb = mpipe_buf_to_skb(va);
+
+ /* Skip headroom, and any custom header. */
+ skb_reserve(skb, NET_IP_ALIGN + l2_offset);
+
+ tile_net_receive_skb(dev, skb, idesc, len);
+ }
+
+ gxio_mpipe_iqueue_consume(&info->iqueue, idesc);
+ return !filter;
+}
+
+/* Handle some packets for the current CPU.
+ *
+ * This function handles up to TILE_NET_BATCH idescs per call.
+ *
+ * ISSUE: Since we do not provide new buffers until this function is
+ * complete, we must initially provide enough buffers for each network
+ * cpu to fill its iqueue and also its batched idescs.
+ *
+ * ISSUE: The "rotting packet" race condition occurs if a packet
+ * arrives after the queue appears to be empty, and before the
+ * hypervisor interrupt is re-enabled.
+ */
+static int tile_net_poll(struct napi_struct *napi, int budget)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ unsigned int work = 0;
+ gxio_mpipe_idesc_t *idesc;
+ int i, n;
+
+ /* Process packets. */
+ while ((n = gxio_mpipe_iqueue_try_peek(&info->iqueue, &idesc)) > 0) {
+ for (i = 0; i < n; i++) {
+ if (i == TILE_NET_BATCH)
+ goto done;
+ if (tile_net_handle_packet(idesc + i)) {
+ if (++work >= budget)
+ goto done;
+ }
+ }
+ }
+
+ /* There are no packets left. */
+ napi_complete(&info->napi);
+
+ /* Re-enable hypervisor interrupts. */
+ gxio_mpipe_enable_notif_ring_interrupt(&context, info->iqueue.ring);
+
+ /* HACK: Avoid the "rotting packet" problem. */
+ if (gxio_mpipe_iqueue_try_peek(&info->iqueue, &idesc) > 0)
+ napi_schedule(&info->napi);
+
+ /* ISSUE: Handle completions? */
+
+done:
+ tile_net_provide_needed_buffers();
+
+ return work;
+}
+
+/* Handle an ingress interrupt on the current cpu. */
+static irqreturn_t tile_net_handle_ingress_irq(int irq, void *unused)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ napi_schedule(&info->napi);
+ return IRQ_HANDLED;
+}
+
+/* Free some completions. This must be called with interrupts blocked. */
+static int tile_net_free_comps(gxio_mpipe_equeue_t *equeue,
+ struct tile_net_comps *comps,
+ int limit, bool force_update)
+{
+ int n = 0;
+ while (comps->comp_last < comps->comp_next) {
+ unsigned int cid = comps->comp_last % TILE_NET_MAX_COMPS;
+ struct tile_net_comp *comp = &comps->comp_queue[cid];
+ if (!gxio_mpipe_equeue_is_complete(equeue, comp->when,
+ force_update || n == 0))
+ break;
+ dev_kfree_skb_irq(comp->skb);
+ comps->comp_last++;
+ if (++n == limit)
+ break;
+ }
+ return n;
+}
+
+/* Add a completion. This must be called with interrupts blocked.
+ * tile_net_equeue_try_reserve() will have ensured a free completion entry.
+ */
+static void add_comp(gxio_mpipe_equeue_t *equeue,
+ struct tile_net_comps *comps,
+ uint64_t when, struct sk_buff *skb)
+{
+ int cid = comps->comp_next % TILE_NET_MAX_COMPS;
+ comps->comp_queue[cid].when = when;
+ comps->comp_queue[cid].skb = skb;
+ comps->comp_next++;
+}
+
+static void tile_net_schedule_tx_wake_timer(struct net_device *dev)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct tile_net_priv *priv = netdev_priv(dev);
+
+ hrtimer_start(&info->tx_wake[priv->echannel].timer,
+ ktime_set(0, TX_TIMER_DELAY_USEC * 1000UL),
+ HRTIMER_MODE_REL_PINNED);
+}
+
+static enum hrtimer_restart tile_net_handle_tx_wake_timer(struct hrtimer *t)
+{
+ struct tile_net_tx_wake *tx_wake =
+ container_of(t, struct tile_net_tx_wake, timer);
+ netif_wake_subqueue(tx_wake->dev, smp_processor_id());
+ return HRTIMER_NORESTART;
+}
+
+/* Make sure the egress timer is scheduled. */
+static void tile_net_schedule_egress_timer(void)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+
+ if (!info->egress_timer_scheduled) {
+ hrtimer_start(&info->egress_timer,
+ ktime_set(0, EGRESS_TIMER_DELAY_USEC * 1000UL),
+ HRTIMER_MODE_REL_PINNED);
+ info->egress_timer_scheduled = true;
+ }
+}
+
+/* The "function" for "info->egress_timer".
+ *
+ * This timer will reschedule itself as long as there are any pending
+ * completions expected for this tile.
+ */
+static enum hrtimer_restart tile_net_handle_egress_timer(struct hrtimer *t)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ unsigned long irqflags;
+ bool pending = false;
+ int i;
+
+ local_irq_save(irqflags);
+
+ /* The timer is no longer scheduled. */
+ info->egress_timer_scheduled = false;
+
+ /* Free all possible comps for this tile. */
+ for (i = 0; i < TILE_NET_CHANNELS; i++) {
+ struct tile_net_egress *egress = &egress_for_echannel[i];
+ struct tile_net_comps *comps = info->comps_for_echannel[i];
+ if (comps->comp_last >= comps->comp_next)
+ continue;
+ tile_net_free_comps(egress->equeue, comps, -1, true);
+ pending = pending || (comps->comp_last < comps->comp_next);
+ }
+
+ /* Reschedule timer if needed. */
+ if (pending)
+ tile_net_schedule_egress_timer();
+
+ local_irq_restore(irqflags);
+
+ return HRTIMER_NORESTART;
+}
+
+/* Helper function for "tile_net_update()".
+ * "dev" (i.e. arg) is the device being brought up or down,
+ * or NULL if all devices are now down.
+ */
+static void tile_net_update_cpu(void *arg)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct net_device *dev = arg;
+
+ if (!info->has_iqueue)
+ return;
+
+ if (dev != NULL) {
+ if (!info->napi_added) {
+ netif_napi_add(dev, &info->napi,
+ tile_net_poll, TILE_NET_WEIGHT);
+ info->napi_added = true;
+ }
+ if (!info->napi_enabled) {
+ napi_enable(&info->napi);
+ info->napi_enabled = true;
+ }
+ enable_percpu_irq(ingress_irq, 0);
+ } else {
+ disable_percpu_irq(ingress_irq);
+ if (info->napi_enabled) {
+ napi_disable(&info->napi);
+ info->napi_enabled = false;
+ }
+ /* FIXME: Drain the iqueue. */
+ }
+}
+
+/* Helper function for tile_net_open() and tile_net_stop().
+ * Always called under tile_net_devs_for_channel_mutex.
+ */
+static int tile_net_update(struct net_device *dev)
+{
+ static gxio_mpipe_rules_t rules; /* too big to fit on the stack */
+ bool saw_channel = false;
+ int channel;
+ int rc;
+ int cpu;
+
+ gxio_mpipe_rules_init(&rules, &context);
+
+ for (channel = 0; channel < TILE_NET_CHANNELS; channel++) {
+ if (tile_net_devs_for_channel[channel] == NULL)
+ continue;
+ if (!saw_channel) {
+ saw_channel = true;
+ gxio_mpipe_rules_begin(&rules, first_bucket,
+ num_buckets, NULL);
+ gxio_mpipe_rules_set_headroom(&rules, NET_IP_ALIGN);
+ }
+ gxio_mpipe_rules_add_channel(&rules, channel);
+ }
+
+ /* NOTE: This can fail if there is no classifier.
+ * ISSUE: Can anything else cause it to fail?
+ */
+ rc = gxio_mpipe_rules_commit(&rules);
+ if (rc != 0) {
+ netdev_warn(dev, "gxio_mpipe_rules_commit failed: %d\n", rc);
+ return -EIO;
+ }
+
+ /* Update all cpus, sequentially (to protect "netif_napi_add()"). */
+ for_each_online_cpu(cpu)
+ smp_call_function_single(cpu, tile_net_update_cpu,
+ (saw_channel ? dev : NULL), 1);
+
+ /* HACK: Allow packets to flow in the simulator. */
+ if (saw_channel)
+ sim_enable_mpipe_links(0, -1);
+
+ return 0;
+}
+
+/* Allocate and initialize mpipe buffer stacks, and register them in
+ * the mPIPE TLBs, for both small and large packet sizes.
+ * This routine supports tile_net_init_mpipe(), below.
+ */
+static int init_buffer_stacks(struct net_device *dev, int num_buffers)
+{
+ pte_t hash_pte = pte_set_home((pte_t) { 0 }, PAGE_HOME_HASH);
+ int rc;
+
+	/* Compute stack bytes; we round up to 64KB and then use
+	 * alloc_pages_exact() so we get the required 64KB alignment
+	 * as well.
+	 */
+ buffer_stack_size =
+ ALIGN(gxio_mpipe_calc_buffer_stack_bytes(num_buffers),
+ 64 * 1024);
+
+ /* Allocate two buffer stack indices. */
+ rc = gxio_mpipe_alloc_buffer_stacks(&context, 2, 0, 0);
+ if (rc < 0) {
+ netdev_err(dev, "gxio_mpipe_alloc_buffer_stacks failed: %d\n",
+ rc);
+ return rc;
+ }
+ small_buffer_stack = rc;
+ large_buffer_stack = rc + 1;
+
+ /* Allocate the small memory stack. */
+ small_buffer_stack_va =
+ alloc_pages_exact(buffer_stack_size, GFP_KERNEL);
+ if (small_buffer_stack_va == NULL) {
+ netdev_err(dev,
+ "Could not alloc %zd bytes for buffer stacks\n",
+ buffer_stack_size);
+ return -ENOMEM;
+ }
+ rc = gxio_mpipe_init_buffer_stack(&context, small_buffer_stack,
+ BUFFER_SIZE_SMALL_ENUM,
+ small_buffer_stack_va,
+ buffer_stack_size, 0);
+ if (rc != 0) {
+ netdev_err(dev, "gxio_mpipe_init_buffer_stack: %d\n", rc);
+ return rc;
+ }
+ rc = gxio_mpipe_register_client_memory(&context, small_buffer_stack,
+ hash_pte, 0);
+ if (rc != 0) {
+ netdev_err(dev,
+			   "gxio_mpipe_register_client_memory failed: %d\n",
+ rc);
+ return rc;
+ }
+
+ /* Allocate the large buffer stack. */
+ large_buffer_stack_va =
+ alloc_pages_exact(buffer_stack_size, GFP_KERNEL);
+ if (large_buffer_stack_va == NULL) {
+ netdev_err(dev,
+ "Could not alloc %zd bytes for buffer stacks\n",
+ buffer_stack_size);
+ return -ENOMEM;
+ }
+ rc = gxio_mpipe_init_buffer_stack(&context, large_buffer_stack,
+ BUFFER_SIZE_LARGE_ENUM,
+ large_buffer_stack_va,
+ buffer_stack_size, 0);
+ if (rc != 0) {
+ netdev_err(dev, "gxio_mpipe_init_buffer_stack failed: %d\n",
+ rc);
+ return rc;
+ }
+ rc = gxio_mpipe_register_client_memory(&context, large_buffer_stack,
+ hash_pte, 0);
+ if (rc != 0) {
+ netdev_err(dev,
+			   "gxio_mpipe_register_client_memory failed: %d\n",
+ rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+/* Allocate per-cpu resources (memory for completions and idescs).
+ * This routine supports tile_net_init_mpipe(), below.
+ */
+static int alloc_percpu_mpipe_resources(struct net_device *dev,
+ int cpu, int ring)
+{
+ struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
+ int order, i, rc;
+ struct page *page;
+ void *addr;
+
+ /* Allocate the "comps". */
+ order = get_order(COMPS_SIZE);
+ page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
+ if (page == NULL) {
+ netdev_err(dev, "Failed to alloc %zd bytes comps memory\n",
+ COMPS_SIZE);
+ return -ENOMEM;
+ }
+ addr = pfn_to_kaddr(page_to_pfn(page));
+ memset(addr, 0, COMPS_SIZE);
+ for (i = 0; i < TILE_NET_CHANNELS; i++)
+ info->comps_for_echannel[i] =
+ addr + i * sizeof(struct tile_net_comps);
+
+ /* If this is a network cpu, create an iqueue. */
+ if (cpu_isset(cpu, network_cpus_map)) {
+ order = get_order(NOTIF_RING_SIZE);
+ page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
+ if (page == NULL) {
+ netdev_err(dev,
+ "Failed to alloc %zd bytes iqueue memory\n",
+ NOTIF_RING_SIZE);
+ return -ENOMEM;
+ }
+ addr = pfn_to_kaddr(page_to_pfn(page));
+ rc = gxio_mpipe_iqueue_init(&info->iqueue, &context, ring++,
+ addr, NOTIF_RING_SIZE, 0);
+ if (rc < 0) {
+ netdev_err(dev,
+ "gxio_mpipe_iqueue_init failed: %d\n", rc);
+ return rc;
+ }
+ info->has_iqueue = true;
+ }
+
+ return ring;
+}
+
+/* Initialize NotifGroup and buckets.
+ * This routine supports tile_net_init_mpipe(), below.
+ */
+static int init_notif_group_and_buckets(struct net_device *dev,
+ int ring, int network_cpus_count)
+{
+ int group, rc;
+
+ /* Allocate one NotifGroup. */
+ rc = gxio_mpipe_alloc_notif_groups(&context, 1, 0, 0);
+ if (rc < 0) {
+ netdev_err(dev, "gxio_mpipe_alloc_notif_groups failed: %d\n",
+ rc);
+ return rc;
+ }
+ group = rc;
+
+ /* Initialize global num_buckets value. */
+ if (network_cpus_count > 4)
+ num_buckets = 256;
+ else if (network_cpus_count > 1)
+ num_buckets = 16;
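+	/* More buckets let the load balancer spread hashed flows across
+	 * more NotifRings; the "sticky" mode used below roughly keeps
+	 * each flow on the cpu that is already processing it.
+	 */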
+
+ /* Allocate some buckets, and set global first_bucket value. */
+ rc = gxio_mpipe_alloc_buckets(&context, num_buckets, 0, 0);
+ if (rc < 0) {
+ netdev_err(dev, "gxio_mpipe_alloc_buckets failed: %d\n", rc);
+ return rc;
+ }
+ first_bucket = rc;
+
+ /* Init group and buckets. */
+ rc = gxio_mpipe_init_notif_group_and_buckets(
+ &context, group, ring, network_cpus_count,
+ first_bucket, num_buckets,
+ GXIO_MPIPE_BUCKET_STICKY_FLOW_LOCALITY);
+ if (rc != 0) {
+ netdev_err(
+ dev,
+ "gxio_mpipe_init_notif_group_and_buckets failed: %d\n",
+ rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+/* Create an irq and register it, then activate the irq and request
+ * interrupts on all cores. Note that "ingress_irq" being initialized
+ * is how we know not to call tile_net_init_mpipe() again.
+ * This routine supports tile_net_init_mpipe(), below.
+ */
+static int tile_net_setup_interrupts(struct net_device *dev)
+{
+ int cpu, rc;
+
+ rc = create_irq();
+ if (rc < 0) {
+ netdev_err(dev, "create_irq failed: %d\n", rc);
+ return rc;
+ }
+ ingress_irq = rc;
+ tile_irq_activate(ingress_irq, TILE_IRQ_PERCPU);
+ rc = request_irq(ingress_irq, tile_net_handle_ingress_irq,
+ 0, NULL, NULL);
+ if (rc != 0) {
+ netdev_err(dev, "request_irq failed: %d\n", rc);
+ destroy_irq(ingress_irq);
+ ingress_irq = -1;
+ return rc;
+ }
+
+ for_each_online_cpu(cpu) {
+ struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
+ if (info->has_iqueue) {
+ gxio_mpipe_request_notif_ring_interrupt(
+ &context, cpu_x(cpu), cpu_y(cpu),
+ 1, ingress_irq, info->iqueue.ring);
+ }
+ }
+
+ return 0;
+}
+
+/* Undo any state set up partially by a failed call to tile_net_init_mpipe. */
+static void tile_net_init_mpipe_fail(void)
+{
+ int cpu;
+
+ /* Do cleanups that require the mpipe context first. */
+ if (small_buffer_stack >= 0)
+ tile_net_pop_all_buffers(small_buffer_stack);
+ if (large_buffer_stack >= 0)
+ tile_net_pop_all_buffers(large_buffer_stack);
+
+ /* Destroy mpipe context so the hardware no longer owns any memory. */
+ gxio_mpipe_destroy(&context);
+
+ for_each_online_cpu(cpu) {
+ struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
+ free_pages((unsigned long)(info->comps_for_echannel[0]),
+ get_order(COMPS_SIZE));
+ info->comps_for_echannel[0] = NULL;
+ free_pages((unsigned long)(info->iqueue.idescs),
+ get_order(NOTIF_RING_SIZE));
+ info->iqueue.idescs = NULL;
+ }
+
+ if (small_buffer_stack_va)
+ free_pages_exact(small_buffer_stack_va, buffer_stack_size);
+ if (large_buffer_stack_va)
+ free_pages_exact(large_buffer_stack_va, buffer_stack_size);
+
+ small_buffer_stack_va = NULL;
+ large_buffer_stack_va = NULL;
+ large_buffer_stack = -1;
+ small_buffer_stack = -1;
+ first_bucket = -1;
+}
+
+/* The first time any tilegx network device is opened, we initialize
+ * the global mpipe state. If this step fails, we fail to open the
+ * device, but if it succeeds, we never need to do it again, and since
+ * tile_net can't be unloaded, we never undo it.
+ *
+ * Note that some resources in this path (buffer stack indices,
+ * bindings from init_buffer_stack, etc.) are hypervisor resources
+ * that are freed implicitly by gxio_mpipe_destroy().
+ */
+static int tile_net_init_mpipe(struct net_device *dev)
+{
+ int i, num_buffers, rc;
+ int cpu;
+ int first_ring, ring;
+ int network_cpus_count = cpus_weight(network_cpus_map);
+
+ if (!hash_default) {
+ netdev_err(dev, "Networking requires hash_default!\n");
+ return -EIO;
+ }
+
+ rc = gxio_mpipe_init(&context, 0);
+ if (rc != 0) {
+ netdev_err(dev, "gxio_mpipe_init failed: %d\n", rc);
+ return -EIO;
+ }
+
+ /* Set up the buffer stacks. */
+ num_buffers =
+ network_cpus_count * (IQUEUE_ENTRIES + TILE_NET_BATCH);
+ rc = init_buffer_stacks(dev, num_buffers);
+ if (rc != 0)
+ goto fail;
+
+ /* Provide initial buffers. */
+ rc = -ENOMEM;
+ for (i = 0; i < num_buffers; i++) {
+ if (!tile_net_provide_buffer(true)) {
+ netdev_err(dev, "Cannot allocate initial sk_bufs!\n");
+ goto fail;
+ }
+ }
+ for (i = 0; i < num_buffers; i++) {
+ if (!tile_net_provide_buffer(false)) {
+ netdev_err(dev, "Cannot allocate initial sk_bufs!\n");
+ goto fail;
+ }
+ }
+
+ /* Allocate one NotifRing for each network cpu. */
+ rc = gxio_mpipe_alloc_notif_rings(&context, network_cpus_count, 0, 0);
+ if (rc < 0) {
+ netdev_err(dev, "gxio_mpipe_alloc_notif_rings failed %d\n",
+ rc);
+ goto fail;
+ }
+
+ /* Init NotifRings per-cpu. */
+ first_ring = rc;
+ ring = first_ring;
+ for_each_online_cpu(cpu) {
+ rc = alloc_percpu_mpipe_resources(dev, cpu, ring);
+ if (rc < 0)
+ goto fail;
+ ring = rc;
+ }
+
+ /* Initialize NotifGroup and buckets. */
+ rc = init_notif_group_and_buckets(dev, first_ring, network_cpus_count);
+ if (rc != 0)
+ goto fail;
+
+ /* Create and enable interrupts. */
+ rc = tile_net_setup_interrupts(dev);
+ if (rc != 0)
+ goto fail;
+
+ return 0;
+
+fail:
+ tile_net_init_mpipe_fail();
+ return rc;
+}
+
+/* Create persistent egress info for a given egress channel.
+ * Note that this may be shared between, say, "gbe0" and "xgbe0".
+ * ISSUE: Defer header allocation until TSO is actually needed?
+ */
+static int tile_net_init_egress(struct net_device *dev, int echannel)
+{
+ struct page *headers_page, *edescs_page, *equeue_page;
+ gxio_mpipe_edesc_t *edescs;
+ gxio_mpipe_equeue_t *equeue;
+ unsigned char *headers;
+ int headers_order, edescs_order, equeue_order;
+ size_t edescs_size;
+ int edma;
+ int rc = -ENOMEM;
+
+ /* Only initialize once. */
+ if (egress_for_echannel[echannel].equeue != NULL)
+ return 0;
+
+ /* Allocate memory for the "headers". */
+ headers_order = get_order(EQUEUE_ENTRIES * HEADER_BYTES);
+ headers_page = alloc_pages(GFP_KERNEL, headers_order);
+ if (headers_page == NULL) {
+ netdev_warn(dev,
+ "Could not alloc %zd bytes for TSO headers.\n",
+ PAGE_SIZE << headers_order);
+ goto fail;
+ }
+ headers = pfn_to_kaddr(page_to_pfn(headers_page));
+
+ /* Allocate memory for the "edescs". */
+ edescs_size = EQUEUE_ENTRIES * sizeof(*edescs);
+ edescs_order = get_order(edescs_size);
+ edescs_page = alloc_pages(GFP_KERNEL, edescs_order);
+ if (edescs_page == NULL) {
+ netdev_warn(dev,
+ "Could not alloc %zd bytes for eDMA ring.\n",
+ edescs_size);
+ goto fail_headers;
+ }
+ edescs = pfn_to_kaddr(page_to_pfn(edescs_page));
+
+ /* Allocate memory for the "equeue". */
+ equeue_order = get_order(sizeof(*equeue));
+ equeue_page = alloc_pages(GFP_KERNEL, equeue_order);
+ if (equeue_page == NULL) {
+ netdev_warn(dev,
+ "Could not alloc %zd bytes for equeue info.\n",
+ PAGE_SIZE << equeue_order);
+ goto fail_edescs;
+ }
+ equeue = pfn_to_kaddr(page_to_pfn(equeue_page));
+
+ /* Allocate an edma ring. Note that in practice this can't
+ * fail, which is good, because we will leak an edma ring if so.
+ */
+ rc = gxio_mpipe_alloc_edma_rings(&context, 1, 0, 0);
+ if (rc < 0) {
+ netdev_warn(dev, "gxio_mpipe_alloc_edma_rings failed: %d\n",
+ rc);
+ goto fail_equeue;
+ }
+ edma = rc;
+
+ /* Initialize the equeue. */
+ rc = gxio_mpipe_equeue_init(equeue, &context, edma, echannel,
+ edescs, edescs_size, 0);
+ if (rc != 0) {
+ netdev_err(dev, "gxio_mpipe_equeue_init failed: %d\n", rc);
+ goto fail_equeue;
+ }
+
+ /* Done. */
+ egress_for_echannel[echannel].equeue = equeue;
+ egress_for_echannel[echannel].headers = headers;
+ return 0;
+
+fail_equeue:
+ __free_pages(equeue_page, equeue_order);
+
+fail_edescs:
+ __free_pages(edescs_page, edescs_order);
+
+fail_headers:
+ __free_pages(headers_page, headers_order);
+
+fail:
+ return rc;
+}
+
+/* Return channel number for a newly-opened link. */
+static int tile_net_link_open(struct net_device *dev, gxio_mpipe_link_t *link,
+ const char *link_name)
+{
+ int rc = gxio_mpipe_link_open(link, &context, link_name, 0);
+ if (rc < 0) {
+ netdev_err(dev, "Failed to open '%s'\n", link_name);
+ return rc;
+ }
+ rc = gxio_mpipe_link_channel(link);
+ if (rc < 0 || rc >= TILE_NET_CHANNELS) {
+ netdev_err(dev, "gxio_mpipe_link_channel bad value: %d\n", rc);
+ gxio_mpipe_link_close(link);
+ return -EINVAL;
+ }
+ return rc;
+}
+
+/* Help the kernel activate the given network interface. */
+static int tile_net_open(struct net_device *dev)
+{
+ struct tile_net_priv *priv = netdev_priv(dev);
+ int cpu, rc;
+
+ mutex_lock(&tile_net_devs_for_channel_mutex);
+
+ /* Do one-time initialization the first time any device is opened. */
+ if (ingress_irq < 0) {
+ rc = tile_net_init_mpipe(dev);
+ if (rc != 0)
+ goto fail;
+ }
+
+ /* Determine if this is the "loopify" device. */
+ if (unlikely((loopify_link_name != NULL) &&
+ !strcmp(dev->name, loopify_link_name))) {
+ rc = tile_net_link_open(dev, &priv->link, "loop0");
+ if (rc < 0)
+ goto fail;
+ priv->channel = rc;
+ rc = tile_net_link_open(dev, &priv->loopify_link, "loop1");
+ if (rc < 0)
+ goto fail;
+ priv->loopify_channel = rc;
+ priv->echannel = rc;
+ } else {
+ rc = tile_net_link_open(dev, &priv->link, dev->name);
+ if (rc < 0)
+ goto fail;
+ priv->channel = rc;
+ priv->echannel = rc;
+ }
+
+ /* Initialize egress info (if needed). Once ever, per echannel. */
+ rc = tile_net_init_egress(dev, priv->echannel);
+ if (rc != 0)
+ goto fail;
+
+ tile_net_devs_for_channel[priv->channel] = dev;
+
+ rc = tile_net_update(dev);
+ if (rc != 0)
+ goto fail;
+
+ mutex_unlock(&tile_net_devs_for_channel_mutex);
+
+ /* Initialize the transmit wake timer for this device for each cpu. */
+ for_each_online_cpu(cpu) {
+ struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
+ struct tile_net_tx_wake *tx_wake =
+ &info->tx_wake[priv->echannel];
+
+ hrtimer_init(&tx_wake->timer, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL);
+ tx_wake->timer.function = tile_net_handle_tx_wake_timer;
+ tx_wake->dev = dev;
+ }
+
+ for_each_online_cpu(cpu)
+ netif_start_subqueue(dev, cpu);
+ netif_carrier_on(dev);
+ return 0;
+
+fail:
+ if (priv->loopify_channel >= 0) {
+ if (gxio_mpipe_link_close(&priv->loopify_link) != 0)
+ netdev_warn(dev, "Failed to close loopify link!\n");
+ priv->loopify_channel = -1;
+ }
+	if (priv->channel >= 0) {
+		if (gxio_mpipe_link_close(&priv->link) != 0)
+			netdev_warn(dev, "Failed to close link!\n");
+		/* Clear the channel's device slot before forgetting the
+		 * channel, so we never index the array with -1.
+		 */
+		tile_net_devs_for_channel[priv->channel] = NULL;
+		priv->channel = -1;
+	}
+	priv->echannel = -1;
+ mutex_unlock(&tile_net_devs_for_channel_mutex);
+
+ /* Don't return raw gxio error codes to generic Linux. */
+ return (rc > -512) ? rc : -EIO;
+}
+
+/* Help the kernel deactivate the given network interface. */
+static int tile_net_stop(struct net_device *dev)
+{
+ struct tile_net_priv *priv = netdev_priv(dev);
+ int cpu;
+
+ for_each_online_cpu(cpu) {
+ struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
+ struct tile_net_tx_wake *tx_wake =
+ &info->tx_wake[priv->echannel];
+
+ hrtimer_cancel(&tx_wake->timer);
+ netif_stop_subqueue(dev, cpu);
+ }
+
+ mutex_lock(&tile_net_devs_for_channel_mutex);
+ tile_net_devs_for_channel[priv->channel] = NULL;
+ (void)tile_net_update(dev);
+ if (priv->loopify_channel >= 0) {
+ if (gxio_mpipe_link_close(&priv->loopify_link) != 0)
+ netdev_warn(dev, "Failed to close loopify link!\n");
+ priv->loopify_channel = -1;
+ }
+ if (priv->channel >= 0) {
+ if (gxio_mpipe_link_close(&priv->link) != 0)
+ netdev_warn(dev, "Failed to close link!\n");
+ priv->channel = -1;
+ }
+ priv->echannel = -1;
+ mutex_unlock(&tile_net_devs_for_channel_mutex);
+
+ return 0;
+}
+
+/* Determine the VA for a fragment. */
+static inline void *tile_net_frag_buf(skb_frag_t *f)
+{
+ unsigned long pfn = page_to_pfn(skb_frag_page(f));
+ return pfn_to_kaddr(pfn) + f->page_offset;
+}
+
+/* Acquire a completion entry and an egress slot, or if we can't,
+ * stop the queue and schedule the tx_wake timer.
+ */
+static s64 tile_net_equeue_try_reserve(struct net_device *dev,
+ struct tile_net_comps *comps,
+ gxio_mpipe_equeue_t *equeue,
+ int num_edescs)
+{
+ /* Try to acquire a completion entry. */
+ if (comps->comp_next - comps->comp_last < TILE_NET_MAX_COMPS - 1 ||
+ tile_net_free_comps(equeue, comps, 32, false) != 0) {
+
+ /* Try to acquire an egress slot. */
+ s64 slot = gxio_mpipe_equeue_try_reserve(equeue, num_edescs);
+ if (slot >= 0)
+ return slot;
+
+ /* Freeing some completions gives the equeue time to drain. */
+ tile_net_free_comps(equeue, comps, TILE_NET_MAX_COMPS, false);
+
+ slot = gxio_mpipe_equeue_try_reserve(equeue, num_edescs);
+ if (slot >= 0)
+ return slot;
+ }
+
+ /* Still nothing; give up and stop the queue for a short while. */
+ netif_stop_subqueue(dev, smp_processor_id());
+ tile_net_schedule_tx_wake_timer(dev);
+ return -1;
+}
+
+/* Determine how many edesc's are needed for TSO.
+ *
+ * Sometimes, if "sendfile()" requires copying, we will be called with
+ * "data" containing the header and payload, with "frags" being empty.
+ * Sometimes, for example when using NFS over TCP, a single segment can
+ * span 3 fragments. This requires special care.
+ */
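+/* Worked example (hypothetical sizes): with gso_size 1400, a 4000-byte
+ * payload is segmented as 1400 + 1400 + 1200 bytes.  If the payload
+ * sits in two 2000-byte frags, the second segment straddles the frag
+ * boundary and needs two payload edescs; with one header edesc per
+ * segment the per-segment counts are 2, 3 and 2, i.e. 7 edescs total.
+ */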
+static int tso_count_edescs(struct sk_buff *skb)
+{
+ struct skb_shared_info *sh = skb_shinfo(skb);
+ unsigned int data_len = skb->data_len;
+ unsigned int p_len = sh->gso_size;
+ long f_id = -1; /* id of the current fragment */
+ long f_size = -1; /* size of the current fragment */
+ long f_used = -1; /* bytes used from the current fragment */
+ long n; /* size of the current piece of payload */
+ int num_edescs = 0;
+ int segment;
+
+ for (segment = 0; segment < sh->gso_segs; segment++) {
+
+ unsigned int p_used = 0;
+
+ /* One edesc for header and for each piece of the payload. */
+ for (num_edescs++; p_used < p_len; num_edescs++) {
+
+ /* Advance as needed. */
+ while (f_used >= f_size) {
+ f_id++;
+ f_size = sh->frags[f_id].size;
+ f_used = 0;
+ }
+
+ /* Use bytes from the current fragment. */
+ n = p_len - p_used;
+ if (n > f_size - f_used)
+ n = f_size - f_used;
+ f_used += n;
+ p_used += n;
+ }
+
+ /* The last segment may be less than gso_size. */
+ data_len -= p_len;
+ if (data_len < p_len)
+ p_len = data_len;
+ }
+
+ return num_edescs;
+}
+
+/* Prepare modified copies of the skbuff headers.
+ * FIXME: add support for IPv6.
+ */
+static void tso_headers_prepare(struct sk_buff *skb, unsigned char *headers,
+ s64 slot)
+{
+ struct skb_shared_info *sh = skb_shinfo(skb);
+ struct iphdr *ih;
+ struct tcphdr *th;
+ unsigned int data_len = skb->data_len;
+ unsigned char *data = skb->data;
+ unsigned int ih_off, th_off, sh_len, p_len;
+ unsigned int isum_seed, tsum_seed, id, seq;
+ long f_id = -1; /* id of the current fragment */
+ long f_size = -1; /* size of the current fragment */
+ long f_used = -1; /* bytes used from the current fragment */
+ long n; /* size of the current piece of payload */
+ int segment;
+
+ /* Locate original headers and compute various lengths. */
+ ih = ip_hdr(skb);
+ th = tcp_hdr(skb);
+ ih_off = skb_network_offset(skb);
+ th_off = skb_transport_offset(skb);
+ sh_len = th_off + tcp_hdrlen(skb);
+ p_len = sh->gso_size;
+
+ /* Set up seed values for IP and TCP csum and initialize id and seq. */
+ isum_seed = ((0xFFFF - ih->check) +
+ (0xFFFF - ih->tot_len) +
+ (0xFFFF - ih->id));
+ tsum_seed = th->check + (0xFFFF ^ htons(skb->len));
+ id = ntohs(ih->id);
+ seq = ntohl(th->seq);
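+
+	/* In one's-complement arithmetic "0xFFFF - x" subtracts x, so the
+	 * seeds back the per-segment fields (tot_len, id, check, and the
+	 * TCP length) out of the original sums; each copied header's
+	 * checksum is then rebuilt incrementally via csum_long() below.
+	 */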
+
+ /* Prepare all the headers. */
+ for (segment = 0; segment < sh->gso_segs; segment++) {
+ unsigned char *buf;
+ unsigned int p_used = 0;
+
+ /* Copy to the header memory for this segment. */
+ buf = headers + (slot % EQUEUE_ENTRIES) * HEADER_BYTES +
+ NET_IP_ALIGN;
+ memcpy(buf, data, sh_len);
+
+ /* Update copied ip header. */
+ ih = (struct iphdr *)(buf + ih_off);
+ ih->tot_len = htons(sh_len + p_len - ih_off);
+ ih->id = htons(id);
+ ih->check = csum_long(isum_seed + ih->tot_len +
+ ih->id) ^ 0xffff;
+
+ /* Update copied tcp header. */
+ th = (struct tcphdr *)(buf + th_off);
+ th->seq = htonl(seq);
+ th->check = csum_long(tsum_seed + htons(sh_len + p_len));
+ if (segment != sh->gso_segs - 1) {
+ th->fin = 0;
+ th->psh = 0;
+ }
+
+ /* Skip past the header. */
+ slot++;
+
+ /* Skip past the payload. */
+ while (p_used < p_len) {
+
+ /* Advance as needed. */
+ while (f_used >= f_size) {
+ f_id++;
+ f_size = sh->frags[f_id].size;
+ f_used = 0;
+ }
+
+ /* Use bytes from the current fragment. */
+ n = p_len - p_used;
+ if (n > f_size - f_used)
+ n = f_size - f_used;
+ f_used += n;
+ p_used += n;
+
+ slot++;
+ }
+
+ id++;
+ seq += p_len;
+
+ /* The last segment may be less than gso_size. */
+ data_len -= p_len;
+ if (data_len < p_len)
+ p_len = data_len;
+ }
+
+ /* Flush the headers so they are ready for hardware DMA. */
+ wmb();
+}
+
+/* Pass all the data to mpipe for egress. */
+static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue,
+ struct sk_buff *skb, unsigned char *headers, s64 slot)
+{
+ struct tile_net_priv *priv = netdev_priv(dev);
+ struct skb_shared_info *sh = skb_shinfo(skb);
+ unsigned int data_len = skb->data_len;
+ unsigned int p_len = sh->gso_size;
+ gxio_mpipe_edesc_t edesc_head = { { 0 } };
+ gxio_mpipe_edesc_t edesc_body = { { 0 } };
+ long f_id = -1; /* id of the current fragment */
+ long f_size = -1; /* size of the current fragment */
+ long f_used = -1; /* bytes used from the current fragment */
+ long n; /* size of the current piece of payload */
+ unsigned long tx_packets = 0, tx_bytes = 0;
+ unsigned int csum_start, sh_len;
+ int segment;
+
+ /* Prepare to egress the headers: set up header edesc. */
+ csum_start = skb_checksum_start_offset(skb);
+ sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ edesc_head.csum = 1;
+ edesc_head.csum_start = csum_start;
+ edesc_head.csum_dest = csum_start + skb->csum_offset;
+ edesc_head.xfer_size = sh_len;
+
+ /* This is only used to specify the TLB. */
+ edesc_head.stack_idx = large_buffer_stack;
+ edesc_body.stack_idx = large_buffer_stack;
+
+ /* Egress all the edescs. */
+ for (segment = 0; segment < sh->gso_segs; segment++) {
+ void *va;
+ unsigned char *buf;
+ unsigned int p_used = 0;
+
+ /* Egress the header. */
+ buf = headers + (slot % EQUEUE_ENTRIES) * HEADER_BYTES +
+ NET_IP_ALIGN;
+ edesc_head.va = va_to_tile_io_addr(buf);
+ gxio_mpipe_equeue_put_at(equeue, edesc_head, slot);
+ slot++;
+
+ /* Egress the payload. */
+ while (p_used < p_len) {
+
+ /* Advance as needed. */
+ while (f_used >= f_size) {
+ f_id++;
+ f_size = sh->frags[f_id].size;
+ f_used = 0;
+ }
+
+ va = tile_net_frag_buf(&sh->frags[f_id]) + f_used;
+
+ /* Use bytes from the current fragment. */
+ n = p_len - p_used;
+ if (n > f_size - f_used)
+ n = f_size - f_used;
+ f_used += n;
+ p_used += n;
+
+ /* Egress a piece of the payload. */
+ edesc_body.va = va_to_tile_io_addr(va);
+ edesc_body.xfer_size = n;
+ edesc_body.bound = !(p_used < p_len);
+ gxio_mpipe_equeue_put_at(equeue, edesc_body, slot);
+ slot++;
+ }
+
+ tx_packets++;
+ tx_bytes += sh_len + p_len;
+
+ /* The last segment may be less than gso_size. */
+ data_len -= p_len;
+ if (data_len < p_len)
+ p_len = data_len;
+ }
+
+ /* Update stats. */
+ tile_net_stats_add(tx_packets, &priv->stats.tx_packets);
+ tile_net_stats_add(tx_bytes, &priv->stats.tx_bytes);
+}
+
+/* Do "TSO" handling for egress.
+ *
+ * Normally drivers set NETIF_F_TSO only to support hardware TSO;
+ * otherwise the stack uses scatter-gather to implement GSO in software.
+ * In our testing, enabling GSO support (via NETIF_F_SG) drops network
+ * performance down to around 7.5 Gbps on the 10G interfaces, although
+ * also dropping cpu utilization way down, to under 8%. But
+ * implementing "TSO" in the driver brings performance back up to line
+ * rate, while dropping cpu usage even further, to less than 4%. In
+ * practice, profiling of GSO shows that skb_segment() is what causes
+ * the performance overheads; we benefit in the driver from using
+ * preallocated memory to duplicate the TCP/IP headers.
+ */
+static int tile_net_tx_tso(struct sk_buff *skb, struct net_device *dev)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct tile_net_priv *priv = netdev_priv(dev);
+ int channel = priv->echannel;
+ struct tile_net_egress *egress = &egress_for_echannel[channel];
+ struct tile_net_comps *comps = info->comps_for_echannel[channel];
+ gxio_mpipe_equeue_t *equeue = egress->equeue;
+ unsigned long irqflags;
+ int num_edescs;
+ s64 slot;
+
+ /* Determine how many mpipe edesc's are needed. */
+ num_edescs = tso_count_edescs(skb);
+
+ local_irq_save(irqflags);
+
+ /* Try to acquire a completion entry and an egress slot. */
+ slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs);
+ if (slot < 0) {
+ local_irq_restore(irqflags);
+ return NETDEV_TX_BUSY;
+ }
+
+ /* Set up copies of header data properly. */
+ tso_headers_prepare(skb, egress->headers, slot);
+
+ /* Actually pass the data to the network hardware. */
+ tso_egress(dev, equeue, skb, egress->headers, slot);
+
+ /* Add a completion record. */
+ add_comp(equeue, comps, slot + num_edescs - 1, skb);
+
+ local_irq_restore(irqflags);
+
+ /* Make sure the egress timer is scheduled. */
+ tile_net_schedule_egress_timer();
+
+ return NETDEV_TX_OK;
+}
+
+/* Analyze the body and frags for a transmit request. */
+static unsigned int tile_net_tx_frags(struct frag *frags,
+ struct sk_buff *skb,
+ void *b_data, unsigned int b_len)
+{
+ unsigned int i, n = 0;
+
+ struct skb_shared_info *sh = skb_shinfo(skb);
+
+ if (b_len != 0) {
+ frags[n].buf = b_data;
+ frags[n++].length = b_len;
+ }
+
+ for (i = 0; i < sh->nr_frags; i++) {
+ skb_frag_t *f = &sh->frags[i];
+ frags[n].buf = tile_net_frag_buf(f);
+ frags[n++].length = skb_frag_size(f);
+ }
+
+ return n;
+}
+
+/* Help the kernel transmit a packet. */
+static int tile_net_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ struct tile_net_priv *priv = netdev_priv(dev);
+ struct tile_net_egress *egress = &egress_for_echannel[priv->echannel];
+ gxio_mpipe_equeue_t *equeue = egress->equeue;
+ struct tile_net_comps *comps =
+ info->comps_for_echannel[priv->echannel];
+ unsigned int len = skb->len;
+ unsigned char *data = skb->data;
+ unsigned int num_edescs;
+ struct frag frags[MAX_FRAGS];
+ gxio_mpipe_edesc_t edescs[MAX_FRAGS];
+ unsigned long irqflags;
+ gxio_mpipe_edesc_t edesc = { { 0 } };
+ unsigned int i;
+ s64 slot;
+
+ if (skb_is_gso(skb))
+ return tile_net_tx_tso(skb, dev);
+
+ num_edescs = tile_net_tx_frags(frags, skb, data, skb_headlen(skb));
+
+ /* This is only used to specify the TLB. */
+ edesc.stack_idx = large_buffer_stack;
+
+ /* Prepare the edescs. */
+ for (i = 0; i < num_edescs; i++) {
+ edesc.xfer_size = frags[i].length;
+ edesc.va = va_to_tile_io_addr(frags[i].buf);
+ edescs[i] = edesc;
+ }
+
+ /* Mark the final edesc. */
+ edescs[num_edescs - 1].bound = 1;
+
+ /* Add checksum info to the initial edesc, if needed. */
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ unsigned int csum_start = skb_checksum_start_offset(skb);
+ edescs[0].csum = 1;
+ edescs[0].csum_start = csum_start;
+ edescs[0].csum_dest = csum_start + skb->csum_offset;
+ }
+
+ local_irq_save(irqflags);
+
+ /* Try to acquire a completion entry and an egress slot. */
+ slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs);
+ if (slot < 0) {
+ local_irq_restore(irqflags);
+ return NETDEV_TX_BUSY;
+ }
+
+ for (i = 0; i < num_edescs; i++)
+ gxio_mpipe_equeue_put_at(equeue, edescs[i], slot++);
+
+ /* Add a completion record. */
+ add_comp(equeue, comps, slot - 1, skb);
+
+ /* NOTE: Use ETH_ZLEN for short packets (e.g. 42 < 60). */
+ tile_net_stats_add(1, &priv->stats.tx_packets);
+ tile_net_stats_add(max_t(unsigned int, len, ETH_ZLEN),
+ &priv->stats.tx_bytes);
+
+ local_irq_restore(irqflags);
+
+ /* Make sure the egress timer is scheduled. */
+ tile_net_schedule_egress_timer();
+
+ return NETDEV_TX_OK;
+}
+
+/* Return the subqueue id for the current core (one subqueue per core). */
+static u16 tile_net_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+ return smp_processor_id();
+}
+
+/* Deal with a transmit timeout. */
+static void tile_net_tx_timeout(struct net_device *dev)
+{
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ netif_wake_subqueue(dev, cpu);
+}
+
+/* Ioctl commands. */
+static int tile_net_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ return -EOPNOTSUPP;
+}
+
+/* Get system network statistics for device. */
+static struct net_device_stats *tile_net_get_stats(struct net_device *dev)
+{
+ struct tile_net_priv *priv = netdev_priv(dev);
+ return &priv->stats;
+}
+
+/* Change the MTU. */
+static int tile_net_change_mtu(struct net_device *dev, int new_mtu)
+{
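+	/* 68 is the minimum IPv4 MTU (RFC 791); jumbo frames are not
+	 * supported.
+	 */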
+ if ((new_mtu < 68) || (new_mtu > 1500))
+ return -EINVAL;
+ dev->mtu = new_mtu;
+ return 0;
+}
+
+/* Change the Ethernet address of the NIC.
+ *
+ * The hypervisor driver does not support changing MAC address. However,
+ * the hardware does not do anything with the MAC address, so the address
+ * which gets used on outgoing packets, and which is accepted on incoming
+ * packets, is completely up to us.
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int tile_net_set_mac_address(struct net_device *dev, void *p)
+{
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EINVAL;
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+ return 0;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/* Polling 'interrupt' - used by things like netconsole to send skbs
+ * without having to re-enable interrupts. It's not called while
+ * the interrupt routine is executing.
+ */
+static void tile_net_netpoll(struct net_device *dev)
+{
+ disable_percpu_irq(ingress_irq);
+ tile_net_handle_ingress_irq(ingress_irq, NULL);
+ enable_percpu_irq(ingress_irq, 0);
+}
+#endif
+
+static const struct net_device_ops tile_net_ops = {
+ .ndo_open = tile_net_open,
+ .ndo_stop = tile_net_stop,
+ .ndo_start_xmit = tile_net_tx,
+ .ndo_select_queue = tile_net_select_queue,
+ .ndo_do_ioctl = tile_net_ioctl,
+ .ndo_get_stats = tile_net_get_stats,
+ .ndo_change_mtu = tile_net_change_mtu,
+ .ndo_tx_timeout = tile_net_tx_timeout,
+ .ndo_set_mac_address = tile_net_set_mac_address,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = tile_net_netpoll,
+#endif
+};
+
+/* The setup function.
+ *
+ * This uses ether_setup() to assign various fields in dev, including
+ * setting IFF_BROADCAST and IFF_MULTICAST, then sets some extra fields.
+ */
+static void tile_net_setup(struct net_device *dev)
+{
+ ether_setup(dev);
+ dev->netdev_ops = &tile_net_ops;
+ dev->watchdog_timeo = TILE_NET_TIMEOUT;
+ dev->features |= NETIF_F_LLTX;
+ dev->features |= NETIF_F_HW_CSUM;
+ dev->features |= NETIF_F_SG;
+ dev->features |= NETIF_F_TSO;
+ dev->mtu = 1500;
+}
+
+/* Allocate the device structure, register the device, and obtain the
+ * MAC address from the hypervisor.
+ */
+static void tile_net_dev_init(const char *name, const uint8_t *mac)
+{
+ int ret;
+ int i;
+ int nz_addr = 0;
+ struct net_device *dev;
+ struct tile_net_priv *priv;
+
+ /* HACK: Ignore "loop" links. */
+ if (strncmp(name, "loop", 4) == 0)
+ return;
+
+ /* Allocate the device structure. Normally, "name" is a
+ * template, instantiated by register_netdev(), but not for us.
+ */
+ dev = alloc_netdev_mqs(sizeof(*priv), name, tile_net_setup,
+ NR_CPUS, 1);
+ if (!dev) {
+ pr_err("alloc_netdev_mqs(%s) failed\n", name);
+ return;
+ }
+
+ /* Initialize "priv". */
+ priv = netdev_priv(dev);
+ memset(priv, 0, sizeof(*priv));
+ priv->dev = dev;
+ priv->channel = -1;
+ priv->loopify_channel = -1;
+ priv->echannel = -1;
+
+ /* Get the MAC address and set it in the device struct; this must
+ * be done before the device is opened. If the MAC is all zeroes,
+ * we use a random address, since we're probably on the simulator.
+ */
+ for (i = 0; i < 6; i++)
+ nz_addr |= mac[i];
+
+ if (nz_addr) {
+ memcpy(dev->dev_addr, mac, 6);
+ dev->addr_len = 6;
+ } else {
+ random_ether_addr(dev->dev_addr);
+ }
+
+ /* Register the network device. */
+ ret = register_netdev(dev);
+ if (ret) {
+ netdev_err(dev, "register_netdev failed %d\n", ret);
+ free_netdev(dev);
+ return;
+ }
+}
+
+/* Per-cpu module initialization. */
+static void tile_net_init_module_percpu(void *unused)
+{
+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
+ int my_cpu = smp_processor_id();
+
+ info->has_iqueue = false;
+
+ info->my_cpu = my_cpu;
+
+ /* Initialize the egress timer. */
+ hrtimer_init(&info->egress_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ info->egress_timer.function = tile_net_handle_egress_timer;
+}
+
+/* Module initialization. */
+static int __init tile_net_init_module(void)
+{
+ int i;
+ char name[GXIO_MPIPE_LINK_NAME_LEN];
+ uint8_t mac[6];
+
+ pr_info("Tilera Network Driver\n");
+
+ mutex_init(&tile_net_devs_for_channel_mutex);
+
+ /* Initialize each CPU. */
+ on_each_cpu(tile_net_init_module_percpu, NULL, 1);
+
+ /* Find out what devices we have, and initialize them. */
+ for (i = 0; gxio_mpipe_link_enumerate_mac(i, name, mac) >= 0; i++)
+ tile_net_dev_init(name, mac);
+
+ if (!network_cpus_init())
+ network_cpus_map = *cpu_online_mask;
+
+ return 0;
+}
+
+module_init(tile_net_init_module);
u32 nvsp_version;
atomic_t num_outstanding_sends;
+ wait_queue_head_t wait_drain;
bool start_remove;
bool destroy;
/*
if (!net_device)
return NULL;
+ init_waitqueue_head(&net_device->wait_drain);
net_device->start_remove = false;
net_device->destroy = false;
net_device->dev = device;
spin_unlock_irqrestore(&device->channel->inbound_lock, flags);
/* Wait for all send completions */
- while (atomic_read(&net_device->num_outstanding_sends)) {
- dev_info(&device->device,
- "waiting for %d requests to complete...\n",
- atomic_read(&net_device->num_outstanding_sends));
- udelay(100);
- }
+ wait_event(net_device->wait_drain,
+ atomic_read(&net_device->num_outstanding_sends) == 0);
netvsc_disconnect_vsp(net_device);
num_outstanding_sends =
atomic_dec_return(&net_device->num_outstanding_sends);
+ if (net_device->destroy && num_outstanding_sends == 0)
+ wake_up(&net_device->wait_drain);
+
if (netif_queue_stopped(ndev) && !net_device->start_remove &&
(hv_ringbuf_avail_percent(&device->channel->outbound)
> RING_AVAIL_PERCENT_HIWATER ||
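The netvsc hunk above replaces a udelay() busy-wait with a sleeping wait_event(). The underlying drain pattern, sketched here with hypothetical names (the real driver orders the destroy flag and the counter under its channel locking):

#include <linux/atomic.h>
#include <linux/wait.h>

struct drain_ctx {
	atomic_t outstanding;
	bool destroy;
	wait_queue_head_t wait_drain;	/* init_waitqueue_head() at setup */
};

/* completion path: drop one send, wake the drainer once idle */
static void drain_complete_one(struct drain_ctx *ctx)
{
	if (atomic_dec_return(&ctx->outstanding) == 0 && ctx->destroy)
		wake_up(&ctx->wait_drain);
}

/* teardown path: sleep until every outstanding send has completed */
static void drain_wait(struct drain_ctx *ctx)
{
	ctx->destroy = true;
	wait_event(ctx->wait_drain, atomic_read(&ctx->outstanding) == 0);
}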
#define IP1001_APS_ON 11 /* IP1001 APS Mode bit */
#define IP101A_G_APS_ON 2 /* IP101A/G APS Mode bit */
#define IP101A_G_IRQ_CONF_STATUS 0x11 /* Conf Info IRQ & Status Reg */
+#define IP101A_G_IRQ_PIN_USED (1<<15) /* INTR pin used */
+#define IP101A_G_IRQ_DEFAULT IP101A_G_IRQ_PIN_USED
static int ip175c_config_init(struct phy_device *phydev)
{
if (c < 0)
return c;
+ /* INTR pin used: speed/link/duplex will cause an interrupt */
+ c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT);
+ if (c < 0)
+ return c;
+
if (phydev->interface == PHY_INTERFACE_MODE_RGMII) {
/* Additional delay (2ns) used to adjust RX clock phase
* at RGMII interface */
}
/**
* of_mdio_find_bus - Given an mii_bus node, find the mii_bus.
- * @mdio_np: Pointer to the mii_bus.
+ * @mdio_bus_np: Pointer to the mii_bus.
*
* Returns a pointer to the mii_bus, or NULL if none found.
*
}
static const u8 sierra_net_ifnum_list[] = { 7, 10, 11 };
-static const struct sierra_net_info_data sierra_net_info_data_68A3 = {
+static const struct sierra_net_info_data sierra_net_info_data_direct_ip = {
.rx_urb_size = 8 * 1024,
.whitelist = {
.infolen = ARRAY_SIZE(sierra_net_ifnum_list),
}
};
-static const struct driver_info sierra_net_info_68A3 = {
+static const struct driver_info sierra_net_info_direct_ip = {
.description = "Sierra Wireless USB-to-WWAN Modem",
.flags = FLAG_WWAN | FLAG_SEND_ZLP,
.bind = sierra_net_bind,
.status = sierra_net_status,
.rx_fixup = sierra_net_rx_fixup,
.tx_fixup = sierra_net_tx_fixup,
- .data = (unsigned long)&sierra_net_info_data_68A3,
+ .data = (unsigned long)&sierra_net_info_data_direct_ip,
};
static const struct usb_device_id products[] = {
{USB_DEVICE(0x1199, 0x68A3), /* Sierra Wireless USB-to-WWAN modem */
- .driver_info = (unsigned long) &sierra_net_info_68A3},
+ .driver_info = (unsigned long) &sierra_net_info_direct_ip},
+ {USB_DEVICE(0x0F3D, 0x68A3), /* AT&T Direct IP modem */
+ .driver_info = (unsigned long) &sierra_net_info_direct_ip},
+ {USB_DEVICE(0x1199, 0x68AA), /* Sierra Wireless Direct IP LTE modem */
+ .driver_info = (unsigned long) &sierra_net_info_direct_ip},
+ {USB_DEVICE(0x0F3D, 0x68AA), /* AT&T Direct IP LTE modem */
+ .driver_info = (unsigned long) &sierra_net_info_direct_ip},
{}, /* last item */
};
#define VIRTNET_DRIVER_VERSION "1.0.0"
struct virtnet_stats {
- struct u64_stats_sync syncp;
+ struct u64_stats_sync tx_syncp;
+ struct u64_stats_sync rx_syncp;
u64 tx_bytes;
u64 tx_packets;
hdr = skb_vnet_hdr(skb);
- u64_stats_update_begin(&stats->syncp);
+ u64_stats_update_begin(&stats->rx_syncp);
stats->rx_bytes += skb->len;
stats->rx_packets++;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_end(&stats->rx_syncp);
if (hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
pr_debug("Needs csum!\n");
while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
pr_debug("Sent skb %p\n", skb);
- u64_stats_update_begin(&stats->syncp);
+ u64_stats_update_begin(&stats->tx_syncp);
stats->tx_bytes += skb->len;
stats->tx_packets++;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_end(&stats->tx_syncp);
tot_sgs += skb_vnet_hdr(skb)->num_sg;
dev_kfree_skb_any(skb);
u64 tpackets, tbytes, rpackets, rbytes;
do {
- start = u64_stats_fetch_begin(&stats->syncp);
+ start = u64_stats_fetch_begin(&stats->tx_syncp);
tpackets = stats->tx_packets;
tbytes = stats->tx_bytes;
+ } while (u64_stats_fetch_retry(&stats->tx_syncp, start));
+
+ do {
+ start = u64_stats_fetch_begin(&stats->rx_syncp);
rpackets = stats->rx_packets;
rbytes = stats->rx_bytes;
- } while (u64_stats_fetch_retry(&stats->syncp, start));
+ } while (u64_stats_fetch_retry(&stats->rx_syncp, start));
tot->rx_packets += rpackets;
tot->tx_packets += tpackets;
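Splitting the single syncp matters because the tx and rx paths can update the same per-cpu stats concurrently, and two writers on one u64_stats_sync corrupt its sequence count on 32-bit. A minimal sketch of the per-direction protocol (tx shown; rx is symmetric, names hypothetical):

#include <linux/u64_stats_sync.h>

struct dir_stats {
	struct u64_stats_sync tx_syncp;
	u64 tx_bytes;
};

/* writer: the tx completion path */
static void stats_add_tx(struct dir_stats *s, u64 bytes)
{
	u64_stats_update_begin(&s->tx_syncp);
	s->tx_bytes += bytes;
	u64_stats_update_end(&s->tx_syncp);
}

/* reader: retry until a consistent snapshot is seen */
static u64 stats_read_tx(struct dir_stats *s)
{
	unsigned int start;
	u64 bytes;

	do {
		start = u64_stats_fetch_begin(&s->tx_syncp);
		bytes = s->tx_bytes;
	} while (u64_stats_fetch_retry(&s->tx_syncp, start));
	return bytes;
}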
* from the mac80211 subsystem. */
u16 mac80211_initially_registered_queues;
+ /* Set when we call ieee80211_register_hw() and checked before we call
+ * ieee80211_unregister_hw(). */
+ bool hw_registred;
+
/* We can only have one operating interface (802.11 core)
* at a time. General information about this interface follows.
*/
err = ieee80211_register_hw(wl->hw);
if (err)
goto err_one_core_detach;
+ wl->hw_registred = true;
b43_leds_register(wl->current_dev);
goto out;
hw->queues = modparam_qos ? B43_QOS_QUEUE_NUM : 1;
wl->mac80211_initially_registered_queues = hw->queues;
+ wl->hw_registred = false;
hw->max_rates = 2;
SET_IEEE80211_DEV(hw, dev->dev);
if (is_valid_ether_addr(sprom->et1mac))
* as the ieee80211 unreg will destroy the workqueue. */
cancel_work_sync(&wldev->restart_work);
- /* Restore the queues count before unregistering, because firmware detect
- * might have modified it. Restoring is important, so the networking
- * stack can properly free resources. */
- wl->hw->queues = wl->mac80211_initially_registered_queues;
- b43_leds_stop(wldev);
- ieee80211_unregister_hw(wl->hw);
+ B43_WARN_ON(!wl);
+ if (wl->current_dev == wldev && wl->hw_registred) {
+ /* Restore the queues count before unregistering, because firmware detect
+ * might have modified it. Restoring is important, so the networking
+ * stack can properly free resources. */
+ wl->hw->queues = wl->mac80211_initially_registered_queues;
+ b43_leds_stop(wldev);
+ ieee80211_unregister_hw(wl->hw);
+ }
b43_one_core_detach(wldev->dev);
cancel_work_sync(&wldev->restart_work);
B43_WARN_ON(!wl);
- if (wl->current_dev == wldev) {
+ if (wl->current_dev == wldev && wl->hw_registred) {
/* Restore the queues count before unregistering, because firmware detect
* might have modified it. Restoring is important, so the networking
* stack can properly free resources. */
data |= 1 << SDIO_FUNC_1 | 1 << SDIO_FUNC_2 | 1;
brcmf_sdio_regwb(sdiodev, SDIO_CCCR_IENx, data, &ret);
- /* redirect, configure ane enable io for interrupt signal */
+ /* redirect, configure and enable io for interrupt signal */
data = SDIO_SEPINT_MASK | SDIO_SEPINT_OE;
- if (sdiodev->irq_flags | IRQF_TRIGGER_HIGH)
+ if (sdiodev->irq_flags & IRQF_TRIGGER_HIGH)
data |= SDIO_SEPINT_ACT_HI;
brcmf_sdio_regwb(sdiodev, SDIO_CCCR_BRCM_SEPINT, data, &ret);
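The one-character brcmfmac fix above is a classic bug class: a bitwise OR in a condition is non-zero for every input, so the interrupt was unconditionally configured as active-high. In isolation:

#include <linux/interrupt.h>

static bool wants_active_high_buggy(unsigned long irq_flags)
{
	/* BUG: always true, since IRQF_TRIGGER_HIGH is itself non-zero */
	return irq_flags | IRQF_TRIGGER_HIGH;
}

static bool wants_active_high_fixed(unsigned long irq_flags)
{
	/* correct: test whether the trigger bit is actually set */
	return irq_flags & IRQF_TRIGGER_HIGH;
}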
netif_stop_queue(priv->net_dev);
}
-/* Called by register_netdev() */
-static int ipw2100_net_init(struct net_device *dev)
-{
- struct ipw2100_priv *priv = libipw_priv(dev);
-
- return ipw2100_up(priv, 1);
-}
-
static int ipw2100_wdev_init(struct net_device *dev)
{
struct ipw2100_priv *priv = libipw_priv(dev);
.ndo_stop = ipw2100_close,
.ndo_start_xmit = libipw_xmit,
.ndo_change_mtu = libipw_change_mtu,
- .ndo_init = ipw2100_net_init,
.ndo_tx_timeout = ipw2100_tx_timeout,
.ndo_set_mac_address = ipw2100_set_address,
.ndo_validate_addr = eth_validate_addr,
printk(KERN_INFO DRV_NAME
": Detected Intel PRO/Wireless 2100 Network Connection\n");
+ err = ipw2100_up(priv, 1);
+ if (err)
+ goto fail;
+
err = ipw2100_wdev_init(dev);
if (err)
goto fail;
* network device we would call ipw2100_up. This introduced a race
* condition with newer hotplug configurations (network was coming
* up and making calls before the device was initialized).
- *
- * If we called ipw2100_up before we registered the device, then the
- * device name wasn't registered. So, we instead use the net_dev->init
- * member to call a function that then just turns and calls ipw2100_up.
- * net_dev->init is called after name allocation but before the
- * notifier chain is called */
+ */
err = register_netdev(dev);
if (err) {
printk(KERN_WARNING DRV_NAME
#define IWL6000_UCODE_API_MAX 6
#define IWL6050_UCODE_API_MAX 5
#define IWL6000G2_UCODE_API_MAX 6
+#define IWL6035_UCODE_API_MAX 6
/* Oldest version we won't warn about */
#define IWL6000_UCODE_API_OK 4
#define IWL6000G2_UCODE_API_OK 5
#define IWL6050_UCODE_API_OK 5
#define IWL6000G2B_UCODE_API_OK 6
+#define IWL6035_UCODE_API_OK 6
/* Lowest firmware API version supported */
#define IWL6000_UCODE_API_MIN 4
#define IWL6050_UCODE_API_MIN 4
-#define IWL6000G2_UCODE_API_MIN 4
+#define IWL6000G2_UCODE_API_MIN 5
+#define IWL6035_UCODE_API_MIN 6
/* EEPROM versions */
#define EEPROM_6000_TX_POWER_VERSION (4)
IWL_DEVICE_6030,
};
+#define IWL_DEVICE_6035 \
+ .fw_name_pre = IWL6030_FW_PRE, \
+ .ucode_api_max = IWL6035_UCODE_API_MAX, \
+ .ucode_api_ok = IWL6035_UCODE_API_OK, \
+ .ucode_api_min = IWL6035_UCODE_API_MIN, \
+ .device_family = IWL_DEVICE_FAMILY_6030, \
+ .max_inst_size = IWL60_RTC_INST_SIZE, \
+ .max_data_size = IWL60_RTC_DATA_SIZE, \
+ .eeprom_ver = EEPROM_6030_EEPROM_VERSION, \
+ .eeprom_calib_ver = EEPROM_6030_TX_POWER_VERSION, \
+ .base_params = &iwl6000_g2_base_params, \
+ .bt_params = &iwl6000_bt_params, \
+ .need_temp_offset_calib = true, \
+ .led_mode = IWL_LED_RF_STATE, \
+ .adv_pm = true
+
const struct iwl_cfg iwl6035_2agn_cfg = {
.name = "Intel(R) Centrino(R) Advanced-N 6235 AGN",
- IWL_DEVICE_6030,
+ IWL_DEVICE_6035,
.ht_params = &iwl6000_ht_params,
};
key_flags |= STA_KEY_MULTICAST_MSK;
sta_cmd.key.key_flags = key_flags;
- sta_cmd.key.key_offset = WEP_INVALID_OFFSET;
+ sta_cmd.key.key_offset = keyconf->hw_key_idx;
sta_cmd.sta.modify_mask = STA_MODIFY_KEY_MASK;
sta_cmd.mode = STA_CONTROL_MODIFY_MSK;
/* We have our copies now, allow OS release its copies */
release_firmware(ucode_raw);
- complete(&drv->request_firmware_complete);
drv->op_mode = iwl_dvm_ops.start(drv->trans, drv->cfg, &drv->fw);
if (!drv->op_mode)
- goto out_free_fw;
+ goto out_unbind;
+ /*
+ * Complete the firmware request last so that
+ * a driver unbind (stop) doesn't run while we
+ * are doing the start() above.
+ */
+ complete(&drv->request_firmware_complete);
return;
try_again:
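Moving complete() after the op_mode start is a general ordering rule: signalling a completion publishes "setup finished" to any waiter, so everything a waiter may tear down must exist first. A hedged sketch (start_op_mode() is a hypothetical constructor, not the iwlwifi API):

#include <linux/completion.h>

extern void *start_op_mode(void);	/* hypothetical */

struct drv_ctx {
	struct completion request_firmware_complete;
	void *op_mode;
};

static void drv_finish_start(struct drv_ctx *drv)
{
	drv->op_mode = start_op_mode();
	/*
	 * Only now may a concurrent unbind, which waits on
	 * request_firmware_complete and then stops op_mode, proceed.
	 */
	complete(&drv->request_firmware_complete);
}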
* iwl_get_max_txpower_avg - get the highest tx power from all chains.
* find the highest tx power from all chains for the channel
*/
-static s8 iwl_get_max_txpower_avg(const struct iwl_cfg *cfg,
+static s8 iwl_get_max_txpower_avg(struct iwl_priv *priv,
struct iwl_eeprom_enhanced_txpwr *enhanced_txpower,
int element, s8 *max_txpower_in_half_dbm)
{
s8 max_txpower_avg = 0; /* (dBm) */
/* Take the highest tx power from any valid chains */
- if ((cfg->valid_tx_ant & ANT_A) &&
+ if ((priv->hw_params.valid_tx_ant & ANT_A) &&
(enhanced_txpower[element].chain_a_max > max_txpower_avg))
max_txpower_avg = enhanced_txpower[element].chain_a_max;
- if ((cfg->valid_tx_ant & ANT_B) &&
+ if ((priv->hw_params.valid_tx_ant & ANT_B) &&
(enhanced_txpower[element].chain_b_max > max_txpower_avg))
max_txpower_avg = enhanced_txpower[element].chain_b_max;
- if ((cfg->valid_tx_ant & ANT_C) &&
+ if ((priv->hw_params.valid_tx_ant & ANT_C) &&
(enhanced_txpower[element].chain_c_max > max_txpower_avg))
max_txpower_avg = enhanced_txpower[element].chain_c_max;
- if (((cfg->valid_tx_ant == ANT_AB) |
- (cfg->valid_tx_ant == ANT_BC) |
- (cfg->valid_tx_ant == ANT_AC)) &&
+ if (((priv->hw_params.valid_tx_ant == ANT_AB) |
+ (priv->hw_params.valid_tx_ant == ANT_BC) |
+ (priv->hw_params.valid_tx_ant == ANT_AC)) &&
(enhanced_txpower[element].mimo2_max > max_txpower_avg))
max_txpower_avg = enhanced_txpower[element].mimo2_max;
- if ((cfg->valid_tx_ant == ANT_ABC) &&
+ if ((priv->hw_params.valid_tx_ant == ANT_ABC) &&
(enhanced_txpower[element].mimo3_max > max_txpower_avg))
max_txpower_avg = enhanced_txpower[element].mimo3_max;
((txp->delta_20_in_40 & 0xf0) >> 4),
(txp->delta_20_in_40 & 0x0f));
- max_txp_avg = iwl_get_max_txpower_avg(priv->cfg, txp_array, idx,
+ max_txp_avg = iwl_get_max_txpower_avg(priv, txp_array, idx,
&max_txp_avg_halfdbm);
/*
WIPHY_FLAG_DISABLE_BEACON_HINTS |
WIPHY_FLAG_IBSS_RSN;
+#ifdef CONFIG_PM_SLEEP
if (priv->fw->img[IWL_UCODE_WOWLAN].sec[0].len &&
priv->trans->ops->wowlan_suspend &&
device_can_wakeup(priv->trans->dev)) {
hw->wiphy->wowlan.pattern_max_len =
IWLAGN_WOWLAN_MAX_PATTERN_LEN;
}
+#endif
if (iwlwifi_mod_params.power_save)
hw->wiphy->flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
ret = ieee80211_register_hw(priv->hw);
if (ret) {
IWL_ERR(priv, "Failed to register hw (error %d)\n", ret);
+ iwl_leds_exit(priv);
return ret;
}
priv->mac80211_registered = 1;
#define SCD_TXFACT (SCD_BASE + 0x10)
#define SCD_ACTIVE (SCD_BASE + 0x14)
#define SCD_QUEUECHAIN_SEL (SCD_BASE + 0xe8)
+#define SCD_CHAINEXT_EN (SCD_BASE + 0x244)
#define SCD_AGGR_SEL (SCD_BASE + 0x248)
#define SCD_INTERRUPT_MASK (SCD_BASE + 0x108)
iwl_write_prph(trans, SCD_DRAM_BASE_ADDR,
trans_pcie->scd_bc_tbls.dma >> 10);
+ /* The chain extension of the SCD doesn't work well. This feature is
+ * enabled by default by the HW, so we need to disable it manually.
+ */
+ iwl_write_prph(trans, SCD_CHAINEXT_EN, 0);
+
/* Enable DMA channel */
for (chan = 0; chan < FH_TCSR_CHNL_NUM ; chan++)
iwl_write_direct32(trans, FH_TCSR_CHNL_TX_CONFIG_REG(chan),
hdr = (struct ieee80211_hdr *) skb->data;
mac80211_hwsim_monitor_ack(data2->hw, hdr->addr2);
}
+ txi->flags |= IEEE80211_TX_STAT_ACK;
}
ieee80211_tx_status_irqsafe(data2->hw, skb);
return 0;
"unregister family %i\n", ret);
}
+static const struct ieee80211_iface_limit hwsim_if_limits[] = {
+ { .max = 1, .types = BIT(NL80211_IFTYPE_ADHOC) },
+ { .max = 2048, .types = BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_P2P_CLIENT) |
+#ifdef CONFIG_MAC80211_MESH
+ BIT(NL80211_IFTYPE_MESH_POINT) |
+#endif
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_P2P_GO) },
+};
+
+static const struct ieee80211_iface_combination hwsim_if_comb = {
+ .limits = hwsim_if_limits,
+ .n_limits = ARRAY_SIZE(hwsim_if_limits),
+ .max_interfaces = 2048,
+ .num_different_channels = 1,
+};
+
static int __init init_mac80211_hwsim(void)
{
int i, err = 0;
hw->wiphy->n_addresses = 2;
hw->wiphy->addresses = data->addresses;
+ hw->wiphy->iface_combinations = &hwsim_if_comb;
+ hw->wiphy->n_iface_combinations = 1;
+
if (fake_hw_scan) {
hw->wiphy->max_scan_ssids = 255;
hw->wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN;
bss_cfg->ssid.ssid_len = params->ssid_len;
}
+ switch (params->hidden_ssid) {
+ case NL80211_HIDDEN_SSID_NOT_IN_USE:
+ bss_cfg->bcast_ssid_ctl = 1;
+ break;
+ case NL80211_HIDDEN_SSID_ZERO_LEN:
+ bss_cfg->bcast_ssid_ctl = 0;
+ break;
+ case NL80211_HIDDEN_SSID_ZERO_CONTENTS:
+ /* firmware doesn't support this type of hidden SSID */
+ default:
+ return -EINVAL;
+ }
+
if (mwifiex_set_secure_params(priv, bss_cfg, params)) {
kfree(bss_cfg);
wiphy_err(wiphy, "Failed to parse security parameters!\n");
#define TLV_TYPE_CHANNELBANDLIST (PROPRIETARY_TLV_BASE_ID + 42)
#define TLV_TYPE_UAP_BEACON_PERIOD (PROPRIETARY_TLV_BASE_ID + 44)
#define TLV_TYPE_UAP_DTIM_PERIOD (PROPRIETARY_TLV_BASE_ID + 45)
+#define TLV_TYPE_UAP_BCAST_SSID (PROPRIETARY_TLV_BASE_ID + 48)
#define TLV_TYPE_UAP_RTS_THRESHOLD (PROPRIETARY_TLV_BASE_ID + 51)
#define TLV_TYPE_UAP_WPA_PASSPHRASE (PROPRIETARY_TLV_BASE_ID + 60)
#define TLV_TYPE_UAP_ENCRY_PROTOCOL (PROPRIETARY_TLV_BASE_ID + 64)
u8 ssid[0];
} __packed;
+struct host_cmd_tlv_bcast_ssid {
+ struct host_cmd_tlv tlv;
+ u8 bcast_ctl;
+} __packed;
+
struct host_cmd_tlv_beacon_period {
struct host_cmd_tlv tlv;
__le16 period;
struct host_cmd_tlv_dtim_period *dtim_period;
struct host_cmd_tlv_beacon_period *beacon_period;
struct host_cmd_tlv_ssid *ssid;
+ struct host_cmd_tlv_bcast_ssid *bcast_ssid;
struct host_cmd_tlv_channel_band *chan_band;
struct host_cmd_tlv_frag_threshold *frag_threshold;
struct host_cmd_tlv_rts_threshold *rts_threshold;
cmd_size += sizeof(struct host_cmd_tlv) +
bss_cfg->ssid.ssid_len;
tlv += sizeof(struct host_cmd_tlv) + bss_cfg->ssid.ssid_len;
+
+ bcast_ssid = (struct host_cmd_tlv_bcast_ssid *)tlv;
+ bcast_ssid->tlv.type = cpu_to_le16(TLV_TYPE_UAP_BCAST_SSID);
+ bcast_ssid->tlv.len =
+ cpu_to_le16(sizeof(bcast_ssid->bcast_ctl));
+ bcast_ssid->bcast_ctl = bss_cfg->bcast_ssid_ctl;
+ cmd_size += sizeof(struct host_cmd_tlv_bcast_ssid);
+ tlv += sizeof(struct host_cmd_tlv_bcast_ssid);
}
if (bss_cfg->channel && bss_cfg->channel <= MAX_CHANNEL_BAND_BG) {
chan_band = (struct host_cmd_tlv_channel_band *)tlv;
if (!bss_cfg)
return -ENOMEM;
+ mwifiex_set_sys_config_invalid_data(bss_cfg);
bss_cfg->band_cfg = BAND_CONFIG_MANUAL;
bss_cfg->channel = channel;
* for hardware which doesn't support hardware
* sequence counting.
*/
- spinlock_t seqlock;
- u16 seqno;
+ atomic_t seqno;
};
static inline struct rt2x00_intf* vif_to_intf(struct ieee80211_vif *vif)
else
rt2x00dev->intf_sta_count++;
- spin_lock_init(&intf->seqlock);
mutex_init(&intf->beacon_skb_mutex);
intf->beacon = entry;
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
struct rt2x00_intf *intf = vif_to_intf(tx_info->control.vif);
+ u16 seqno;
if (!(tx_info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ))
return;
* sequence counting per-frame, since those will override the
* sequence counter given by mac80211.
*/
- spin_lock(&intf->seqlock);
-
if (test_bit(ENTRY_TXD_FIRST_FRAGMENT, &txdesc->flags))
- intf->seqno += 0x10;
- hdr->seq_ctrl &= cpu_to_le16(IEEE80211_SCTL_FRAG);
- hdr->seq_ctrl |= cpu_to_le16(intf->seqno);
-
- spin_unlock(&intf->seqlock);
+ seqno = atomic_add_return(0x10, &intf->seqno);
+ else
+ seqno = atomic_read(&intf->seqno);
+ hdr->seq_ctrl &= cpu_to_le16(IEEE80211_SCTL_FRAG);
+ hdr->seq_ctrl |= cpu_to_le16(seqno);
}
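The rt2x00 hunk trades a spinlock-protected u16 for an atomic counter: atomic_add_return() advances and reads the sequence in one indivisible step, so no lock is needed. The counter steps by 0x10 because the low four bits of seq_ctrl hold the fragment number. Reduced to its core:

#include <linux/atomic.h>

static atomic_t seqno;

static u16 next_seqno(bool first_fragment)
{
	/* first fragment: bump the counter and use the new value */
	if (first_fragment)
		return (u16)atomic_add_return(0x10, &seqno);
	/* later fragments reuse the current sequence number */
	return (u16)atomic_read(&seqno);
}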
static void rt2x00queue_create_tx_descriptor_plcp(struct rt2x00_dev *rt2x00dev,
radio_on = true;
} else if (radio_on) {
radio_on = false;
- cancel_delayed_work_sync(&priv->led_on);
+ cancel_delayed_work(&priv->led_on);
ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
}
} else if (radio_on) {
list_for_each_entry(_maps_node_, &pinctrl_maps, node) \
for (_i_ = 0, _map_ = &_maps_node_->maps[_i_]; \
_i_ < _maps_node_->num_maps; \
- i++, _map_ = &_maps_node_->maps[_i_])
+ _i_++, _map_ = &_maps_node_->maps[_i_])
/**
* pinctrl_provide_dummies() - indicate if pinctrl provides dummy state support
#include "core.h"
#include "pinctrl-imx.h"
-#define IMX_PMX_DUMP(info, p, m, c, n) \
-{ \
- int i, j; \
- printk("Format: Pin Mux Config\n"); \
- for (i = 0; i < n; i++) { \
- j = p[i]; \
- printk("%s %d 0x%lx\n", \
- info->pins[j].name, \
- m[i], c[i]); \
- } \
+#define IMX_PMX_DUMP(info, p, m, c, n) \
+{ \
+ int i, j; \
+ printk(KERN_DEBUG "Format: Pin Mux Config\n"); \
+ for (i = 0; i < n; i++) { \
+ j = p[i]; \
+ printk(KERN_DEBUG "%s %d 0x%lx\n", \
+ info->pins[j].name, \
+ m[i], c[i]); \
+ } \
}
/* The bits in CONFIG cell defined in binding doc*/
/* create mux map */
parent = of_get_parent(np);
- if (!parent)
+ if (!parent) {
+ kfree(new_map);
return -EINVAL;
+ }
new_map[0].type = PIN_MAP_TYPE_MUX_GROUP;
new_map[0].data.mux.function = parent->name;
new_map[0].data.mux.group = np->name;
}
dev_dbg(pctldev->dev, "maps: function %s group %s num %d\n",
- new_map->data.mux.function, new_map->data.mux.group, map_num);
+ (*map)->data.mux.function, (*map)->data.mux.group, map_num);
return 0;
}
static void imx_dt_free_map(struct pinctrl_dev *pctldev,
struct pinctrl_map *map, unsigned num_maps)
{
- int i;
-
- for (i = 0; i < num_maps; i++)
- kfree(map);
+ kfree(map);
}
static struct pinctrl_ops imx_pctrl_ops = {
grp->configs[j] = config & ~IMX_PAD_SION;
}
-#ifdef DEBUG
IMX_PMX_DUMP(info, grp->pins, grp->mux_mode, grp->configs, grp->npins);
-#endif
+
return 0;
}
/* Compose group name */
group = kzalloc(length, GFP_KERNEL);
- if (!group)
- return -ENOMEM;
+ if (!group) {
+ ret = -ENOMEM;
+ goto free;
+ }
snprintf(group, length, "%s.%d", np->name, reg);
new_map[i].data.mux.group = group;
i++;
pconfig = kmemdup(&config, sizeof(config), GFP_KERNEL);
if (!pconfig) {
ret = -ENOMEM;
- goto free;
+ goto free_group;
}
new_map[i].type = PIN_MAP_TYPE_CONFIGS_GROUP;
return 0;
+free_group:
+ if (!purecfg)
+ kfree(group);
free:
kfree(new_map);
return ret;
return 0;
err:
+ platform_set_drvdata(pdev, NULL);
iounmap(d->base);
return ret;
}
{
struct mxs_pinctrl_data *d = platform_get_drvdata(pdev);
+ platform_set_drvdata(pdev, NULL);
pinctrl_unregister(d->pctl);
iounmap(d->base);
* wakeup is anyhow controlled by the RIMSC and FIMSC registers.
*/
if (nmk_chip->sleepmode && on) {
- __nmk_gpio_set_slpm(nmk_chip, gpio % nmk_chip->chip.base,
+ __nmk_gpio_set_slpm(nmk_chip, gpio % NMK_GPIO_PER_CHIP,
NMK_GPIO_SLPM_WAKEUP_ENABLE);
}
ret = PTR_ERR(clk);
goto out_unmap;
}
+ clk_prepare(clk);
nmk_chip = kzalloc(sizeof(*nmk_chip), GFP_KERNEL);
if (!nmk_chip) {
return ret;
}
-static const struct of_device_id pinmux_ids[] = {
+static const struct of_device_id pinmux_ids[] __devinitconst = {
{ .compatible = "sirf,prima2-gpio-pinmux" },
{}
};
.of_match_table = of_anatop_regulator_match_tbl,
},
.probe = anatop_regulator_probe,
- .remove = anatop_regulator_remove,
+ .remove = __devexit_p(anatop_regulator_remove),
};
static int __init anatop_regulator_init(void)
return -EINVAL;
}
+ if (min_uV < rdev->desc->min_uV)
+ min_uV = rdev->desc->min_uV;
+
ret = DIV_ROUND_UP(min_uV - rdev->desc->min_uV, rdev->desc->uV_step);
if (ret < 0)
return ret;
}
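The clamp above keeps the selector arithmetic from going negative for requests below the regulator's floor. With hypothetical descriptor values, the math looks like this:

#include <linux/kernel.h>

#define DESC_MIN_UV	800000	/* hypothetical regulator floor */
#define DESC_UV_STEP	12500	/* hypothetical step size */

static int linear_selector(int min_uV)
{
	/* clamp first, as in the fix above */
	if (min_uV < DESC_MIN_UV)
		min_uV = DESC_MIN_UV;
	/* e.g. 850000 -> DIV_ROUND_UP(50000, 12500) = 4; 700000 -> 0 */
	return DIV_ROUND_UP(min_uV - DESC_MIN_UV, DESC_UV_STEP);
}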
static int gpio_regulator_set_value(struct regulator_dev *dev,
- int min, int max)
+ int min, int max, unsigned *selector)
{
struct gpio_regulator_data *data = rdev_get_drvdata(dev);
- int ptr, target, state, best_val = INT_MAX;
+ int ptr, target = 0, state, best_val = INT_MAX;
for (ptr = 0; ptr < data->nr_states; ptr++)
if (data->states[ptr].value < best_val &&
data->states[ptr].value >= min &&
- data->states[ptr].value <= max)
+ data->states[ptr].value <= max) {
target = data->states[ptr].gpios;
+ best_val = data->states[ptr].value;
+ if (selector)
+ *selector = ptr;
+ }
if (best_val == INT_MAX)
return -EINVAL;
int min_uV, int max_uV,
unsigned *selector)
{
- return gpio_regulator_set_value(dev, min_uV, max_uV);
+ return gpio_regulator_set_value(dev, min_uV, max_uV, selector);
}
static int gpio_regulator_list_voltage(struct regulator_dev *dev,
static int gpio_regulator_set_current_limit(struct regulator_dev *dev,
int min_uA, int max_uA)
{
- return gpio_regulator_set_value(dev, min_uA, max_uA);
+ return gpio_regulator_set_value(dev, min_uA, max_uA, NULL);
}
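Before the fix, best_val was never updated, so the INT_MAX sentinel check at the end always reported -EINVAL; the corrected loop is a plain minimum-in-range search that also records which state won. Stand-alone, with hypothetical types:

#include <linux/errno.h>
#include <linux/kernel.h>

struct vstate { int value; int gpios; };

/* return the index of the lowest value inside [min, max], or -EINVAL */
static int pick_lowest_in_range(const struct vstate *states, int n,
				int min, int max)
{
	int i, best = -EINVAL, best_val = INT_MAX;

	for (i = 0; i < n; i++)
		if (states[i].value < best_val &&
		    states[i].value >= min && states[i].value <= max) {
			best_val = states[i].value;
			best = i;
		}
	return best;
}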
static struct regulator_ops gpio_regulator_voltage_ops = {
cfg.dev = &pdev->dev;
cfg.init_data = config->init_data;
- cfg.driver_data = &drvdata;
+ cfg.driver_data = drvdata;
drvdata->dev = regulator_register(&drvdata->desc, &cfg);
if (IS_ERR(drvdata->dev)) {
config.dev = &client->dev;
config.init_data = pdata->regulator;
config.driver_data = info;
+ config.regmap = info->regmap;
info->regulator = regulator_register(&dcdc_desc, &config);
if (IS_ERR(info->regulator)) {
err_unregister_regulator:
while (--id >= 0)
regulator_unregister(pmic->rdev[id]);
- kfree(pmic->rdev);
- kfree(pmic->desc);
- kfree(pmic);
return ret;
}
for (id = 0; id < PALMAS_NUM_REGS; id++)
regulator_unregister(pmic->rdev[id]);
-
- kfree(pmic->rdev);
- kfree(pmic->desc);
- kfree(pmic);
return 0;
}
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
-#include <linux/version.h>
#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
}
cmd = qlt_ctio_to_cmd(vha, handle, ctio);
- if (cmd == NULL) {
- if (status != CTIO_SUCCESS)
- qlt_term_ctio_exchange(vha, ctio, NULL, status);
+ if (cmd == NULL)
return;
- }
+
se_cmd = &cmd->se_cmd;
tfo = se_cmd->se_tfo;
out_term:
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf020, "Terminating work cmd %p", cmd);
/*
- * cmd has not sent to target yet, so pass NULL as the second argument
+ * cmd has not been sent to the target yet, so pass NULL as the second
+ * argument to qlt_send_term_exchange() and free the memory here.
*/
spin_lock_irqsave(&ha->hardware_lock, flags);
qlt_send_term_exchange(vha, NULL, &cmd->atio, 1);
+ kmem_cache_free(qla_tgt_cmd_cachep, cmd);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
if (sess)
ha->tgt.tgt_ops->put_sess(sess);
#define QLA_TGT_XMIT_STATUS 2
#define QLA_TGT_XMIT_ALL (QLA_TGT_XMIT_STATUS|QLA_TGT_XMIT_DATA)
-#include <linux/version.h>
extern struct qla_tgt_data qla_target;
/*
*/
static int tcm_qla2xxx_npiv_extract_wwn(const char *ns, u64 *nm)
{
- unsigned int i, j, value;
+ unsigned int i, j;
u8 wwn[8];
memset(wwn, 0, sizeof(wwn));
/* Validate and store the new name */
for (i = 0, j = 0; i < 16; i++) {
+ int value;
+
value = hex_to_bin(*ns++);
if (value >= 0)
j = (j << 4) | value;
/*
* Called from qla_target.c:qlt_issue_task_mgmt()
*/
-int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *mcmd, uint32_t lun,
- uint8_t tmr_func, uint32_t tag)
+static int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *mcmd, uint32_t lun,
+ uint8_t tmr_func, uint32_t tag)
{
struct qla_tgt_sess *sess = mcmd->sess;
struct se_cmd *se_cmd = &mcmd->se_cmd;
struct target_fabric_configfs *tcm_qla2xxx_fabric_configfs;
struct target_fabric_configfs *tcm_qla2xxx_npiv_fabric_configfs;
-static int tcm_qla2xxx_setup_nacl_from_rport(
- struct se_portal_group *se_tpg,
- struct se_node_acl *se_nacl,
- struct tcm_qla2xxx_lport *lport,
- struct tcm_qla2xxx_nacl *nacl,
- u64 rport_wwnn)
-{
- struct scsi_qla_host *vha = lport->qla_vha;
- struct Scsi_Host *sh = vha->host;
- struct fc_host_attrs *fc_host = shost_to_fc_host(sh);
- struct fc_rport *rport;
- unsigned long flags;
- void *node;
- int rc;
-
- /*
- * Scan the existing rports, and create a session for the
- * explict NodeACL is an matching rport->node_name already
- * exists.
- */
- spin_lock_irqsave(sh->host_lock, flags);
- list_for_each_entry(rport, &fc_host->rports, peers) {
- if (rport_wwnn != rport->node_name)
- continue;
-
- pr_debug("Located existing rport_wwpn and rport->node_name: 0x%016LX, port_id: 0x%04x\n",
- rport->node_name, rport->port_id);
- nacl->nport_id = rport->port_id;
-
- spin_unlock_irqrestore(sh->host_lock, flags);
-
- spin_lock_irqsave(&vha->hw->hardware_lock, flags);
- node = btree_lookup32(&lport->lport_fcport_map, rport->port_id);
- if (node) {
- rc = btree_update32(&lport->lport_fcport_map,
- rport->port_id, se_nacl);
- } else {
- rc = btree_insert32(&lport->lport_fcport_map,
- rport->port_id, se_nacl,
- GFP_ATOMIC);
- }
- spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
-
- if (rc) {
- pr_err("Unable to insert se_nacl into fcport_map");
- WARN_ON(rc > 0);
- return rc;
- }
-
- pr_debug("Inserted into fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%08x\n",
- se_nacl, rport_wwnn, nacl->nport_id);
-
- return 1;
- }
- spin_unlock_irqrestore(sh->host_lock, flags);
-
- return 0;
-}
-
+static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *,
+ struct tcm_qla2xxx_nacl *, struct qla_tgt_sess *);
/*
* Expected to be called with struct qla_hw_data->hardware_lock held
*/
pr_debug("Removed from fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%06x\n",
se_nacl, nacl->nport_wwnn, nacl->nport_id);
+ /*
+ * Now clear the se_nacl and session pointers from our HW lport lookup
+ * table mapping for this initiator's fabric S_ID and LOOP_ID entries.
+ *
+ * This is done ahead of callbacks into tcm_qla2xxx_free_session() ->
+ * target_wait_for_sess_cmds() before the session waits for outstanding
+ * I/O to complete, to avoid a race between session shutdown execution
+ * and incoming ATIOs or TMRs picking up a stale se_node_acl reference.
+ */
+ tcm_qla2xxx_clear_sess_lookup(lport, nacl, sess);
+}
+
+static void tcm_qla2xxx_release_session(struct kref *kref)
+{
+ struct se_session *se_sess = container_of(kref,
+ struct se_session, sess_kref);
+
+ qlt_unreg_sess(se_sess->fabric_sess_ptr);
+}
+
+static void tcm_qla2xxx_put_session(struct se_session *se_sess)
+{
+ struct qla_tgt_sess *sess = se_sess->fabric_sess_ptr;
+ struct qla_hw_data *ha = sess->vha->hw;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ha->hardware_lock, flags);
+ kref_put(&se_sess->sess_kref, tcm_qla2xxx_release_session);
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
static void tcm_qla2xxx_put_sess(struct qla_tgt_sess *sess)
{
- target_put_session(sess->se_sess);
+ tcm_qla2xxx_put_session(sess->se_sess);
}
static void tcm_qla2xxx_shutdown_sess(struct qla_tgt_sess *sess)
struct config_group *group,
const char *name)
{
- struct se_wwn *se_wwn = se_tpg->se_tpg_wwn;
- struct tcm_qla2xxx_lport *lport = container_of(se_wwn,
- struct tcm_qla2xxx_lport, lport_wwn);
struct se_node_acl *se_nacl, *se_nacl_new;
struct tcm_qla2xxx_nacl *nacl;
u64 wwnn;
u32 qla2xxx_nexus_depth;
- int rc;
if (tcm_qla2xxx_parse_wwn(name, &wwnn, 1) < 0)
return ERR_PTR(-EINVAL);
nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
nacl->nport_wwnn = wwnn;
tcm_qla2xxx_format_wwn(&nacl->nport_name[0], TCM_QLA2XXX_NAMELEN, wwnn);
- /*
- * Setup a se_nacl handle based on an a matching struct fc_rport setup
- * via drivers/scsi/qla2xxx/qla_init.c:qla2x00_reg_remote_port()
- */
- rc = tcm_qla2xxx_setup_nacl_from_rport(se_tpg, se_nacl, lport,
- nacl, wwnn);
- if (rc < 0) {
- tcm_qla2xxx_release_fabric_acl(se_tpg, se_nacl_new);
- return ERR_PTR(rc);
- }
return se_nacl;
}
nacl->qla_tgt_sess, new_se_nacl, new_se_nacl->initiatorname);
}
+/*
+ * Should always be called with qla_hw_data->hardware_lock held.
+ */
+static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *lport,
+ struct tcm_qla2xxx_nacl *nacl, struct qla_tgt_sess *sess)
+{
+ struct se_session *se_sess = sess->se_sess;
+ unsigned char be_sid[3];
+
+ be_sid[0] = sess->s_id.b.domain;
+ be_sid[1] = sess->s_id.b.area;
+ be_sid[2] = sess->s_id.b.al_pa;
+
+ tcm_qla2xxx_set_sess_by_s_id(lport, NULL, nacl, se_sess,
+ sess, be_sid);
+ tcm_qla2xxx_set_sess_by_loop_id(lport, NULL, nacl, se_sess,
+ sess, sess->loop_id);
+}
+
static void tcm_qla2xxx_free_session(struct qla_tgt_sess *sess)
{
struct qla_tgt *tgt = sess->tgt;
struct se_node_acl *se_nacl;
struct tcm_qla2xxx_lport *lport;
struct tcm_qla2xxx_nacl *nacl;
- unsigned char be_sid[3];
- unsigned long flags;
BUG_ON(in_interrupt());
return;
}
target_wait_for_sess_cmds(se_sess, 0);
- /*
- * And now clear the se_nacl and session pointers from our HW lport
- * mappings for fabric S_ID and LOOP_ID.
- */
- memset(&be_sid, 0, 3);
- be_sid[0] = sess->s_id.b.domain;
- be_sid[1] = sess->s_id.b.area;
- be_sid[2] = sess->s_id.b.al_pa;
-
- spin_lock_irqsave(&ha->hardware_lock, flags);
- tcm_qla2xxx_set_sess_by_s_id(lport, NULL, nacl, se_sess,
- sess, be_sid);
- tcm_qla2xxx_set_sess_by_loop_id(lport, NULL, nacl, se_sess,
- sess, sess->loop_id);
- spin_unlock_irqrestore(&ha->hardware_lock, flags);
transport_deregister_session_configfs(sess->se_sess);
transport_deregister_session(sess->se_sess);
.new_cmd_map = NULL,
.check_stop_free = tcm_qla2xxx_check_stop_free,
.release_cmd = tcm_qla2xxx_release_cmd,
+ .put_session = tcm_qla2xxx_put_session,
.shutdown_session = tcm_qla2xxx_shutdown_session,
.close_session = tcm_qla2xxx_close_session,
.sess_get_index = tcm_qla2xxx_sess_get_index,
.tpg_release_fabric_acl = tcm_qla2xxx_release_fabric_acl,
.tpg_get_inst_index = tcm_qla2xxx_tpg_get_inst_index,
.release_cmd = tcm_qla2xxx_release_cmd,
+ .put_session = tcm_qla2xxx_put_session,
.shutdown_session = tcm_qla2xxx_shutdown_session,
.close_session = tcm_qla2xxx_close_session,
.sess_get_index = tcm_qla2xxx_sess_get_index,
out:
transport_kunmap_data_sg(cmd);
- target_complete_cmd(cmd, GOOD);
- return 0;
+ if (!rc)
+ target_complete_cmd(cmd, GOOD);
+ return rc;
}
static inline int core_alua_state_nonoptimized(
}
EXPORT_SYMBOL(transport_register_session);
-static void target_release_session(struct kref *kref)
+void target_release_session(struct kref *kref)
{
struct se_session *se_sess = container_of(kref,
struct se_session, sess_kref);
void target_put_session(struct se_session *se_sess)
{
+ struct se_portal_group *tpg = se_sess->se_tpg;
+
+ if (tpg->se_tpg_tfo->put_session != NULL) {
+ tpg->se_tpg_tfo->put_session(se_sess);
+ return;
+ }
kref_put(&se_sess->sess_kref, target_release_session);
}
EXPORT_SYMBOL(target_put_session);
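target_put_session() now lets a fabric take over the final reference drop, which tcm_qla2xxx uses above to wrap kref_put() in its hardware lock. The optional-op dispatch pattern, sketched with hypothetical types:

#include <linux/kref.h>

struct sess;

struct fabric_ops {
	void (*put_session)(struct sess *);	/* optional override */
};

struct sess {
	const struct fabric_ops *ops;
	struct kref kref;
};

static void sess_release(struct kref *kref)
{
	/* default destructor; a fabric override ends up here too */
}

static void sess_put(struct sess *s)
{
	if (s->ops->put_session) {
		s->ops->put_session(s);	/* fabric supplies its own locking */
		return;
	}
	kref_put(&s->kref, sess_release);
}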
return 0;
}
+static void sci_cleanup_single(struct sci_port *port)
+{
+ sci_free_gpios(port);
+
+ clk_put(port->iclk);
+ clk_put(port->fclk);
+
+ pm_runtime_disable(port->port.dev);
+}
+
#ifdef CONFIG_SERIAL_SH_SCI_CONSOLE
static void serial_console_putchar(struct uart_port *port, int ch)
{
cpufreq_unregister_notifier(&port->freq_transition,
CPUFREQ_TRANSITION_NOTIFIER);
- sci_free_gpios(port);
-
uart_remove_one_port(&sci_uart_driver, &port->port);
- clk_put(port->iclk);
- clk_put(port->fclk);
+ sci_cleanup_single(port);
- pm_runtime_disable(&dev->dev);
return 0;
}
index+1, SCI_NPORTS);
dev_notice(&dev->dev, "Consider bumping "
"CONFIG_SERIAL_SH_SCI_NR_UARTS!\n");
- return 0;
+ return -EINVAL;
}
ret = sci_init_single(dev, sciport, index, p);
if (ret)
return ret;
- return uart_add_one_port(&sci_uart_driver, &sciport->port);
+ ret = uart_add_one_port(&sci_uart_driver, &sciport->port);
+ if (ret) {
+ sci_cleanup_single(sciport);
+ return ret;
+ }
+
+ return 0;
}
static int __devinit sci_probe(struct platform_device *dev)
ret = sci_probe_single(dev, dev->id, p, sp);
if (ret)
- goto err_unreg;
+ return ret;
sp->freq_transition.notifier_call = sci_notifier;
ret = cpufreq_register_notifier(&sp->freq_transition,
CPUFREQ_TRANSITION_NOTIFIER);
- if (unlikely(ret < 0))
- goto err_unreg;
+ if (unlikely(ret < 0)) {
+ sci_cleanup_single(sp);
+ return ret;
+ }
#ifdef CONFIG_SH_STANDARD_BIOS
sh_bios_gdb_detach();
#endif
return 0;
-
-err_unreg:
- sci_remove(dev);
- return ret;
}
static int sci_suspend(struct device *dev)
{
struct omap_dss_device *dssdev = to_dss_device(dev);
struct taal_data *td = dev_get_drvdata(&dssdev->dev);
- u8 errors;
+ u8 errors = 0;
int r;
mutex_lock(&td->lock);
static inline void dss_uninitialize_debugfs(void)
{
}
-static inline int dss_debugfs_create_file(const char *name,
- void (*write)(struct seq_file *))
+int dss_debugfs_create_file(const char *name, void (*write)(struct seq_file *))
{
return 0;
}
/* CLKIN4DDR = 16 * TXBYTECLKHS */
tlp_avail = thsbyte_clk * (blank - trans_lp);
- ttxclkesc = tdsi_fclk / lp_clk_div;
+ ttxclkesc = tdsi_fclk * lp_clk_div;
lp_inter = ((tlp_avail - 8 * thsbyte_clk - 5 * tdsi_fclk) / ttxclkesc -
26) / 16;
DSSDBG("dss_runtime_put\n");
r = pm_runtime_put_sync(&dss.pdev->dev);
- WARN_ON(r < 0);
+ WARN_ON(r < 0 && r != -EBUSY);
}
/* DEBUGFS */
static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
struct ulist *parents, int level,
- struct btrfs_key *key, u64 wanted_disk_byte,
+ struct btrfs_key *key, u64 time_seq,
+ u64 wanted_disk_byte,
const u64 *extent_item_pos)
{
int ret;
*/
while (1) {
eie = NULL;
- ret = btrfs_next_leaf(root, path);
+ ret = btrfs_next_old_leaf(root, path, time_seq);
if (ret < 0)
return ret;
if (ret)
goto out;
}
- if (level == 0) {
- if (ret == 1 && path->slots[0] >= btrfs_header_nritems(eb)) {
- ret = btrfs_next_leaf(root, path);
- if (ret)
- goto out;
- eb = path->nodes[0];
- }
-
+ if (level == 0)
btrfs_item_key_to_cpu(eb, &key, path->slots[0]);
- }
- ret = add_all_parents(root, path, parents, level, &key,
+ ret = add_all_parents(root, path, parents, level, &key, time_seq,
ref->wanted_disk_byte, extent_item_pos);
out:
btrfs_free_path(path);
#define BTRFS_INODE_IN_DEFRAG 3
#define BTRFS_INODE_DELALLOC_META_RESERVED 4
#define BTRFS_INODE_HAS_ORPHAN_ITEM 5
+#define BTRFS_INODE_HAS_ASYNC_EXTENT 6
/* in memory btrfs inode */
struct btrfs_inode {
#include "print-tree.h"
#include "locking.h"
#include "check-integrity.h"
+#include "rcu-string.h"
#define BTRFSIC_BLOCK_HASHTABLE_SIZE 0x10000
#define BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE 0x10000
superblock_tmp->never_written = 0;
superblock_tmp->mirror_num = 1 + superblock_mirror_num;
if (state->print_mask & BTRFSIC_PRINT_MASK_SUPERBLOCK_WRITE)
- printk(KERN_INFO "New initial S-block (bdev %p, %s)"
- " @%llu (%s/%llu/%d)\n",
- superblock_bdev, device->name,
- (unsigned long long)dev_bytenr,
- dev_state->name,
- (unsigned long long)dev_bytenr,
- superblock_mirror_num);
+ printk_in_rcu(KERN_INFO "New initial S-block (bdev %p, %s)"
+ " @%llu (%s/%llu/%d)\n",
+ superblock_bdev,
+ rcu_str_deref(device->name),
+ (unsigned long long)dev_bytenr,
+ dev_state->name,
+ (unsigned long long)dev_bytenr,
+ superblock_mirror_num);
list_add(&superblock_tmp->all_blocks_node,
&state->all_blocks_list);
btrfsic_block_hashtable_add(superblock_tmp,
return 0;
}
+/*
+ * This allocates memory and gets a tree modification sequence number when
+ * needed.
+ *
+ * Returns 0 when no sequence number is needed, < 0 on error.
+ * Returns 1 when a sequence number was added. In this case,
+ * fs_info->tree_mod_seq_lock was acquired and must be released by the caller
+ * after inserting into the rb tree.
+ */
static inline int tree_mod_alloc(struct btrfs_fs_info *fs_info, gfp_t flags,
struct tree_mod_elem **tm_ret)
{
*/
kfree(tm);
seq = 0;
+ spin_unlock(&fs_info->tree_mod_seq_lock);
} else {
__get_tree_mod_seq(fs_info, &tm->elem);
seq = tm->elem.seq;
}
- spin_unlock(&fs_info->tree_mod_seq_lock);
return seq;
}
tm->slot = slot;
tm->generation = btrfs_node_ptr_generation(eb, slot);
- return __tree_mod_log_insert(fs_info, tm);
+ ret = __tree_mod_log_insert(fs_info, tm);
+ spin_unlock(&fs_info->tree_mod_seq_lock);
+ return ret;
}
static noinline int
tm->move.nr_items = nr_items;
tm->op = MOD_LOG_MOVE_KEYS;
- return __tree_mod_log_insert(fs_info, tm);
+ ret = __tree_mod_log_insert(fs_info, tm);
+ spin_unlock(&fs_info->tree_mod_seq_lock);
+ return ret;
}
static noinline int
tm->generation = btrfs_header_generation(old_root);
tm->op = MOD_LOG_ROOT_REPLACE;
- return __tree_mod_log_insert(fs_info, tm);
+ ret = __tree_mod_log_insert(fs_info, tm);
+ spin_unlock(&fs_info->tree_mod_seq_lock);
+ return ret;
}
static struct tree_mod_elem *
looped = 1;
}
+ /* if there's no old root to return, return what we found instead */
+ if (!found)
+ found = tm;
+
return found;
}
return eb_rewin;
}
+/*
+ * get_old_root() rewinds the state of @root's root node to the given @time_seq
+ * value. If there are no changes, the current root->root_node is returned. If
+ * anything changed in between, there's a fresh buffer allocated on which the
+ * rewind operations are done. In any case, the returned buffer is read locked.
+ * Returns NULL on error (with no locks held).
+ */
static inline struct extent_buffer *
get_old_root(struct btrfs_root *root, u64 time_seq)
{
struct tree_mod_elem *tm;
struct extent_buffer *eb;
- struct tree_mod_root *old_root;
+ struct tree_mod_root *old_root = NULL;
u64 old_generation;
+ u64 logical;
+ eb = btrfs_read_lock_root_node(root);
tm = __tree_mod_log_oldest_root(root->fs_info, root, time_seq);
if (!tm)
return root->node;
- old_root = &tm->old_root;
- old_generation = tm->generation;
+ if (tm->op == MOD_LOG_ROOT_REPLACE) {
+ old_root = &tm->old_root;
+ old_generation = tm->generation;
+ logical = old_root->logical;
+ } else {
+ logical = root->node->start;
+ }
- tm = tree_mod_log_search(root->fs_info, old_root->logical, time_seq);
+ tm = tree_mod_log_search(root->fs_info, logical, time_seq);
/*
* there was an item in the log when __tree_mod_log_oldest_root
* returned. this one must not go away, because the time_seq passed to
*/
BUG_ON(!tm);
- if (old_root->logical == root->node->start) {
- /* there are logged operations for the current root */
- eb = btrfs_clone_extent_buffer(root->node);
- } else {
- /* there's a root replace operation for the current root */
+ if (old_root)
eb = alloc_dummy_extent_buffer(tm->index << PAGE_CACHE_SHIFT,
root->nodesize);
+ else
+ eb = btrfs_clone_extent_buffer(root->node);
+ btrfs_tree_read_unlock(root->node);
+ free_extent_buffer(root->node);
+ if (!eb)
+ return NULL;
+ btrfs_tree_read_lock(eb);
+ if (old_root) {
btrfs_set_header_bytenr(eb, eb->start);
btrfs_set_header_backref_rev(eb, BTRFS_MIXED_BACKREF_REV);
btrfs_set_header_owner(eb, root->root_key.objectid);
+ btrfs_set_header_level(eb, old_root->level);
+ btrfs_set_header_generation(eb, old_generation);
}
- if (!eb)
- return NULL;
- btrfs_set_header_level(eb, old_root->level);
- btrfs_set_header_generation(eb, old_generation);
__tree_mod_log_rewind(eb, time_seq, tm);
+ extent_buffer_get(eb);
return eb;
}
BTRFS_NODEPTRS_PER_BLOCK(root) / 4)
return 0;
- btrfs_header_nritems(mid);
-
left = read_node_slot(root, parent, pslot - 1);
if (left) {
btrfs_tree_lock(left);
wret = push_node_left(trans, root, left, mid, 1);
if (wret < 0)
ret = wret;
- btrfs_header_nritems(mid);
}
/*
again:
b = get_old_root(root, time_seq);
- extent_buffer_get(b);
level = btrfs_header_level(b);
- btrfs_tree_read_lock(b);
p->locks[level] = BTRFS_READ_LOCK;
while (b) {
* returns < 0 on io errors.
*/
int btrfs_next_leaf(struct btrfs_root *root, struct btrfs_path *path)
+{
+ return btrfs_next_old_leaf(root, path, 0);
+}
+
+int btrfs_next_old_leaf(struct btrfs_root *root, struct btrfs_path *path,
+ u64 time_seq)
{
int slot;
int level;
path->keep_locks = 1;
path->leave_spinning = 1;
- ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+ if (time_seq)
+ ret = btrfs_search_old_slot(root, &key, path, time_seq);
+ else
+ ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
path->keep_locks = 0;
if (ret < 0)
}
int btrfs_next_leaf(struct btrfs_root *root, struct btrfs_path *path);
+int btrfs_next_old_leaf(struct btrfs_root *root, struct btrfs_path *path,
+ u64 time_seq);
static inline int btrfs_next_item(struct btrfs_root *root, struct btrfs_path *p)
{
++p->slots[0];
}
}
}
+
+void btrfs_destroy_delayed_inodes(struct btrfs_root *root)
+{
+ struct btrfs_delayed_root *delayed_root;
+ struct btrfs_delayed_node *curr_node, *prev_node;
+
+ delayed_root = btrfs_get_delayed_root(root);
+
+ curr_node = btrfs_first_delayed_node(delayed_root);
+ while (curr_node) {
+ __btrfs_kill_delayed_node(curr_node);
+
+ prev_node = curr_node;
+ curr_node = btrfs_next_delayed_node(curr_node);
+ btrfs_release_delayed_node(prev_node);
+ }
+}
+
/* Used for drop dead root */
void btrfs_kill_all_delayed_nodes(struct btrfs_root *root);
+/* Used for cleaning up the transaction */
+void btrfs_destroy_delayed_inodes(struct btrfs_root *root);
+
/* Used for readdir() */
void btrfs_get_delayed_items(struct inode *inode, struct list_head *ins_list,
struct list_head *del_list);
#include "free-space-cache.h"
#include "inode-map.h"
#include "check-integrity.h"
+#include "rcu-string.h"
static struct extent_io_ops btree_extent_io_ops;
static void end_workqueue_fn(struct btrfs_work *work);
features = btrfs_super_incompat_flags(disk_super);
features |= BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF;
- if (tree_root->fs_info->compress_type & BTRFS_COMPRESS_LZO)
+ if (tree_root->fs_info->compress_type == BTRFS_COMPRESS_LZO)
features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
/*
struct btrfs_device *device = (struct btrfs_device *)
bh->b_private;
- printk_ratelimited(KERN_WARNING "lost page write due to "
- "I/O error on %s\n", device->name);
+ printk_ratelimited_in_rcu(KERN_WARNING "lost page write due to "
+ "I/O error on %s\n",
+ rcu_str_deref(device->name));
/* note, we don't set_buffer_write_io_error because we have
* our own ways of dealing with the IO errors
*/
wait_for_completion(&device->flush_wait);
if (bio_flagged(bio, BIO_EOPNOTSUPP)) {
- printk("btrfs: disabling barriers on dev %s\n",
- device->name);
+ printk_in_rcu("btrfs: disabling barriers on dev %s\n",
+ rcu_str_deref(device->name));
device->nobarriers = 1;
}
if (!bio_flagged(bio, BIO_UPTODATE)) {
delayed_refs = &trans->delayed_refs;
-again:
spin_lock(&delayed_refs->lock);
if (delayed_refs->num_entries == 0) {
spin_unlock(&delayed_refs->lock);
return ret;
}
- node = rb_first(&delayed_refs->root);
- while (node) {
+ while ((node = rb_first(&delayed_refs->root)) != NULL) {
ref = rb_entry(node, struct btrfs_delayed_ref_node, rb_node);
- node = rb_next(node);
-
- ref->in_tree = 0;
- rb_erase(&ref->rb_node, &delayed_refs->root);
- delayed_refs->num_entries--;
atomic_set(&ref->refs, 1);
if (btrfs_delayed_ref_is_head(ref)) {
struct btrfs_delayed_ref_head *head;
head = btrfs_delayed_node_to_head(ref);
- spin_unlock(&delayed_refs->lock);
- mutex_lock(&head->mutex);
+ if (!mutex_trylock(&head->mutex)) {
+ atomic_inc(&ref->refs);
+ spin_unlock(&delayed_refs->lock);
+
+ /* Need to wait for the delayed ref to run */
+ mutex_lock(&head->mutex);
+ mutex_unlock(&head->mutex);
+ btrfs_put_delayed_ref(ref);
+
+ continue;
+ }
+
kfree(head->extent_op);
delayed_refs->num_heads--;
if (list_empty(&head->cluster))
delayed_refs->num_heads_ready--;
list_del_init(&head->cluster);
- mutex_unlock(&head->mutex);
- btrfs_put_delayed_ref(ref);
- goto again;
}
+ ref->in_tree = 0;
+ rb_erase(&ref->rb_node, &delayed_refs->root);
+ delayed_refs->num_entries--;
+
spin_unlock(&delayed_refs->lock);
btrfs_put_delayed_ref(ref);
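The destroy path above cannot sleep on head->mutex while holding delayed_refs->lock, so it trylocks; on contention it pins the entry, drops the spinlock, waits for the mutex holder (the running delayed ref) to finish, and starts over. The lock-ordering pattern in isolation, with hypothetical names:

#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

struct entry {
	struct mutex mutex;
	atomic_t refs;
};

static void entry_put(struct entry *e)
{
	/* hypothetical release; drops the reference taken below */
}

static void destroy_entry(spinlock_t *tree_lock, struct entry *e)
{
again:
	spin_lock(tree_lock);
	if (!mutex_trylock(&e->mutex)) {
		atomic_inc(&e->refs);	/* keep e alive across the wait */
		spin_unlock(tree_lock);
		mutex_lock(&e->mutex);	/* wait for the holder to finish */
		mutex_unlock(&e->mutex);
		entry_put(e);
		goto again;
	}
	/* ... unlink e from the tree under both locks ... */
	mutex_unlock(&e->mutex);
	spin_unlock(tree_lock);
}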
&(&BTRFS_I(page->mapping->host)->io_tree)->buffer,
offset >> PAGE_CACHE_SHIFT);
spin_unlock(&dirty_pages->buffer_lock);
- if (eb) {
+ if (eb)
ret = test_and_clear_bit(EXTENT_BUFFER_DIRTY,
&eb->bflags);
- atomic_set(&eb->refs, 1);
- }
if (PageWriteback(page))
end_page_writeback(page);
spin_unlock_irq(&page->mapping->tree_lock);
}
- page->mapping->a_ops->invalidatepage(page, 0);
unlock_page(page);
+ page_cache_release(page);
}
}
u64 start;
u64 end;
int ret;
+ bool loop = true;
unpin = pinned_extents;
+again:
while (1) {
ret = find_first_extent_bit(unpin, 0, &start, &end,
EXTENT_DIRTY);
cond_resched();
}
+ if (loop) {
+ if (unpin == &root->fs_info->freed_extents[0])
+ unpin = &root->fs_info->freed_extents[1];
+ else
+ unpin = &root->fs_info->freed_extents[0];
+ loop = false;
+ goto again;
+ }
+
return 0;
}
/* FIXME: cleanup wait for commit */
cur_trans->in_commit = 1;
cur_trans->blocked = 1;
- if (waitqueue_active(&root->fs_info->transaction_blocked_wait))
- wake_up(&root->fs_info->transaction_blocked_wait);
+ wake_up(&root->fs_info->transaction_blocked_wait);
cur_trans->blocked = 0;
- if (waitqueue_active(&root->fs_info->transaction_wait))
- wake_up(&root->fs_info->transaction_wait);
+ wake_up(&root->fs_info->transaction_wait);
cur_trans->commit_done = 1;
- if (waitqueue_active(&cur_trans->commit_wait))
- wake_up(&cur_trans->commit_wait);
+ wake_up(&cur_trans->commit_wait);
+
+ btrfs_destroy_delayed_inodes(root);
+ btrfs_assert_delayed_root_empty(root);
btrfs_destroy_pending_snapshots(cur_trans);
btrfs_destroy_marked_extents(root, &cur_trans->dirty_pages,
EXTENT_DIRTY);
+ btrfs_destroy_pinned_extent(root,
+ root->fs_info->pinned_extents);
/*
memset(cur_trans, 0, sizeof(*cur_trans));
if (waitqueue_active(&t->commit_wait))
wake_up(&t->commit_wait);
+ btrfs_destroy_delayed_inodes(root);
+ btrfs_assert_delayed_root_empty(root);
+
btrfs_destroy_pending_snapshots(t);
btrfs_destroy_delalloc_inodes(root);
#include "volumes.h"
#include "check-integrity.h"
#include "locking.h"
+#include "rcu-string.h"
static struct kmem_cache *extent_state_cache;
static struct kmem_cache *extent_buffer_cache;
return -EIO;
}
- printk(KERN_INFO "btrfs read error corrected: ino %lu off %llu (dev %s "
- "sector %llu)\n", page->mapping->host->i_ino, start,
- dev->name, sector);
+ printk_in_rcu(KERN_INFO "btrfs read error corrected: ino %lu off %llu "
+ "(dev %s sector %llu)\n", page->mapping->host->i_ino,
+ start, rcu_str_deref(dev->name), sector);
bio_put(bio);
return 0;
if (IS_ERR(trans)) {
extent_clear_unlock_delalloc(inode,
&BTRFS_I(inode)->io_tree,
- start, end, NULL,
+ start, end, locked_page,
EXTENT_CLEAR_UNLOCK_PAGE |
EXTENT_CLEAR_UNLOCK |
EXTENT_CLEAR_DELALLOC |
out_unlock:
extent_clear_unlock_delalloc(inode,
&BTRFS_I(inode)->io_tree,
- start, end, NULL,
+ start, end, locked_page,
EXTENT_CLEAR_UNLOCK_PAGE |
EXTENT_CLEAR_UNLOCK |
EXTENT_CLEAR_DELALLOC |
compress_file_range(async_cow->inode, async_cow->locked_page,
async_cow->start, async_cow->end, async_cow,
&num_added);
- if (num_added == 0)
+ if (num_added == 0) {
+ iput(async_cow->inode);
async_cow->inode = NULL;
+ }
}
/*
{
struct async_cow *async_cow;
async_cow = container_of(work, struct async_cow, work);
+ if (async_cow->inode)
+ iput(async_cow->inode);
kfree(async_cow);
}
while (start < end) {
async_cow = kmalloc(sizeof(*async_cow), GFP_NOFS);
BUG_ON(!async_cow); /* -ENOMEM */
- async_cow->inode = inode;
+ async_cow->inode = igrab(inode);
async_cow->root = root;
async_cow->locked_page = locked_page;
async_cow->start = start;
u64 ino = btrfs_ino(inode);
path = btrfs_alloc_path();
- if (!path)
+ if (!path) {
+ extent_clear_unlock_delalloc(inode,
+ &BTRFS_I(inode)->io_tree,
+ start, end, locked_page,
+ EXTENT_CLEAR_UNLOCK_PAGE |
+ EXTENT_CLEAR_UNLOCK |
+ EXTENT_CLEAR_DELALLOC |
+ EXTENT_CLEAR_DIRTY |
+ EXTENT_SET_WRITEBACK |
+ EXTENT_END_WRITEBACK);
return -ENOMEM;
+ }
nolock = btrfs_is_free_space_inode(root, inode);
trans = btrfs_join_transaction(root);
if (IS_ERR(trans)) {
+ extent_clear_unlock_delalloc(inode,
+ &BTRFS_I(inode)->io_tree,
+ start, end, locked_page,
+ EXTENT_CLEAR_UNLOCK_PAGE |
+ EXTENT_CLEAR_UNLOCK |
+ EXTENT_CLEAR_DELALLOC |
+ EXTENT_CLEAR_DIRTY |
+ EXTENT_SET_WRITEBACK |
+ EXTENT_END_WRITEBACK);
btrfs_free_path(path);
return PTR_ERR(trans);
}
}
btrfs_release_path(path);
- if (cur_offset <= end && cow_start == (u64)-1)
+ if (cur_offset <= end && cow_start == (u64)-1) {
cow_start = cur_offset;
+ cur_offset = end;
+ }
+
if (cow_start != (u64)-1) {
ret = cow_file_range(inode, locked_page, cow_start, end,
page_started, nr_written, 1);
if (!ret)
ret = err;
+ if (ret && cur_offset < end)
+ extent_clear_unlock_delalloc(inode,
+ &BTRFS_I(inode)->io_tree,
+ cur_offset, end, locked_page,
+ EXTENT_CLEAR_UNLOCK_PAGE |
+ EXTENT_CLEAR_UNLOCK |
+ EXTENT_CLEAR_DELALLOC |
+ EXTENT_CLEAR_DIRTY |
+ EXTENT_SET_WRITEBACK |
+ EXTENT_END_WRITEBACK);
+
btrfs_free_path(path);
return ret;
}
int ret;
struct btrfs_root *root = BTRFS_I(inode)->root;
- if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)
+ if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW) {
ret = run_delalloc_nocow(inode, locked_page, start, end,
page_started, 1, nr_written);
- else if (BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC)
+ } else if (BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC) {
ret = run_delalloc_nocow(inode, locked_page, start, end,
page_started, 0, nr_written);
- else if (!btrfs_test_opt(root, COMPRESS) &&
- !(BTRFS_I(inode)->force_compress) &&
- !(BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS))
+ } else if (!btrfs_test_opt(root, COMPRESS) &&
+ !(BTRFS_I(inode)->force_compress) &&
+ !(BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS)) {
ret = cow_file_range(inode, locked_page, start, end,
page_started, nr_written, 1);
- else
+ } else {
+ set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+ &BTRFS_I(inode)->runtime_flags);
ret = cow_file_range_async(inode, locked_page, start, end,
page_started, nr_written);
+ }
return ret;
}
else
b_inode->flags &= ~BTRFS_INODE_NODATACOW;
- if (b_dir->flags & BTRFS_INODE_COMPRESS)
+ if (b_dir->flags & BTRFS_INODE_COMPRESS) {
b_inode->flags |= BTRFS_INODE_COMPRESS;
- else
- b_inode->flags &= ~BTRFS_INODE_COMPRESS;
+ b_inode->flags &= ~BTRFS_INODE_NOCOMPRESS;
+ } else {
+ b_inode->flags &= ~(BTRFS_INODE_COMPRESS |
+ BTRFS_INODE_NOCOMPRESS);
+ }
}
static int btrfs_rename(struct inode *old_dir, struct dentry *old_dentry,
#include "locking.h"
#include "inode-map.h"
#include "backref.h"
+#include "rcu-string.h"
/* Mask out flags that are inappropriate for the given type of inode. */
static inline __u32 btrfs_mask_flags(umode_t mode, __u32 flags)
return -ENOENT;
}
-/*
- * Validaty check of prev em and next em:
- * 1) no prev/next em
- * 2) prev/next em is an hole/inline extent
- */
-static int check_adjacent_extents(struct inode *inode, struct extent_map *em)
+static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
{
struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
- struct extent_map *prev = NULL, *next = NULL;
- int ret = 0;
+ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ struct extent_map *em;
+ u64 len = PAGE_CACHE_SIZE;
+ /*
+ * hopefully we have this extent in the tree already, try without
+ * the full extent lock
+ */
read_lock(&em_tree->lock);
- prev = lookup_extent_mapping(em_tree, em->start - 1, (u64)-1);
- next = lookup_extent_mapping(em_tree, em->start + em->len, (u64)-1);
+ em = lookup_extent_mapping(em_tree, start, len);
read_unlock(&em_tree->lock);
- if ((!prev || prev->block_start >= EXTENT_MAP_LAST_BYTE) &&
- (!next || next->block_start >= EXTENT_MAP_LAST_BYTE))
- ret = 1;
- free_extent_map(prev);
- free_extent_map(next);
+ if (!em) {
+ /* get the big lock and read metadata off disk */
+ lock_extent(io_tree, start, start + len - 1);
+ em = btrfs_get_extent(inode, NULL, 0, start, len, 0);
+ unlock_extent(io_tree, start, start + len - 1);
+
+ if (IS_ERR(em))
+ return NULL;
+ }
+
+ return em;
+}
+
+static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em)
+{
+ struct extent_map *next;
+ bool ret = true;
+
+ /* this is the last extent */
+ if (em->start + em->len >= i_size_read(inode))
+ return false;
+ next = defrag_lookup_extent(inode, em->start + em->len);
+ if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ ret = false;
+
+ free_extent_map(next);
return ret;
}
-static int should_defrag_range(struct inode *inode, u64 start, u64 len,
- int thresh, u64 *last_len, u64 *skip,
- u64 *defrag_end)
+static int should_defrag_range(struct inode *inode, u64 start, int thresh,
+ u64 *last_len, u64 *skip, u64 *defrag_end)
{
- struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
- struct extent_map *em = NULL;
- struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
+ struct extent_map *em;
int ret = 1;
+ bool next_mergeable = true;
/*
* make sure that once we start defragging an extent, we keep on
*skip = 0;
- /*
- * hopefully we have this extent in the tree already, try without
- * the full extent lock
- */
- read_lock(&em_tree->lock);
- em = lookup_extent_mapping(em_tree, start, len);
- read_unlock(&em_tree->lock);
-
- if (!em) {
- /* get the big lock and read metadata off disk */
- lock_extent(io_tree, start, start + len - 1);
- em = btrfs_get_extent(inode, NULL, 0, start, len, 0);
- unlock_extent(io_tree, start, start + len - 1);
-
- if (IS_ERR(em))
- return 0;
- }
+ em = defrag_lookup_extent(inode, start);
+ if (!em)
+ return 0;
/* this will cover holes, and inline extents */
if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
goto out;
}
- /* If we have nothing to merge with us, just skip. */
- if (check_adjacent_extents(inode, em)) {
- ret = 0;
- goto out;
- }
+ next_mergeable = defrag_check_next_extent(inode, em);
/*
- * we hit a real extent, if it is big don't bother defragging it again
+ * we hit a real extent, if it is big or the next extent is not a
+ * real extent, don't bother defragging it
*/
- if ((*last_len == 0 || *last_len >= thresh) && em->len >= thresh)
+ if ((*last_len == 0 || *last_len >= thresh) &&
+ (em->len >= thresh || !next_mergeable))
ret = 0;
-
out:
/*
* last_len ends up being a counter of how many bytes we've defragged.
break;
if (!should_defrag_range(inode, (u64)i << PAGE_CACHE_SHIFT,
- PAGE_CACHE_SIZE, extent_thresh,
- &last_len, &skip, &defrag_end)) {
+ extent_thresh, &last_len, &skip,
+ &defrag_end)) {
unsigned long next;
/*
* the should_defrag function tells us how much to skip
ret = -EINVAL;
goto out_free;
}
+ if (device->fs_devices && device->fs_devices->seeding) {
+ printk(KERN_INFO "btrfs: resizer unable to apply on "
+ "seeding device %llu\n", devid);
+ ret = -EINVAL;
+ goto out_free;
+ }
+
if (!strcmp(sizestr, "max"))
new_size = device->bdev->bd_inode->i_size;
else {
do_div(new_size, root->sectorsize);
new_size *= root->sectorsize;
- printk(KERN_INFO "btrfs: new size for %s is %llu\n",
- device->name, (unsigned long long)new_size);
+ printk_in_rcu(KERN_INFO "btrfs: new size for %s is %llu\n",
+ rcu_str_deref(device->name),
+ (unsigned long long)new_size);
if (new_size > old_size) {
trans = btrfs_start_transaction(root, 0);
di_args->total_bytes = dev->total_bytes;
memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
if (dev->name) {
- strncpy(di_args->path, dev->name, sizeof(di_args->path));
+ struct rcu_string *name;
+
+ rcu_read_lock();
+ name = rcu_dereference(dev->name);
+ strncpy(di_args->path, name->str, sizeof(di_args->path));
+ rcu_read_unlock();
di_args->path[sizeof(di_args->path) - 1] = 0;
} else {
di_args->path[0] = '\0';
/* start IO across the range first to instantiate any delalloc
* extents
*/
- filemap_write_and_wait_range(inode->i_mapping, start, orig_end);
+ filemap_fdatawrite_range(inode->i_mapping, start, orig_end);
+
+ /*
+ * So with compression we will find and lock a dirty page and clear the
+ * first one as dirty, setup an async extent, and immediately return
+ * with the entire range locked but with nobody actually marked with
+ * writeback. So we can't just filemap_write_and_wait_range() and
+ * expect it to work since it will just kick off a thread to do the
+ * actual work. So we need to call filemap_fdatawrite_range _again_
+ * since it will wait on the page lock, which won't be unlocked until
+ * after the pages have been marked as writeback and so we're good to go
+ * from there. We have to do this otherwise we'll miss the ordered
+ * extents and that results in badness. Please Josef, do not think you
+ * know better and pull this out at some point in the future, it is
+ * right and you are wrong.
+ */
+ if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+ &BTRFS_I(inode)->runtime_flags))
+ filemap_fdatawrite_range(inode->i_mapping, start, orig_end);
+
+ filemap_fdatawait_range(inode->i_mapping, start, orig_end);
end = orig_end;
found = 0;
--- /dev/null
+/*
+ * Copyright (C) 2012 Red Hat. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+struct rcu_string {
+ struct rcu_head rcu;
+ char str[0];
+};
+
+static inline struct rcu_string *rcu_string_strdup(const char *src, gfp_t mask)
+{
+ size_t len = strlen(src) + 1;
+ struct rcu_string *ret = kzalloc(sizeof(struct rcu_string) +
+ (len * sizeof(char)), mask);
+ if (!ret)
+ return ret;
+ strncpy(ret->str, src, len);
+ return ret;
+}
+
+static inline void rcu_string_free(struct rcu_string *str)
+{
+ if (str)
+ kfree_rcu(str, rcu);
+}
+
+#define printk_in_rcu(fmt, ...) do { \
+ rcu_read_lock(); \
+ printk(fmt, __VA_ARGS__); \
+ rcu_read_unlock(); \
+} while (0)
+
+#define printk_ratelimited_in_rcu(fmt, ...) do { \
+ rcu_read_lock(); \
+ printk_ratelimited(fmt, __VA_ARGS__); \
+ rcu_read_unlock(); \
+} while (0)
+
+#define rcu_str_deref(rcu_str) ({ \
+ struct rcu_string *__str = rcu_dereference(rcu_str); \
+ __str->str; \
+})
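
The new header packs the rcu_head and the string bytes into one allocation through a trailing array, so a single kfree_rcu() releases both after a grace period. A standalone sketch of that single-allocation layout using a C99 flexible array member (plain user-space C, no RCU involved; the struct and helper names are invented for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct flex_string {
	long header;	/* stands in for struct rcu_head */
	char str[];	/* flexible array member, like the kernel's str[0] */
};

/* One malloc covers header and payload, as rcu_string_strdup() does. */
static struct flex_string *flex_string_dup(const char *src)
{
	size_t len = strlen(src) + 1;
	struct flex_string *s = malloc(sizeof(*s) + len);

	if (!s)
		return NULL;
	memcpy(s->str, src, len);
	return s;
}

int main(void)
{
	struct flex_string *s = flex_string_dup("/dev/sda1");

	if (s) {
		printf("%s\n", s->str);
		free(s);	/* the kernel side defers this via kfree_rcu() */
	}
	return 0;
}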
#include "backref.h"
#include "extent_io.h"
#include "check-integrity.h"
+#include "rcu-string.h"
/*
* This is only the first step towards a full-features scrub. It reads all
* hold all of the paths here
*/
for (i = 0; i < ipath->fspath->elem_cnt; ++i)
- printk(KERN_WARNING "btrfs: %s at logical %llu on dev "
+ printk_in_rcu(KERN_WARNING "btrfs: %s at logical %llu on dev "
"%s, sector %llu, root %llu, inode %llu, offset %llu, "
"length %llu, links %u (path: %s)\n", swarn->errstr,
- swarn->logical, swarn->dev->name,
+ swarn->logical, rcu_str_deref(swarn->dev->name),
(unsigned long long)swarn->sector, root, inum, offset,
min(isize - offset, (u64)PAGE_SIZE), nlink,
(char *)(unsigned long)ipath->fspath->val[i]);
return 0;
err:
- printk(KERN_WARNING "btrfs: %s at logical %llu on dev "
+ printk_in_rcu(KERN_WARNING "btrfs: %s at logical %llu on dev "
"%s, sector %llu, root %llu, inode %llu, offset %llu: path "
"resolving failed with ret=%d\n", swarn->errstr,
- swarn->logical, swarn->dev->name,
+ swarn->logical, rcu_str_deref(swarn->dev->name),
(unsigned long long)swarn->sector, root, inum, offset, ret);
free_ipath(ipath);
do {
ret = tree_backref_for_extent(&ptr, eb, ei, item_size,
&ref_root, &ref_level);
- printk(KERN_WARNING
+ printk_in_rcu(KERN_WARNING
"btrfs: %s at logical %llu on dev %s, "
"sector %llu: metadata %s (level %d) in tree "
- "%llu\n", errstr, swarn.logical, dev->name,
+ "%llu\n", errstr, swarn.logical,
+ rcu_str_deref(dev->name),
(unsigned long long)swarn.sector,
ref_level ? "node" : "leaf",
ret < 0 ? -1 : ref_level,
spin_lock(&sdev->stat_lock);
++sdev->stat.uncorrectable_errors;
spin_unlock(&sdev->stat_lock);
- printk_ratelimited(KERN_ERR
+
+ printk_ratelimited_in_rcu(KERN_ERR
"btrfs: unable to fixup (nodatasum) error at logical %llu on dev %s\n",
- (unsigned long long)fixup->logical, sdev->dev->name);
+ (unsigned long long)fixup->logical,
+ rcu_str_deref(sdev->dev->name));
}
btrfs_free_path(path);
spin_lock(&sdev->stat_lock);
sdev->stat.corrected_errors++;
spin_unlock(&sdev->stat_lock);
- printk_ratelimited(KERN_ERR
+ printk_ratelimited_in_rcu(KERN_ERR
"btrfs: fixed up error at logical %llu on dev %s\n",
- (unsigned long long)logical, sdev->dev->name);
+ (unsigned long long)logical,
+ rcu_str_deref(sdev->dev->name));
}
} else {
did_not_correct_error:
spin_lock(&sdev->stat_lock);
sdev->stat.uncorrectable_errors++;
spin_unlock(&sdev->stat_lock);
- printk_ratelimited(KERN_ERR
+ printk_ratelimited_in_rcu(KERN_ERR
"btrfs: unable to fixup (regular) error at logical %llu on dev %s\n",
- (unsigned long long)logical, sdev->dev->name);
+ (unsigned long long)logical,
+ rcu_str_deref(sdev->dev->name));
}
out:
#include "version.h"
#include "export.h"
#include "compression.h"
+#include "rcu-string.h"
#define CREATE_TRACE_POINTS
#include <trace/events/btrfs.h>
"error %d\n", btrfs_ino(inode), ret);
}
+static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+{
+ struct btrfs_fs_info *fs_info = btrfs_sb(root->d_sb);
+ struct btrfs_fs_devices *cur_devices;
+ struct btrfs_device *dev, *first_dev = NULL;
+ struct list_head *head;
+ struct rcu_string *name;
+
+ mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ cur_devices = fs_info->fs_devices;
+ while (cur_devices) {
+ head = &cur_devices->devices;
+ list_for_each_entry(dev, head, dev_list) {
+ if (!first_dev || dev->devid < first_dev->devid)
+ first_dev = dev;
+ }
+ cur_devices = cur_devices->seed;
+ }
+
+ if (first_dev) {
+ rcu_read_lock();
+ name = rcu_dereference(first_dev->name);
+ seq_escape(m, name->str, " \t\n\\");
+ rcu_read_unlock();
+ } else {
+ WARN_ON(1);
+ }
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ return 0;
+}
+
static const struct super_operations btrfs_super_ops = {
.drop_inode = btrfs_drop_inode,
.evict_inode = btrfs_evict_inode,
.put_super = btrfs_put_super,
.sync_fs = btrfs_sync_fs,
.show_options = btrfs_show_options,
+ .show_devname = btrfs_show_devname,
.write_inode = btrfs_write_inode,
.dirty_inode = btrfs_fs_dirty_inode,
.alloc_inode = btrfs_alloc_inode,
kmem_cache_free(btrfs_transaction_cachep, cur_trans);
cur_trans = fs_info->running_transaction;
goto loop;
+ } else if (root->fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) {
+ spin_unlock(&root->fs_info->trans_lock);
+ kmem_cache_free(btrfs_transaction_cachep, cur_trans);
+ return -EROFS;
}
atomic_set(&cur_trans->num_writers, 1);
static void cleanup_transaction(struct btrfs_trans_handle *trans,
- struct btrfs_root *root)
+ struct btrfs_root *root, int err)
{
struct btrfs_transaction *cur_trans = trans->transaction;
WARN_ON(trans->use_count > 1);
+ btrfs_abort_transaction(trans, root, err);
+
spin_lock(&root->fs_info->trans_lock);
list_del_init(&cur_trans->list);
+ if (cur_trans == root->fs_info->running_transaction) {
+ root->fs_info->running_transaction = NULL;
+ root->fs_info->trans_no_join = 0;
+ }
spin_unlock(&root->fs_info->trans_lock);
btrfs_cleanup_one_transaction(trans->transaction, root);
// WARN_ON(1);
if (current->journal_info == trans)
current->journal_info = NULL;
- cleanup_transaction(trans, root);
+ cleanup_transaction(trans, root, ret);
return ret;
}
#include "volumes.h"
#include "async-thread.h"
#include "check-integrity.h"
+#include "rcu-string.h"
static int init_first_rw_device(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
device = list_entry(fs_devices->devices.next,
struct btrfs_device, dev_list);
list_del(&device->dev_list);
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
}
kfree(fs_devices);
{
struct btrfs_device *device;
struct btrfs_fs_devices *fs_devices;
+ struct rcu_string *name;
u64 found_transid = btrfs_super_generation(disk_super);
- char *name;
fs_devices = find_fsid(disk_super->fsid);
if (!fs_devices) {
memcpy(device->uuid, disk_super->dev_item.uuid,
BTRFS_UUID_SIZE);
spin_lock_init(&device->io_lock);
- device->name = kstrdup(path, GFP_NOFS);
- if (!device->name) {
+
+ name = rcu_string_strdup(path, GFP_NOFS);
+ if (!name) {
kfree(device);
return -ENOMEM;
}
+ rcu_assign_pointer(device->name, name);
INIT_LIST_HEAD(&device->dev_alloc_list);
/* init readahead state */
device->fs_devices = fs_devices;
fs_devices->num_devices++;
- } else if (!device->name || strcmp(device->name, path)) {
- name = kstrdup(path, GFP_NOFS);
+ } else if (!device->name || strcmp(device->name->str, path)) {
+ name = rcu_string_strdup(path, GFP_NOFS);
if (!name)
return -ENOMEM;
- kfree(device->name);
- device->name = name;
+ rcu_string_free(device->name);
+ rcu_assign_pointer(device->name, name);
if (device->missing) {
fs_devices->missing_devices--;
device->missing = 0;
/* We have held the volume lock, it is safe to get the devices. */
list_for_each_entry(orig_dev, &orig->devices, dev_list) {
+ struct rcu_string *name;
+
device = kzalloc(sizeof(*device), GFP_NOFS);
if (!device)
goto error;
- device->name = kstrdup(orig_dev->name, GFP_NOFS);
- if (!device->name) {
+ /*
+ * This is ok to do without rcu read locked because we hold the
+ * uuid mutex so nothing we touch in here is going to disappear.
+ */
+ name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS);
+ if (!name) {
kfree(device);
goto error;
}
+ rcu_assign_pointer(device->name, name);
device->devid = orig_dev->devid;
device->work.func = pending_bios_fn;
}
list_del_init(&device->dev_list);
fs_devices->num_devices--;
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
}
if (device->bdev)
blkdev_put(device->bdev, device->mode);
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
}
mutex_lock(&fs_devices->device_list_mutex);
list_for_each_entry(device, &fs_devices->devices, dev_list) {
struct btrfs_device *new_device;
+ struct rcu_string *name;
if (device->bdev)
fs_devices->open_devices--;
new_device = kmalloc(sizeof(*new_device), GFP_NOFS);
BUG_ON(!new_device); /* -ENOMEM */
memcpy(new_device, device, sizeof(*new_device));
- new_device->name = kstrdup(device->name, GFP_NOFS);
- BUG_ON(device->name && !new_device->name); /* -ENOMEM */
+
+ /* Safe because we are under uuid_mutex */
+ name = rcu_string_strdup(device->name->str, GFP_NOFS);
+ BUG_ON(device->name && !name); /* -ENOMEM */
+ rcu_assign_pointer(new_device->name, name);
new_device->bdev = NULL;
new_device->writeable = 0;
new_device->in_fs_metadata = 0;
if (!device->name)
continue;
- bdev = blkdev_get_by_path(device->name, flags, holder);
+ bdev = blkdev_get_by_path(device->name->str, flags, holder);
if (IS_ERR(bdev)) {
- printk(KERN_INFO "open %s failed\n", device->name);
+ printk(KERN_INFO "open %s failed\n", device->name->str);
goto error;
}
filemap_write_and_wait(bdev->bd_inode->i_mapping);
struct block_device *bdev;
struct list_head *devices;
struct super_block *sb = root->fs_info->sb;
+ struct rcu_string *name;
u64 total_bytes;
int seeding_dev = 0;
int ret = 0;
goto error;
}
- device->name = kstrdup(device_path, GFP_NOFS);
- if (!device->name) {
+ name = rcu_string_strdup(device_path, GFP_NOFS);
+ if (!name) {
kfree(device);
ret = -ENOMEM;
goto error;
}
+ rcu_assign_pointer(device->name, name);
ret = find_next_devid(root, &device->devid);
if (ret) {
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
goto error;
}
trans = btrfs_start_transaction(root, 0);
if (IS_ERR(trans)) {
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
ret = PTR_ERR(trans);
goto error;
unlock_chunks(root);
btrfs_abort_transaction(trans, root, ret);
btrfs_end_transaction(trans, root);
- kfree(device->name);
+ rcu_string_free(device->name);
kfree(device);
error:
blkdev_put(bdev, FMODE_EXCL);
bio->bi_sector = bbio->stripes[dev_nr].physical >> 9;
dev = bbio->stripes[dev_nr].dev;
if (dev && dev->bdev && (rw != WRITE || dev->writeable)) {
+#ifdef DEBUG
+ struct rcu_string *name;
+
+ rcu_read_lock();
+ name = rcu_dereference(dev->name);
pr_debug("btrfs_map_bio: rw %d, secor=%llu, dev=%lu "
"(%s id %llu), size=%u\n", rw,
(u64)bio->bi_sector, (u_long)dev->bdev->bd_dev,
- dev->name, dev->devid, bio->bi_size);
+ name->str, dev->devid, bio->bi_size);
+ rcu_read_unlock();
+#endif
bio->bi_bdev = dev->bdev;
if (async_submit)
schedule_bio(root, dev, rw, bio);
key.offset = device->devid;
ret = btrfs_search_slot(NULL, dev_root, &key, path, 0, 0);
if (ret) {
- printk(KERN_WARNING "btrfs: no dev_stats entry found for device %s (devid %llu) (OK on first mount after mkfs)\n",
- device->name, (unsigned long long)device->devid);
+ printk_in_rcu(KERN_WARNING "btrfs: no dev_stats entry found for device %s (devid %llu) (OK on first mount after mkfs)\n",
+ rcu_str_deref(device->name),
+ (unsigned long long)device->devid);
__btrfs_reset_dev_stats(device);
device->dev_stats_valid = 1;
btrfs_release_path(path);
BUG_ON(!path);
ret = btrfs_search_slot(trans, dev_root, &key, path, -1, 1);
if (ret < 0) {
- printk(KERN_WARNING "btrfs: error %d while searching for dev_stats item for device %s!\n",
- ret, device->name);
+ printk_in_rcu(KERN_WARNING "btrfs: error %d while searching for dev_stats item for device %s!\n",
+ ret, rcu_str_deref(device->name));
goto out;
}
/* need to delete old one and insert a new one */
ret = btrfs_del_item(trans, dev_root, path);
if (ret != 0) {
- printk(KERN_WARNING "btrfs: delete too small dev_stats item for device %s failed %d!\n",
- device->name, ret);
+ printk_in_rcu(KERN_WARNING "btrfs: delete too small dev_stats item for device %s failed %d!\n",
+ rcu_str_deref(device->name), ret);
goto out;
}
ret = 1;
ret = btrfs_insert_empty_item(trans, dev_root, path,
&key, sizeof(*ptr));
if (ret < 0) {
- printk(KERN_WARNING "btrfs: insert dev_stats item for device %s failed %d!\n",
- device->name, ret);
+ printk_in_rcu(KERN_WARNING "btrfs: insert dev_stats item for device %s failed %d!\n",
+ rcu_str_deref(device->name), ret);
goto out;
}
}
{
if (!dev->dev_stats_valid)
return;
- printk_ratelimited(KERN_ERR
+ printk_ratelimited_in_rcu(KERN_ERR
"btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
- dev->name,
+ rcu_str_deref(dev->name),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_WRITE_ERRS),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_READ_ERRS),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_FLUSH_ERRS),
static void btrfs_dev_stat_print_on_load(struct btrfs_device *dev)
{
- printk(KERN_INFO "btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
- dev->name,
+ printk_in_rcu(KERN_INFO "btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
+ rcu_str_deref(dev->name),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_WRITE_ERRS),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_READ_ERRS),
btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_FLUSH_ERRS),
/* the mode sent to blkdev_get */
fmode_t mode;
- char *name;
+ struct rcu_string *name;
/* the internal btrfs device id */
u64 devid;
static struct kobj_type uuid_ktype = {
};
-void exofs_sysfs_dbg_print()
+void exofs_sysfs_dbg_print(void)
{
#ifdef CONFIG_EXOFS_DEBUG
struct kobject *k_name, *k_tmp;
/* Wait for I_SYNC. This function drops i_lock... */
inode_sleep_on_writeback(inode);
/* Inode may be gone, start again */
+ spin_lock(&wb->list_lock);
continue;
}
inode->i_state |= I_SYNC;
#define _ASM_GENERIC_BUG_H
#include <linux/compiler.h>
+#include <linux/kernel.h>
#ifdef CONFIG_BUG
struct drm_object_properties *properties;
};
-#define DRM_OBJECT_MAX_PROPERTY 16
+#define DRM_OBJECT_MAX_PROPERTY 24
struct drm_object_properties {
int count;
uint32_t ids[DRM_OBJECT_MAX_PROPERTY];
__u16 src;
__u16 dst;
} p16;
+ struct {
+ __be16 src;
+ __be16 dst;
+ } b16;
__u32 v32;
+ __be32 b32;
};
struct xt_hmark_info {
#ifdef CONFIG_TINY_RCU
-static inline int rcu_needs_cpu(int cpu)
+static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
{
+ *delta_jiffies = ULONG_MAX;
return 0;
}
int rcu_preempt_needs_cpu(void);
-static inline int rcu_needs_cpu(int cpu)
+static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
{
+ *delta_jiffies = ULONG_MAX;
return rcu_preempt_needs_cpu();
}
extern void rcu_init(void);
extern void rcu_note_context_switch(int cpu);
-extern int rcu_needs_cpu(int cpu);
+extern int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies);
extern void rcu_cpu_stall_reset(void);
/*
* Number of busy cpus in this group.
*/
atomic_t nr_busy_cpus;
+
+ unsigned long cpumask[0]; /* iteration mask */
};
struct sched_group {
return to_cpumask(sg->cpumask);
}
+/*
+ * cpumask masking which cpus in the group are allowed to iterate up the domain
+ * tree.
+ */
+static inline struct cpumask *sched_group_mask(struct sched_group *sg)
+{
+ return to_cpumask(sg->sgp->cpumask);
+}
+
/**
* group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
* @group: The group whose first cpu is to be returned.
#define tcp_flag_word(tp) ( ((union tcp_word_hdr *)(tp))->words [3])
enum {
- TCP_FLAG_CWR = __cpu_to_be32(0x00800000),
- TCP_FLAG_ECE = __cpu_to_be32(0x00400000),
- TCP_FLAG_URG = __cpu_to_be32(0x00200000),
- TCP_FLAG_ACK = __cpu_to_be32(0x00100000),
- TCP_FLAG_PSH = __cpu_to_be32(0x00080000),
- TCP_FLAG_RST = __cpu_to_be32(0x00040000),
- TCP_FLAG_SYN = __cpu_to_be32(0x00020000),
- TCP_FLAG_FIN = __cpu_to_be32(0x00010000),
- TCP_RESERVED_BITS = __cpu_to_be32(0x0F000000),
- TCP_DATA_OFFSET = __cpu_to_be32(0xF0000000)
+ TCP_FLAG_CWR = __constant_cpu_to_be32(0x00800000),
+ TCP_FLAG_ECE = __constant_cpu_to_be32(0x00400000),
+ TCP_FLAG_URG = __constant_cpu_to_be32(0x00200000),
+ TCP_FLAG_ACK = __constant_cpu_to_be32(0x00100000),
+ TCP_FLAG_PSH = __constant_cpu_to_be32(0x00080000),
+ TCP_FLAG_RST = __constant_cpu_to_be32(0x00040000),
+ TCP_FLAG_SYN = __constant_cpu_to_be32(0x00020000),
+ TCP_FLAG_FIN = __constant_cpu_to_be32(0x00010000),
+ TCP_RESERVED_BITS = __constant_cpu_to_be32(0x0F000000),
+ TCP_DATA_OFFSET = __constant_cpu_to_be32(0xF0000000)
};
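
The substitution here is presumably needed because enum initializers must be integer constant expressions, and only the __constant_ variant is guaranteed to expand to one. An unconditional constant-folding byte swap, legal in an enum, looks like this in user-space C (the macro name is invented; the kernel helper additionally becomes a no-op on big-endian hosts):

#include <stdint.h>
#include <stdio.h>

/* Pure-macro 32-bit byte swap: usable in enum and static initializers. */
#define CONST_SWAB32(x) ((uint32_t)(			\
	(((uint32_t)(x) & 0x000000ffU) << 24) |		\
	(((uint32_t)(x) & 0x0000ff00U) <<  8) |		\
	(((uint32_t)(x) & 0x00ff0000U) >>  8) |		\
	(((uint32_t)(x) & 0xff000000U) >> 24)))

enum { DEMO_FLAG_ACK = CONST_SWAB32(0x00100000) };

int main(void)
{
	printf("0x%08x\n", DEMO_FLAG_ACK);	/* prints 0x00001000 */
	return 0;
}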
/*
enum vga_switcheroo_state {
VGA_SWITCHEROO_OFF,
VGA_SWITCHEROO_ON,
+ /* below are referred only from vga_switcheroo_get_client_state() */
+ VGA_SWITCHEROO_INIT,
+ VGA_SWITCHEROO_NOT_FOUND,
};
enum vga_switcheroo_client_id {
int vga_switcheroo_process_delayed_switch(void);
+int vga_switcheroo_get_client_state(struct pci_dev *dev);
+
#else
static inline void vga_switcheroo_unregister_client(struct pci_dev *dev) {}
int id, bool active) { return 0; }
static inline void vga_switcheroo_unregister_handler(void) {}
static inline int vga_switcheroo_process_delayed_switch(void) { return 0; }
+static inline int vga_switcheroo_get_client_state(struct pci_dev *dev) { return VGA_SWITCHEROO_ON; }
+
#endif
u32 pmtu_orig;
u32 pmtu_learned;
struct inetpeer_addr_base redirect_learned;
- struct list_head gc_list;
+ union {
+ struct list_head gc_list;
+ struct rcu_head gc_rcu;
+ };
/*
* Once inet_peer is queued for deletion (refcnt == -1), following fields
* are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp
{
struct flowi4 fl4 = {
.flowi4_oif = oif,
+ .flowi4_tos = tos,
.daddr = daddr,
.saddr = saddr,
- .flowi4_tos = tos,
};
return ip_route_output_key(net, &fl4);
}
struct qdisc_skb_cb {
unsigned int pkt_len;
- unsigned char data[24];
+ u16 bond_queue_mapping;
+ u16 _pad;
+ unsigned char data[20];
};
static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
{
struct qdisc_skb_cb *qcb;
- BUILD_BUG_ON(sizeof(skb->cb) < sizeof(unsigned int) + sz);
+
+ BUILD_BUG_ON(sizeof(skb->cb) < offsetof(struct qdisc_skb_cb, data) + sz);
BUILD_BUG_ON(sizeof(qcb->data) < sz);
}
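
The reworked BUILD_BUG_ON checks offsetof(..., data) rather than a hand-counted header size, so any field added before data[] is accounted for automatically. A compile-time analogue with C11 _Static_assert (struct layout copied from the hunk above; 48 being the size of skb->cb[] in this kernel is stated here as an assumption):

#include <stddef.h>

struct cb_layout {
	unsigned int pkt_len;
	unsigned short bond_queue_mapping;
	unsigned short _pad;
	unsigned char data[20];
};

/* Fails to compile if the private area ever outgrows the 48-byte cb[]. */
_Static_assert(offsetof(struct cb_layout, data) +
	       sizeof(((struct cb_layout *)0)->data) <= 48,
	       "private qdisc cb layout exceeds skb->cb");

int main(void)
{
	return 0;
}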
*/
int (*check_stop_free)(struct se_cmd *);
void (*release_cmd)(struct se_cmd *);
+ void (*put_session)(struct se_session *);
/*
* Called with spin_lock_bh(struct se_portal_group->session_lock) held.
*/
* "In holdoff": Nothing to do, holding off after unsuccessful attempt.
* "Begin holdoff": Attempt failed, don't retry until next jiffy.
* "Dyntick with callbacks": Entering dyntick-idle despite callbacks.
+ * "Dyntick with lazy callbacks": Entering dyntick-idle w/lazy callbacks.
* "More callbacks": Still more callbacks, try again to clear them out.
* "Callbacks drained": All callbacks processed, off to dyntick idle!
* "Timer": Timer fired to cause CPU to continue processing callbacks.
#define PANIC_TIMER_STEP 100
#define PANIC_BLINK_SPD 18
-int panic_on_oops;
+int panic_on_oops = CONFIG_PANIC_ON_OOPS_VALUE;
static unsigned long tainted_mask;
static int pause_on_oops;
static int pause_on_oops_flag;
*/
crash_kexec(NULL);
- kmsg_dump(KMSG_DUMP_PANIC);
-
/*
* Note smp_send_stop is the usual smp shutdown function, which
* unfortunately means it may not be hardened to work in a panic
*/
smp_send_stop();
+ kmsg_dump(KMSG_DUMP_PANIC);
+
atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
bust_spinlocks(0);
rdp->qlen_lazy += rsp->qlen_lazy;
rdp->qlen += rsp->qlen;
rdp->n_cbs_adopted += rsp->qlen;
+ if (rsp->qlen_lazy != rsp->qlen)
+ rcu_idle_count_callbacks_posted();
rsp->qlen_lazy = 0;
rsp->qlen = 0;
/* Process level is worth LLONG_MAX/2. */
int dynticks_nmi_nesting; /* Track NMI nesting level. */
atomic_t dynticks; /* Even value for idle, else odd. */
+#ifdef CONFIG_RCU_FAST_NO_HZ
+ int dyntick_drain; /* Prepare-for-idle state variable. */
+ unsigned long dyntick_holdoff;
+ /* No retries for the jiffy of failure. */
+ struct timer_list idle_gp_timer;
+ /* Wake up CPU sleeping with callbacks. */
+ unsigned long idle_gp_timer_expires;
+ /* When to wake up CPU (for repost). */
+ bool idle_first_pass; /* First pass of attempt to go idle? */
+ unsigned long nonlazy_posted;
+ /* # times non-lazy CBs posted to CPU. */
+ unsigned long nonlazy_posted_snap;
+ /* idle-period nonlazy_posted snapshot. */
+#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
};
/* RCU's kthread states for tracing. */
* Because we do not have RCU_FAST_NO_HZ, just check whether this CPU needs
* any flavor of RCU.
*/
-int rcu_needs_cpu(int cpu)
+int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
{
+ *delta_jiffies = ULONG_MAX;
return rcu_cpu_has_callbacks(cpu);
}
#define RCU_IDLE_GP_DELAY 6 /* Roughly one grace period. */
#define RCU_IDLE_LAZY_GP_DELAY (6 * HZ) /* Roughly six seconds. */
-/* Loop counter for rcu_prepare_for_idle(). */
-static DEFINE_PER_CPU(int, rcu_dyntick_drain);
-/* If rcu_dyntick_holdoff==jiffies, don't try to enter dyntick-idle mode. */
-static DEFINE_PER_CPU(unsigned long, rcu_dyntick_holdoff);
-/* Timer to awaken the CPU if it enters dyntick-idle mode with callbacks. */
-static DEFINE_PER_CPU(struct timer_list, rcu_idle_gp_timer);
-/* Scheduled expiry time for rcu_idle_gp_timer to allow reposting. */
-static DEFINE_PER_CPU(unsigned long, rcu_idle_gp_timer_expires);
-/* Enable special processing on first attempt to enter dyntick-idle mode. */
-static DEFINE_PER_CPU(bool, rcu_idle_first_pass);
-/* Running count of non-lazy callbacks posted, never decremented. */
-static DEFINE_PER_CPU(unsigned long, rcu_nonlazy_posted);
-/* Snapshot of rcu_nonlazy_posted to detect meaningful exits from idle. */
-static DEFINE_PER_CPU(unsigned long, rcu_nonlazy_posted_snap);
-
-/*
- * Allow the CPU to enter dyntick-idle mode if either: (1) There are no
- * callbacks on this CPU, (2) this CPU has not yet attempted to enter
- * dyntick-idle mode, or (3) this CPU is in the process of attempting to
- * enter dyntick-idle mode. Otherwise, if we have recently tried and failed
- * to enter dyntick-idle mode, we refuse to try to enter it. After all,
- * it is better to incur scheduling-clock interrupts than to spin
- * continuously for the same time duration!
- */
-int rcu_needs_cpu(int cpu)
-{
- /* Flag a new idle sojourn to the idle-entry state machine. */
- per_cpu(rcu_idle_first_pass, cpu) = 1;
- /* If no callbacks, RCU doesn't need the CPU. */
- if (!rcu_cpu_has_callbacks(cpu))
- return 0;
- /* Otherwise, RCU needs the CPU only if it recently tried and failed. */
- return per_cpu(rcu_dyntick_holdoff, cpu) == jiffies;
-}
-
/*
* Does the specified flavor of RCU have non-lazy callbacks pending on
* the specified CPU? Both RCU flavor and CPU are specified by the
rcu_preempt_cpu_has_nonlazy_callbacks(cpu);
}
+/*
+ * Allow the CPU to enter dyntick-idle mode if either: (1) There are no
+ * callbacks on this CPU, (2) this CPU has not yet attempted to enter
+ * dyntick-idle mode, or (3) this CPU is in the process of attempting to
+ * enter dyntick-idle mode. Otherwise, if we have recently tried and failed
+ * to enter dyntick-idle mode, we refuse to try to enter it. After all,
+ * it is better to incur scheduling-clock interrupts than to spin
+ * continuously for the same time duration!
+ *
+ * The delta_jiffies argument is used to store the time when RCU is
+ * going to need the CPU again if it still has callbacks. The reason
+ * for this is that rcu_prepare_for_idle() might need to post a timer,
+ * but if so, it will do so after tick_nohz_stop_sched_tick() has set
+ * the wakeup time for this CPU. This means that RCU's timer can be
+ * delayed until the wakeup time, which defeats the purpose of posting
+ * a timer.
+ */
+int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
+{
+ struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+ /* Flag a new idle sojourn to the idle-entry state machine. */
+ rdtp->idle_first_pass = 1;
+ /* If no callbacks, RCU doesn't need the CPU. */
+ if (!rcu_cpu_has_callbacks(cpu)) {
+ *delta_jiffies = ULONG_MAX;
+ return 0;
+ }
+ if (rdtp->dyntick_holdoff == jiffies) {
+ /* RCU recently tried and failed, so don't try again. */
+ *delta_jiffies = 1;
+ return 1;
+ }
+ /* Set up for the possibility that RCU will post a timer. */
+ if (rcu_cpu_has_nonlazy_callbacks(cpu))
+ *delta_jiffies = RCU_IDLE_GP_DELAY;
+ else
+ *delta_jiffies = RCU_IDLE_LAZY_GP_DELAY;
+ return 0;
+}
+
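In miniature, the new contract of rcu_needs_cpu() can be modeled as below (the delays follow RCU_IDLE_GP_DELAY and RCU_IDLE_LAZY_GP_DELAY above; HZ=250 is an assumption of the example, as is the function name):

#include <limits.h>
#include <stdio.h>

/* Toy model: report whether RCU needs the CPU and, via *delta, how many
 * jiffies until it will. 6 ~ RCU_IDLE_GP_DELAY, 1500 ~ 6*HZ at HZ=250. */
static int model_rcu_needs_cpu(int has_cbs, int in_holdoff,
			       int has_nonlazy, unsigned long *delta)
{
	if (!has_cbs) {
		*delta = ULONG_MAX;	/* sleep as long as you like */
		return 0;
	}
	if (in_holdoff) {
		*delta = 1;		/* failed recently, retry next jiffy */
		return 1;
	}
	*delta = has_nonlazy ? 6 : 1500;
	return 0;
}

int main(void)
{
	unsigned long d;
	int need = model_rcu_needs_cpu(1, 0, 1, &d);

	printf("need=%d delta=%lu\n", need, d);	/* need=0 delta=6 */
	return 0;
}
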
/*
* Handler for smp_call_function_single(). The only point of this
* handler is to wake the CPU up, so the handler does only tracing.
*/
static void rcu_prepare_for_idle_init(int cpu)
{
- per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1;
- setup_timer(&per_cpu(rcu_idle_gp_timer, cpu),
- rcu_idle_gp_timer_func, cpu);
- per_cpu(rcu_idle_gp_timer_expires, cpu) = jiffies - 1;
- per_cpu(rcu_idle_first_pass, cpu) = 1;
+ struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+ rdtp->dyntick_holdoff = jiffies - 1;
+ setup_timer(&rdtp->idle_gp_timer, rcu_idle_gp_timer_func, cpu);
+ rdtp->idle_gp_timer_expires = jiffies - 1;
+ rdtp->idle_first_pass = 1;
}
/*
* Clean up for exit from idle. Because we are exiting from idle, there
- * is no longer any point to rcu_idle_gp_timer, so cancel it. This will
+ * is no longer any point to ->idle_gp_timer, so cancel it. This will
* do nothing if this timer is not active, so just cancel it unconditionally.
*/
static void rcu_cleanup_after_idle(int cpu)
{
- del_timer(&per_cpu(rcu_idle_gp_timer, cpu));
+ struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+ del_timer(&rdtp->idle_gp_timer);
trace_rcu_prep_idle("Cleanup after idle");
}
* Because it is not legal to invoke rcu_process_callbacks() with irqs
* disabled, we do one pass of force_quiescent_state(), then do a
* invoke_rcu_core() to cause rcu_process_callbacks() to be invoked
- * later. The per-cpu rcu_dyntick_drain variable controls the sequencing.
+ * later. The ->dyntick_drain field controls the sequencing.
*
* The caller must have disabled interrupts.
*/
static void rcu_prepare_for_idle(int cpu)
{
struct timer_list *tp;
+ struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
/*
* If this is an idle re-entry, for example, due to use of
* RCU_NONIDLE() or the new idle-loop tracing API within the idle
* loop, then don't take any state-machine actions, unless the
* momentary exit from idle queued additional non-lazy callbacks.
- * Instead, repost the rcu_idle_gp_timer if this CPU has callbacks
+ * Instead, repost the ->idle_gp_timer if this CPU has callbacks
* pending.
*/
- if (!per_cpu(rcu_idle_first_pass, cpu) &&
- (per_cpu(rcu_nonlazy_posted, cpu) ==
- per_cpu(rcu_nonlazy_posted_snap, cpu))) {
+ if (!rdtp->idle_first_pass &&
+ (rdtp->nonlazy_posted == rdtp->nonlazy_posted_snap)) {
if (rcu_cpu_has_callbacks(cpu)) {
- tp = &per_cpu(rcu_idle_gp_timer, cpu);
- mod_timer_pinned(tp, per_cpu(rcu_idle_gp_timer_expires, cpu));
+ tp = &rdtp->idle_gp_timer;
+ mod_timer_pinned(tp, rdtp->idle_gp_timer_expires);
}
return;
}
- per_cpu(rcu_idle_first_pass, cpu) = 0;
- per_cpu(rcu_nonlazy_posted_snap, cpu) =
- per_cpu(rcu_nonlazy_posted, cpu) - 1;
+ rdtp->idle_first_pass = 0;
+ rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted - 1;
/*
* If there are no callbacks on this CPU, enter dyntick-idle mode.
* Also reset state to avoid prejudicing later attempts.
*/
if (!rcu_cpu_has_callbacks(cpu)) {
- per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1;
- per_cpu(rcu_dyntick_drain, cpu) = 0;
+ rdtp->dyntick_holdoff = jiffies - 1;
+ rdtp->dyntick_drain = 0;
trace_rcu_prep_idle("No callbacks");
return;
}
* If in holdoff mode, just return. We will presumably have
* refrained from disabling the scheduling-clock tick.
*/
- if (per_cpu(rcu_dyntick_holdoff, cpu) == jiffies) {
+ if (rdtp->dyntick_holdoff == jiffies) {
trace_rcu_prep_idle("In holdoff");
return;
}
- /* Check and update the rcu_dyntick_drain sequencing. */
- if (per_cpu(rcu_dyntick_drain, cpu) <= 0) {
+ /* Check and update the ->dyntick_drain sequencing. */
+ if (rdtp->dyntick_drain <= 0) {
/* First time through, initialize the counter. */
- per_cpu(rcu_dyntick_drain, cpu) = RCU_IDLE_FLUSHES;
- } else if (per_cpu(rcu_dyntick_drain, cpu) <= RCU_IDLE_OPT_FLUSHES &&
+ rdtp->dyntick_drain = RCU_IDLE_FLUSHES;
+ } else if (rdtp->dyntick_drain <= RCU_IDLE_OPT_FLUSHES &&
!rcu_pending(cpu) &&
!local_softirq_pending()) {
/* Can we go dyntick-idle despite still having callbacks? */
- trace_rcu_prep_idle("Dyntick with callbacks");
- per_cpu(rcu_dyntick_drain, cpu) = 0;
- per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;
- if (rcu_cpu_has_nonlazy_callbacks(cpu))
- per_cpu(rcu_idle_gp_timer_expires, cpu) =
+ rdtp->dyntick_drain = 0;
+ rdtp->dyntick_holdoff = jiffies;
+ if (rcu_cpu_has_nonlazy_callbacks(cpu)) {
+ trace_rcu_prep_idle("Dyntick with callbacks");
+ rdtp->idle_gp_timer_expires =
jiffies + RCU_IDLE_GP_DELAY;
- else
- per_cpu(rcu_idle_gp_timer_expires, cpu) =
+ } else {
+ rdtp->idle_gp_timer_expires =
jiffies + RCU_IDLE_LAZY_GP_DELAY;
- tp = &per_cpu(rcu_idle_gp_timer, cpu);
- mod_timer_pinned(tp, per_cpu(rcu_idle_gp_timer_expires, cpu));
- per_cpu(rcu_nonlazy_posted_snap, cpu) =
- per_cpu(rcu_nonlazy_posted, cpu);
+ trace_rcu_prep_idle("Dyntick with lazy callbacks");
+ }
+ tp = &rdtp->idle_gp_timer;
+ mod_timer_pinned(tp, rdtp->idle_gp_timer_expires);
+ rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
return; /* Nothing more to do immediately. */
- } else if (--per_cpu(rcu_dyntick_drain, cpu) <= 0) {
+ } else if (--(rdtp->dyntick_drain) <= 0) {
/* We have hit the limit, so time to give up. */
- per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;
+ rdtp->dyntick_holdoff = jiffies;
trace_rcu_prep_idle("Begin holdoff");
invoke_rcu_core(); /* Force the CPU out of dyntick-idle. */
return;
*/
static void rcu_idle_count_callbacks_posted(void)
{
- __this_cpu_add(rcu_nonlazy_posted, 1);
+ __this_cpu_add(rcu_dynticks.nonlazy_posted, 1);
}
#endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */
static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
{
- struct timer_list *tltp = &per_cpu(rcu_idle_gp_timer, cpu);
+ struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+ struct timer_list *tltp = &rdtp->idle_gp_timer;
sprintf(cp, "drain=%d %c timer=%lu",
- per_cpu(rcu_dyntick_drain, cpu),
- per_cpu(rcu_dyntick_holdoff, cpu) == jiffies ? 'H' : '.',
+ rdtp->dyntick_drain,
+ rdtp->dyntick_holdoff == jiffies ? 'H' : '.',
timer_pending(tltp) ? tltp->expires - jiffies : -1);
}
#ifdef CONFIG_SCHED_DEBUG
-static __read_mostly int sched_domain_debug_enabled;
+static __read_mostly int sched_debug_enabled;
-static int __init sched_domain_debug_setup(char *str)
+static int __init sched_debug_setup(char *str)
{
- sched_domain_debug_enabled = 1;
+ sched_debug_enabled = 1;
return 0;
}
-early_param("sched_debug", sched_domain_debug_setup);
+early_param("sched_debug", sched_debug_setup);
+
+static inline bool sched_debug(void)
+{
+ return sched_debug_enabled;
+}
static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
struct cpumask *groupmask)
break;
}
- if (!group->sgp->power) {
+ /*
+ * Even though we initialize ->power to something semi-sane,
+ * we leave power_orig unset. This allows us to detect if
+ * domain iteration is still funny without causing /0 traps.
+ */
+ if (!group->sgp->power_orig) {
printk(KERN_CONT "\n");
printk(KERN_ERR "ERROR: domain->cpu_power not "
"set\n");
{
int level = 0;
- if (!sched_domain_debug_enabled)
+ if (!sched_debug_enabled)
return;
if (!sd) {
}
#else /* !CONFIG_SCHED_DEBUG */
# define sched_domain_debug(sd, cpu) do { } while (0)
+static inline bool sched_debug(void)
+{
+ return false;
+}
#endif /* CONFIG_SCHED_DEBUG */
static int sd_degenerate(struct sched_domain *sd)
struct sd_data data;
};
+/*
+ * Build an iteration mask that can exclude certain CPUs from the upwards
+ * domain traversal.
+ *
+ * Asymmetric node setups can result in situations where the domain tree is of
+ * unequal depth; make sure to skip domains that already cover the entire
+ * range.
+ *
+ * In that case build_sched_domains() will have terminated the iteration early
+ * and our sibling sd spans will be empty. Domains should always include the
+ * cpu they're built on, so check that.
+ */
+static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
+{
+ const struct cpumask *span = sched_domain_span(sd);
+ struct sd_data *sdd = sd->private;
+ struct sched_domain *sibling;
+ int i;
+
+ for_each_cpu(i, span) {
+ sibling = *per_cpu_ptr(sdd->sd, i);
+ if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
+ continue;
+
+ cpumask_set_cpu(i, sched_group_mask(sg));
+ }
+}
+
+/*
+ * Return the canonical balance cpu for this group, this is the first cpu
+ * of this group that's also in the iteration mask.
+ */
+int group_balance_cpu(struct sched_group *sg)
+{
+ return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
+}
+
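group_balance_cpu() reduces to "lowest CPU set in both the group span and the iteration mask". The same operation on flat 64-bit masks (uint64_t standing in for struct cpumask; __builtin_ctzll is a GCC/Clang builtin):

#include <stdint.h>
#include <stdio.h>

static int first_cpu_and(uint64_t span, uint64_t iter_mask)
{
	uint64_t both = span & iter_mask;

	return both ? __builtin_ctzll(both) : -1;	/* -1 for empty, loosely like nr_cpu_ids */
}

int main(void)
{
	/* group spans cpus {2,3}, but only cpu 3 may iterate upwards */
	printf("%d\n", first_cpu_and(0xc, 0x8));	/* 3 */
	return 0;
}
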
static int
build_overlap_sched_groups(struct sched_domain *sd, int cpu)
{
if (cpumask_test_cpu(i, covered))
continue;
+ child = *per_cpu_ptr(sdd->sd, i);
+
+ /* See the comment near build_group_mask(). */
+ if (!cpumask_test_cpu(i, sched_domain_span(child)))
+ continue;
+
sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
GFP_KERNEL, cpu_to_node(cpu));
goto fail;
sg_span = sched_group_cpus(sg);
-
- child = *per_cpu_ptr(sdd->sd, i);
if (child->child) {
child = child->child;
cpumask_copy(sg_span, sched_domain_span(child));
cpumask_or(covered, covered, sg_span);
sg->sgp = *per_cpu_ptr(sdd->sgp, i);
- atomic_inc(&sg->sgp->ref);
+ if (atomic_inc_return(&sg->sgp->ref) == 1)
+ build_group_mask(sd, sg);
+ /*
+ * Initialize sgp->power such that even if we mess up the
+ * domains and no possible iteration will get us here, we won't
+ * die on a /0 trap.
+ */
+ sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span);
+
+ /*
+ * Make sure the first group of this domain contains the
+ * canonical balance cpu. Otherwise the sched_domain iteration
+ * breaks. See update_sg_lb_stats().
+ */
if ((!groups && cpumask_test_cpu(cpu, sg_span)) ||
- cpumask_first(sg_span) == cpu) {
- WARN_ON_ONCE(!cpumask_test_cpu(cpu, sg_span));
+ group_balance_cpu(sg) == cpu)
groups = sg;
- }
if (!first)
first = sg;
cpumask_clear(sched_group_cpus(sg));
sg->sgp->power = 0;
+ cpumask_setall(sched_group_mask(sg));
for_each_cpu(j, span) {
if (get_group(j, sdd, NULL) != group)
sg = sg->next;
} while (sg != sd->groups);
- if (cpu != group_first_cpu(sg))
+ if (cpu != group_balance_cpu(sg))
return;
update_group_power(sd, cpu);
static int __init setup_relax_domain_level(char *str)
{
- unsigned long val;
-
- val = simple_strtoul(str, NULL, 0);
- if (val < sched_domain_level_max)
- default_relax_domain_level = val;
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+ pr_warn("Unable to set relax_domain_level\n");
return 1;
}
#ifdef CONFIG_NUMA
static int sched_domains_numa_levels;
-static int sched_domains_numa_scale;
static int *sched_domains_numa_distance;
static struct cpumask ***sched_domains_numa_masks;
static int sched_domains_curr_level;
static inline int sd_local_flags(int level)
{
- if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
+ if (sched_domains_numa_distance[level] > RECLAIM_DISTANCE)
return 0;
return SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE;
return sched_domains_numa_masks[sched_domains_curr_level][cpu_to_node(cpu)];
}
+static void sched_numa_warn(const char *str)
+{
+	static bool done = false;
+	int i, j;
+
+ if (done)
+ return;
+
+ done = true;
+
+ printk(KERN_WARNING "ERROR: %s\n\n", str);
+
+ for (i = 0; i < nr_node_ids; i++) {
+ printk(KERN_WARNING " ");
+ for (j = 0; j < nr_node_ids; j++)
+ printk(KERN_CONT "%02d ", node_distance(i, j));
+ printk(KERN_CONT "\n");
+ }
+ printk(KERN_WARNING "\n");
+}
+
+static bool find_numa_distance(int distance)
+{
+ int i;
+
+ if (distance == node_distance(0, 0))
+ return true;
+
+ for (i = 0; i < sched_domains_numa_levels; i++) {
+ if (sched_domains_numa_distance[i] == distance)
+ return true;
+ }
+
+ return false;
+}
+
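For orientation before the rewritten scan in sched_init_numa() just below: the code repeatedly sweeps the node-distance table for the smallest distance strictly greater than the current one, yielding the unique distances in increasing order. A user-space model over a toy 3-node table:

#include <stdio.h>

#define N 3
static int dist[N][N] = {	/* illustrative, symmetric distances */
	{ 10, 20, 30 },
	{ 20, 10, 20 },
	{ 30, 20, 10 },
};

int main(void)
{
	int curr = dist[0][0], level = 0;

	for (;;) {
		int next = curr;
		int i, k;

		for (i = 0; i < N; i++)
			for (k = 0; k < N; k++) {
				int d = dist[i][k];

				if (d > curr && (d < next || next == curr))
					next = d;
			}
		if (next == curr)
			break;
		printf("level %d: distance %d\n", level++, next);
		curr = next;
	}
	return 0;	/* prints one level each for distances 20 and 30 */
}
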
static void sched_init_numa(void)
{
int next_distance, curr_distance = node_distance(0, 0);
int level = 0;
int i, j, k;
- sched_domains_numa_scale = curr_distance;
sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
if (!sched_domains_numa_distance)
return;
*
* Assumes node_distance(0,j) includes all distances in
* node_distance(i,j) in order to avoid cubic time.
- *
- * XXX: could be optimized to O(n log n) by using sort()
*/
next_distance = curr_distance;
for (i = 0; i < nr_node_ids; i++) {
for (j = 0; j < nr_node_ids; j++) {
- int distance = node_distance(0, j);
- if (distance > curr_distance &&
- (distance < next_distance ||
- next_distance == curr_distance))
- next_distance = distance;
+ for (k = 0; k < nr_node_ids; k++) {
+ int distance = node_distance(i, k);
+
+ if (distance > curr_distance &&
+ (distance < next_distance ||
+ next_distance == curr_distance))
+ next_distance = distance;
+
+ /*
+ * While not a strong assumption, it would be nice to know
+ * about cases where node A is connected to B but B is not
+ * equally connected to A.
+ */
+ if (sched_debug() && node_distance(k, i) != distance)
+ sched_numa_warn("Node-distance not symmetric");
+
+ if (sched_debug() && i && !find_numa_distance(distance))
+ sched_numa_warn("Node-0 not representative");
+ }
+ if (next_distance != curr_distance) {
+ sched_domains_numa_distance[level++] = next_distance;
+ sched_domains_numa_levels = level;
+ curr_distance = next_distance;
+ } else break;
}
- if (next_distance != curr_distance) {
- sched_domains_numa_distance[level++] = next_distance;
- sched_domains_numa_levels = level;
- curr_distance = next_distance;
- } else break;
+
+ /*
+ * In case of sched_debug() we verify the above assumption.
+ */
+ if (!sched_debug())
+ break;
}
/*
* 'level' contains the number of unique distances, excluding the
*per_cpu_ptr(sdd->sg, j) = sg;
- sgp = kzalloc_node(sizeof(struct sched_group_power),
+ sgp = kzalloc_node(sizeof(struct sched_group_power) + cpumask_size(),
GFP_KERNEL, cpu_to_node(j));
if (!sgp)
return -ENOMEM;
if (!sd)
return child;
- set_domain_attribute(sd, attr);
cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
if (child) {
sd->level = child->level + 1;
child->parent = sd;
}
sd->child = child;
+ set_domain_attribute(sd, attr);
return sd;
}
} while (group != child->groups);
}
- sdg->sgp->power = power;
+ sdg->sgp->power_orig = sdg->sgp->power = power;
}
/*
/**
* update_sg_lb_stats - Update sched_group's statistics for load balancing.
- * @sd: The sched_domain whose statistics are to be updated.
+ * @env: The load balancing environment.
* @group: sched_group whose statistics are to be updated.
* @load_idx: Load index of sched_domain of this_cpu for load calc.
* @local_group: Does group contain this_cpu.
int i;
if (local_group)
- balance_cpu = group_first_cpu(group);
+ balance_cpu = group_balance_cpu(group);
/* Tally up the load of all CPUs in the group */
max_cpu_load = 0;
/* Bias balancing toward cpus of our domain */
if (local_group) {
- if (idle_cpu(i) && !first_idle_cpu) {
+ if (idle_cpu(i) && !first_idle_cpu &&
+ cpumask_test_cpu(i, sched_group_mask(group))) {
first_idle_cpu = 1;
balance_cpu = i;
}
/**
* update_sd_pick_busiest - return 1 on busiest group
- * @sd: sched_domain whose statistics are to be checked
+ * @env: The load balancing environment.
* @sds: sched_domain statistics
* @sg: sched_group candidate to be checked for being the busiest
* @sgs: sched_group statistics
- * @this_cpu: the current cpu
*
* Determine if @sg is a busier group than the previously selected
* busiest group.
/**
* update_sd_lb_stats - Update sched_domain's statistics for load balancing.
- * @sd: sched_domain whose statistics are to be updated.
- * @this_cpu: Cpu for which load balance is currently performed.
- * @idle: Idle status of this_cpu
+ * @env: The load balancing environment.
* @cpus: Set of cpus considered for load balancing.
* @balance: Should we balance.
* @sds: variable to hold the statistics for this sched_domain.
* Returns 1 when packing is required and a task should be moved to
* this CPU. The amount of the imbalance is returned in *imbalance.
*
- * @sd: The sched_domain whose packing is to be checked.
+ * @env: The load balancing environment.
* @sds: Statistics of the sched_domain which is to be packed
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: returns amount of imbalanced due to packing.
*/
static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
{
* fix_small_imbalance - Calculate the minor imbalance that exists
* amongst the groups of a sched_domain, during
* load balancing.
+ * @env: The load balancing environment.
* @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: Variable to store the imbalance.
*/
static inline
void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
* Also calculates the amount of weighted load which should be moved
* to restore balance.
*
- * @sd: The sched_domain whose busiest group is to be returned.
- * @this_cpu: The cpu for which load balancing is currently being performed.
- * @imbalance: Variable which stores amount of weighted load which should
- * be moved to restore balance/put a group to idle.
- * @idle: The idle status of this_cpu.
+ * @env: The load balancing environment.
* @cpus: The set of CPUs under consideration for load-balancing.
* @balance: Pointer to a variable indicating if this_cpu
* is the appropriate cpu to perform load balancing at this_level.
task_running(rq, task) ||
!task->on_rq)) {
- raw_spin_unlock(&lowest_rq->lock);
+ double_unlock_balance(rq, lowest_rq);
lowest_rq = NULL;
break;
}
DECLARE_PER_CPU(struct sched_domain *, sd_llc);
DECLARE_PER_CPU(int, sd_llc_id);
+extern int group_balance_cpu(struct sched_group *sg);
+
#endif /* CONFIG_SMP */
#include "stats.h"
static void tick_nohz_stop_sched_tick(struct tick_sched *ts)
{
unsigned long seq, last_jiffies, next_jiffies, delta_jiffies;
+ unsigned long rcu_delta_jiffies;
ktime_t last_update, expires, now;
struct clock_event_device *dev = __get_cpu_var(tick_cpu_device).evtdev;
u64 time_delta;
time_delta = timekeeping_max_deferment();
} while (read_seqretry(&xtime_lock, seq));
- if (rcu_needs_cpu(cpu) || printk_needs_cpu(cpu) ||
+ if (rcu_needs_cpu(cpu, &rcu_delta_jiffies) || printk_needs_cpu(cpu) ||
arch_needs_cpu(cpu)) {
next_jiffies = last_jiffies + 1;
delta_jiffies = 1;
/* Get the next timer wheel timer */
next_jiffies = get_next_timer_interrupt(last_jiffies);
delta_jiffies = next_jiffies - last_jiffies;
+ if (rcu_delta_jiffies < delta_jiffies) {
+ next_jiffies = last_jiffies + rcu_delta_jiffies;
+ delta_jiffies = rcu_delta_jiffies;
+ }
}
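
Net effect on the tick path: the sleep length becomes the minimum of the timer-wheel deadline and RCU's reported deadline. In miniature (all jiffies values invented):

#include <stdio.h>

int main(void)
{
	unsigned long last_jiffies = 1000;
	unsigned long next_jiffies = 1500;	/* next timer wheel event */
	unsigned long rcu_delta = 6;		/* rcu_needs_cpu() deadline */
	unsigned long delta = next_jiffies - last_jiffies;

	if (rcu_delta < delta)
		delta = rcu_delta;		/* RCU wants the CPU sooner */
	printf("stop tick for %lu jiffies\n", delta);
	return 0;
}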
/*
* Do not stop the tick, if we are only one off
void tracing_off(void)
{
if (global_trace.buffer)
- ring_buffer_record_on(global_trace.buffer);
+ ring_buffer_record_off(global_trace.buffer);
/*
* This flag is only looked at when buffers haven't been
* allocated yet. We don't really care about the race
#ifdef CONFIG_HARDLOCKUP_DETECTOR
+/*
+ * People like the simple clean cpu node info on boot.
+ * Reduce the watchdog noise by only printing messages
+ * that are different from what cpu0 displayed.
+ */
+static unsigned long cpu0_err;
+
static int watchdog_nmi_enable(int cpu)
{
struct perf_event_attr *wd_attr;
/* Try to register using hardware perf events */
event = perf_event_create_kernel_counter(wd_attr, cpu, NULL, watchdog_overflow_callback, NULL);
+
+ /* save cpu0 error for future comparison */
+ if (cpu == 0 && IS_ERR(event))
+ cpu0_err = PTR_ERR(event);
+
if (!IS_ERR(event)) {
- pr_info("enabled, takes one hw-pmu counter.\n");
+ /* only print for cpu0 or different than cpu0 */
+ if (cpu == 0 || cpu0_err)
+ pr_info("enabled on all CPUs, permanently consumes one hw-PMU counter.\n");
goto out_save;
}
+ /* skip displaying the same error again */
+ if (cpu > 0 && (PTR_ERR(event) == cpu0_err))
+ return PTR_ERR(event);
/* vary the KERN level based on the returned errno */
if (PTR_ERR(event) == -EOPNOTSUPP)
default 0 if !BOOTPARAM_SOFTLOCKUP_PANIC
default 1 if BOOTPARAM_SOFTLOCKUP_PANIC
+config PANIC_ON_OOPS
+ bool "Panic on Oops" if EXPERT
+ default n
+ help
+ Say Y here to enable the kernel to panic when it oopses. This
+ has the same effect as setting oops=panic on the kernel command
+ line.
+
+ This feature is useful to ensure that the kernel does not do
+ anything erroneous after an oops which could result in data
+ corruption or other issues.
+
+ Say N if unsure.
+
+config PANIC_ON_OOPS_VALUE
+ int
+ range 0 1
+ default 0 if !PANIC_ON_OOPS
+ default 1 if PANIC_ON_OOPS
+
config DETECT_HUNG_TASK
bool "Detect Hung Tasks"
depends on DEBUG_KERNEL
/* lockup suspected: */
if (print_once) {
print_once = 0;
- spin_dump(lock, "lockup");
+ spin_dump(lock, "lockup suspected");
#ifdef CONFIG_SMP
trigger_all_cpu_backtrace();
#endif
return memblock_search(&memblock.memory, addr) != -1;
}
+/**
+ * memblock_is_region_memory - check if a region is a subset of memory
+ * @base: base of region to check
+ * @size: size of region to check
+ *
+ * Check if the region [@base, @base+@size) is a subset of a memory block.
+ *
+ * RETURNS:
+ * 0 if false, non-zero if true
+ */
int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
{
int idx = memblock_search(&memblock.memory, base);
memblock.memory.regions[idx].size) >= end;
}
+/**
+ * memblock_is_region_reserved - check if a region intersects reserved memory
+ * @base: base of region to check
+ * @size: size of region to check
+ *
+ * Check if the region [@base, @base+@size) intersects a reserved memory block.
+ *
+ * RETURNS:
+ * 0 if false, non-zero if true
+ */
int __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t size)
{
memblock_cap_size(base, &size);
unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
const nodemask_t *nodemask, unsigned long totalpages)
{
- unsigned long points;
+ long points;
if (oom_unkillable_task(p, memcg, nodemask))
return 0;
* Never return 0 for an eligible task regardless of the root bonus and
* oom_score_adj (oom_score_adj can't be OOM_SCORE_ADJ_MIN here).
*/
- return points ? points : 1;
+ return points > 0 ? points : 1;
}
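
The type change matters because the badness heuristic can drive points negative; with an unsigned type that wraps to a huge score instead of clamping to 1. A two-line demonstration:

#include <stdio.h>

int main(void)
{
	unsigned long u = 10;
	long s = 10;

	u -= 100;	/* wraps to a huge positive value */
	s -= 100;	/* goes negative and can be clamped */
	printf("unsigned: %lu\n", u);
	printf("signed, clamped: %ld\n", s > 0 ? s : 1);	/* 1 */
	return 0;
}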
/*
if (addr->sat_addr.s_node == ATADDR_BCAST &&
!sock_flag(sk, SOCK_BROADCAST)) {
#if 1
- printk(KERN_WARNING "%s is broken and did not set "
- "SO_BROADCAST. It will break when 2.2 is "
- "released.\n",
+ pr_warn("atalk_connect: %s is broken and did not set SO_BROADCAST.\n",
current->comm);
#else
return -EACCES;
}
if (sk->sk_state == BT_CONNECTED || !newsock ||
- test_bit(BT_DEFER_SETUP, &bt_sk(parent)->flags)) {
+ test_bit(BT_SK_DEFER_SETUP, &bt_sk(parent)->flags)) {
bt_accept_unlink(sk);
if (newsock)
sock_graft(sk, newsock);
#define TRACE_ON 1
#define TRACE_OFF 0
-static void send_dm_alert(struct work_struct *unused);
-
-
/*
* Globals, our netlink socket pointer
* and the work handle that will send up
static DEFINE_MUTEX(trace_state_mutex);
struct per_cpu_dm_data {
- struct work_struct dm_alert_work;
- struct sk_buff __rcu *skb;
- atomic_t dm_hit_count;
- struct timer_list send_timer;
- int cpu;
+ spinlock_t lock;
+ struct sk_buff *skb;
+ struct work_struct dm_alert_work;
+ struct timer_list send_timer;
};
struct dm_hw_stat_delta {
static unsigned long dm_hw_check_delta = 2*HZ;
static LIST_HEAD(hw_stats_list);
-static void reset_per_cpu_data(struct per_cpu_dm_data *data)
+static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
{
size_t al;
struct net_dm_alert_msg *msg;
struct nlattr *nla;
struct sk_buff *skb;
- struct sk_buff *oskb = rcu_dereference_protected(data->skb, 1);
+ unsigned long flags;
al = sizeof(struct net_dm_alert_msg);
al += dm_hit_limit * sizeof(struct net_dm_drop_point);
sizeof(struct net_dm_alert_msg));
msg = nla_data(nla);
memset(msg, 0, al);
- } else
- schedule_work_on(data->cpu, &data->dm_alert_work);
-
- /*
- * Don't need to lock this, since we are guaranteed to only
- * run this on a single cpu at a time.
- * Note also that we only update data->skb if the old and new skb
- * pointers don't match. This ensures that we don't continually call
- * synchornize_rcu if we repeatedly fail to alloc a new netlink message.
- */
- if (skb != oskb) {
- rcu_assign_pointer(data->skb, skb);
-
- synchronize_rcu();
-
- atomic_set(&data->dm_hit_count, dm_hit_limit);
+ } else {
+ mod_timer(&data->send_timer, jiffies + HZ / 10);
}
+ spin_lock_irqsave(&data->lock, flags);
+ swap(data->skb, skb);
+ spin_unlock_irqrestore(&data->lock, flags);
+
+ return skb;
}
-static void send_dm_alert(struct work_struct *unused)
+static void send_dm_alert(struct work_struct *work)
{
struct sk_buff *skb;
- struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
+ struct per_cpu_dm_data *data;
- WARN_ON_ONCE(data->cpu != smp_processor_id());
+ data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
- /*
- * Grab the skb we're about to send
- */
- skb = rcu_dereference_protected(data->skb, 1);
-
- /*
- * Replace it with a new one
- */
- reset_per_cpu_data(data);
+ skb = reset_per_cpu_data(data);
- /*
- * Ship it!
- */
if (skb)
genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
-
- put_cpu_var(dm_cpu_data);
}
/*
* This is the timer function to delay the sending of an alert
* in the event that more drops will arrive during the
- * hysteresis period. Note that it operates under the timer interrupt
- * so we don't need to disable preemption here
+ * hysteresis period.
*/
-static void sched_send_work(unsigned long unused)
+static void sched_send_work(unsigned long _data)
{
- struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
-
- schedule_work_on(smp_processor_id(), &data->dm_alert_work);
+ struct per_cpu_dm_data *data = (struct per_cpu_dm_data *)_data;
- put_cpu_var(dm_cpu_data);
+ schedule_work(&data->dm_alert_work);
}
static void trace_drop_common(struct sk_buff *skb, void *location)
struct nlattr *nla;
int i;
struct sk_buff *dskb;
- struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
-
+ struct per_cpu_dm_data *data;
+ unsigned long flags;
- rcu_read_lock();
- dskb = rcu_dereference(data->skb);
+ local_irq_save(flags);
+ data = &__get_cpu_var(dm_cpu_data);
+ spin_lock(&data->lock);
+ dskb = data->skb;
if (!dskb)
goto out;
- if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
- /*
- * we're already at zero, discard this hit
- */
- goto out;
- }
-
nlh = (struct nlmsghdr *)dskb->data;
nla = genlmsg_data(nlmsg_data(nlh));
msg = nla_data(nla);
for (i = 0; i < msg->entries; i++) {
if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
msg->points[i].count++;
- atomic_inc(&data->dm_hit_count);
goto out;
}
}
-
+ if (msg->entries == dm_hit_limit)
+ goto out;
/*
* We need to create a new entry
*/
if (!timer_pending(&data->send_timer)) {
data->send_timer.expires = jiffies + dm_delay * HZ;
- add_timer_on(&data->send_timer, smp_processor_id());
+ add_timer(&data->send_timer);
}
out:
- rcu_read_unlock();
- put_cpu_var(dm_cpu_data);
- return;
+ spin_unlock_irqrestore(&data->lock, flags);
}
static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *location)
for_each_possible_cpu(cpu) {
data = &per_cpu(dm_cpu_data, cpu);
- data->cpu = cpu;
INIT_WORK(&data->dm_alert_work, send_dm_alert);
init_timer(&data->send_timer);
- data->send_timer.data = cpu;
+ data->send_timer.data = (unsigned long)data;
data->send_timer.function = sched_send_work;
+ spin_lock_init(&data->lock);
reset_per_cpu_data(data);
}
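
The drop-monitor rework trades the RCU/atomic scheme for a per-CPU spinlock: the tracepoint fills data->skb under the lock, and reset_per_cpu_data() swaps in a fresh skb so the old one can be sent without holding it. A compact pthread model of that handoff (types and names are illustrative):

#include <pthread.h>
#include <stdio.h>

struct slot {
	pthread_spinlock_t lock;
	void *buf;
};

/* Detach the filled buffer under the lock, publish a fresh one, and
 * return the old buffer for lock-free processing, like swap(data->skb, skb). */
static void *take_and_replace(struct slot *s, void *fresh)
{
	void *old;

	pthread_spin_lock(&s->lock);
	old = s->buf;
	s->buf = fresh;
	pthread_spin_unlock(&s->lock);
	return old;
}

int main(void)
{
	struct slot s;
	int filled = 1, fresh = 2;

	pthread_spin_init(&s.lock, PTHREAD_PROCESS_PRIVATE);
	s.buf = &filled;
	printf("sending buffer %d\n", *(int *)take_and_replace(&s, &fresh));
	return 0;
}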
/**
* sk_unattached_filter_create - create an unattached filter
* @fprog: the filter program
- * @sk: the socket to use
+ * @pfp: the unattached filter that is created
*
- * Create a filter independent ofr any socket. We first run some
+ * Create a filter independent of any socket. We first run some
* sanity checks on it to make sure it does not explode on us later.
* If an error occurs or there is insufficient memory for the filter
* a negative errno code is returned. On success the return is zero.
rcu_read_lock_bh();
nht = rcu_dereference_bh(tbl->nht);
- for (h = 0; h < (1 << nht->hash_shift); h++) {
- if (h < s_h)
- continue;
+ for (h = s_h; h < (1 << nht->hash_shift); h++) {
if (h > s_h)
s_idx = 0;
for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
read_lock_bh(&tbl->lock);
- for (h = 0; h <= PNEIGH_HASHMASK; h++) {
- if (h < s_h)
- continue;
+ for (h = s_h; h <= PNEIGH_HASHMASK; h++) {
if (h > s_h)
s_idx = 0;
for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) {
struct neigh_table *tbl;
int t, family, s_t;
int proxy = 0;
- int err = 0;
+ int err;
read_lock(&neigh_tbl_lock);
family = ((struct rtgenmsg *) nlmsg_data(cb->nlh))->rtgen_family;
s_t = cb->args[0];
- for (tbl = neigh_tables, t = 0; tbl && (err >= 0);
+ for (tbl = neigh_tables, t = 0; tbl;
tbl = tbl->next, t++) {
if (t < s_t || (family && tbl->family != family))
continue;
err = pneigh_dump_table(tbl, skb, cb);
else
err = neigh_dump_table(tbl, skb, cb);
+ if (err < 0)
+ break;
}
read_unlock(&neigh_tbl_lock);
void netpoll_send_udp(struct netpoll *np, const char *msg, int len)
{
- int total_len, eth_len, ip_len, udp_len;
+ int total_len, ip_len, udp_len;
struct sk_buff *skb;
struct udphdr *udph;
struct iphdr *iph;
struct ethhdr *eth;
udp_len = len + sizeof(*udph);
- ip_len = eth_len = udp_len + sizeof(*iph);
- total_len = eth_len + ETH_HLEN + NET_IP_ALIGN;
+ ip_len = udp_len + sizeof(*iph);
+ total_len = ip_len + LL_RESERVED_SPACE(np->dev);
- skb = find_skb(np, total_len, total_len - len);
+ skb = find_skb(np, total_len + np->dev->needed_tailroom,
+ total_len - len);
if (!skb)
return;
skb_copy_to_linear_data(skb, msg, len);
- skb->len += len;
+ skb_put(skb, len);
skb_push(skb, sizeof(*udph));
skb_reset_transport_header(skb);
* @to: prior buffer
* @from: buffer to add
* @fragstolen: pointer to boolean
- *
+ * @delta_truesize: how much more was allocated than was requested
*/
bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
bool *fragstolen, int *delta_truesize)
}
EXPORT_SYMBOL(inet_peer_xrlim_allow);
+static void inetpeer_inval_rcu(struct rcu_head *head)
+{
+ struct inet_peer *p = container_of(head, struct inet_peer, gc_rcu);
+
+ spin_lock_bh(&gc_lock);
+ list_add_tail(&p->gc_list, &gc_list);
+ spin_unlock_bh(&gc_lock);
+
+ schedule_delayed_work(&gc_work, gc_delay);
+}
+
void inetpeer_invalidate_tree(int family)
{
struct inet_peer *old, *new, *prev;
prev = cmpxchg(&base->root, old, new);
if (prev == old) {
base->total = 0;
- spin_lock(&gc_lock);
- list_add_tail(&prev->gc_list, &gc_list);
- spin_unlock(&gc_lock);
- schedule_delayed_work(&gc_work, gc_delay);
+ call_rcu(&prev->gc_rcu, inetpeer_inval_rcu);
}
out:
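Queueing the entry from an RCU callback guarantees a grace period has elapsed, so no reader can still be walking the old tree when it lands on the gc list. The updater-side pattern in general form (hypothetical names; the real code guards the list with gc_lock as shown above):

static LIST_HEAD(pending_gc);
static struct foo __rcu *root;

struct foo {
        struct rcu_head rcu;
        struct list_head gc_list;
};

static void foo_reclaim(struct rcu_head *head)
{
        struct foo *f = container_of(head, struct foo, rcu);

        /* runs after a grace period: no RCU reader still sees f */
        list_add_tail(&f->gc_list, &pending_gc);
}

static void foo_invalidate(struct foo *new)
{
        struct foo *old = rcu_dereference_protected(root, 1);

        rcu_assign_pointer(root, new);          /* unpublish 'old' */
        call_rcu(&old->rcu, foo_reclaim);       /* defer bookkeeping */
}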
struct ip_options *opt = &(IPCB(skb)->opt);
IP_INC_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP_ADD_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTOCTETS, skb->len);
if (unlikely(opt->optlen))
ip_forward_options(skb);
struct ip_options *opt = &(IPCB(skb)->opt);
IP_INC_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP_ADD_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTOCTETS, skb->len);
if (unlikely(opt->optlen))
ip_forward_options(skb);
neigh_flags = neigh->flags;
neigh_release(neigh);
}
- if (neigh_flags & NTF_ROUTER) {
+ if (!(neigh_flags & NTF_ROUTER)) {
RT6_TRACE("purging route %p via non-router but gateway\n",
rt);
return -1;
hdr->hop_limit--;
IP6_INC_STATS_BH(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP6_ADD_STATS_BH(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD, skb, skb->dev, dst->dev,
ip6_forward_finish);
{
IP6_INC_STATS_BH(dev_net(skb_dst(skb)->dev), ip6_dst_idev(skb_dst(skb)),
IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP6_ADD_STATS_BH(dev_net(skb_dst(skb)->dev), ip6_dst_idev(skb_dst(skb)),
+ IPSTATS_MIB_OUTOCTETS, skb->len);
return dst_output(skb);
}
if (dev) {
unregister_netdev(dev);
spriv->dev = NULL;
+ module_put(THIS_MODULE);
}
}
}
if (rc < 0)
goto out_del_dev;
+ __module_get(THIS_MODULE);
/* Must be done after register_netdev() */
strlcpy(session->ifname, dev->name, IFNAMSIZ);
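The __module_get()/module_put() pair pins the module for as long as the netdev it created exists, preventing unload while the device still points into module code. Reduced to its shape (names taken from the hunks above):

rc = register_netdev(dev);
if (rc < 0)
        goto out_del_dev;
__module_get(THIS_MODULE);      /* the live netdev holds a reference */

/* ... and at teardown ... */
unregister_netdev(dev);
module_put(THIS_MODULE);        /* drop the reference taken above */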
sk->sk_bound_dev_if);
if (IS_ERR(rt))
goto no_route;
- if (connected)
+ if (connected) {
sk_setup_caps(sk, &rt->dst);
- else
- dst_release(&rt->dst); /* safe since we hold rcu_read_lock */
+ } else {
+ skb_dst_set(skb, &rt->dst);
+ goto xmit;
+ }
}
/* We dont need to clone dst here, it is guaranteed to not disappear.
*/
skb_dst_set_noref(skb, &rt->dst);
+xmit:
/* Queue the packet to IP for output */
rc = ip_queue_xmit(skb, &inet->cork.fl);
rcu_read_unlock();
struct tid_ampdu_rx *tid_rx;
unsigned long timeout;
+ rcu_read_lock();
tid_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[*ptid]);
- if (!tid_rx)
+ if (!tid_rx) {
+ rcu_read_unlock();
return;
+ }
timeout = tid_rx->last_rx + TU_TO_JIFFIES(tid_rx->timeout);
if (time_is_after_jiffies(timeout)) {
mod_timer(&tid_rx->session_timer, timeout);
+ rcu_read_unlock();
return;
}
+ rcu_read_unlock();
#ifdef CONFIG_MAC80211_HT_DEBUG
printk(KERN_DEBUG "rx session timer expired on tid %d\n", (u16)*ptid);
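rcu_dereference() is only valid inside a read-side critical section, which is why every exit path above now unlocks before returning. The reader pattern in isolation (illustrative types):

struct item { int val; };
static struct item __rcu *shared;

static int read_val(void)
{
        struct item *p;
        int v = -1;

        rcu_read_lock();
        p = rcu_dereference(shared);    /* must sit under the lock */
        if (p)
                v = p->val;             /* use p before unlocking */
        rcu_read_unlock();
        return v;
}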
sinfo.filled = 0;
sta_set_sinfo(sta, &sinfo);
- if (sinfo.filled | STATION_INFO_TX_BITRATE)
+ if (sinfo.filled & STATION_INFO_TX_BITRATE)
data[i] = 100000 *
cfg80211_calculate_bitrate(&sinfo.txrate);
i++;
- if (sinfo.filled | STATION_INFO_RX_BITRATE)
+ if (sinfo.filled & STATION_INFO_RX_BITRATE)
data[i] = 100000 *
cfg80211_calculate_bitrate(&sinfo.rxrate);
i++;
- if (sinfo.filled | STATION_INFO_SIGNAL_AVG)
+ if (sinfo.filled & STATION_INFO_SIGNAL_AVG)
data[i] = (u8)sinfo.signal_avg;
i++;
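The old tests used bitwise OR, which is non-zero whenever the flag constant itself is non-zero, so every branch was taken unconditionally; AND actually checks the bit. A standalone illustration (plain C, illustrative values):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t filled = 0, flag = 0x4;

        printf("or:  %u\n", filled | flag); /* 4: always "set" (the bug) */
        printf("and: %u\n", filled & flag); /* 0: bit correctly clear */
        return 0;
}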
} else {
ieee80211_configure_filter(local);
break;
default:
+ mutex_lock(&local->mtx);
+ if (local->hw_roc_dev == sdata->dev &&
+ local->hw_roc_channel) {
+ /* ignore return value since this is racy */
+ drv_cancel_remain_on_channel(local);
+ ieee80211_queue_work(&local->hw, &local->hw_roc_done);
+ }
+ mutex_unlock(&local->mtx);
+
+ flush_work(&local->hw_roc_start);
+ flush_work(&local->hw_roc_done);
+
flush_work(&sdata->work);
/*
* When we get here, the interface is marked down.
sdata->vif.bss_conf.qos = true;
}
+static void __ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata)
+{
+ lockdep_assert_held(&sdata->local->mtx);
+
+ sdata->u.mgd.flags &= ~(IEEE80211_STA_CONNECTION_POLL |
+ IEEE80211_STA_BEACON_POLL);
+ ieee80211_run_deferred_scan(sdata->local);
+}
+
+static void ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata)
+{
+ mutex_lock(&sdata->local->mtx);
+ __ieee80211_stop_poll(sdata);
+ mutex_unlock(&sdata->local->mtx);
+}
+
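This follows the usual double-underscore convention: the __-prefixed helper asserts the lock is held, for callers already inside the critical section, while the plain-named wrapper takes and drops it. In general form (hypothetical names):

struct ctx {
        struct mutex lock;
};

static void __do_thing(struct ctx *c)
{
        lockdep_assert_held(&c->lock);  /* caller must hold c->lock */
        /* ... work that requires the lock ... */
}

static void do_thing(struct ctx *c)
{
        mutex_lock(&c->lock);
        __do_thing(c);
        mutex_unlock(&c->lock);
}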
static u32 ieee80211_handle_bss_capability(struct ieee80211_sub_if_data *sdata,
u16 capab, bool erp_valid, u8 erp)
{
sdata->u.mgd.flags |= IEEE80211_STA_RESET_SIGNAL_AVE;
/* just to be sure */
- sdata->u.mgd.flags &= ~(IEEE80211_STA_CONNECTION_POLL |
- IEEE80211_STA_BEACON_POLL);
+ ieee80211_stop_poll(sdata);
ieee80211_led_assoc(local, 1);
return;
}
- ifmgd->flags &= ~(IEEE80211_STA_CONNECTION_POLL |
- IEEE80211_STA_BEACON_POLL);
+ __ieee80211_stop_poll(sdata);
mutex_lock(&local->iflist_mtx);
ieee80211_recalc_ps(local, -1);
round_jiffies_up(jiffies +
IEEE80211_CONNECTION_IDLE_TIME));
out:
- ieee80211_run_deferred_scan(local);
mutex_unlock(&local->mtx);
}
net_dbg_ratelimited("%s: cancelling probereq poll due to a received beacon\n",
sdata->name);
#endif
+ mutex_lock(&local->mtx);
ifmgd->flags &= ~IEEE80211_STA_BEACON_POLL;
+ ieee80211_run_deferred_scan(local);
+ mutex_unlock(&local->mtx);
+
mutex_lock(&local->iflist_mtx);
ieee80211_recalc_ps(local, -1);
mutex_unlock(&local->iflist_mtx);
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u8 frame_buf[DEAUTH_DISASSOC_LEN];
- ifmgd->flags &= ~(IEEE80211_STA_CONNECTION_POLL |
- IEEE80211_STA_BEACON_POLL);
+ ieee80211_stop_poll(sdata);
ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, reason,
false, frame_buf);
u32 flags;
if (sdata->vif.type == NL80211_IFTYPE_STATION) {
- sdata->u.mgd.flags &= ~(IEEE80211_STA_BEACON_POLL |
- IEEE80211_STA_CONNECTION_POLL);
+ __ieee80211_stop_poll(sdata);
/* let's probe the connection once */
flags = sdata->local->hw.flags;
if (test_and_clear_bit(TMR_RUNNING_CHANSW, &ifmgd->timers_running))
add_timer(&ifmgd->chswitch_timer);
ieee80211_sta_reset_beacon_monitor(sdata);
+
+ mutex_lock(&sdata->local->mtx);
ieee80211_restart_sta_timer(sdata);
+ mutex_unlock(&sdata->local->mtx);
}
#endif
}
local->oper_channel = cbss->channel;
- ieee80211_hw_config(local, 0);
+ ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL);
if (!have_sta) {
u32 rates = 0, basic_rates = 0;
return;
}
+ /* was never transmitted */
+ if (local->hw_roc_skb) {
+ u64 cookie;
+
+ cookie = local->hw_roc_cookie ^ 2;
+
+ cfg80211_mgmt_tx_status(local->hw_roc_dev, cookie,
+ local->hw_roc_skb->data,
+ local->hw_roc_skb->len, false,
+ GFP_KERNEL);
+
+ kfree_skb(local->hw_roc_skb);
+ local->hw_roc_skb = NULL;
+ local->hw_roc_skb_for_status = NULL;
+ }
+
if (!local->hw_roc_for_tx)
cfg80211_remain_on_channel_expired(local->hw_roc_dev,
local->hw_roc_cookie,
/* make the station visible */
sta_info_hash_add(local, sta);
- list_add(&sta->list, &local->sta_list);
+ list_add_rcu(&sta->list, &local->sta_list);
set_sta_flag(sta, WLAN_STA_INSERTED);
if (ret)
return ret;
- list_del(&sta->list);
+ list_del_rcu(&sta->list);
mutex_lock(&local->key_mtx);
for (i = 0; i < NUM_DEFAULT_KEYS; i++)
__le16 fc;
struct ieee80211_hdr hdr;
struct ieee80211s_hdr mesh_hdr __maybe_unused;
- struct mesh_path __maybe_unused *mppath = NULL;
+ struct mesh_path __maybe_unused *mppath = NULL, *mpath = NULL;
const u8 *encaps_data;
int encaps_len, skip_header_bytes;
int nh_pos, h_pos;
goto fail;
}
rcu_read_lock();
- if (!is_multicast_ether_addr(skb->data))
- mppath = mpp_path_lookup(skb->data, sdata);
+ if (!is_multicast_ether_addr(skb->data)) {
+ mpath = mesh_path_lookup(skb->data, sdata);
+ if (!mpath)
+ mppath = mpp_path_lookup(skb->data, sdata);
+ }
/*
* Use address extension if it is a packet from
enum ieee80211_sta_state state;
for (state = IEEE80211_STA_NOTEXIST;
- state < sta->sta_state - 1; state++)
+ state < sta->sta_state; state++)
WARN_ON(drv_sta_state(local, sta->sdata, sta,
state, state + 1));
}
return 0;
/* RTP port is even */
- port &= htons(~1);
- rtp_port = port;
- rtcp_port = htons(ntohs(port) + 1);
+ rtp_port = port & ~htons(1);
+ rtcp_port = port | htons(1);
/* Create expect for RTP */
if ((rtp_exp = nf_ct_expect_alloc(ct)) == NULL)
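Masking with htons(1) keeps the even/odd arithmetic byte-order safe: the constant is converted once, so the same wire bit is cleared or set on any host. A worked example with port 5004 (illustrative):

static void rtp_port_pair_example(void)
{
        __be16 port = htons(5004);      /* 5004 is even */
        __be16 rtp  = port & ~htons(1); /* htons(5004), unchanged */
        __be16 rtcp = port | htons(1);  /* htons(5005), RTP + 1 */
}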
MODULE_ALIAS("ip6t_HMARK");
struct hmark_tuple {
- u32 src;
- u32 dst;
+ __be32 src;
+ __be32 dst;
union hmark_ports uports;
- uint8_t proto;
+ u8 proto;
};
-static inline u32 hmark_addr6_mask(const __u32 *addr32, const __u32 *mask)
+static inline __be32 hmark_addr6_mask(const __be32 *addr32, const __be32 *mask)
{
return (addr32[0] & mask[0]) ^
(addr32[1] & mask[1]) ^
(addr32[3] & mask[3]);
}
-static inline u32
-hmark_addr_mask(int l3num, const __u32 *addr32, const __u32 *mask)
+static inline __be32
+hmark_addr_mask(int l3num, const __be32 *addr32, const __be32 *mask)
{
switch (l3num) {
case AF_INET:
return 0;
}
+static inline void hmark_swap_ports(union hmark_ports *uports,
+ const struct xt_hmark_info *info)
+{
+ union hmark_ports hp;
+ u16 src, dst;
+
+ hp.b32 = (uports->b32 & info->port_mask.b32) | info->port_set.b32;
+ src = ntohs(hp.b16.src);
+ dst = ntohs(hp.b16.dst);
+
+ if (dst > src)
+ uports->v32 = (dst << 16) | src;
+ else
+ uports->v32 = (src << 16) | dst;
+}
+
static int
hmark_ct_set_htuple(const struct sk_buff *skb, struct hmark_tuple *t,
const struct xt_hmark_info *info)
otuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
rtuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
- t->src = hmark_addr_mask(otuple->src.l3num, otuple->src.u3.all,
- info->src_mask.all);
- t->dst = hmark_addr_mask(otuple->src.l3num, rtuple->src.u3.all,
- info->dst_mask.all);
+ t->src = hmark_addr_mask(otuple->src.l3num, otuple->src.u3.ip6,
+ info->src_mask.ip6);
+ t->dst = hmark_addr_mask(otuple->src.l3num, rtuple->src.u3.ip6,
+ info->dst_mask.ip6);
if (info->flags & XT_HMARK_FLAG(XT_HMARK_METHOD_L3))
return 0;
t->proto = nf_ct_protonum(ct);
if (t->proto != IPPROTO_ICMP) {
- t->uports.p16.src = otuple->src.u.all;
- t->uports.p16.dst = rtuple->src.u.all;
- t->uports.v32 = (t->uports.v32 & info->port_mask.v32) |
- info->port_set.v32;
- if (t->uports.p16.dst < t->uports.p16.src)
- swap(t->uports.p16.dst, t->uports.p16.src);
+ t->uports.b16.src = otuple->src.u.all;
+ t->uports.b16.dst = rtuple->src.u.all;
+ hmark_swap_ports(&t->uports, info);
}
return 0;
#endif
}
+/* This hash function is endian independent, to ensure consistent hashing if
+ * the cluster is composed of big and little endian systems. */
static inline u32
hmark_hash(struct hmark_tuple *t, const struct xt_hmark_info *info)
{
u32 hash;
+ u32 src = ntohl(t->src);
+ u32 dst = ntohl(t->dst);
- if (t->dst < t->src)
- swap(t->src, t->dst);
+ if (dst < src)
+ swap(src, dst);
- hash = jhash_3words(t->src, t->dst, t->uports.v32, info->hashrnd);
+ hash = jhash_3words(src, dst, t->uports.v32, info->hashrnd);
hash = hash ^ (t->proto & info->proto_mask);
return (((u64)hash * info->hmodulus) >> 32) + info->hoffset;
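The ntohl() conversions are what make the new comment true: jhash mixes host-order words, so without normalisation a big-endian and a little-endian cluster node would hash the same on-wire tuple differently. The essence, as a hedged helper:

static u32 endian_safe_hash(__be32 saddr, __be32 daddr, u32 ports, u32 seed)
{
        u32 src = ntohl(saddr);         /* same value on LE and BE hosts */
        u32 dst = ntohl(daddr);

        if (dst < src)
                swap(src, dst);         /* direction-independent too */

        return jhash_3words(src, dst, ports, seed);
}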
if (skb_copy_bits(skb, nhoff, &t->uports, sizeof(t->uports)) < 0)
return;
- t->uports.v32 = (t->uports.v32 & info->port_mask.v32) |
- info->port_set.v32;
-
- if (t->uports.p16.dst < t->uports.p16.src)
- swap(t->uports.p16.dst, t->uports.p16.src);
+ hmark_swap_ports(&t->uports, info);
}
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
return -1;
}
noicmp:
- t->src = hmark_addr6_mask(ip6->saddr.s6_addr32, info->src_mask.all);
- t->dst = hmark_addr6_mask(ip6->daddr.s6_addr32, info->dst_mask.all);
+ t->src = hmark_addr6_mask(ip6->saddr.s6_addr32, info->src_mask.ip6);
+ t->dst = hmark_addr6_mask(ip6->daddr.s6_addr32, info->dst_mask.ip6);
if (info->flags & XT_HMARK_FLAG(XT_HMARK_METHOD_L3))
return 0;
}
}
- t->src = (__force u32) ip->saddr;
- t->dst = (__force u32) ip->daddr;
-
- t->src &= info->src_mask.ip;
- t->dst &= info->dst_mask.ip;
+ t->src = ip->saddr & info->src_mask.ip;
+ t->dst = ip->daddr & info->dst_mask.ip;
if (info->flags & XT_HMARK_FLAG(XT_HMARK_METHOD_L3))
return 0;
pr_debug("%p\n", sk);
+ if (llcp_sock == NULL)
+ return -EBADFD;
+
addr->sa_family = AF_NFC;
*len = sizeof(struct sockaddr_nfc_llcp);
cfg80211_hold_bss(bss_from_pub(bss));
wdev->current_bss = bss_from_pub(bss);
+ wdev->sme_state = CFG80211_SME_CONNECTED;
cfg80211_upload_connect_keys(wdev);
nl80211_send_ibss_bssid(wiphy_to_dev(wdev->wiphy), dev, bssid,
struct cfg80211_event *ev;
unsigned long flags;
- CFG80211_DEV_WARN_ON(!wdev->ssid_len);
+ CFG80211_DEV_WARN_ON(wdev->sme_state != CFG80211_SME_CONNECTING);
ev = kzalloc(sizeof(*ev), gfp);
if (!ev)
#ifdef CONFIG_CFG80211_WEXT
wdev->wext.ibss.channel = params->channel;
#endif
+ wdev->sme_state = CFG80211_SME_CONNECTING;
err = rdev->ops->join_ibss(&rdev->wiphy, dev, params);
if (err) {
wdev->connect_keys = NULL;
+ wdev->sme_state = CFG80211_SME_IDLE;
return err;
}
}
wdev->current_bss = NULL;
+ wdev->sme_state = CFG80211_SME_IDLE;
wdev->ssid_len = 0;
#ifdef CONFIG_CFG80211_WEXT
if (!nowext)
enum nl80211_iftype iftype)
{
struct wireless_dev *wdev_iter;
+ u32 used_iftypes = BIT(iftype);
int num[NUM_NL80211_IFTYPES];
int total = 1;
int i, j;
num[wdev_iter->iftype]++;
total++;
+ used_iftypes |= BIT(wdev_iter->iftype);
}
mutex_unlock(&rdev->devlist_mtx);
for (i = 0; i < rdev->wiphy.n_iface_combinations; i++) {
const struct ieee80211_iface_combination *c;
struct ieee80211_iface_limit *limits;
+ u32 all_iftypes = 0;
c = &rdev->wiphy.iface_combinations[i];
if (rdev->wiphy.software_iftypes & BIT(iftype))
continue;
for (j = 0; j < c->n_limits; j++) {
+ all_iftypes |= limits[j].types;
if (!(limits[j].types & BIT(iftype)))
continue;
if (limits[j].max < num[iftype])
limits[j].max -= num[iftype];
}
}
- /* yay, it fits */
+
+ /*
+ * Finally check that all iftypes that we're currently
+ * using are actually part of this combination. If they
+ * aren't then we can't use this combination and have
+ * to continue to the next.
+ */
+ if ((all_iftypes & used_iftypes) != used_iftypes)
+ goto cont;
+
+ /*
+ * This combination covered all interface types and
+ * supported the requested numbers, so we're good.
+ */
kfree(limits);
return 0;
cont:
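The added check is a plain bitmask subset test: every interface type currently in use must appear in at least one of the combination's limits. Reduced to its essence (hypothetical wrapper):

static bool combination_covers(u32 all_iftypes, u32 used_iftypes)
{
        /* every bit set in 'used' must also be set in 'all' */
        return (all_iftypes & used_iftypes) == used_iftypes;
}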
if (stream->runtime->state != SNDRV_PCM_STATE_RUNNING)
return -EPERM;
retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_PAUSE_PUSH);
- if (!retval) {
+ if (!retval)
stream->runtime->state = SNDRV_PCM_STATE_PAUSED;
- wake_up(&stream->runtime->sleep);
- }
return retval;
}
if (!retval) {
stream->runtime->state = SNDRV_PCM_STATE_SETUP;
wake_up(&stream->runtime->sleep);
+ stream->runtime->hw_pointer = 0;
+ stream->runtime->app_pointer = 0;
+ stream->runtime->total_bytes_available = 0;
+ stream->runtime->total_bytes_transferred = 0;
}
return retval;
}
static int DELAYED_INIT_MARK azx_first_init(struct azx *chip);
static int DELAYED_INIT_MARK azx_probe_continue(struct azx *chip);
+#ifdef SUPPORT_VGA_SWITCHEROO
static struct pci_dev __devinit *get_bound_vga(struct pci_dev *pci);
-#ifdef SUPPORT_VGA_SWITCHEROO
static void azx_vs_set_state(struct pci_dev *pci,
enum vga_switcheroo_state state)
{
#else
#define init_vga_switcheroo(chip) /* NOP */
#define register_vga_switcheroo(chip) 0
+#define check_hdmi_disabled(pci) false
#endif /* SUPPORT_VGA_SWITCHEROO */
/*
return azx_free(device->device_data);
}
+#ifdef SUPPORT_VGA_SWITCHEROO
/*
* Check for a disabled HDMI controller by vga-switcheroo
*/
struct pci_dev *p = get_bound_vga(pci);
if (p) {
- if (vga_default_device() && p != vga_default_device())
+ if (vga_switcheroo_get_client_state(p) == VGA_SWITCHEROO_OFF)
vga_inactive = true;
pci_dev_put(p);
}
return vga_inactive;
}
+#endif /* SUPPORT_VGA_SWITCHEROO */
/*
* white/black-listing for position_fix
{ PCI_DEVICE(0x6549, 0x1200),
.driver_data = AZX_DRIVER_TERA | AZX_DCAPS_NO_64BIT },
/* Creative X-Fi (CA0110-IBG) */
+ /* CTHDA chips */
+ { PCI_DEVICE(0x1102, 0x0010),
+ .driver_data = AZX_DRIVER_CTHDA | AZX_DCAPS_PRESET_CTHDA },
+ { PCI_DEVICE(0x1102, 0x0012),
+ .driver_data = AZX_DRIVER_CTHDA | AZX_DCAPS_PRESET_CTHDA },
#if !defined(CONFIG_SND_CTXFI) && !defined(CONFIG_SND_CTXFI_MODULE)
/* the following entry conflicts with snd-ctxfi driver,
* as ctxfi driver mutates from HD-audio to native mode with
.driver_data = AZX_DRIVER_CTX | AZX_DCAPS_CTX_WORKAROUND |
AZX_DCAPS_RIRB_PRE_DELAY | AZX_DCAPS_POSFIX_LPIB },
#endif
- /* CTHDA chips */
- { PCI_DEVICE(0x1102, 0x0010),
- .driver_data = AZX_DRIVER_CTHDA | AZX_DCAPS_PRESET_CTHDA },
- { PCI_DEVICE(0x1102, 0x0012),
- .driver_data = AZX_DRIVER_CTHDA | AZX_DCAPS_PRESET_CTHDA },
/* Vortex86MX */
{ PCI_DEVICE(0x17f3, 0x3010), .driver_data = AZX_DRIVER_GENERIC },
/* VMware HDAudio */
static int cx_auto_init(struct hda_codec *codec)
{
struct conexant_spec *spec = codec->spec;
- /*snd_hda_sequence_write(codec, cx_auto_init_verbs);*/
+ snd_hda_gen_apply_verbs(codec);
cx_auto_init_output(codec);
cx_auto_init_input(codec);
cx_auto_init_digital(codec);
alc_fix_pll(codec);
alc_auto_init_amp(codec, spec->init_amp);
+ snd_hda_gen_apply_verbs(codec);
alc_init_special_input_src(codec);
alc_auto_init_std(codec);
ALC662_FIXUP_ASUS_MODE7,
ALC662_FIXUP_ASUS_MODE8,
ALC662_FIXUP_NO_JACK_DETECT,
+ ALC662_FIXUP_ZOTAC_Z68,
};
static const struct alc_fixup alc662_fixups[] = {
.type = ALC_FIXUP_FUNC,
.v.func = alc_fixup_no_jack_detect,
},
+ [ALC662_FIXUP_ZOTAC_Z68] = {
+ .type = ALC_FIXUP_PINS,
+ .v.pins = (const struct alc_pincfg[]) {
+ { 0x1b, 0x02214020 }, /* Front HP */
+ { }
+ }
+ },
};
static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+ SND_PCI_QUIRK(0x19da, 0xa130, "Zotac Z68", ALC662_FIXUP_ZOTAC_Z68),
SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
#if 0
}
static int wm2000_poll_bit(struct i2c_client *i2c,
- unsigned int reg, u8 mask, int timeout)
+ unsigned int reg, u8 mask)
{
+ int timeout = 4000;
int val;
val = wm2000_read(i2c, reg);
static int wm2000_power_up(struct i2c_client *i2c, int analogue)
{
struct wm2000_priv *wm2000 = dev_get_drvdata(&i2c->dev);
- int ret, timeout;
+ int ret;
BUG_ON(wm2000->anc_mode != ANC_OFF);
/* Wait for ANC engine to become ready */
if (!wm2000_poll_bit(i2c, WM2000_REG_ANC_STAT,
- WM2000_ANC_ENG_IDLE, 1)) {
+ WM2000_ANC_ENG_IDLE)) {
dev_err(&i2c->dev, "ANC engine failed to reset\n");
return -ETIMEDOUT;
}
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_BOOT_COMPLETE, 1)) {
+ WM2000_STATUS_BOOT_COMPLETE)) {
dev_err(&i2c->dev, "ANC engine failed to initialise\n");
return -ETIMEDOUT;
}
dev_dbg(&i2c->dev, "Download complete\n");
if (analogue) {
- timeout = 248;
- wm2000_write(i2c, WM2000_REG_ANA_VMID_PU_TIME, timeout / 4);
+ wm2000_write(i2c, WM2000_REG_ANA_VMID_PU_TIME, 248 / 4);
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_ANA_SEQ_INCLUDE |
WM2000_MODE_MOUSE_ENABLE |
WM2000_MODE_THERMAL_ENABLE);
} else {
- timeout = 10;
-
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_MOUSE_ENABLE |
WM2000_MODE_THERMAL_ENABLE);
wm2000_write(i2c, WM2000_REG_SYS_CTL2, WM2000_ANC_INT_N_CLR);
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_MOUSE_ACTIVE, timeout)) {
- dev_err(&i2c->dev, "Timed out waiting for device after %dms\n",
- timeout * 10);
+ WM2000_STATUS_MOUSE_ACTIVE)) {
+ dev_err(&i2c->dev, "Timed out waiting for device\n");
return -ETIMEDOUT;
}
static int wm2000_power_down(struct i2c_client *i2c, int analogue)
{
struct wm2000_priv *wm2000 = dev_get_drvdata(&i2c->dev);
- int timeout;
if (analogue) {
- timeout = 248;
- wm2000_write(i2c, WM2000_REG_ANA_VMID_PD_TIME, timeout / 4);
+ wm2000_write(i2c, WM2000_REG_ANA_VMID_PD_TIME, 248 / 4);
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_ANA_SEQ_INCLUDE |
WM2000_MODE_POWER_DOWN);
} else {
- timeout = 10;
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_POWER_DOWN);
}
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_POWER_DOWN_COMPLETE, timeout)) {
+ WM2000_STATUS_POWER_DOWN_COMPLETE)) {
dev_err(&i2c->dev, "Timeout waiting for ANC power down\n");
return -ETIMEDOUT;
}
if (!wm2000_poll_bit(i2c, WM2000_REG_ANC_STAT,
- WM2000_ANC_ENG_IDLE, 1)) {
+ WM2000_ANC_ENG_IDLE)) {
dev_err(&i2c->dev, "Timeout waiting for ANC engine idle\n");
return -ETIMEDOUT;
}
}
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_ANC_DISABLED, 10)) {
+ WM2000_STATUS_ANC_DISABLED)) {
dev_err(&i2c->dev, "Timeout waiting for ANC disable\n");
return -ETIMEDOUT;
}
if (!wm2000_poll_bit(i2c, WM2000_REG_ANC_STAT,
- WM2000_ANC_ENG_IDLE, 1)) {
+ WM2000_ANC_ENG_IDLE)) {
dev_err(&i2c->dev, "Timeout waiting for ANC engine idle\n");
return -ETIMEDOUT;
}
wm2000_write(i2c, WM2000_REG_SYS_CTL2, WM2000_ANC_INT_N_CLR);
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_MOUSE_ACTIVE, 10)) {
+ WM2000_STATUS_MOUSE_ACTIVE)) {
dev_err(&i2c->dev, "Timed out waiting for MOUSE\n");
return -ETIMEDOUT;
}
static int wm2000_enter_standby(struct i2c_client *i2c, int analogue)
{
struct wm2000_priv *wm2000 = dev_get_drvdata(&i2c->dev);
- int timeout;
BUG_ON(wm2000->anc_mode != ANC_ACTIVE);
if (analogue) {
- timeout = 248;
- wm2000_write(i2c, WM2000_REG_ANA_VMID_PD_TIME, timeout / 4);
+ wm2000_write(i2c, WM2000_REG_ANA_VMID_PD_TIME, 248 / 4);
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_ANA_SEQ_INCLUDE |
WM2000_MODE_THERMAL_ENABLE |
WM2000_MODE_STANDBY_ENTRY);
} else {
- timeout = 10;
-
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_THERMAL_ENABLE |
WM2000_MODE_STANDBY_ENTRY);
}
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_ANC_DISABLED, timeout)) {
+ WM2000_STATUS_ANC_DISABLED)) {
dev_err(&i2c->dev,
"Timed out waiting for ANC disable after 1ms\n");
return -ETIMEDOUT;
}
- if (!wm2000_poll_bit(i2c, WM2000_REG_ANC_STAT, WM2000_ANC_ENG_IDLE,
- 1)) {
+ if (!wm2000_poll_bit(i2c, WM2000_REG_ANC_STAT, WM2000_ANC_ENG_IDLE)) {
dev_err(&i2c->dev,
- "Timed out waiting for standby after %dms\n",
- timeout * 10);
+ "Timed out waiting for standby\n");
return -ETIMEDOUT;
}
static int wm2000_exit_standby(struct i2c_client *i2c, int analogue)
{
struct wm2000_priv *wm2000 = dev_get_drvdata(&i2c->dev);
- int timeout;
BUG_ON(wm2000->anc_mode != ANC_STANDBY);
wm2000_write(i2c, WM2000_REG_SYS_CTL1, 0);
if (analogue) {
- timeout = 248;
- wm2000_write(i2c, WM2000_REG_ANA_VMID_PU_TIME, timeout / 4);
+ wm2000_write(i2c, WM2000_REG_ANA_VMID_PU_TIME, 248 / 4);
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_ANA_SEQ_INCLUDE |
WM2000_MODE_THERMAL_ENABLE |
WM2000_MODE_MOUSE_ENABLE);
} else {
- timeout = 10;
-
wm2000_write(i2c, WM2000_REG_SYS_MODE_CNTRL,
WM2000_MODE_THERMAL_ENABLE |
WM2000_MODE_MOUSE_ENABLE);
wm2000_write(i2c, WM2000_REG_SYS_CTL2, WM2000_ANC_INT_N_CLR);
if (!wm2000_poll_bit(i2c, WM2000_REG_SYS_STATUS,
- WM2000_STATUS_MOUSE_ACTIVE, timeout)) {
- dev_err(&i2c->dev, "Timed out waiting for MOUSE after %dms\n",
- timeout * 10);
+ WM2000_STATUS_MOUSE_ACTIVE)) {
+ dev_err(&i2c->dev, "Timed out waiting for MOUSE\n");
return -ETIMEDOUT;
}
#define WM8994_NUM_DRC 3
#define WM8994_NUM_EQ 3
+static struct {
+ unsigned int reg;
+ unsigned int mask;
+} wm8994_vu_bits[] = {
+ { WM8994_LEFT_LINE_INPUT_1_2_VOLUME, WM8994_IN1_VU },
+ { WM8994_RIGHT_LINE_INPUT_1_2_VOLUME, WM8994_IN1_VU },
+ { WM8994_LEFT_LINE_INPUT_3_4_VOLUME, WM8994_IN2_VU },
+ { WM8994_RIGHT_LINE_INPUT_3_4_VOLUME, WM8994_IN2_VU },
+ { WM8994_SPEAKER_VOLUME_LEFT, WM8994_SPKOUT_VU },
+ { WM8994_SPEAKER_VOLUME_RIGHT, WM8994_SPKOUT_VU },
+ { WM8994_LEFT_OUTPUT_VOLUME, WM8994_HPOUT1_VU },
+ { WM8994_RIGHT_OUTPUT_VOLUME, WM8994_HPOUT1_VU },
+ { WM8994_LEFT_OPGA_VOLUME, WM8994_MIXOUT_VU },
+ { WM8994_RIGHT_OPGA_VOLUME, WM8994_MIXOUT_VU },
+
+ { WM8994_AIF1_DAC1_LEFT_VOLUME, WM8994_AIF1DAC1_VU },
+ { WM8994_AIF1_DAC1_RIGHT_VOLUME, WM8994_AIF1DAC1_VU },
+ { WM8994_AIF1_DAC2_LEFT_VOLUME, WM8994_AIF1DAC2_VU },
+ { WM8994_AIF1_DAC2_RIGHT_VOLUME, WM8994_AIF1DAC2_VU },
+ { WM8994_AIF2_DAC_LEFT_VOLUME, WM8994_AIF2DAC_VU },
+ { WM8994_AIF2_DAC_RIGHT_VOLUME, WM8994_AIF2DAC_VU },
+ { WM8994_AIF1_ADC1_LEFT_VOLUME, WM8994_AIF1ADC1_VU },
+ { WM8994_AIF1_ADC1_RIGHT_VOLUME, WM8994_AIF1ADC1_VU },
+ { WM8994_AIF1_ADC2_LEFT_VOLUME, WM8994_AIF1ADC2_VU },
+ { WM8994_AIF1_ADC2_RIGHT_VOLUME, WM8994_AIF1ADC2_VU },
+ { WM8994_AIF2_ADC_LEFT_VOLUME, WM8994_AIF2ADC_VU },
+ { WM8994_AIF2_ADC_RIGHT_VOLUME, WM8994_AIF1ADC2_VU },
+ { WM8994_DAC1_LEFT_VOLUME, WM8994_DAC1_VU },
+ { WM8994_DAC1_RIGHT_VOLUME, WM8994_DAC1_VU },
+ { WM8994_DAC2_LEFT_VOLUME, WM8994_DAC2_VU },
+ { WM8994_DAC2_RIGHT_VOLUME, WM8994_DAC2_VU },
+};
+
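This table replaces the open-coded run of snd_soc_update_bits() calls removed further down, and feeds the same idiom at probe and in the DAPM power-up events: rewriting each volume register with its cached value, in which the self-clearing _VU bit was set at probe, re-latches pending volume updates once the relevant clock is running. A hedged sketch of the loop it drives:

static void wm8994_latch_vu(struct snd_soc_codec *codec)
{
        int i;

        /* write back the cached value; its _VU bit applies the update */
        for (i = 0; i < ARRAY_SIZE(wm8994_vu_bits); i++)
                snd_soc_write(codec, wm8994_vu_bits[i].reg,
                              snd_soc_read(codec, wm8994_vu_bits[i].reg));
}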
static int wm8994_drc_base[] = {
WM8994_AIF1_DRC1_1,
WM8994_AIF1_DRC2_1,
struct snd_soc_codec *codec = w->codec;
struct wm8994 *control = codec->control_data;
int mask = WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC1R_ENA;
+ int i;
int dac;
int adc;
int val;
WM8994_AIF1DAC2L_ENA);
break;
+ case SND_SOC_DAPM_POST_PMU:
+ for (i = 0; i < ARRAY_SIZE(wm8994_vu_bits); i++)
+ snd_soc_write(codec, wm8994_vu_bits[i].reg,
+ snd_soc_read(codec,
+ wm8994_vu_bits[i].reg));
+ break;
+
case SND_SOC_DAPM_PRE_PMD:
case SND_SOC_DAPM_POST_PMD:
snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5,
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ int i;
int dac;
int adc;
int val;
WM8994_AIF2DACR_ENA);
break;
+ case SND_SOC_DAPM_POST_PMU:
+ for (i = 0; i < ARRAY_SIZE(wm8994_vu_bits); i++)
+ snd_soc_write(codec, wm8994_vu_bits[i].reg,
+ snd_soc_read(codec,
+ wm8994_vu_bits[i].reg));
+ break;
+
case SND_SOC_DAPM_PRE_PMD:
case SND_SOC_DAPM_POST_PMD:
snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5,
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
if (wm8994->aif1clk_enable) {
- aif1clk_ev(w, kcontrol, event);
+ aif1clk_ev(w, kcontrol, SND_SOC_DAPM_PRE_PMU);
snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1,
WM8994_AIF1CLK_ENA_MASK,
WM8994_AIF1CLK_ENA);
+ aif1clk_ev(w, kcontrol, SND_SOC_DAPM_POST_PMU);
wm8994->aif1clk_enable = 0;
}
if (wm8994->aif2clk_enable) {
- aif2clk_ev(w, kcontrol, event);
+ aif2clk_ev(w, kcontrol, SND_SOC_DAPM_PRE_PMU);
snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1,
WM8994_AIF2CLK_ENA_MASK,
WM8994_AIF2CLK_ENA);
+ aif2clk_ev(w, kcontrol, SND_SOC_DAPM_POST_PMU);
wm8994->aif2clk_enable = 0;
}
break;
switch (event) {
case SND_SOC_DAPM_POST_PMD:
if (wm8994->aif1clk_disable) {
+ aif1clk_ev(w, kcontrol, SND_SOC_DAPM_PRE_PMD);
snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1,
WM8994_AIF1CLK_ENA_MASK, 0);
- aif1clk_ev(w, kcontrol, event);
+ aif1clk_ev(w, kcontrol, SND_SOC_DAPM_POST_PMD);
wm8994->aif1clk_disable = 0;
}
if (wm8994->aif2clk_disable) {
+ aif2clk_ev(w, kcontrol, SND_SOC_DAPM_PRE_PMD);
snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1,
WM8994_AIF2CLK_ENA_MASK, 0);
- aif2clk_ev(w, kcontrol, event);
+ aif2clk_ev(w, kcontrol, SND_SOC_DAPM_POST_PMD);
wm8994->aif2clk_disable = 0;
}
break;
static const struct snd_soc_dapm_widget wm8994_lateclk_widgets[] = {
SND_SOC_DAPM_SUPPLY("AIF1CLK", WM8994_AIF1_CLOCKING_1, 0, 0, aif1clk_ev,
- SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD),
+ SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU |
+ SND_SOC_DAPM_PRE_PMD),
SND_SOC_DAPM_SUPPLY("AIF2CLK", WM8994_AIF2_CLOCKING_1, 0, 0, aif2clk_ev,
- SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD),
+ SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU |
+ SND_SOC_DAPM_PRE_PMD),
SND_SOC_DAPM_PGA("Direct Voice", SND_SOC_NOPM, 0, 0, NULL, 0),
SND_SOC_DAPM_MIXER("SPKL", WM8994_POWER_MANAGEMENT_3, 8, 0,
left_speaker_mixer, ARRAY_SIZE(left_speaker_mixer)),
pm_runtime_put(codec->dev);
- /* Latch volume updates (right only; we always do left then right). */
- snd_soc_update_bits(codec, WM8994_AIF1_DAC1_LEFT_VOLUME,
- WM8994_AIF1DAC1_VU, WM8994_AIF1DAC1_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_DAC1_RIGHT_VOLUME,
- WM8994_AIF1DAC1_VU, WM8994_AIF1DAC1_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_DAC2_LEFT_VOLUME,
- WM8994_AIF1DAC2_VU, WM8994_AIF1DAC2_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_DAC2_RIGHT_VOLUME,
- WM8994_AIF1DAC2_VU, WM8994_AIF1DAC2_VU);
- snd_soc_update_bits(codec, WM8994_AIF2_DAC_LEFT_VOLUME,
- WM8994_AIF2DAC_VU, WM8994_AIF2DAC_VU);
- snd_soc_update_bits(codec, WM8994_AIF2_DAC_RIGHT_VOLUME,
- WM8994_AIF2DAC_VU, WM8994_AIF2DAC_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_ADC1_LEFT_VOLUME,
- WM8994_AIF1ADC1_VU, WM8994_AIF1ADC1_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_ADC1_RIGHT_VOLUME,
- WM8994_AIF1ADC1_VU, WM8994_AIF1ADC1_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_ADC2_LEFT_VOLUME,
- WM8994_AIF1ADC2_VU, WM8994_AIF1ADC2_VU);
- snd_soc_update_bits(codec, WM8994_AIF1_ADC2_RIGHT_VOLUME,
- WM8994_AIF1ADC2_VU, WM8994_AIF1ADC2_VU);
- snd_soc_update_bits(codec, WM8994_AIF2_ADC_LEFT_VOLUME,
- WM8994_AIF2ADC_VU, WM8994_AIF1ADC2_VU);
- snd_soc_update_bits(codec, WM8994_AIF2_ADC_RIGHT_VOLUME,
- WM8994_AIF2ADC_VU, WM8994_AIF1ADC2_VU);
- snd_soc_update_bits(codec, WM8994_DAC1_LEFT_VOLUME,
- WM8994_DAC1_VU, WM8994_DAC1_VU);
- snd_soc_update_bits(codec, WM8994_DAC1_RIGHT_VOLUME,
- WM8994_DAC1_VU, WM8994_DAC1_VU);
- snd_soc_update_bits(codec, WM8994_DAC2_LEFT_VOLUME,
- WM8994_DAC2_VU, WM8994_DAC2_VU);
- snd_soc_update_bits(codec, WM8994_DAC2_RIGHT_VOLUME,
- WM8994_DAC2_VU, WM8994_DAC2_VU);
+ /* Latch volume update bits */
+ for (i = 0; i < ARRAY_SIZE(wm8994_vu_bits); i++)
+ snd_soc_update_bits(codec, wm8994_vu_bits[i].reg,
+ wm8994_vu_bits[i].mask,
+ wm8994_vu_bits[i].mask);
/* Set the low bit of the 3D stereo depth so TLV matches */
snd_soc_update_bits(codec, WM8994_AIF1_DAC1_FILTERS_2,
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/pinctrl/consumer.h>
#include "imx-audmux.h"
static int __devinit imx_audmux_probe(struct platform_device *pdev)
{
struct resource *res;
+ struct pinctrl *pinctrl;
const struct of_device_id *of_id =
of_match_device(imx_audmux_dt_ids, &pdev->dev);
if (!audmux_base)
return -EADDRNOTAVAIL;
+ pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
+ if (IS_ERR(pinctrl)) {
+ dev_err(&pdev->dev, "setup pinctrl failed!");
+ return PTR_ERR(pinctrl);
+ }
+
audmux_clk = clk_get(&pdev->dev, "audmux");
if (IS_ERR(audmux_clk)) {
dev_dbg(&pdev->dev, "cannot get clock: %ld\n",
/* do we need to add this widget to the list ? */
if (list) {
int err;
- err = dapm_list_add_widget(list, path->sink);
+ err = dapm_list_add_widget(list, path->source);
if (err < 0) {
dev_err(widget->dapm->dev, "could not add widget %s\n",
widget->name);
if (stream == SNDRV_PCM_STREAM_PLAYBACK)
paths = is_connected_output_ep(dai->playback_widget, list);
else
- paths = is_connected_input_ep(dai->playback_widget, list);
+ paths = is_connected_input_ep(dai->capture_widget, list);
trace_snd_soc_dapm_connected(paths, stream);
dapm_clear_walk(&card->dapm);
for (i = 0; i < card->num_links; i++) {
be = &card->rtd[i];
+ if (!be->dai_link->no_pcm)
+ continue;
+
if (be->cpu_dai->playback_widget == widget ||
be->codec_dai->playback_widget == widget)
return be;
for (i = 0; i < card->num_links; i++) {
be = &card->rtd[i];
+ if (!be->dai_link->no_pcm)
+ continue;
+
if (be->cpu_dai->capture_widget == widget ||
be->codec_dai->capture_widget == widget)
return be;
MODULE_DESCRIPTION("Tegra30 AHUB driver");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:" DRV_NAME);
+MODULE_DEVICE_TABLE(of, tegra30_ahub_of_match);
unsigned long unlink_mask; /* bitmask of unlinked urbs */
/* data and sync endpoints for this stream */
+ unsigned int ep_num; /* the endpoint number */
struct snd_usb_endpoint *data_endpoint;
struct snd_usb_endpoint *sync_endpoint;
unsigned long flags;
subs->formats |= fp->formats;
subs->num_formats++;
subs->fmt_type = fp->fmt_type;
+ subs->ep_num = fp->endpoint;
}
/*
if (as->fmt_type != fp->fmt_type)
continue;
subs = &as->substream[stream];
- if (!subs->data_endpoint)
- continue;
- if (subs->data_endpoint->ep_num == fp->endpoint) {
+ if (subs->ep_num == fp->endpoint) {
list_add_tail(&fp->list, &subs->fmt_list);
subs->num_formats++;
subs->formats |= fp->formats;
if (as->fmt_type != fp->fmt_type)
continue;
subs = &as->substream[stream];
- if (subs->data_endpoint)
+ if (subs->ep_num)
continue;
err = snd_pcm_new_stream(as->pcm, stream, 1);
if (err < 0)