extended tables themselves, and also PASID support. With
this option set, extended tables will not be used even
on hardware which claims to support them.
++++++ ++ tboot_noforce [Default Off]
++++++ ++		Do not force the Intel IOMMU to be enabled under tboot.
++++++ ++ By default, tboot will force Intel IOMMU on, which
++++++ ++ could harm performance of some high-throughput
++++++ ++ devices like 40GBit network cards, even if identity
++++++ ++ mapping is enabled.
++++++ ++ Note that using this option lowers the security
++++++ ++ provided by tboot because it makes the system
++++++ ++ vulnerable to DMA attacks.
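For example, assuming this is a sub-option of intel_iommu= (as the option-parsing change to intel-iommu.c further below suggests), booting with

	intel_iommu=tboot_noforce

stops tboot from forcing the IOMMU on; translation then follows the usual intel_iommu=/DMAR policy, and the kernel itself tears down the protected memory regions that tboot programmed before SENTER.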
intel_idle.max_cstate= [KNL,HW,ACPI,X86]
			0 disables intel_idle and falls back on acpi_idle.
nobypass [PPC/POWERNV]
Disable IOMMU bypass, using IOMMU for PCI devices.
++++ ++++ iommu.passthrough=
++++ ++++ [ARM64] Configure DMA to bypass the IOMMU by default.
++++ ++++ Format: { "0" | "1" }
++++ ++++ 0 - Use IOMMU translation for DMA.
++++ ++++ 1 - Bypass the IOMMU for DMA.
++++ ++++ unset - Use IOMMU translation for DMA.
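For example, booting an arm64 system with iommu.passthrough=1 makes the IOMMU core allocate default domains of type IOMMU_DOMAIN_IDENTITY (see the iommu_set_def_domain_type() early_param added further below), so DMA from devices in those groups bypasses translation unless a driver attaches a domain of its own.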
io7= [HW] IO7 for Marvel based alpha systems
See comment before marvel_specify_io7 in
kernel and module base offset ASLR (Address Space
Layout Randomization).
+ ++ ++ + kasan_multi_shot
+ ++ ++ +	[KNL] Force KASAN (Kernel Address Sanitizer) to print
+ ++ ++ +	a report on every invalid memory access. Without this
+ ++ ++ +	parameter, KASAN will print a report only for the
+ ++ ++ +	first invalid access.
+ ++ ++ +
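For example, kasan_multi_shot is mainly useful together with the KASAN test module, which deliberately triggers many invalid accesses in one boot; with the parameter set, each of them produces a report rather than only the first.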
keepinitrd [HW,ARM]
kernelcore= [KNL,X86,IA-64,PPC]
BPF (Safe dynamic programs and tools)
M: Alexei Starovoitov <ast@kernel.org>
+++++++ +M: Daniel Borkmann <daniel@iogearbox.net>
L: netdev@vger.kernel.org
L: linux-kernel@vger.kernel.org
S: Supported
+++++++ +F: arch/x86/net/bpf_jit*
+++++++ +F: Documentation/networking/filter.txt
+++++++ +F: include/linux/bpf*
+++++++ +F: include/linux/filter.h
+++++++ +F: include/uapi/linux/bpf*
+++++++ +F: include/uapi/linux/filter.h
F: kernel/bpf/
------- -F: tools/testing/selftests/bpf/
+++++++ +F: kernel/trace/bpf_trace.c
F: lib/test_bpf.c
+++++++ +F: net/bpf/
+++++++ +F: net/core/filter.c
+++++++ +F: net/sched/act_bpf.c
+++++++ +F: net/sched/cls_bpf.c
+++++++ +F: samples/bpf/
+++++++ +F: tools/net/bpf*
+++++++ +F: tools/testing/selftests/bpf/
BROADCOM B44 10/100 ETHERNET DRIVER
M: Michael Chan <michael.chan@broadcom.com>
CISCO VIC ETHERNET NIC DRIVER
M: Christian Benvenuti <benve@cisco.com>
- -- -M: Sujith Sankar <ssujith@cisco.com>
M: Govindarajulu Varadarajan <_govind@gmx.com>
M: Neel Patel <neepatel@cisco.com>
S: Supported
F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
------- -DRIVER CORE, KOBJECTS, DEBUGFS, KERNFS AND SYSFS
+++++++ +DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
S: Supported
F: Documentation/kobject.txt
F: drivers/base/
F: fs/debugfs/
------- -F: fs/kernfs/
F: fs/sysfs/
F: include/linux/debugfs.h
F: include/linux/kobj*
S: Maintained
F: drivers/edac/mpc85xx_edac.[ch]
+ ++ ++ +EDAC-PND2
+ ++ ++ +M: Tony Luck <tony.luck@intel.com>
+ ++ ++ +L: linux-edac@vger.kernel.org
+ ++ ++ +S: Maintained
+ ++ ++ +F: drivers/edac/pnd2_edac.[ch]
+ ++ ++ +
EDAC-PASEMI
M: Egor Martovetsky <egor@pasemi.com>
L: linux-edac@vger.kernel.org
F: net/bridge/
ETHERNET PHY LIBRARY
+++++++ +M: Andrew Lunn <andrew@lunn.ch>
M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
S: Maintained
F: Documentation/devicetree/bindings/iommu/
F: drivers/iommu/
++++++++ F: include/linux/iommu.h
++++++++ F: include/linux/iova.h
IP MASQUERADING
M: Juanjo Ciarlante <jjciarla@raiz.uncu.edu.ar>
F: fs/autofs4/
KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
+++++++ +M: Masahiro Yamada <yamada.masahiro@socionext.com>
M: Michal Marek <mmarek@suse.com>
------- -T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git for-next
------- -T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git rc-fixes
+++++++ +T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
L: linux-kbuild@vger.kernel.org
S: Maintained
F: Documentation/kbuild/
F: arch/mips/include/asm/kvm*
F: arch/mips/kvm/
+++++++ +KERNFS
+++++++ +M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+++++++ +M: Tejun Heo <tj@kernel.org>
+++++++ +T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
+++++++ +S: Supported
+++++++ +F: include/linux/kernfs.h
+++++++ +F: fs/kernfs/
+++++++ +
KEXEC
M: Eric Biederman <ebiederm@xmission.com>
W: http://kernel.org/pub/linux/utils/kernel/kexec/
F: net/mac80211/
F: drivers/net/wireless/mac80211_hwsim.[ch]
- -- -MACVLAN DRIVER
- -- -M: Patrick McHardy <kaber@trash.net>
- -- -L: netdev@vger.kernel.org
- -- -S: Maintained
- -- -F: drivers/net/macvlan.c
- -- -F: include/linux/if_macvlan.h
- -- -
MAILBOX API
M: Jassi Brar <jassisinghbrar@gmail.com>
L: linux-kernel@vger.kernel.org
MARVELL MWIFIEX WIRELESS DRIVER
M: Amitkumar Karwar <akarwar@marvell.com>
M: Nishant Sarmukadam <nishants@marvell.com>
+ ++ +M: Ganapathi Bhat <gbhat@marvell.com>
+ ++ +M: Xinming Hu <huxm@marvell.com>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/marvell/mwifiex/
Q: http://patchwork.ozlabs.org/project/netdev/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+++++++ +B: mailto:netdev@vger.kernel.org
S: Maintained
F: net/
F: include/net/
F: block/partitions/ibm.c
S390 NETWORK DRIVERS
+++++++ +M: Julian Wiedmann <jwi@linux.vnet.ibm.com>
M: Ursula Braun <ubraun@linux.vnet.ibm.com>
L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/scsi/zfcp_*
S390 IUCV NETWORK LAYER
+++++++ +M: Julian Wiedmann <jwi@linux.vnet.ibm.com>
M: Ursula Braun <ubraun@linux.vnet.ibm.com>
L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/
F: include/linux/clk/ti.h
TI ETHERNET SWITCH DRIVER (CPSW)
------- -M: Mugunthan V N <mugunthanvnm@ti.com>
R: Grygorii Strashko <grygorii.strashko@ti.com>
L: linux-omap@vger.kernel.org
L: netdev@vger.kernel.org
F: tools/virtio/
F: drivers/net/virtio_net.c
F: drivers/block/virtio_blk.c
------- -F: include/linux/virtio_*.h
+++++++ +F: include/linux/virtio*.h
F: include/uapi/linux/virtio_*.h
F: drivers/crypto/virtio/
S: Maintained
F: drivers/media/platform/vivid/*
- -- -VLAN (802.1Q)
- -- -M: Patrick McHardy <kaber@trash.net>
- -- -L: netdev@vger.kernel.org
- -- -S: Maintained
- -- -F: drivers/net/macvlan.c
- -- -F: include/linux/if_*vlan.h
- -- -F: net/8021q/
- -- -
VLYNQ BUS
M: Florian Fainelli <f.fainelli@gmail.com>
L: openwrt-devel@lists.openwrt.org (subscribers-only)
__arm_dma_free(dev, size, cpu_addr, handle, attrs, true);
}
+++++++ +/*
+++++++ + * The whole dma_get_sgtable() idea is fundamentally unsafe - it seems
+++++++ + * that the intention is to allow exporting memory allocated via the
+++++++ + * coherent DMA APIs through the dma_buf API, which only accepts a
+++++++ + * scattertable. This presents a couple of problems:
+++++++ + * 1. Not all memory allocated via the coherent DMA APIs is backed by
+++++++ + * a struct page
+++++++ + * 2. Passing coherent DMA memory into the streaming APIs is not allowed
+++++++ + * as we will try to flush the memory through a different alias to that
+++++++ + * actually being used (and the flushes are redundant.)
+++++++ + */
int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size,
unsigned long attrs)
{
------- - struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
+++++++ + unsigned long pfn = dma_to_pfn(dev, handle);
+++++++ + struct page *page;
int ret;
+++++++ + /* If the PFN is not valid, we do not have a struct page */
+++++++ + if (!pfn_valid(pfn))
+++++++ + return -ENXIO;
+++++++ +
+++++++ + page = pfn_to_page(pfn);
+++++++ +
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (unlikely(ret))
return ret;
const struct dma_map_ops *dma_ops;
dev->archdata.dma_coherent = coherent;
+++++ +++
+++++ +++ /*
+++++ +++ * Don't override the dma_ops if they have already been set. Ideally
+++++ +++ * this should be the only location where dma_ops are set, remove this
+++++ +++ * check when all other callers of set_dma_ops will have disappeared.
+++++ +++ */
+++++ +++ if (dev->dma_ops)
+++++ +++ return;
+++++ +++
if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
dma_ops = arm_get_iommu_dma_map_ops(coherent);
else
#include <linux/dma-contiguous.h>
#include <linux/vmalloc.h>
#include <linux/swiotlb.h>
++++++++ #include <linux/pci.h>
#include <asm/cacheflush.h>
.mapping_error = iommu_dma_mapping_error,
};
----- ---/*
----- --- * TODO: Right now __iommu_setup_dma_ops() gets called too early to do
----- --- * everything it needs to - the device is only partially created and the
----- --- * IOMMU driver hasn't seen it yet, so it can't have a group. Thus we
----- --- * need this delayed attachment dance. Once IOMMU probe ordering is sorted
----- --- * to move the arch_setup_dma_ops() call later, all the notifier bits below
----- --- * become unnecessary, and will go away.
----- --- */
----- ---struct iommu_dma_notifier_data {
----- --- struct list_head list;
----- --- struct device *dev;
----- --- const struct iommu_ops *ops;
----- --- u64 dma_base;
----- --- u64 size;
----- ---};
----- ---static LIST_HEAD(iommu_dma_masters);
----- ---static DEFINE_MUTEX(iommu_dma_notifier_lock);
+++++ +++static int __init __iommu_dma_init(void)
+++++ +++{
+++++ +++ return iommu_dma_init();
+++++ +++}
+++++ +++arch_initcall(__iommu_dma_init);
----- ---static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
----- --- u64 dma_base, u64 size)
+++++ +++static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+++++ +++ const struct iommu_ops *ops)
{
----- --- struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+++++ +++ struct iommu_domain *domain;
+++++ +++
+++++ +++ if (!ops)
+++++ +++ return;
/*
----- --- * If the IOMMU driver has the DMA domain support that we require,
----- --- * then the IOMMU core will have already configured a group for this
----- --- * device, and allocated the default domain for that group.
+++++ +++ * The IOMMU core code allocates the default DMA domain, which the
+++++ +++ * underlying IOMMU driver needs to support via the dma-iommu layer.
*/
+++++ +++ domain = iommu_get_domain_for_dev(dev);
+++++ +++
if (!domain)
goto out_err;
dev->dma_ops = &iommu_dma_ops;
}
----- --- return true;
+++++ +++ return;
+++++ +++
out_err:
----- --- pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
+++++ +++ pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
dev_name(dev));
----- --- return false;
----- ---}
----- ---
----- ---static void queue_iommu_attach(struct device *dev, const struct iommu_ops *ops,
----- --- u64 dma_base, u64 size)
----- ---{
----- --- struct iommu_dma_notifier_data *iommudata;
----- ---
----- --- iommudata = kzalloc(sizeof(*iommudata), GFP_KERNEL);
----- --- if (!iommudata)
----- --- return;
----- ---
----- --- iommudata->dev = dev;
----- --- iommudata->ops = ops;
----- --- iommudata->dma_base = dma_base;
----- --- iommudata->size = size;
----- ---
----- --- mutex_lock(&iommu_dma_notifier_lock);
----- --- list_add(&iommudata->list, &iommu_dma_masters);
----- --- mutex_unlock(&iommu_dma_notifier_lock);
----- ---}
----- ---
----- ---static int __iommu_attach_notifier(struct notifier_block *nb,
----- --- unsigned long action, void *data)
----- ---{
----- --- struct iommu_dma_notifier_data *master, *tmp;
----- ---
----- --- if (action != BUS_NOTIFY_BIND_DRIVER)
----- --- return 0;
----- ---
----- --- mutex_lock(&iommu_dma_notifier_lock);
----- --- list_for_each_entry_safe(master, tmp, &iommu_dma_masters, list) {
----- --- if (data == master->dev && do_iommu_attach(master->dev,
----- --- master->ops, master->dma_base, master->size)) {
----- --- list_del(&master->list);
----- --- kfree(master);
----- --- break;
----- --- }
----- --- }
----- --- mutex_unlock(&iommu_dma_notifier_lock);
----- --- return 0;
----- ---}
----- ---
----- ---static int __init register_iommu_dma_ops_notifier(struct bus_type *bus)
----- ---{
----- --- struct notifier_block *nb = kzalloc(sizeof(*nb), GFP_KERNEL);
----- --- int ret;
----- ---
----- --- if (!nb)
----- --- return -ENOMEM;
----- ---
----- --- nb->notifier_call = __iommu_attach_notifier;
----- ---
----- --- ret = bus_register_notifier(bus, nb);
----- --- if (ret) {
----- --- pr_warn("Failed to register DMA domain notifier; IOMMU DMA ops unavailable on bus '%s'\n",
----- --- bus->name);
----- --- kfree(nb);
----- --- }
----- --- return ret;
----- ---}
----- ---
----- ---static int __init __iommu_dma_init(void)
----- ---{
----- --- int ret;
----- ---
----- --- ret = iommu_dma_init();
----- --- if (!ret)
----- --- ret = register_iommu_dma_ops_notifier(&platform_bus_type);
----- --- if (!ret)
----- --- ret = register_iommu_dma_ops_notifier(&amba_bustype);
----- ---#ifdef CONFIG_PCI
----- --- if (!ret)
----- --- ret = register_iommu_dma_ops_notifier(&pci_bus_type);
----- ---#endif
----- --- return ret;
----- ---}
----- ---arch_initcall(__iommu_dma_init);
----- ---
----- ---static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
----- --- const struct iommu_ops *ops)
----- ---{
----- --- struct iommu_group *group;
----- ---
----- --- if (!ops)
----- --- return;
----- --- /*
----- --- * TODO: As a concession to the future, we're ready to handle being
----- --- * called both early and late (i.e. after bus_add_device). Once all
----- --- * the platform bus code is reworked to call us late and the notifier
----- --- * junk above goes away, move the body of do_iommu_attach here.
----- --- */
----- --- group = iommu_group_get(dev);
----- --- if (group) {
----- --- do_iommu_attach(dev, ops, dma_base, size);
----- --- iommu_group_put(group);
----- --- } else {
----- --- queue_iommu_attach(dev, ops, dma_base, size);
----- --- }
}
void arch_teardown_dma_ops(struct device *dev)
return -ENODEV;
/*
------- - * If the device has a _HID (or _CID) returning a valid ACPI/PNP
------- - * device ID, it is better to make it look less attractive here, so that
------- - * the other device with the same _ADR value (that may not have a valid
------- - * device ID) can be matched going forward. [This means a second spec
------- - * violation in a row, so whatever we do here is best effort anyway.]
+++++++ + * If the device has a _HID returning a valid ACPI/PNP device ID, it is
+++++++ + * better to make it look less attractive here, so that the other device
+++++++ + * with the same _ADR value (that may not have a valid device ID) can be
+++++++ + * matched going forward. [This means a second spec violation in a row,
+++++++ + * so whatever we do here is best effort anyway.]
*/
------- - return sta_present && list_empty(&adev->pnp.ids) ?
+++++++ + return sta_present && !adev->pnp.type.platform_id ?
FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE;
}
struct list_head *physnode_list;
unsigned int node_id;
int retval = -EINVAL;
----- --- enum dev_dma_attr attr;
if (has_acpi_companion(dev)) {
if (acpi_dev) {
if (!has_acpi_companion(dev))
ACPI_COMPANION_SET(dev, acpi_dev);
----- --- attr = acpi_get_dma_attr(acpi_dev);
----- --- if (attr != DEV_DMA_NOT_SUPPORTED)
----- --- acpi_dma_configure(dev, attr);
----- ---
acpi_physnode_link_name(physical_node_name, node_id);
retval = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj,
physical_node_name);
* @dev: The pointer to the device
* @attr: device dma attributes
*/
----- ---void acpi_dma_configure(struct device *dev, enum dev_dma_attr attr)
+++++ +++int acpi_dma_configure(struct device *dev, enum dev_dma_attr attr)
{
const struct iommu_ops *iommu;
+++++ +++ u64 size;
iort_set_dma_mask(dev);
iommu = iort_iommu_configure(dev);
+++++ +++ if (IS_ERR(iommu))
+++++ +++ return PTR_ERR(iommu);
+++++ +++ size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
/*
* Assume dma valid range starts at 0 and covers the whole
* coherent_dma_mask.
*/
----- --- arch_setup_dma_ops(dev, 0, dev->coherent_dma_mask + 1, iommu,
----- --- attr == DEV_DMA_COHERENT);
+++++ +++ arch_setup_dma_ops(dev, 0, size, iommu, attr == DEV_DMA_COHERENT);
+++++ +++
+++++ +++ return 0;
}
EXPORT_SYMBOL_GPL(acpi_dma_configure);
return;
device->flags.match_driver = true;
------- - if (!ret) {
------- - ret = device_attach(&device->dev);
------- - if (ret < 0)
------- - return;
------- -
------- - if (!ret && device->pnp.type.platform_id)
------- - acpi_default_enumeration(device);
+++++++ + if (ret > 0) {
+++++++ + acpi_device_set_enumerated(device);
+++++++ + goto ok;
}
+++++++ + ret = device_attach(&device->dev);
+++++++ + if (ret < 0)
+++++++ + return;
+++++++ +
+++++++ + if (ret > 0 || !device->pnp.type.platform_id)
+++++++ + acpi_device_set_enumerated(device);
+++++++ + else
+++++++ + acpi_default_enumeration(device);
+++++++ +
ok:
list_for_each_entry(child, &device->children, node)
acpi_bus_attach(child);
};
struct arm_smmu_strtab_ent {
---- ---- bool valid;
---- ----
---- ---- bool bypass; /* Overrides s1/s2 config */
++++ ++++ /*
++++ ++++ * An STE is "assigned" if the master emitting the corresponding SID
++++ ++++ * is attached to a domain. The behaviour of an unassigned STE is
++++ ++++ * determined by the disable_bypass parameter, whereas an assigned
++++ ++++ * STE behaves according to s1_cfg/s2_cfg, which themselves are
++++ ++++ * configured according to the domain type.
++++ ++++ */
++++ ++++ bool assigned;
struct arm_smmu_s1_cfg *s1_cfg;
struct arm_smmu_s2_cfg *s2_cfg;
};
ARM_SMMU_DOMAIN_S1 = 0,
ARM_SMMU_DOMAIN_S2,
ARM_SMMU_DOMAIN_NESTED,
++++ ++++ ARM_SMMU_DOMAIN_BYPASS,
};
struct arm_smmu_domain {
* This is hideously complicated, but we only really care about
* three cases at the moment:
*
---- ---- * 1. Invalid (all zero) -> bypass (init)
---- ---- * 2. Bypass -> translation (attach)
---- ---- * 3. Translation -> bypass (detach)
++++ ++++ * 1. Invalid (all zero) -> bypass/fault (init)
++++ ++++ * 2. Bypass/fault -> translation/bypass (attach)
++++ ++++ * 3. Translation/bypass -> bypass/fault (detach)
*
* Given that we can't update the STE atomically and the SMMU
* doesn't read the thing in a defined order, that leaves us
}
/* Nuke the existing STE_0 value, as we're going to rewrite it */
---- ---- val = ste->valid ? STRTAB_STE_0_V : 0;
++++ ++++ val = STRTAB_STE_0_V;
++++ ++++
++++ ++++ /* Bypass/fault */
++++ ++++ if (!ste->assigned || !(ste->s1_cfg || ste->s2_cfg)) {
++++ ++++ if (!ste->assigned && disable_bypass)
++++ ++++ val |= STRTAB_STE_0_CFG_ABORT;
++++ ++++ else
++++ ++++ val |= STRTAB_STE_0_CFG_BYPASS;
---- ---- if (ste->bypass) {
---- ---- val |= disable_bypass ? STRTAB_STE_0_CFG_ABORT
---- ---- : STRTAB_STE_0_CFG_BYPASS;
dst[0] = cpu_to_le64(val);
dst[1] = cpu_to_le64(STRTAB_STE_1_SHCFG_INCOMING
<< STRTAB_STE_1_SHCFG_SHIFT);
static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
{
unsigned int i;
---- ---- struct arm_smmu_strtab_ent ste = {
---- ---- .valid = true,
---- ---- .bypass = true,
---- ---- };
++++ ++++ struct arm_smmu_strtab_ent ste = { .assigned = false };
for (i = 0; i < nent; ++i) {
arm_smmu_write_strtab_ent(NULL, -1, strtab, &ste);
{
struct arm_smmu_domain *smmu_domain;
---- ---- if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
++++ ++++ if (type != IOMMU_DOMAIN_UNMANAGED &&
++++ ++++ type != IOMMU_DOMAIN_DMA &&
++++ ++++ type != IOMMU_DOMAIN_IDENTITY)
return NULL;
/*
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
++++ ++++ if (domain->type == IOMMU_DOMAIN_IDENTITY) {
++++ ++++ smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
++++ ++++ return 0;
++++ ++++ }
++++ ++++
/* Restrict the stage to what we can actually support */
if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
return step;
}
---- ----static int arm_smmu_install_ste_for_dev(struct iommu_fwspec *fwspec)
++++ ++++static void arm_smmu_install_ste_for_dev(struct iommu_fwspec *fwspec)
{
int i;
struct arm_smmu_master_data *master = fwspec->iommu_priv;
arm_smmu_write_strtab_ent(smmu, sid, step, &master->ste);
}
---- ----
---- ---- return 0;
}
static void arm_smmu_detach_dev(struct device *dev)
{
struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
---- ---- master->ste.bypass = true;
---- ---- if (arm_smmu_install_ste_for_dev(dev->iommu_fwspec) < 0)
---- ---- dev_warn(dev, "failed to install bypass STE\n");
++++ ++++ master->ste.assigned = false;
++++ ++++ arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
}
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
ste = &master->ste;
/* Already attached to a different domain? */
---- ---- if (!ste->bypass)
++++ ++++ if (ste->assigned)
arm_smmu_detach_dev(dev);
mutex_lock(&smmu_domain->init_mutex);
goto out_unlock;
}
---- ---- ste->bypass = false;
---- ---- ste->valid = true;
++++ ++++ ste->assigned = true;
---- ---- if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
++++ ++++ if (smmu_domain->stage == ARM_SMMU_DOMAIN_BYPASS) {
++++ ++++ ste->s1_cfg = NULL;
++++ ++++ ste->s2_cfg = NULL;
++++ ++++ } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
ste->s1_cfg = &smmu_domain->s1_cfg;
ste->s2_cfg = NULL;
arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
ste->s2_cfg = &smmu_domain->s2_cfg;
}
---- ---- ret = arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
---- ---- if (ret < 0)
---- ---- ste->valid = false;
---- ----
++++ ++++ arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
out_unlock:
mutex_unlock(&smmu_domain->init_mutex);
return ret;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
++++ ++++ if (domain->type == IOMMU_DOMAIN_IDENTITY)
++++ ++++ return iova;
++++ ++++
if (!ops)
return 0;
master = fwspec->iommu_priv;
smmu = master->smmu;
---- ---- if (master && master->ste.valid)
++++ ++++ if (master && master->ste.assigned)
arm_smmu_detach_dev(dev);
iommu_group_remove_device(dev);
iommu_device_unlink(&smmu->iommu, dev);
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
++++ ++++ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
++++ ++++ return -EINVAL;
++++ ++++
switch (attr) {
case DOMAIN_ATTR_NESTING:
*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
int ret = 0;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
++++ ++++ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
++++ ++++ return -EINVAL;
++++ ++++
mutex_lock(&smmu_domain->init_mutex);
switch (attr) {
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
- - - prot, IOMMU_RESV_MSI);
+ + + prot, IOMMU_RESV_SW_MSI);
if (!region)
return;
	list_add_tail(&region->list, head);
+++++ +++
+++++ +++ iommu_dma_get_resv_regions(dev, head);
}
static void arm_smmu_put_resv_regions(struct device *dev,
.probe = arm_smmu_device_probe,
.remove = arm_smmu_device_remove,
};
+++++ +++module_platform_driver(arm_smmu_driver);
----- ---static int __init arm_smmu_init(void)
----- ---{
----- --- static bool registered;
----- --- int ret = 0;
----- ---
----- --- if (!registered) {
----- --- ret = platform_driver_register(&arm_smmu_driver);
----- --- registered = !ret;
----- --- }
----- --- return ret;
----- ---}
----- ---
----- ---static void __exit arm_smmu_exit(void)
----- ---{
----- --- return platform_driver_unregister(&arm_smmu_driver);
----- ---}
----- ---
----- ---subsys_initcall(arm_smmu_init);
----- ---module_exit(arm_smmu_exit);
----- ---
----- ---static int __init arm_smmu_of_init(struct device_node *np)
----- ---{
----- --- int ret = arm_smmu_init();
----- ---
----- --- if (ret)
----- --- return ret;
----- ---
----- --- if (!of_platform_device_create(np, NULL, platform_bus_type.dev_root))
----- --- return -ENODEV;
----- ---
----- --- return 0;
----- ---}
----- ---IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", arm_smmu_of_init);
----- ---
----- ---#ifdef CONFIG_ACPI
----- ---static int __init acpi_smmu_v3_init(struct acpi_table_header *table)
----- ---{
----- --- if (iort_node_match(ACPI_IORT_NODE_SMMU_V3))
----- --- return arm_smmu_init();
----- ---
----- --- return 0;
----- ---}
----- ---IORT_ACPI_DECLARE(arm_smmu_v3, ACPI_SIG_IORT, acpi_smmu_v3_init);
----- ---#endif
+++++ +++IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", NULL);
MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
#define ARM_SMMU_GR0_sTLBGSTATUS 0x74
#define sTLBGSTATUS_GSACTIVE (1 << 0)
#define TLB_LOOP_TIMEOUT 1000000 /* 1s! */
++++ ++++#define TLB_SPIN_COUNT 10
/* Stream mapping registers */
#define ARM_SMMU_GR0_SMR(n) (0x800 + ((n) << 2))
#define CBA2R_VMID_MASK 0xffff
/* Translation context bank */
---- ----#define ARM_SMMU_CB_BASE(smmu) ((smmu)->base + ((smmu)->size >> 1))
---- ----#define ARM_SMMU_CB(smmu, n) ((n) * (1 << (smmu)->pgshift))
++++ ++++#define ARM_SMMU_CB(smmu, n) ((smmu)->cb_base + ((n) << (smmu)->pgshift))
#define ARM_SMMU_CB_SCTLR 0x0
#define ARM_SMMU_CB_ACTLR 0x4
#define ARM_SMMU_CB_S1_TLBIVAL 0x620
#define ARM_SMMU_CB_S2_TLBIIPAS2 0x630
#define ARM_SMMU_CB_S2_TLBIIPAS2L 0x638
++++ ++++#define ARM_SMMU_CB_TLBSYNC 0x7f0
++++ ++++#define ARM_SMMU_CB_TLBSTATUS 0x7f4
#define ARM_SMMU_CB_ATS1PR 0x800
#define ARM_SMMU_CB_ATSR 0x8f0
struct device *dev;
void __iomem *base;
---- ---- unsigned long size;
++++ ++++ void __iomem *cb_base;
unsigned long pgshift;
#define ARM_SMMU_FEAT_COHERENT_WALK (1 << 0)
struct arm_smmu_cfg {
u8 cbndx;
u8 irptndx;
++++ ++++ union {
++++ ++++ u16 asid;
++++ ++++ u16 vmid;
++++ ++++ };
u32 cbar;
enum arm_smmu_context_fmt fmt;
};
#define INVALID_IRPTNDX 0xff
---- ----#define ARM_SMMU_CB_ASID(smmu, cfg) ((u16)(smmu)->cavium_id_base + (cfg)->cbndx)
---- ----#define ARM_SMMU_CB_VMID(smmu, cfg) ((u16)(smmu)->cavium_id_base + (cfg)->cbndx + 1)
---- ----
enum arm_smmu_domain_stage {
ARM_SMMU_DOMAIN_S1 = 0,
ARM_SMMU_DOMAIN_S2,
ARM_SMMU_DOMAIN_NESTED,
++++ ++++ ARM_SMMU_DOMAIN_BYPASS,
};
struct arm_smmu_domain {
}
/* Wait for any pending TLB invalidations to complete */
---- ----static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
++++ ++++static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu,
++++ ++++ void __iomem *sync, void __iomem *status)
{
---- ---- int count = 0;
---- ---- void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
---- ----
---- ---- writel_relaxed(0, gr0_base + ARM_SMMU_GR0_sTLBGSYNC);
---- ---- while (readl_relaxed(gr0_base + ARM_SMMU_GR0_sTLBGSTATUS)
---- ---- & sTLBGSTATUS_GSACTIVE) {
---- ---- cpu_relax();
---- ---- if (++count == TLB_LOOP_TIMEOUT) {
---- ---- dev_err_ratelimited(smmu->dev,
---- ---- "TLB sync timed out -- SMMU may be deadlocked\n");
---- ---- return;
++++ ++++ unsigned int spin_cnt, delay;
++++ ++++
++++ ++++ writel_relaxed(0, sync);
++++ ++++ for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) {
++++ ++++ for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) {
++++ ++++ if (!(readl_relaxed(status) & sTLBGSTATUS_GSACTIVE))
++++ ++++ return;
++++ ++++ cpu_relax();
}
---- ---- udelay(1);
++++ ++++ udelay(delay);
}
---- ---static void arm_smmu_tlb_sync(void *cookie)
++++ ++++ dev_err_ratelimited(smmu->dev,
++++ ++++ "TLB sync timed out -- SMMU may be deadlocked\n");
+ }
+
---- --- __arm_smmu_tlb_sync(smmu_domain->smmu);
++++ ++++static void arm_smmu_tlb_sync_global(struct arm_smmu_device *smmu)
++++ ++++{
++++ ++++ void __iomem *base = ARM_SMMU_GR0(smmu);
++++ ++++
++++ ++++ __arm_smmu_tlb_sync(smmu, base + ARM_SMMU_GR0_sTLBGSYNC,
++++ ++++ base + ARM_SMMU_GR0_sTLBGSTATUS);
++++ ++++}
++++ ++++
++++ ++++static void arm_smmu_tlb_sync_context(void *cookie)
+ {
+ struct arm_smmu_domain *smmu_domain = cookie;
++++ ++++ struct arm_smmu_device *smmu = smmu_domain->smmu;
++++ ++++ void __iomem *base = ARM_SMMU_CB(smmu, smmu_domain->cfg.cbndx);
++++ ++++
++++ ++++ __arm_smmu_tlb_sync(smmu, base + ARM_SMMU_CB_TLBSYNC,
++++ ++++ base + ARM_SMMU_CB_TLBSTATUS);
}
---- ---static void arm_smmu_tlb_inv_context(void *cookie)
- static void arm_smmu_tlb_sync(void *cookie)
++++ ++++static void arm_smmu_tlb_sync_vmid(void *cookie)
++++ +++{
++++ +++ struct arm_smmu_domain *smmu_domain = cookie;
- __arm_smmu_tlb_sync(smmu_domain->smmu);
++++ ++++
++++ ++++ arm_smmu_tlb_sync_global(smmu_domain->smmu);
++++ +++}
++++ +++
- static void arm_smmu_tlb_inv_context(void *cookie)
++++ ++++static void arm_smmu_tlb_inv_context_s1(void *cookie)
{
struct arm_smmu_domain *smmu_domain = cookie;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
---- ---- struct arm_smmu_device *smmu = smmu_domain->smmu;
---- ---- bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
---- ---- void __iomem *base;
++++ ++++ void __iomem *base = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
---- ---- if (stage1) {
---- ---- base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
---- ---- writel_relaxed(ARM_SMMU_CB_ASID(smmu, cfg),
---- ---- base + ARM_SMMU_CB_S1_TLBIASID);
---- ---- } else {
---- ---- base = ARM_SMMU_GR0(smmu);
---- ---- writel_relaxed(ARM_SMMU_CB_VMID(smmu, cfg),
---- ---- base + ARM_SMMU_GR0_TLBIVMID);
---- ---- }
++++ ++++ writel_relaxed(cfg->asid, base + ARM_SMMU_CB_S1_TLBIASID);
++++ ++++ arm_smmu_tlb_sync_context(cookie);
++++ ++++}
++++ +++
- __arm_smmu_tlb_sync(smmu);
++++ ++++static void arm_smmu_tlb_inv_context_s2(void *cookie)
++++ ++++{
++++ ++++ struct arm_smmu_domain *smmu_domain = cookie;
++++ ++++ struct arm_smmu_device *smmu = smmu_domain->smmu;
++++ ++++ void __iomem *base = ARM_SMMU_GR0(smmu);
+
---- --- __arm_smmu_tlb_sync(smmu);
++++ ++++ writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
++++ ++++ arm_smmu_tlb_sync_global(smmu);
}
static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
{
struct arm_smmu_domain *smmu_domain = cookie;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
---- ---- struct arm_smmu_device *smmu = smmu_domain->smmu;
bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
---- ---- void __iomem *reg;
++++ ++++ void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
if (stage1) {
---- ---- reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
iova &= ~12UL;
---- ---- iova |= ARM_SMMU_CB_ASID(smmu, cfg);
++++ ++++ iova |= cfg->asid;
do {
writel_relaxed(iova, reg);
iova += granule;
} while (size -= granule);
} else {
iova >>= 12;
---- ---- iova |= (u64)ARM_SMMU_CB_ASID(smmu, cfg) << 48;
++++ ++++ iova |= (u64)cfg->asid << 48;
do {
writeq_relaxed(iova, reg);
iova += granule >> 12;
} while (size -= granule);
}
---- ---- } else if (smmu->version == ARM_SMMU_V2) {
---- ---- reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
++++ ++++ } else {
reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
ARM_SMMU_CB_S2_TLBIIPAS2;
iova >>= 12;
smmu_write_atomic_lq(iova, reg);
iova += granule >> 12;
} while (size -= granule);
---- ---- } else {
---- ---- reg = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_TLBIVMID;
---- ---- writel_relaxed(ARM_SMMU_CB_VMID(smmu, cfg), reg);
}
}
---- ----static const struct iommu_gather_ops arm_smmu_gather_ops = {
---- ---- .tlb_flush_all = arm_smmu_tlb_inv_context,
++++ ++++/*
++++ ++++ * On MMU-401 at least, the cost of firing off multiple TLBIVMIDs appears
++++ ++++ * almost negligible, but the benefit of getting the first one in as far ahead
++++ ++++ * of the sync as possible is significant, hence we don't just make this a
++++ ++++ * no-op and set .tlb_sync to arm_smmu_tlb_inv_context_s2() as you might think.
++++ ++++ */
++++ ++++static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
++++ ++++ size_t granule, bool leaf, void *cookie)
++++ ++++{
++++ ++++ struct arm_smmu_domain *smmu_domain = cookie;
++++ ++++ void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
++++ ++++
++++ ++++ writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
++++ ++++}
++++ ++++
++++ ++++static const struct iommu_gather_ops arm_smmu_s1_tlb_ops = {
++++ ++++ .tlb_flush_all = arm_smmu_tlb_inv_context_s1,
.tlb_add_flush = arm_smmu_tlb_inv_range_nosync,
---- ---- .tlb_sync = arm_smmu_tlb_sync,
++++ ++++ .tlb_sync = arm_smmu_tlb_sync_context,
++++ ++++};
++++ ++++
++++ ++++static const struct iommu_gather_ops arm_smmu_s2_tlb_ops_v2 = {
++++ ++++ .tlb_flush_all = arm_smmu_tlb_inv_context_s2,
++++ ++++ .tlb_add_flush = arm_smmu_tlb_inv_range_nosync,
++++ ++++ .tlb_sync = arm_smmu_tlb_sync_context,
++++ ++++};
++++ ++++
++++ ++++static const struct iommu_gather_ops arm_smmu_s2_tlb_ops_v1 = {
++++ ++++ .tlb_flush_all = arm_smmu_tlb_inv_context_s2,
++++ ++++ .tlb_add_flush = arm_smmu_tlb_inv_vmid_nosync,
++++ ++++ .tlb_sync = arm_smmu_tlb_sync_vmid,
};
static irqreturn_t arm_smmu_context_fault(int irq, void *dev)
struct arm_smmu_device *smmu = smmu_domain->smmu;
void __iomem *cb_base;
---- ---- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
++++ ++++ cb_base = ARM_SMMU_CB(smmu, cfg->cbndx);
fsr = readl_relaxed(cb_base + ARM_SMMU_CB_FSR);
if (!(fsr & FSR_FAULT))
gr1_base = ARM_SMMU_GR1(smmu);
stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
---- ---- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
++++ ++++ cb_base = ARM_SMMU_CB(smmu, cfg->cbndx);
if (smmu->version > ARM_SMMU_V1) {
if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64)
reg = CBA2R_RW64_32BIT;
/* 16-bit VMIDs live in CBA2R */
if (smmu->features & ARM_SMMU_FEAT_VMID16)
---- ---- reg |= ARM_SMMU_CB_VMID(smmu, cfg) << CBA2R_VMID_SHIFT;
++++ ++++ reg |= cfg->vmid << CBA2R_VMID_SHIFT;
writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(cfg->cbndx));
}
(CBAR_S1_MEMATTR_WB << CBAR_S1_MEMATTR_SHIFT);
} else if (!(smmu->features & ARM_SMMU_FEAT_VMID16)) {
/* 8-bit VMIDs live in CBAR */
---- ---- reg |= ARM_SMMU_CB_VMID(smmu, cfg) << CBAR_VMID_SHIFT;
++++ ++++ reg |= cfg->vmid << CBAR_VMID_SHIFT;
}
writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(cfg->cbndx));
---- ---- /* TTBRs */
---- ---- if (stage1) {
---- ---- u16 asid = ARM_SMMU_CB_ASID(smmu, cfg);
---- ----
---- ---- if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
---- ---- reg = pgtbl_cfg->arm_v7s_cfg.ttbr[0];
---- ---- writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0);
---- ---- reg = pgtbl_cfg->arm_v7s_cfg.ttbr[1];
---- ---- writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR1);
---- ---- writel_relaxed(asid, cb_base + ARM_SMMU_CB_CONTEXTIDR);
---- ---- } else {
---- ---- reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
---- ---- reg64 |= (u64)asid << TTBRn_ASID_SHIFT;
---- ---- writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
---- ---- reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1];
---- ---- reg64 |= (u64)asid << TTBRn_ASID_SHIFT;
---- ---- writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR1);
---- ---- }
---- ---- } else {
---- ---- reg64 = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
---- ---- writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
---- ---- }
---- ----
---- ---- /* TTBCR */
++++ ++++ /*
++++ ++++ * TTBCR
++++ ++++ * We must write this before the TTBRs, since it determines the
++++ ++++ * access behaviour of some fields (in particular, ASID[15:8]).
++++ ++++ */
if (stage1) {
if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
reg = pgtbl_cfg->arm_v7s_cfg.tcr;
}
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
++++ ++++ /* TTBRs */
++++ ++++ if (stage1) {
++++ ++++ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
++++ ++++ reg = pgtbl_cfg->arm_v7s_cfg.ttbr[0];
++++ ++++ writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0);
++++ ++++ reg = pgtbl_cfg->arm_v7s_cfg.ttbr[1];
++++ ++++ writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR1);
++++ ++++ writel_relaxed(cfg->asid, cb_base + ARM_SMMU_CB_CONTEXTIDR);
++++ ++++ } else {
++++ ++++ reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
++++ ++++ reg64 |= (u64)cfg->asid << TTBRn_ASID_SHIFT;
++++ ++++ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
++++ ++++ reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1];
++++ ++++ reg64 |= (u64)cfg->asid << TTBRn_ASID_SHIFT;
++++ ++++ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR1);
++++ ++++ }
++++ ++++ } else {
++++ ++++ reg64 = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
++++ ++++ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
++++ ++++ }
++++ ++++
/* MAIRs (stage-1 only) */
if (stage1) {
if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
enum io_pgtable_fmt fmt;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
++++ ++++ const struct iommu_gather_ops *tlb_ops;
mutex_lock(&smmu_domain->init_mutex);
if (smmu_domain->smmu)
goto out_unlock;
++++ ++++ if (domain->type == IOMMU_DOMAIN_IDENTITY) {
++++ ++++ smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
++++ ++++ smmu_domain->smmu = smmu;
++++ ++++ goto out_unlock;
++++ ++++ }
++++ ++++
/*
* Mapping the requested stage onto what we support is surprisingly
* complicated, mainly because the spec allows S1+S2 SMMUs without
ias = min(ias, 32UL);
oas = min(oas, 32UL);
}
++++ ++++ tlb_ops = &arm_smmu_s1_tlb_ops;
break;
case ARM_SMMU_DOMAIN_NESTED:
/*
ias = min(ias, 40UL);
oas = min(oas, 40UL);
}
++++ ++++ if (smmu->version == ARM_SMMU_V2)
++++ ++++ tlb_ops = &arm_smmu_s2_tlb_ops_v2;
++++ ++++ else
++++ ++++ tlb_ops = &arm_smmu_s2_tlb_ops_v1;
break;
default:
ret = -EINVAL;
goto out_unlock;
}
---- ----
ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
smmu->num_context_banks);
if (ret < 0)
cfg->irptndx = cfg->cbndx;
}
++++ ++++ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2)
++++ ++++ cfg->vmid = cfg->cbndx + 1 + smmu->cavium_id_base;
++++ ++++ else
++++ ++++ cfg->asid = cfg->cbndx + smmu->cavium_id_base;
++++ ++++
pgtbl_cfg = (struct io_pgtable_cfg) {
.pgsize_bitmap = smmu->pgsize_bitmap,
.ias = ias,
.oas = oas,
---- ---- .tlb = &arm_smmu_gather_ops,
++++ ++++ .tlb = tlb_ops,
.iommu_dev = smmu->dev,
};
void __iomem *cb_base;
int irq;
---- ---- if (!smmu)
++++ ++++ if (!smmu || domain->type == IOMMU_DOMAIN_IDENTITY)
return;
/*
* Disable the context bank and free the page tables before freeing
* it.
*/
---- ---- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
++++ ++++ cb_base = ARM_SMMU_CB(smmu, cfg->cbndx);
writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR);
if (cfg->irptndx != INVALID_IRPTNDX) {
{
struct arm_smmu_domain *smmu_domain;
---- ---- if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
++++ ++++ if (type != IOMMU_DOMAIN_UNMANAGED &&
++++ ++++ type != IOMMU_DOMAIN_DMA &&
++++ ++++ type != IOMMU_DOMAIN_IDENTITY)
return NULL;
/*
* Allocate the domain and initialise some of its data structures.
{
struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_s2cr *s2cr = smmu->s2crs;
---- ---- enum arm_smmu_s2cr_type type = S2CR_TYPE_TRANS;
u8 cbndx = smmu_domain->cfg.cbndx;
++++ ++++ enum arm_smmu_s2cr_type type;
int i, idx;
++++ ++++ if (smmu_domain->stage == ARM_SMMU_DOMAIN_BYPASS)
++++ ++++ type = S2CR_TYPE_BYPASS;
++++ ++++ else
++++ ++++ type = S2CR_TYPE_TRANS;
++++ ++++
for_each_cfg_sme(fwspec, i, idx) {
if (type == s2cr[idx].type && cbndx == s2cr[idx].cbndx)
continue;
u64 phys;
unsigned long va;
---- ---- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
++++ ++++ cb_base = ARM_SMMU_CB(smmu, cfg->cbndx);
/* ATS1 registers can only be written atomically */
va = iova & ~0xfffUL;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
++++ ++++ if (domain->type == IOMMU_DOMAIN_IDENTITY)
++++ ++++ return iova;
++++ ++++
if (!ops)
return 0;
}
if (mask & ~smmu->smr_mask_mask) {
dev_err(dev, "SMR mask 0x%x out of range for SMMU (0x%x)\n",
---- ---- sid, smmu->smr_mask_mask);
++++ ++++ mask, smmu->smr_mask_mask);
goto out_free;
}
}
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
++++ ++++ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
++++ ++++ return -EINVAL;
++++ ++++
switch (attr) {
case DOMAIN_ATTR_NESTING:
*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
int ret = 0;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
++++ ++++ if (domain->type != IOMMU_DOMAIN_UNMANAGED)
++++ ++++ return -EINVAL;
++++ ++++
mutex_lock(&smmu_domain->init_mutex);
switch (attr) {
static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
---- ---- u32 fwid = 0;
++++ ++++ u32 mask, fwid = 0;
if (args->args_count > 0)
fwid |= (u16)args->args[0];
if (args->args_count > 1)
fwid |= (u16)args->args[1] << SMR_MASK_SHIFT;
++++ ++++ else if (!of_property_read_u32(args->np, "stream-match-mask", &mask))
++++ ++++ fwid |= (u16)mask << SMR_MASK_SHIFT;
return iommu_fwspec_add_ids(dev, &fwid, 1);
}
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
- - - prot, IOMMU_RESV_MSI);
+ + + prot, IOMMU_RESV_SW_MSI);
if (!region)
return;
	list_add_tail(&region->list, head);
+++++ +++
+++++ +++ iommu_dma_get_resv_regions(dev, head);
}
static void arm_smmu_put_resv_regions(struct device *dev,
/* Make sure all context banks are disabled and clear CB_FSR */
for (i = 0; i < smmu->num_context_banks; ++i) {
---- ---- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, i);
++++ ++++ cb_base = ARM_SMMU_CB(smmu, i);
writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR);
writel_relaxed(FSR_FAULT, cb_base + ARM_SMMU_CB_FSR);
/*
reg |= sCR0_EXIDENABLE;
/* Push the button */
---- ---- __arm_smmu_tlb_sync(smmu);
++++ ++++ arm_smmu_tlb_sync_global(smmu);
writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0);
}
/* Check for size mismatch of SMMU address space from mapped region */
size = 1 << (((id >> ID1_NUMPAGENDXB_SHIFT) & ID1_NUMPAGENDXB_MASK) + 1);
---- ---- size *= 2 << smmu->pgshift;
---- ---- if (smmu->size != size)
++++ ++++ size <<= smmu->pgshift;
++++ ++++ if (smmu->cb_base != gr0_base + size)
dev_warn(smmu->dev,
---- ---- "SMMU address space size (0x%lx) differs from mapped region size (0x%lx)!\n",
---- ---- size, smmu->size);
++++ ++++ "SMMU address space size (0x%lx) differs from mapped region size (0x%tx)!\n",
++++ ++++ size * 2, (smmu->cb_base - gr0_base) * 2);
smmu->num_s2_context_banks = (id >> ID1_NUMS2CB_SHIFT) & ID1_NUMS2CB_MASK;
smmu->num_context_banks = (id >> ID1_NUMCB_SHIFT) & ID1_NUMCB_MASK;
atomic_add_return(smmu->num_context_banks,
&cavium_smmu_context_count);
smmu->cavium_id_base -= smmu->num_context_banks;
++++ ++++ dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n");
}
/* ID2 */
return 0;
}
+++++ +++static void arm_smmu_bus_init(void)
+++++ +++{
+++++ +++ /* Oh, for a proper bus abstraction */
+++++ +++ if (!iommu_present(&platform_bus_type))
+++++ +++ bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
+++++ +++#ifdef CONFIG_ARM_AMBA
+++++ +++ if (!iommu_present(&amba_bustype))
+++++ +++ bus_set_iommu(&amba_bustype, &arm_smmu_ops);
+++++ +++#endif
+++++ +++#ifdef CONFIG_PCI
+++++ +++ if (!iommu_present(&pci_bus_type)) {
+++++ +++ pci_request_acs();
+++++ +++ bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
+++++ +++ }
+++++ +++#endif
+++++ +++}
+++++ +++
static int arm_smmu_device_probe(struct platform_device *pdev)
{
struct resource *res;
smmu->base = devm_ioremap_resource(dev, res);
if (IS_ERR(smmu->base))
return PTR_ERR(smmu->base);
---- ---- smmu->size = resource_size(res);
++++ ++++ smmu->cb_base = smmu->base + resource_size(res) / 2;
num_irqs = 0;
while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, num_irqs))) {
arm_smmu_device_reset(smmu);
arm_smmu_test_smr_masks(smmu);
----- --- /* Oh, for a proper bus abstraction */
----- --- if (!iommu_present(&platform_bus_type))
----- --- bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
----- ---#ifdef CONFIG_ARM_AMBA
----- --- if (!iommu_present(&amba_bustype))
----- --- bus_set_iommu(&amba_bustype, &arm_smmu_ops);
----- ---#endif
----- ---#ifdef CONFIG_PCI
----- --- if (!iommu_present(&pci_bus_type)) {
----- --- pci_request_acs();
----- --- bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
----- --- }
----- ---#endif
+++++ +++ /*
+++++ +++ * For ACPI and generic DT bindings, an SMMU will be probed before
+++++ +++ * any device which might need it, so we want the bus ops in place
+++++ +++ * ready to handle default domain setup as soon as any SMMU exists.
+++++ +++ */
+++++ +++ if (!using_legacy_binding)
+++++ +++ arm_smmu_bus_init();
+++++ +++
++++ +++ return 0;
++++ +++}
++++ +++
+++++ +++/*
+++++ +++ * With the legacy DT binding in play, though, we have no guarantees about
+++++ +++ * probe order, but then we're also not doing default domains, so we can
+++++ +++ * delay setting bus ops until we're sure every possible SMMU is ready,
+++++ +++ * and that way ensure that no add_device() calls get missed.
+++++ +++ */
+++++ +++static int arm_smmu_legacy_bus_init(void)
+++++ +++{
+++++ +++ if (using_legacy_binding)
+++++ +++ arm_smmu_bus_init();
+ return 0;
+ }
+++++ +++device_initcall_sync(arm_smmu_legacy_bus_init);
+
static int arm_smmu_device_remove(struct platform_device *pdev)
{
struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
.probe = arm_smmu_device_probe,
.remove = arm_smmu_device_remove,
};
----- ---
----- ---static int __init arm_smmu_init(void)
----- ---{
----- --- static bool registered;
----- --- int ret = 0;
----- ---
----- --- if (!registered) {
----- --- ret = platform_driver_register(&arm_smmu_driver);
----- --- registered = !ret;
----- --- }
----- --- return ret;
----- ---}
----- ---
----- ---static void __exit arm_smmu_exit(void)
----- ---{
----- --- return platform_driver_unregister(&arm_smmu_driver);
----- ---}
----- ---
----- ---subsys_initcall(arm_smmu_init);
----- ---module_exit(arm_smmu_exit);
----- ---
----- ---static int __init arm_smmu_of_init(struct device_node *np)
----- ---{
----- --- int ret = arm_smmu_init();
----- ---
----- --- if (ret)
----- --- return ret;
----- ---
----- --- if (!of_platform_device_create(np, NULL, platform_bus_type.dev_root))
----- --- return -ENODEV;
----- ---
----- --- return 0;
----- ---}
----- ---IOMMU_OF_DECLARE(arm_smmuv1, "arm,smmu-v1", arm_smmu_of_init);
----- ---IOMMU_OF_DECLARE(arm_smmuv2, "arm,smmu-v2", arm_smmu_of_init);
----- ---IOMMU_OF_DECLARE(arm_mmu400, "arm,mmu-400", arm_smmu_of_init);
----- ---IOMMU_OF_DECLARE(arm_mmu401, "arm,mmu-401", arm_smmu_of_init);
----- ---IOMMU_OF_DECLARE(arm_mmu500, "arm,mmu-500", arm_smmu_of_init);
----- ---IOMMU_OF_DECLARE(cavium_smmuv2, "cavium,smmu-v2", arm_smmu_of_init);
----- ---
----- ---#ifdef CONFIG_ACPI
----- ---static int __init arm_smmu_acpi_init(struct acpi_table_header *table)
----- ---{
----- --- if (iort_node_match(ACPI_IORT_NODE_SMMU))
----- --- return arm_smmu_init();
----- ---
----- --- return 0;
----- ---}
----- ---IORT_ACPI_DECLARE(arm_smmu, ACPI_SIG_IORT, arm_smmu_acpi_init);
----- ---#endif
+++++ +++module_platform_driver(arm_smmu_driver);
+++++ +++
+++++ +++IOMMU_OF_DECLARE(arm_smmuv1, "arm,smmu-v1", NULL);
+++++ +++IOMMU_OF_DECLARE(arm_smmuv2, "arm,smmu-v2", NULL);
+++++ +++IOMMU_OF_DECLARE(arm_mmu400, "arm,mmu-400", NULL);
+++++ +++IOMMU_OF_DECLARE(arm_mmu401, "arm,mmu-401", NULL);
+++++ +++IOMMU_OF_DECLARE(arm_mmu500, "arm,mmu-500", NULL);
+++++ +++IOMMU_OF_DECLARE(cavium_smmuv2, "cavium,smmu-v2", NULL);
MODULE_DESCRIPTION("IOMMU API for ARM architected SMMU implementations");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
#define REG_V5_PT_BASE_PFN 0x00C
#define REG_V5_MMU_FLUSH_ALL 0x010
#define REG_V5_MMU_FLUSH_ENTRY 0x014
++++++++#define REG_V5_MMU_FLUSH_RANGE 0x018
++++++++#define REG_V5_MMU_FLUSH_START 0x020
++++++++#define REG_V5_MMU_FLUSH_END 0x024
#define REG_V5_INT_STATUS 0x060
#define REG_V5_INT_CLEAR 0x064
#define REG_V5_FAULT_AR_VA 0x070
{
unsigned int i;
-------- for (i = 0; i < num_inv; i++) {
-------- if (MMU_MAJ_VER(data->version) < 5)
++++++++ if (MMU_MAJ_VER(data->version) < 5) {
++++++++ for (i = 0; i < num_inv; i++) {
writel((iova & SPAGE_MASK) | 1,
data->sfrbase + REG_MMU_FLUSH_ENTRY);
-------- else
++++++++ iova += SPAGE_SIZE;
++++++++ }
++++++++ } else {
++++++++ if (num_inv == 1) {
writel((iova & SPAGE_MASK) | 1,
data->sfrbase + REG_V5_MMU_FLUSH_ENTRY);
-------- iova += SPAGE_SIZE;
++++++++ } else {
++++++++ writel((iova & SPAGE_MASK),
++++++++ data->sfrbase + REG_V5_MMU_FLUSH_START);
++++++++ writel((iova & SPAGE_MASK) + (num_inv - 1) * SPAGE_SIZE,
++++++++ data->sfrbase + REG_V5_MMU_FLUSH_END);
++++++++ writel(1, data->sfrbase + REG_V5_MMU_FLUSH_RANGE);
++++++++ }
}
}
spin_lock_irqsave(&data->lock, flags);
if (data->active && data->version >= MAKE_MMU_VER(3, 3)) {
clk_enable(data->clk_master);
- - - __sysmmu_tlb_invalidate_entry(data, iova, 1);
+ + + if (sysmmu_block(data)) {
+ + + if (data->version >= MAKE_MMU_VER(5, 0))
+ + + __sysmmu_tlb_invalidate(data);
+ + + else
+ + + __sysmmu_tlb_invalidate_entry(data, iova, 1);
+ + + sysmmu_unblock(data);
+ + + }
clk_disable(data->clk_master);
}
spin_unlock_irqrestore(&data->lock, flags);
goto err_counter;
/* Workaround for System MMU v3.3 to prevent caching 1MiB mapping */
-------- for (i = 0; i < NUM_LV1ENTRIES; i += 8) {
-------- domain->pgtable[i + 0] = ZERO_LV2LINK;
-------- domain->pgtable[i + 1] = ZERO_LV2LINK;
-------- domain->pgtable[i + 2] = ZERO_LV2LINK;
-------- domain->pgtable[i + 3] = ZERO_LV2LINK;
-------- domain->pgtable[i + 4] = ZERO_LV2LINK;
-------- domain->pgtable[i + 5] = ZERO_LV2LINK;
-------- domain->pgtable[i + 6] = ZERO_LV2LINK;
-------- domain->pgtable[i + 7] = ZERO_LV2LINK;
-------- }
++++++++ for (i = 0; i < NUM_LV1ENTRIES; i++)
++++++++ domain->pgtable[i] = ZERO_LV2LINK;
handle = dma_map_single(dma_dev, domain->pgtable, LV1TABLE_SIZE,
DMA_TO_DEVICE);
* (used when kernel is launched w/ TXT)
*/
static int force_on = 0;
++++++ ++int intel_iommu_tboot_noforce;
/*
* 0: Present
"Intel-IOMMU: enable pre-production PASID support\n");
intel_iommu_pasid28 = 1;
iommu_identity_mapping |= IDENTMAP_GFX;
++++++ ++ } else if (!strncmp(str, "tboot_noforce", 13)) {
++++++ ++ printk(KERN_INFO
++++++ ++ "Intel-IOMMU: not forcing on after tboot. This could expose security risk for tboot\n");
++++++ ++ intel_iommu_tboot_noforce = 1;
}
str += strcspn(str, ",");
* which we used for the IOMMU lookup. Strictly speaking
* we could do this for all PCI devices; we only need to
* get the BDF# from the scope table for ACPI matches. */
- - - if (pdev->is_virtfn)
+ + + if (pdev && pdev->is_virtfn)
goto got_pdev;
*bus = drhd->devices[i].bus;
return 0;
}
++++++ ++static void intel_disable_iommus(void)
++++++ ++{
++++++ ++ struct intel_iommu *iommu = NULL;
++++++ ++ struct dmar_drhd_unit *drhd;
++++++ ++
++++++ ++ for_each_iommu(iommu, drhd)
++++++ ++ iommu_disable_translation(iommu);
++++++ ++}
++++++ ++
static inline struct intel_iommu *dev_to_intel_iommu(struct device *dev)
{
return container_of(dev, struct intel_iommu, iommu.dev);
goto out_free_dmar;
}
------ -- if (no_iommu || dmar_disabled)
++++++ ++ if (no_iommu || dmar_disabled) {
++++++ ++ /*
++++++ ++ * We exit the function here to ensure IOMMU's remapping and
++++++ ++	 * mempool aren't set up, which means that the IOMMU's PMRs
++++++ ++ * won't be disabled via the call to init_dmars(). So disable
++++++ ++ * it explicitly here. The PMRs were setup by tboot prior to
++++++ ++ * calling SENTER, but the kernel is expected to reset/tear
++++++ ++ * down the PMRs.
++++++ ++ */
++++++ ++ if (intel_iommu_tboot_noforce) {
++++++ ++ for_each_iommu(iommu, drhd)
++++++ ++ iommu_disable_protect_mem_regions(iommu);
++++++ ++ }
++++++ ++
++++++ ++ /*
++++++ ++ * Make sure the IOMMUs are switched off, even when we
++++++ ++ * boot into a kexec kernel and the previous kernel left
++++++ ++ * them enabled
++++++ ++ */
++++++ ++ intel_disable_iommus();
goto out_free_dmar;
++++++ ++ }
if (list_empty(&dmar_rmrr_units))
pr_info("No RMRR found\n");
reg = iommu_alloc_resv_region(IOAPIC_RANGE_START,
IOAPIC_RANGE_END - IOAPIC_RANGE_START + 1,
- - - 0, IOMMU_RESV_RESERVED);
+ + + 0, IOMMU_RESV_MSI);
if (!reg)
return;
	list_add_tail(&reg->list, head);
static struct kset *iommu_group_kset;
static DEFINE_IDA(iommu_group_ida);
++++ ++++static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
struct iommu_callback_data {
const struct iommu_ops *ops;
[IOMMU_RESV_DIRECT] = "direct",
[IOMMU_RESV_RESERVED] = "reserved",
[IOMMU_RESV_MSI] = "msi",
+ + + [IOMMU_RESV_SW_MSI] = "msi",
};
#define IOMMU_GROUP_ATTR(_name, _mode, _show, _store) \
static void __iommu_detach_group(struct iommu_domain *domain,
struct iommu_group *group);
++++ ++++static int __init iommu_set_def_domain_type(char *str)
++++ ++++{
++++ ++++ bool pt;
++++ ++++
++++ ++++ if (!str || strtobool(str, &pt))
++++ ++++ return -EINVAL;
++++ ++++
++++ ++++ iommu_def_domain_type = pt ? IOMMU_DOMAIN_IDENTITY : IOMMU_DOMAIN_DMA;
++++ ++++ return 0;
++++ ++++}
++++ ++++early_param("iommu.passthrough", iommu_set_def_domain_type);
++++ ++++
static ssize_t iommu_group_attr_show(struct kobject *kobj,
struct attribute *__attr, char *buf)
{
* IOMMU driver.
*/
if (!group->default_domain) {
---- ---- group->default_domain = __iommu_domain_alloc(dev->bus,
---- ---- IOMMU_DOMAIN_DMA);
++++ ++++ struct iommu_domain *dom;
++++ ++++
++++ ++++ dom = __iommu_domain_alloc(dev->bus, iommu_def_domain_type);
++++ ++++ if (!dom && iommu_def_domain_type != IOMMU_DOMAIN_DMA) {
++++ ++++ dev_warn(dev,
++++ ++++ "failed to allocate default IOMMU domain of type %u; falling back to IOMMU_DOMAIN_DMA",
++++ ++++ iommu_def_domain_type);
++++ ++++ dom = __iommu_domain_alloc(dev->bus, IOMMU_DOMAIN_DMA);
++++ ++++ }
++++ ++++
++++ ++++ group->default_domain = dom;
if (!group->domain)
---- ---- group->domain = group->default_domain;
++++ ++++ group->domain = dom;
}
ret = iommu_group_add_device(group, dev);
* result in ADD/DEL notifiers to group->notifier
*/
if (action == BUS_NOTIFY_ADD_DEVICE) {
-------- if (ops->add_device)
-------- return ops->add_device(dev);
++++++++ if (ops->add_device) {
++++++++ int ret;
++++++++
++++++++ ret = ops->add_device(dev);
++++++++ return (ret) ? NOTIFY_DONE : NOTIFY_OK;
++++++++ }
} else if (action == BUS_NOTIFY_REMOVED_DEVICE) {
if (ops->remove_device && dev->iommu_group) {
ops->remove_device(dev);
}
EXPORT_SYMBOL_GPL(iommu_domain_window_disable);
++++++++ /**
++++++++ * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
++++++++ * @domain: the iommu domain where the fault has happened
++++++++ * @dev: the device where the fault has happened
++++++++ * @iova: the faulting address
++++++++ * @flags: mmu fault flags (e.g. IOMMU_FAULT_READ/IOMMU_FAULT_WRITE/...)
++++++++ *
++++++++ * This function should be called by the low-level IOMMU implementations
++++++++ * whenever IOMMU faults happen, to allow high-level users, that are
++++++++ * interested in such events, to know about them.
++++++++ *
++++++++ * This event may be useful for several possible use cases:
++++++++ * - mere logging of the event
++++++++ * - dynamic TLB/PTE loading
++++++++ * - if restarting of the faulting device is required
++++++++ *
++++++++ * Returns 0 on success and an appropriate error code otherwise (if dynamic
++++++++ * PTE/TLB loading will one day be supported, implementations will be able
++++++++ * to tell whether it succeeded or not according to this return value).
++++++++ *
++++++++ * Specifically, -ENOSYS is returned if a fault handler isn't installed
++++++++ * (though fault handlers can also return -ENOSYS, in case they want to
++++++++ * elicit the default behavior of the IOMMU drivers).
++++++++ */
++++++++ int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
++++++++ unsigned long iova, int flags)
++++++++ {
++++++++ int ret = -ENOSYS;
++++++++
++++++++ /*
++++++++ * if upper layers showed interest and installed a fault handler,
++++++++ * invoke it.
++++++++ */
++++++++ if (domain->handler)
++++++++ ret = domain->handler(domain, dev, iova, flags,
++++++++ domain->handler_token);
++++++++
++++++++ trace_io_page_fault(dev, iova, flags);
++++++++ return ret;
++++++++ }
++++++++ EXPORT_SYMBOL_GPL(report_iommu_fault);
++++++++
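A minimal sketch of how a low-level IOMMU driver might forward faults through this hook (the my_smmu_* structure and helpers are hypothetical and not part of this patch; only report_iommu_fault(), the IOMMU_FAULT_* flags and iommu_set_fault_handler() are real kernel interfaces):

static irqreturn_t my_smmu_fault_irq(int irq, void *cookie)
{
	struct my_smmu_domain *d = cookie;			/* hypothetical driver-private wrapper */
	unsigned long iova = my_smmu_read_fault_addr(d);	/* hypothetical: read the faulting address */
	int flags = my_smmu_fault_was_write(d) ?		/* hypothetical: decode the access type */
			IOMMU_FAULT_WRITE : IOMMU_FAULT_READ;

	/*
	 * Give any handler installed via iommu_set_fault_handler() a chance
	 * to service the fault; a non-zero return (e.g. -ENOSYS when no
	 * handler is registered) means we just log it here.
	 */
	if (report_iommu_fault(&d->domain, d->dev, iova, flags))
		dev_err_ratelimited(d->dev, "unhandled fault at 0x%lx\n", iova);

	return IRQ_HANDLED;
}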
static int __init iommu_init(void)
{
iommu_group_kset = kset_create_and_add("iommu_groups",
}
struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
- - - size_t length,
- - - int prot, int type)
+ + + size_t length, int prot,
+ + + enum iommu_resv_type type)
{
struct iommu_resv_region *region;
}
}
++++++++ /* Insert the iova into domain rbtree by holding writer lock */
++++++++ static void
++++++++ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
++++++++ struct rb_node *start)
++++++++ {
++++++++ struct rb_node **new, *parent = NULL;
++++++++
++++++++ new = (start) ? &start : &(root->rb_node);
++++++++ /* Figure out where to put new node */
++++++++ while (*new) {
++++++++ struct iova *this = rb_entry(*new, struct iova, node);
++++++++
++++++++ parent = *new;
++++++++
++++++++ if (iova->pfn_lo < this->pfn_lo)
++++++++ new = &((*new)->rb_left);
++++++++ else if (iova->pfn_lo > this->pfn_lo)
++++++++ new = &((*new)->rb_right);
++++++++ else {
++++++++ WARN_ON(1); /* this should not happen */
++++++++ return;
++++++++ }
++++++++ }
++++++++ /* Add new node and rebalance tree. */
++++++++ rb_link_node(&iova->node, parent, new);
++++++++ rb_insert_color(&iova->node, root);
++++++++ }
++++++++
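For reference, a hedged sketch of how a caller inside iova.c uses the consolidated helper when it has no insertion hint; the allocation fast path further down instead passes its previously visited node as the start. demo_insert() and its locking are illustrative only, not part of this patch.

    #include <linux/iova.h>

    /* Illustrative only: insert an already-initialised iova with no start hint. */
    static void demo_insert(struct iova_domain *iovad, struct iova *new_iova)
    {
            unsigned long flags;

            spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
            /* NULL start: search from the root, as the old open-coded copies did */
            iova_insert_rbtree(&iovad->rbroot, new_iova, NULL);
            spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
    }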
/*
* Computes the padding size required, to make the start address
* naturally aligned on the power-of-two order of its size
break; /* found a free slot */
}
adjust_limit_pfn:
----- --- limit_pfn = curr_iova->pfn_lo - 1;
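+++++ +++ /* guard against unsigned underflow when curr_iova->pfn_lo is 0 */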
+++++ +++ limit_pfn = curr_iova->pfn_lo ? (curr_iova->pfn_lo - 1) : 0;
move_left:
prev = curr;
curr = rb_prev(curr);
new->pfn_lo = limit_pfn - (size + pad_size) + 1;
new->pfn_hi = new->pfn_lo + size - 1;
-------- /* Insert the new_iova into domain rbtree by holding writer lock */
-------- /* Add new node and rebalance tree. */
-------- {
-------- struct rb_node **entry, *parent = NULL;
--------
-------- /* If we have 'prev', it's a valid place to start the
-------- insertion. Otherwise, start from the root. */
-------- if (prev)
-------- entry = &prev;
-------- else
-------- entry = &iovad->rbroot.rb_node;
--------
-------- /* Figure out where to put new node */
-------- while (*entry) {
-------- struct iova *this = rb_entry(*entry, struct iova, node);
-------- parent = *entry;
--------
-------- if (new->pfn_lo < this->pfn_lo)
-------- entry = &((*entry)->rb_left);
-------- else if (new->pfn_lo > this->pfn_lo)
-------- entry = &((*entry)->rb_right);
-------- else
-------- BUG(); /* this should not happen */
-------- }
--------
-------- /* Add new node and rebalance tree. */
-------- rb_link_node(&new->node, parent, entry);
-------- rb_insert_color(&new->node, &iovad->rbroot);
-------- }
++++++++ /* If we have 'prev', it's a valid place to start the insertion. */
++++++++ iova_insert_rbtree(&iovad->rbroot, new, prev);
__cached_rbnode_insert_update(iovad, saved_pfn, new);
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
return 0;
}
-------- static void
-------- iova_insert_rbtree(struct rb_root *root, struct iova *iova)
-------- {
-------- struct rb_node **new = &(root->rb_node), *parent = NULL;
-------- /* Figure out where to put new node */
-------- while (*new) {
-------- struct iova *this = rb_entry(*new, struct iova, node);
--------
-------- parent = *new;
--------
-------- if (iova->pfn_lo < this->pfn_lo)
-------- new = &((*new)->rb_left);
-------- else if (iova->pfn_lo > this->pfn_lo)
-------- new = &((*new)->rb_right);
-------- else
-------- BUG(); /* this should not happen */
-------- }
-------- /* Add new node and rebalance tree. */
-------- rb_link_node(&iova->node, parent, new);
-------- rb_insert_color(&iova->node, root);
-------- }
--------
static struct kmem_cache *iova_cache;
static unsigned int iova_cache_users;
static DEFINE_MUTEX(iova_cache_mutex);
iova = alloc_and_init_iova(pfn_lo, pfn_hi);
if (iova)
-------- iova_insert_rbtree(&iovad->rbroot, iova);
++++++++ iova_insert_rbtree(&iovad->rbroot, iova, NULL);
return iova;
}
rb_erase(&iova->node, &iovad->rbroot);
if (prev) {
-------- iova_insert_rbtree(&iovad->rbroot, prev);
++++++++ iova_insert_rbtree(&iovad->rbroot, prev, NULL);
iova->pfn_lo = pfn_lo;
}
if (next) {
-------- iova_insert_rbtree(&iovad->rbroot, next);
++++++++ iova_insert_rbtree(&iovad->rbroot, next, NULL);
iova->pfn_hi = pfn_hi;
}
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-iommu.h>
++++++++ #include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/io.h>
void __iomem **bases;
int num_mmu;
int irq;
++ ++++++ struct iommu_device iommu;
struct list_head node; /* entry in rk_iommu_domain.iommus */
struct iommu_domain *domain; /* domain to which iommu is attached */
};
static int rk_iommu_add_device(struct device *dev)
{
struct iommu_group *group;
++ ++++++ struct rk_iommu *iommu;
int ret;
if (!rk_iommu_is_dev_iommu_master(dev))
if (ret)
goto err_remove_device;
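++ ++++++ /* add a sysfs link between the master device and its IOMMU */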
++ ++++++ iommu = rk_iommu_from_dev(dev);
++ ++++++ if (iommu)
++ ++++++ iommu_device_link(&iommu->iommu, dev);
++ ++++++
iommu_group_put(group);
return 0;
static void rk_iommu_remove_device(struct device *dev)
{
++ ++++++ struct rk_iommu *iommu;
++ ++++++
if (!rk_iommu_is_dev_iommu_master(dev))
return;
++ ++++++ iommu = rk_iommu_from_dev(dev);
++ ++++++ if (iommu)
++ ++++++ iommu_device_unlink(&iommu->iommu, dev);
++ ++++++
iommu_group_remove_device(dev);
}
struct rk_iommu *iommu;
struct resource *res;
int num_res = pdev->num_resources;
-- ------ int i;
++ ++++++ int err, i;
iommu = devm_kzalloc(dev, sizeof(*iommu), GFP_KERNEL);
if (!iommu)
return -ENXIO;
}
-- ------ return 0;
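++ ++++++ /* expose this IOMMU instance in sysfs, then register it with the IOMMU core */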
++ ++++++ err = iommu_device_sysfs_add(&iommu->iommu, dev, NULL, dev_name(dev));
++ ++++++ if (err)
++ ++++++ return err;
++ ++++++
++ ++++++ iommu_device_set_ops(&iommu->iommu, &rk_iommu_ops);
++ ++++++ err = iommu_device_register(&iommu->iommu);
++ ++++++
++ ++++++ return err;
}
static int rk_iommu_remove(struct platform_device *pdev)
{
++ ++++++ struct rk_iommu *iommu = platform_get_drvdata(pdev);
++ ++++++
++ ++++++ if (iommu) {
++ ++++++ iommu_device_sysfs_remove(&iommu->iommu);
++ ++++++ iommu_device_unregister(&iommu->iommu);
++ ++++++ }
++ ++++++
return 0;
}
KEEP(*(__##name##_of_table_end))
#define CLKSRC_OF_TABLES() OF_TABLE(CONFIG_CLKSRC_OF, clksrc)
+ ++ ++ +#define CLKEVT_OF_TABLES() OF_TABLE(CONFIG_CLKEVT_OF, clkevt)
#define IRQCHIP_OF_MATCH_TABLE() OF_TABLE(CONFIG_IRQCHIP, irqchip)
#define CLK_OF_TABLES() OF_TABLE(CONFIG_COMMON_CLK, clk)
#define IOMMU_OF_TABLES() OF_TABLE(CONFIG_OF_IOMMU, iommu)
*/
#ifndef RO_AFTER_INIT_DATA
#define RO_AFTER_INIT_DATA \
- -- -- - __start_data_ro_after_init = .; \
- - __start_ro_after_init = .; \
+++++++ + VMLINUX_SYMBOL(__start_ro_after_init) = .; \
*(.data..ro_after_init) \
- -- -- - __end_data_ro_after_init = .;
- - __end_ro_after_init = .;
+++++++ + VMLINUX_SYMBOL(__end_ro_after_init) = .;
#endif
/*
CLK_OF_TABLES() \
RESERVEDMEM_OF_TABLES() \
CLKSRC_OF_TABLES() \
+ ++ ++ + CLKEVT_OF_TABLES() \
IOMMU_OF_TABLES() \
CPU_METHOD_OF_TABLES() \
CPUIDLE_METHOD_OF_TABLES() \
IRQCHIP_OF_MATCH_TABLE() \
ACPI_PROBE_TABLE(irqchip) \
ACPI_PROBE_TABLE(clksrc) \
----- --- ACPI_PROBE_TABLE(iort) \
EARLYCON_TABLE()
#define INIT_TEXT \
#include <asm/errno.h>
#ifdef CONFIG_IOMMU_DMA
++++++++ #include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/msi.h>
/* The DMA API isn't _quite_ the whole story, though... */
void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg);
+++++ +++void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
#else
{
}
+++++ +++static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
+++++ +++{
+++++ +++}
+++++ +++
#endif /* CONFIG_IOMMU_DMA */
#endif /* __KERNEL__ */
#endif /* __DMA_IOMMU_H */
#ifndef __LINUX_IOMMU_H
#define __LINUX_IOMMU_H
++++++++ #include <linux/scatterlist.h>
++++++++ #include <linux/device.h>
++++++++ #include <linux/types.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/of.h>
-------- #include <linux/types.h>
-------- #include <linux/scatterlist.h>
-------- #include <trace/events/iommu.h>
#define IOMMU_READ (1 << 0)
#define IOMMU_WRITE (1 << 1)
#define IOMMU_NOEXEC (1 << 3)
#define IOMMU_MMIO (1 << 4) /* e.g. things like MSI doorbells */
/*
---- ---- * This is to make the IOMMU API setup privileged
---- ---- * mapppings accessible by the master only at higher
---- ---- * privileged execution level and inaccessible at
---- ---- * less privileged levels.
++++ ++++ * Where the bus hardware includes a privilege level as part of its access type
++++ ++++ * markings, and certain devices are capable of issuing transactions marked as
++++ ++++ * either 'supervisor' or 'user', the IOMMU_PRIV flag requests that the other
++++ ++++ * given permission flags only apply to accesses at the higher privilege level,
++++ ++++ * and that unprivileged transactions should have as little access as possible.
++++ ++++ * This would usually imply the same permissions as kernel mappings on the CPU,
++++ ++++ * if the IOMMU page table format is equivalent.
*/
#define IOMMU_PRIV (1 << 5)
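As a hedged illustration of the semantics described above (not part of this patch): a caller that knows its device issues 'supervisor'-marked transactions could request a privileged-only mapping as below; the addresses and the demo_map_priv() name are placeholders.

    #include <linux/iommu.h>
    #include <linux/sizes.h>

    /* Map one page readable/writable by privileged transactions only;
     * unprivileged transactions from the device get no access. */
    static int demo_map_priv(struct iommu_domain *domain, unsigned long iova,
                             phys_addr_t paddr)
    {
            return iommu_map(domain, iova, paddr, SZ_4K,
                             IOMMU_READ | IOMMU_WRITE | IOMMU_PRIV);
    }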
};
/* These are the possible reserved region types */
- - -#define IOMMU_RESV_DIRECT (1 << 0)
- - -#define IOMMU_RESV_RESERVED (1 << 1)
- - -#define IOMMU_RESV_MSI (1 << 2)
+ + +enum iommu_resv_type {
+ + + /* Memory regions which must be mapped 1:1 at all times */
+ + + IOMMU_RESV_DIRECT,
+ + + /* Arbitrary "never map this or give it to a device" address ranges */
+ + + IOMMU_RESV_RESERVED,
+ + + /* Hardware MSI region (untranslated) */
+ + + IOMMU_RESV_MSI,
+ + + /* Software-managed MSI translation window */
+ + + IOMMU_RESV_SW_MSI,
+ + +};
/**
* struct iommu_resv_region - descriptor for a reserved memory region
phys_addr_t start;
size_t length;
int prot;
- - - int type;
+ + + enum iommu_resv_type type;
};
#ifdef CONFIG_IOMMU_API
extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
extern int iommu_request_dm_for_dev(struct device *dev);
extern struct iommu_resv_region *
- - -iommu_alloc_resv_region(phys_addr_t start, size_t length, int prot, int type);
+ + +iommu_alloc_resv_region(phys_addr_t start, size_t length, int prot,
+ + + enum iommu_resv_type type);
extern int iommu_get_group_resv_regions(struct iommu_group *group,
struct list_head *head);
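For illustration, a hedged sketch of a driver's ->get_resv_regions() callback using the retyped allocator; the base address, size, protection flags, and the demo_get_resv_regions() name are placeholders, not values taken from this patch.

    #include <linux/iommu.h>
    #include <linux/list.h>
    #include <linux/sizes.h>

    /* Hypothetical callback advertising a hardware MSI doorbell window. */
    static void demo_get_resv_regions(struct device *dev, struct list_head *head)
    {
            struct iommu_resv_region *region;

            region = iommu_alloc_resv_region(0x08000000, SZ_1M,
                                             IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO,
                                             IOMMU_RESV_MSI);
            if (!region)
                    return;

            list_add_tail(&region->list, head);
    }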
phys_addr_t offset, u64 size,
int prot);
extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr);
-------- /**
-------- * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
-------- * @domain: the iommu domain where the fault has happened
-------- * @dev: the device where the fault has happened
-------- * @iova: the faulting address
-------- * @flags: mmu fault flags (e.g. IOMMU_FAULT_READ/IOMMU_FAULT_WRITE/...)
-------- *
-------- * This function should be called by the low-level IOMMU implementations
-------- * whenever IOMMU faults happen, to allow high-level users, that are
-------- * interested in such events, to know about them.
-------- *
-------- * This event may be useful for several possible use cases:
-------- * - mere logging of the event
-------- * - dynamic TLB/PTE loading
-------- * - if restarting of the faulting device is required
-------- *
-------- * Returns 0 on success and an appropriate error code otherwise (if dynamic
-------- * PTE/TLB loading will one day be supported, implementations will be able
-------- * to tell whether it succeeded or not according to this return value).
-------- *
-------- * Specifically, -ENOSYS is returned if a fault handler isn't installed
-------- * (though fault handlers can also return -ENOSYS, in case they want to
-------- * elicit the default behavior of the IOMMU drivers).
-------- */
-------- static inline int report_iommu_fault(struct iommu_domain *domain,
-------- struct device *dev, unsigned long iova, int flags)
-------- {
-------- int ret = -ENOSYS;
--------
-------- /*
-------- * if upper layers showed interest and installed a fault handler,
-------- * invoke it.
-------- */
-------- if (domain->handler)
-------- ret = domain->handler(domain, dev, iova, flags,
-------- domain->handler_token);
--------
-------- trace_io_page_fault(dev, iova, flags);
-------- return ret;
-------- }
++++++++
++++++++ extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
++++++++ unsigned long iova, int flags);
static inline size_t iommu_map_sg(struct iommu_domain *domain,
unsigned long iova, struct scatterlist *sg,