Use kvm_pfn_t, a.k.a. u64, for the local 'pfn' variable when retrieving
a so called "remapped" hva/pfn pair. In theory, the hva could resolve to
a pfn in high memory on a 32-bit kernel.
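For illustration, the type change amounts to widening the local variable (a
minimal sketch; the surrounding hva_to_pfn_remapped() code is unchanged):

  /* was: unsigned long pfn; -- too narrow for a highmem PFN on a 32-bit
   * kernel and for the 64-bit KVM_PFN_ERR_* constants */
  kvm_pfn_t pfn;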
This bug was inadvertently exposed by commit bd2fae8da794 ("KVM: do not
assume PTE is writable after follow_pfn"), which added an error PFN value
to the mix, causing gcc to complain about overflowing the unsigned long.
arch/x86/kvm/../../../virt/kvm/kvm_main.c: In function ‘hva_to_pfn_remapped’:
include/linux/kvm_host.h:89:30: error: conversion from ‘long long unsigned int’
to ‘long unsigned int’ changes value from
‘9218868437227405314’ to ‘2’ [-Werror=overflow]
89 | #define KVM_PFN_ERR_RO_FAULT (KVM_PFN_ERR_MASK + 2)
| ^
virt/kvm/kvm_main.c:1935:9: note: in expansion of macro ‘KVM_PFN_ERR_RO_FAULT’
Cc: stable@vger.kernel.org
Fixes: add6a0cd1c5b ("KVM: MMU: try to fix up page faults before giving up")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210208201940.1258328-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Currently, the follow_pfn function is exported for modules but
follow_pte is not. However, follow_pfn is very easy to misuse,
because it does not provide protections (so most of its callers
assume the page is writable!) and because it returns after having
already unlocked the page table lock.
Provide instead a simplified version of follow_pte that does
not have the pmdpp and range arguments. The older version
survives as follow_invalidate_pte() for use by fs/dax.c.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In order to convert an HVA to a PFN, KVM usually tries to use
the get_user_pages family of functions. This, however, is not
possible for VM_IO vmas; in that case, KVM instead uses follow_pfn.
In doing so, however, KVM loses the information on whether the
PFN is writable. That is usually not a problem because the main
use of VM_IO vmas with KVM is for BARs in PCI device assignment;
however, it is still a bug. To fix it, use follow_pte and check pte_write
while under the protection of the PTE lock. The information can
be used to fail hva_to_pfn_remapped or passed back to the
caller via *writable.
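A minimal sketch of that idea, simplified from hva_to_pfn_remapped() (the
page-fault fixup and retry paths are omitted):

  pte_t *ptep;
  spinlock_t *ptl;
  kvm_pfn_t pfn;
  int r;

  r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
  if (r)
          return r;

  /* pte_write() is checked while the PTE lock is held */
  if (write_fault && !pte_write(*ptep)) {
          pfn = KVM_PFN_ERR_RO_FAULT;
  } else {
          if (writable)
                  *writable = pte_write(*ptep);
          pfn = pte_pfn(*ptep);
  }
  pte_unmap_unlock(ptep, ptl);
  *p_pfn = pfn;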
Usage of follow_pfn was introduced in commit add6a0cd1c5b ("KVM: MMU: try to fix
up page faults before giving up", 2016-07-05); however, even older versions
have the same issue, all the way back to commit 2e2e3738af33 ("KVM:
Handle vma regions with no backing page", 2008-07-20), as they also did
not check whether the PFN was writable.
Fixes: 2e2e3738af33 ("KVM: Handle vma regions with no backing page")
Reported-by: David Stevens <stevensd@google.com>
Cc: 3pvd@google.com
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Walk the list of MMU pages in reverse in kvm_mmu_zap_oldest_mmu_pages().
The list is FIFO, meaning new pages are inserted at the head and thus
the oldest pages are at the tail. Using a "forward" iterator causes KVM
to zap MMU pages that were just added, which obliterates guest
performance once the max number of shadow MMU pages is reached.
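Conceptually the fix is just a change of iteration direction, roughly (sketch;
the zapping and accounting details are left out):

  struct kvm_mmu_page *sp, *tmp;

  /* New pages are inserted at the head of active_mmu_pages, so the
   * oldest pages sit at the tail: walk backwards to recycle those first. */
  list_for_each_entry_safe_reverse(sp, tmp,
                                   &kvm->arch.active_mmu_pages, link) {
          /* zap sp; stop once enough pages have been reclaimed */
  }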
Fixes: 6b82ef2c9cf1 ("KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages")
Reported-by: Zdenek Kaspar <zkaspar82@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210113205030.3481307-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
It has been reported[0] that the Dell XPS 15 L502X exhibits similar
freezing behavior to the other systems[1] on this blacklist. The issue
was exposed by a prior change of mine to automatically load
dell_smm_hwmon on a wider set of XPS models. To fix the regression, add
this model to the blacklist.
Fixes: b8a13e5e8f37 ("hwmon: (dell-smm) Use one DMI match for all XPS models")
Cc: stable@vger.kernel.org
Reported-by: Bob Hepple <bob.hepple@gmail.com>
Tested-by: Bob Hepple <bob.hepple@gmail.com>
Signed-off-by: Thomas Hebb <tommyhebb@gmail.com>
Reviewed-by: Pali Rohár <pali@kernel.org>
Link: https://lore.kernel.org/r/a09eea7616881d40d2db2fb5fa2770dc6166bdae.1611456351.git.tommyhebb@gmail.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
HDA initialization occasionally fails on Tegra210, and the following
print is observed in the boot log. Because of this, probe() fails and
no sound card is registered.
[16.800802] tegra-hda 70030000.hda: no codecs found!
Codecs request a state change and enumeration by the controller. In
failure cases this does not seem to happen, as the STATETS register reads 0.
The problem seems to be related to the HDA codec dependency on SOR
power domain. If it is gated during HDA probe then the failure is
observed. Building Tegra HDA driver into kernel image avoids this
failure but does not completely address the dependency part. Fix this
problem by adding 'power-domains' DT property for Tegra210 HDA. Note
that Tegra186 and Tegra194 HDA do this already.
Fixes: 742af7e7a0a1 ("arm64: tegra: Add Tegra210 support")
Depends-on: 96d1f078ff0 ("arm64: tegra: Add SOR power-domain for Tegra210")
Cc: <stable@vger.kernel.org>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This issue starts from linux-5.10-rc1. I reproduced it on my
Dell Inspiron 7447 with BT adapter 0cf3:e005; the kernel prints
out: "Bluetooth: hci0: don't support firmware rome 0x31010000", and
someone else also reported a similar issue to bugzilla #211571.
I found this is a regression introduced by commit b40f58b97386
("Bluetooth: btusb: Add Qualcomm Bluetooth SoC WCN6855 support"): the
patch assumed that if the high ROM version is not zero, it is an adapter
based on WCN6855, but many old adapters don't need to load rampatch or nvm,
and they have a non-zero high ROM version.
To fix it, let the driver match the rom_version in the
qca_devices_table first; if there is no matching entry, check the
high ROM version, and if it is not zero, assume this adapter is ready
to work without loading rampatch and nvm, as before.
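Roughly, the lookup order becomes the following (an illustrative sketch;
local names follow the driver's QCA setup code):

  for (i = 0; i < ARRAY_SIZE(qca_devices_table); i++) {
          if (ver_rom == qca_devices_table[i].rom_version)
                  info = &qca_devices_table[i];
  }

  if (!info) {
          /* Not in the table: a non-zero high ROM version means the
           * adapter works without rampatch/nvm, so don't reject it. */
          if (ver_rom & ~0xffffU)
                  return 0;

          bt_dev_err(hdev, "don't support firmware rome 0x%x", ver_rom);
          return -ENODEV;
  }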
The HID subsystem allows an "HID report field" to have a different
number of "values" and "usages" when it is allocated. When a field
struct is created, the size of the usage array is guaranteed to be at
least as large as the values array, but it may be larger. This leads to
a potential out-of-bounds write in
__hidinput_change_resolution_multipliers() and an out-of-bounds read in
hidinput_count_leds().
To fix this, let's make sure that both the usage and value arrays are
the same size.
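In other words, the allocation in hid_register_field() ends up sizing both
arrays by the usage count, along the lines of:

  /* usages >= values is already guaranteed, so sizing the value array by
   * 'usages' keeps the two arrays in lockstep and removes the
   * out-of-bounds accesses. */
  field = kzalloc((sizeof(struct hid_field) +
                   usages * sizeof(struct hid_usage) +
                   usages * sizeof(unsigned)), GFP_KERNEL);
  if (!field)
          return NULL;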
Cc: stable@vger.kernel.org
Signed-off-by: Will McVicker <willmcvicker@google.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
However, as a runtime result, we get 2 instead of 1, meaning the dst
register does not contain (u32)-1 in this case. The reason is fairly
straightforward given the 0 test leaves the dst register as-is:
# ./bpftool p d x i 23
0: (b7) r0 = 0
1: (b7) r1 = -1
2: (b4) w2 = -1
3: (16) if w0 == 0x0 goto pc+1
4: (9c) w1 %= w0
5: (b7) r0 = 1
6: (1d) if r1 == r2 goto pc+1
7: (b7) r0 = 2
8: (95) exit
This was originally not an issue given the dst register was marked as
completely unknown (aka 64 bit unknown). However, after 468f6eafa6c4
("bpf: fix 32-bit ALU op verification") the verifier casts the register
output to 32 bit, and hence it becomes 32 bit unknown. Note that for
the case where the src register is unknown, the dst register is marked
64 bit unknown. After the fix, the register is truncated by the runtime
and the test passes:
When alt mode 6 is not available, fall back to the kernel <= 5.7 behavior
of always using alt mode 1.
Prior to kernel 5.8, btusb would always use alt mode 1 for WBS (Wide
Band Speech aka mSBC aka transparent SCO). In commit baac6276c0a9
("Bluetooth: btusb: handle mSBC audio over USB Endpoints") this
was changed to use alt mode 6, which is the recommended mode in the
Bluetooth spec (Specifications of the Bluetooth System, v5.0, Vol 4.B
§2.2.1). However, many if not most BT USB adapters do not support alt
mode 6. In fact, I have been unable to find any which do.
In kernel 5.8, this was changed to use alt mode 6, and if not available,
use alt mode 0. But mode 0 has a zero-byte max packet length and cannot
possibly work. It is just there as a zero-bandwidth dummy mode to
work around a USB flaw that would prevent device enumeration if
insufficient bandwidth were available for the lowest isoc mode
supported.
In effect, WBS was broken for all USB-BT adapters that do not support
alt 6, which appears to be nearly all of them.
Then in commit 461f95f04f19 ("Bluetooth: btusb: USB alternate setting 1 for
WBS") the 5.7 behavior was restored, but only for Realtek adapters.
I've tested a Broadcom BRCM20702A and CSR 8510 adapter, both work with
the 5.7 behavior and do not with the 5.8.
So get rid of the Realtek-specific flag and use the 5.7 behavior for all
adapters as a fallback when alt 6 is not available. This was the
kernel's behavior prior to 5.8 and I can find no adapters for which it
is not correct. And even if there is an adapter for which this does not
work, the current behavior would be to fall back to alt 0, which cannot
possibly work either, and so is no better.
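The resulting selection logic is simply (sketch, assuming the driver's
existing btusb_find_altsetting() helper):

  /* Prefer the spec-recommended alt 6 for mSBC; if the adapter does not
   * expose it, fall back to alt 1 (the pre-5.8 behaviour) rather than
   * the zero-bandwidth alt 0. */
  if (btusb_find_altsetting(data, 6))
          alts = 6;
  else
          alts = 1;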
Al root-caused a new warning from syzbot to the ttyprintk tty driver
returning a write count larger than the data the tty layer actually gave
it. This confused the tty write code mightily, and with the new
iov_iter based code, caused a WARNING in iov_iter_revert().
syzbot correctly bisected the source of the new warning to commit 9bb48c82aced ("tty: implement write_iter"), but the oddity goes back
much further; it just didn't get caught by anything before.
The function uses a goto-based loop, which may lead to an earlier error
getting discarded by a later iteration. Exit this ad-hoc loop when an
error was encountered.
The out-of-memory error path additionally fails to fill a structure
field looked at by xen_blkbk_unmap_prepare() before inspecting the
handle which does get properly set (to BLKBACK_INVALID_HANDLE).
Since the earlier exiting from the ad-hoc loop requires the same field
filling (invalidation) as that on the out-of-memory path, fold both
paths. While doing so, drop the pr_alert(), as extra log messages aren't
going to help the situation (the kernel will log oom conditions already
anyway).
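A sketch of the folded error path (names such as 'num_segs' and 'pages' are
placeholders for the locals of xen_blkbk_map(); the real loop structure
differs):

  if (rc) {
          /* Stop the ad-hoc mapping loop on the first error and
           * invalidate every segment that was not successfully mapped,
           * so xen_blkbk_unmap_prepare() will skip them. */
          for (; seg_idx < num_segs; seg_idx++)
                  pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
          goto out;
  }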
In particular -ENOMEM may come back here, from set_foreign_p2m_mapping().
Don't make problems worse; especially since handling elsewhere (together
with map's status fields now indicating whether a mapping wasn't even
attempted, and hence has to be considered failed) doesn't require this
odd way of dealing with errors.
In particular -ENOMEM may come back here, from set_foreign_p2m_mapping().
Don't make problems worse; especially since handling elsewhere (together
with map's status fields now indicating whether a mapping wasn't even
attempted, and hence has to be considered failed) doesn't require this
odd way of dealing with errors.
In particular -ENOMEM may come back here, from set_foreign_p2m_mapping().
Don't make problems worse; especially since handling elsewhere (together
with map's status fields now indicating whether a mapping wasn't even
attempted, and hence has to be considered failed) doesn't require this
odd way of dealing with errors.
Failure of the kernel part of the mapping operation should also be
indicated as an error to the caller, or else it may assume the
respective kernel VA is okay to access.
Furthermore gnttab_map_refs() failing still requires recording
successfully mapped handles, so they can be unmapped subsequently. This
in turn requires there to be a way to tell full hypercall failure from
partial success - preset map_op status fields such that they won't
"happen" to look as if the operation succeeded.
Also again use GNTST_okay instead of implying its value (zero).
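The presetting amounts to something like the following, done before the map
operations are handed to gnttab_map_refs() (sketch):

  /* A failed hypercall must not leave any slot accidentally looking
   * mapped: only the hypervisor may set a slot to GNTST_okay. */
  for (i = 0; i < count; i++)
          map_ops[i].status = GNTST_general_error;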
We may not skip setting the field in the unmap structure when
GNTMAP_device_map is in use - such an unmap would fail to release the
respective resources (a page ref in the hypervisor). On the other hand, the
field doesn't need setting at all when GNTMAP_device_map is not in use.
To record the value for unmapping, we had also better not use our local
p2m: in particular, after a subsequent change it may not have been updated
for all the batch elements. Instead it can simply be taken from the
respective map's results.
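Illustratively (the map_ops/unmap_ops array names are placeholders for the
driver's own map and unmap operation arrays):

  /* Only a GNTMAP_device_map mapping has a device/bus address to
   * release; take it from the map operation's result rather than from
   * the local p2m. */
  if (map_ops[i].flags & GNTMAP_device_map)
          unmap_ops[i].dev_bus_addr = map_ops[i].dev_bus_addr;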
We can additionally avoid playing this game altogether for the kernel
part of the mappings in (x86) PV mode.
Its sibling (set_foreign_p2m_mapping()) as well as the sibling of its
only caller (gnttab_map_refs()) don't clean up after themselves in case
of error. Higher level callers are expected to do so. However, in order
for that to really clean up any partially set up state, the operation
should not terminate upon encountering an entry in unexpected state. It
is particularly relevant to notice here that set_foreign_p2m_mapping()
would skip setting up a p2m entry if its grant mapping failed, but it
would continue to set up further p2m entries as long as their mappings
succeeded.
Arguably down the road set_foreign_p2m_mapping() may want its page state
related WARN_ON() also converted to an error return.
When building under any of the stage1, noudeb, cross or autopkgtest
profiles, udeb packages are not desired. Stop generating them, and drop
build-dependencies that are used just to create udebs.
Also note that dpkg in hirsute and up automatically builds packages
with a noudeb profile, thus hirsute+ builds will now be much faster.
This is correct and backwards-compatible support for build profiles
and can be backported to any prior release, where by default packages
will be built with udebs.
BugLink: https://bugs.launchpad.net/bugs/1916095
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Seth Forshee [Fri, 19 Feb 2021 21:35:48 +0000 (15:35 -0600)]
UBUNTU: [Config] set CONFIG_PINCTRL_MSM8953=m on armhf generic-lpae
This option is built-in on armhf generic-lpae whereas it is a
module elsewhere. No explanation is given, so this is likely a
mistake. Change it to be a module.
Seth Forshee [Fri, 19 Feb 2021 21:33:09 +0000 (15:33 -0600)]
UBUNTU: [Config] set CONFIG_MIPI_I3C_HCI=m consistently
This option is disabled in a handful of configs without
explanation. Since it can be configured as a module there's
likely no harm in having it enabled as such.
Tejas Upadhyay [Mon, 18 Jan 2021 14:26:04 +0000 (22:26 +0800)]
UBUNTU: SAUCE: drm/i915/gen9_bc : Add TGP PCH support
BugLink: https://bugs.launchpad.net/bugs/1909457
We have TGP PCH support for Tigerlake and Rocketlake. Similarly
now TGP PCH can be used with Cometlake CPU.
Changes since V3 :
- Rebased to top drm-tip commit
- dev_priv replaced with i915 for new API
- Enable default Port B,C,D detection for TGP && GEN9_BC
Changes since V2 :
- IS_COMETLAKE replaced with IS_GEN9_BC
- VBT ddc pin remapping added
- Added dedicated HPD pin and DDC pin handling API
Changes since V1 :
- Matched HPD Pin mapping for PORT C and PORT D of CML CPU.
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com>
Link: https://patchwork.freedesktop.org/patch/412664/
Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@canonical.com>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Lee Shawn C [Mon, 18 Jan 2021 14:26:03 +0000 (22:26 +0800)]
drm/i915/rkl: new rkl ddc map for different PCH
BugLink: https://bugs.launchpad.net/bugs/1909457
After boot into the kernel, the driver configures the ddc pin mapping based
on a predefined table in parse_ddi_port(). Currently the driver configures
the rkl ddc pin mapping from icp_ddc_pin_map[], but this table gives an
incorrect gmbus port number, which causes HDMI to not work.
Refer to commit cd0a89527d06 ("drm/i915/rkl: Add DDC pin mapping").
Create two ddc pin tables for rkl with TGP and CMP PCH. Then HDMI
works properly on rkl.
v2: update patch based on latest dinq branch.
v3: update ddc table for RKL+TGP sku.
RKL+CNP sku will load cnp_ddc_pin_map[] setting.
v4: modify the if/else judgment to avoid nesting.
v5: fix typo in v4.
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Aditya Swarup <aditya.swarup@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Cooper Chiou <cooper.chiou@intel.com>
Cc: Khaled Almahallawy <khaled.almahallawy@intel.com>
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2577
Signed-off-by: Lee Shawn C <shawn.c.lee@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201117142629.28729-1-shawn.c.lee@intel.com
(cherry picked from commit 956aee8fa366cfd9d693aa6e7ef822b775980c01 drm-next)
Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@canonical.com>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Previously we could simply copy scripts/module-common.lds from
the source tree, but as of 596b0474d3d9 "kbuild: preprocess
module linker script" in 5.10 the module linker script requires
preprocessing. Therefore we must get the script produced by the
kernel build.
Grab the linker script from the linux-headers package. For l-r-m
we can copy it from the installed location. During the kernel
build we can get it from the copy of the linux-headers contents
used for the dkms build.
Paolo Pisati [Thu, 18 Feb 2021 14:58:21 +0000 (15:58 +0100)]
UBUNTU: SAUCE: selftests: memory-hotplug: bump timeout to 10min
$ sudo make -C tools/testing/selftests/memory-hotplug run_tests
TAP version 13
1..1
...
15:11:09 DEBUG| [stdout] not ok 1 selftests: memory-hotplug: mem-on-off-test.sh # TIMEOUT 45 seconds
The memory-hotplug selftest can take up to several minutes, depending on memory
size and cpu speed of the testbench, so bump timeout to 10 minutes.
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
UBUNTU: [Config] add Canonical Livepatch Service key to SYSTEM_TRUSTED_KEYS
Add Canonical Livepatch Service key to SYSTEM_TRUSTED_KEYS, such that
livepatch modules signed by Canonical are trusted out of the box, on
locked-down secureboot systems.
BugLink: https://bugs.launchpad.net/bugs/1898716
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
[apw@canonical.com: move certification to cert framework.]
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Andy Whitcroft [Thu, 18 Feb 2021 16:17:51 +0000 (16:17 +0000)]
UBUNTU: [Config] enable CONFIG_MODVERSIONS=y
In order to support the livepatch key we need to ensure we do not allow
that key to load modules which are not for the specific kernel. From
the documentation on kernel module signing:
If you use the same private key to sign modules for multiple kernel
configurations, you must ensure that the module version information is
sufficient to prevent loading a module into a different kernel. Either
set ``CONFIG_MODVERSIONS=y`` or ensure that each configuration has a
different kernel release string by changing ``EXTRAVERSION`` or
``CONFIG_LOCALVERSION``.
Seth Forshee [Fri, 5 Feb 2021 13:32:54 +0000 (07:32 -0600)]
UBUNTU: [Config] Set CONFIG_TMPFS_INODE64=n for s390x
Our testing turned up the following behavior on s390x with 5.9+:
# mount -t tmpfs nodev test
# mount -o remount,rw test
mount: /home/ubuntu/test: mount point not mounted or bad option.
# dmesg | tail -n2
[ 3597.759604] tmpfs: Cannot use inode64 with <64bit inums in kernel
This is because we have CONFIG_TMPFS_INODE64=y, but ino_t is only
32-bit on s390. tmpfs does not handle this situation well. It
sets the inode number size to 64-bit on mount without checking the size
of ino_t. It then prints "inode64" in the mount options, so the
remount passes this option. At this point it does do a check of
the ino_t size, and refuses to mount.
Oddly, aside from the remount issue it doesn't appear that this
should cause any big problems. inode numbers might wrap, but
there's already logic in place to do that anyway for the inode32
mount option, and I can't see that the behavior will differ at
all. But upstream needs to handle this better, so let's disable
64-bit inodes in tmpfs for s390x until there's a fix.
Seth Forshee [Fri, 5 Feb 2021 18:10:37 +0000 (12:10 -0600)]
UBUNTU: SAUCE: tmpfs: Don't use 64-bit inodes by default with 32-bit ino_t
Currently there seems to be an assumption in tmpfs that 64-bit
architectures also have a 64-bit ino_t. This is not true; s390 at
least has a 32-bit ino_t. With CONFIG_TMPFS_INODE64=y tmpfs
mounts will get 64-bit inode numbers and display "inode64" in the
mount options, but passing the "inode64" mount option will fail.
This leads to the following behavior:
# mkdir mnt
# mount -t tmpfs nodev mnt
# mount -o remount,rw mnt
mount: /home/ubuntu/mnt: mount point not mounted or bad option.
This happens because mount sees "inode64" in the mount options and thus
passes it in the options for the remount.
Ideally CONFIG_TMPFS_INODE64 would depend on sizeof(ino_t) < 8,
but I don't think it's possible to test for this (potentially
CONFIG_ARCH_HAS_64BIT_INO_T or similar could be added, but I'm
not sure whether or not that is wanted). So fix this by simply
refusing to honor the CONFIG_TMPFS_INODE64 setting when
sizeof(ino_t) < 8.
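The workaround boils down to gating the option on the width of ino_t, a
sketch of the idea (not the exact hunk) being:

  /* Only honour CONFIG_TMPFS_INODE64 when ino_t can actually hold a
   * 64-bit inode number; otherwise quietly stay with 32-bit inums so a
   * later remount does not trip over an "inode64" option it cannot obey. */
  sbinfo->full_inums = IS_ENABLED(CONFIG_TMPFS_INODE64) &&
                       sizeof(ino_t) >= sizeof(u64);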
Signed-off-by: You-Sheng Yang <vicamo.yang@canonical.com>
(cherry picked from https://patchwork.kernel.org/project/linux-input/patch/20210204083315.122952-1-vicamo.yang@canonical.com/)
Signed-off-by: You-Sheng Yang <vicamo.yang@canonical.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Kai-Heng Feng [Thu, 4 Feb 2021 13:01:29 +0000 (21:01 +0800)]
r8169: Add support for another RTL8168FP
BugLink: https://bugs.launchpad.net/bugs/1914604
According to the vendor driver, the new chip with XID 0x54b is
essentially the same as the one with XID 0x54a, but it doesn't need the
firmware.
So add support accordingly.
(backported from commit e6d6ca6e12049dfbff6ac8b029678d2d2c55c34f linux-next)
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Andrea Righi [Thu, 14 Jan 2021 11:06:12 +0000 (12:06 +0100)]
UBUNTU: SAUCE: x86/entry: build thunk_$(BITS) only if CONFIG_PREEMPTION=y
With CONFIG_PREEMPTION disabled, arch/x86/entry/thunk_64.o is just an
empty object file.
With the newer binutils (tested with 2.35.90.20210113-1ubuntu1) the GNU
assembler doesn't generate a symbol table for empty object files and
objtool fails with the following error when a valid symbol table cannot
be found:
arch/x86/entry/thunk_64.o: warning: objtool: missing symbol table
To prevent this from happening, build thunk_$(BITS).o only if
CONFIG_PREEMPTION is enabled.
BugLink: https://bugs.launchpad.net/bugs/1911359
Fixes: 320100a5ffe5 ("x86/entry: Remove the TRACE_IRQS cruft")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Seth Forshee [Fri, 29 Jan 2021 19:13:45 +0000 (13:13 -0600)]
UBUNTU: [Packaging] Don't disable CONFIG_DEBUG_INFO in headers packages
This config is enabled during the kernel build (though modules
are stripped), but we disable it in the config installed by our
headers packages so that dkms modules do not have debug
information. With 5.11 this is causing external modules to fail
to load, and the default behavior of dkms is to strip modules,
so it's unnecessary to disable CONFIG_DEBUG_INFO in the installed
config file. Stop disabling it so that external modules can load.
Chin-Yen Lee [Tue, 26 Jan 2021 07:49:26 +0000 (15:49 +0800)]
rtw88: reduce the log level for failure of tx report
BugLink: https://bugs.launchpad.net/bugs/1913263
Sometimes the driver does not get a tx report from firmware because the wifi
environment is too noisy to get an ack from the AP about a TX frame,
or the firmware is too busy to report to the driver in the estimated time.
But the condition will not affect wifi function or throughput,
so we reduce the log level to rtw_debug instead of a scary backtrace.
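The change is essentially a log-level demotion, along the lines of (sketch;
the message text is illustrative):

  /* A tx report that never arrives is harmless, so note it at debug
   * level instead of emitting a warning with a backtrace. */
  rtw_dbg(rtwdev, RTW_DBG_TX, "tx report not received from firmware\n");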
Seth Forshee [Thu, 28 Jan 2021 15:30:09 +0000 (09:30 -0600)]
UBUNTU: SAUCE: selftests/seccomp: Accept any valid fd in user_notification_addfd
This test expects fds to have specific values, which works fine
when the test is run standalone. However, the kselftest runner
consumes a couple of extra fds for redirection when running
tests, so the test fails when run via kselftest.
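Conceptually the assertions become (sketch; 'fd' stands for a descriptor
returned during the test):

  /* Any valid descriptor is fine: the kselftest runner may already hold
   * extra fds for its own output redirection, so the exact value cannot
   * be relied upon. */
  EXPECT_GE(fd, 0);       /* was: EXPECT_EQ(fd, <hard-coded value>) */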
Andy Whitcroft [Thu, 21 Jan 2021 12:07:46 +0000 (12:07 +0000)]
UBUNTU: [Packaging] nvidia -- use dkms-versions to define versions built
Currently each and every Nvidia version added or removed from
dkms-versions requires a pair of corresponding changes to debian/rules
and debian/rules.d/2-binary-arch.mk. Switch to using the listed versions
in debian/dkms-versions to generate the rules we need during build.
BugLink: https://bugs.launchpad.net/bugs/1912803
Acked-by: Kamal Mostafa <kamal@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/1865402
The ODM asks us to use the get_display_mode command to confirm the scalar's
behavior, and Windows uses this command, too.
To align the behavior with Windows, remove the get_scalar_status command and
replace it with get_display_mode.
Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@canonical.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Kai-Heng Feng [Thu, 21 Jan 2021 08:49:02 +0000 (16:49 +0800)]
thermal: intel: pch: Fix unexpected shutdown at critical temperature
BugLink: https://bugs.launchpad.net/bugs/1906168
Like the previous patch, the intel_pch_thermal device is not in the ACPI
ThermalZone namespace, so a critical trip doesn't mean shutdown.
Override the default .critical callback to prevent a surprising thermal
shutdown.
Kai-Heng Feng [Thu, 21 Jan 2021 08:49:01 +0000 (16:49 +0800)]
thermal: int340x: Fix unexpected shutdown at critical temperature
BugLink: https://bugs.launchpad.net/bugs/1906168
We are seeing thermal shutdown on Intel based mobile workstations; the
shutdown happens during the handling of the first trip point in
thermal_zone_device_register():
kernel: thermal thermal_zone15: critical temperature reached (101 C), shutting down
However, we shouldn't do a thermal shutdown here, since
1) We may want to use a dedicated daemon, Intel's thermald in this case,
to handle thermal shutdown.
2) For ACPI based system, _CRT doesn't mean shutdown unless it's inside
ThermalZone namespace. ACPI Spec, 11.4.4 _CRT (Critical Temperature):
"... If this object it present under a device, the device’s driver
evaluates this object to determine the device’s critical cooling
temperature trip point. This value may then be used by the device’s
driver to program an internal device temperature sensor trip point."
So a "critical trip" here merely means we should take a more aggressive
cooling method.
As int340x device isn't present under ACPI ThermalZone, override the
default .critical callback to prevent surprising thermal shutdown.
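The override is essentially a handler that only logs, close to the following
sketch:

  static void int340x_thermal_critical(struct thermal_zone_device *zone)
  {
          /* _CRT under a device is just a sensor trip point, so report
           * it instead of letting the default handler power the machine
           * off; userspace (e.g. thermald) decides what to do. */
          dev_dbg(&zone->device, "%s: critical temperature reached\n",
                  zone->type);
  }

  /* wired up in the zone's ops:  .critical = int340x_thermal_critical, */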