Lucas De Marchi [Wed, 27 Sep 2023 19:38:59 +0000 (12:38 -0700)]
drm/xe/pat: Keep track of relevant indexes
Some of the PAT entries are relevant for internal driver use, which
varies per platform. Let the PAT early initialization set what they
should point to so the rest of the driver can use them where needed.
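As a rough illustration of the idea, the device can record the indexes
picked by the per-platform early init so the rest of the driver queries
them instead of hard-coding values. This sketch uses assumed names and
XeLP-style values, not the exact xe code:

        enum xe_cache_level {
                XE_CACHE_NONE,
                XE_CACHE_WT,
                XE_CACHE_WB,
                __XE_CACHE_LEVEL_COUNT,
        };

        struct xe_pat_info {
                /* PAT index to use for each internal caching mode */
                u16 idx[__XE_CACHE_LEVEL_COUNT];
        };

        static void xelp_pat_init_early(struct xe_pat_info *pat)
        {
                /* XeLP-style table: entry 0 is WB, 2 is WT, 3 is UC */
                pat->idx[XE_CACHE_NONE] = 3;
                pat->idx[XE_CACHE_WT] = 2;
                pat->idx[XE_CACHE_WB] = 0;
        }

PTE/GGTT code can then look up pat->idx[XE_CACHE_WB] and friends
regardless of the platform.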
Lucas De Marchi [Wed, 27 Sep 2023 19:38:58 +0000 (12:38 -0700)]
drm/xe/pat: Prefer the arch/IP names
Both DG2 and PVC are derived from XeHP, but DG2 should not really
re-use something introduced by PVC, so it's odd to have DG2 re-using the
PVC programming for PAT. Let's prefer using the architecture and/or IP
names.
Lucas De Marchi [Wed, 27 Sep 2023 19:38:56 +0000 (12:38 -0700)]
drm/xe: Use vfunc to initialize PAT
Split the PAT initialization between SW-only and HW. The _early() only
sets up the ops and data structure that are used later to program the
tables. This allows the PAT to be easily extended to other platforms.
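A minimal sketch of the split, with illustrative names (the real ops and
table layout in xe_pat.c may differ):

        struct xe_pat_ops {
                void (*program)(struct xe_gt *gt, const u32 *table,
                                int n_entries);
        };

        static const u32 xelp_pat_table[] = { 0x0 /* WB */, 0x1 /* WC */,
                                              0x2 /* WT */, 0x3 /* UC */ };

        static void xelp_program_pat(struct xe_gt *gt, const u32 *table,
                                     int n_entries)
        {
                for (int i = 0; i < n_entries; i++)
                        xe_mmio_write32(gt, XELP_PAT_INDEX(i), table[i]);
        }

        static const struct xe_pat_ops xelp_pat_ops = {
                .program = xelp_program_pat,
        };

        void xe_pat_init_early(struct xe_device *xe)
        {
                /* SW-only: pick per-platform ops and table, no MMIO here */
                xe->pat.ops = &xelp_pat_ops;
                xe->pat.table = xelp_pat_table;
                xe->pat.n_entries = ARRAY_SIZE(xelp_pat_table);
        }

        void xe_pat_init(struct xe_gt *gt)
        {
                struct xe_device *xe = gt_to_xe(gt);

                /* HW part: program the table through the vfunc */
                xe->pat.ops->program(gt, xe->pat.table, xe->pat.n_entries);
        }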
Lucas De Marchi [Wed, 27 Sep 2023 19:38:55 +0000 (12:38 -0700)]
drm/xe/migrate: Do not hand-encode pte
Instead of encoding the pte, call a new vfunc from xe_vm to handle that.
The encoding may not be the same on every platform, so keeping it in one
place helps to better support them.
Lucas De Marchi [Wed, 27 Sep 2023 19:38:54 +0000 (12:38 -0700)]
drm/xe: Use vfunc for pte/pde ppgtt encoding
Move the function to encode pte/pde to be vfuncs inside struct xe_vm.
This will allow easily extending support to platforms that don't have a
compatible encoding.
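Schematically (signatures here are assumptions for illustration; the
real vfuncs live in the xe_vm code):

        struct xe_vm_pt_ops {
                u64 (*pte_encode)(struct xe_bo *bo, u64 offset, u32 pt_level);
                u64 (*pde_encode)(struct xe_bo *bo, u64 offset);
        };

        static u64 xelp_pte_encode(struct xe_bo *bo, u64 offset, u32 pt_level)
        {
                u64 pte = xe_bo_addr(bo, offset, XE_PAGE_SIZE) |
                          XE_PAGE_PRESENT | XE_PAGE_RW;

                if (pt_level == 1)
                        pte |= XE_PDE_PS_2M;    /* 2M pages at level 1 */

                return pte;
        }

xe_vm_create() then sets vm->pt_ops to the platform's ops, and callers
use vm->pt_ops->pte_encode(...) instead of open-coding the bit layout,
so a platform with a different layout only has to supply different ops.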
Lucas De Marchi [Wed, 27 Sep 2023 19:38:53 +0000 (12:38 -0700)]
drm/xe: Remove check for vma == NULL
vma at this point can never be NULL as otherwise it would crash earlier
in the only caller, xe_pt_stage_bind_entry(). Remove the extra check and
avoid adding and removing the bits from the pte.
Lucas De Marchi [Wed, 27 Sep 2023 19:38:52 +0000 (12:38 -0700)]
drm/xe: Normalize pte/pde encoding
Split out the functions that do only part of the pde/pte encoding and
that can be called from the different places. This normalizes how
pde/pte are encoded so they can be moved elsewhere in a subsequent
change.
xe_pte_encode() was calling __pte_encode() with a NULL vma, which is the
opposite of what xe_pt_stage_bind_entry() does. Stop passing a NULL vma
and just split another function that deals with a vma rather than a bo.
Matt Roper [Wed, 27 Sep 2023 20:51:44 +0000 (13:51 -0700)]
drm/xe: Infer service copy functionality from engine list
On platforms with multiple BCS engines (i.e., PVC and Xe2), not all BCS
engines are created equal. The BCS0 engine is what the specs refer to
as a "resource copy engine," which supports the platform's full set of
copy/fill instructions. In contrast, the non-BCS0 "service copy" engines
are more streamlined and only support a subset of the GPU instructions
supported by the resource copy engine. Platforms with both types of
copy engines always support the MEM_COPY and MEM_SET instructions which
can be used for simple copy and fill operations on either type of BCS
engine. Since the simple MEM_SET instruction meets the needs of Xe's
migrate code (and since the more elaborate XY_FAST_COLOR_BLT instruction
isn't available to use on service copy engines), we always prefer to use
MEM_SET for clearing buffers on our newer platforms.
We've been using a 'has_link_copy_engine' feature flag to keep track of
which platforms should use MEM_SET for fills. However a feature flag
like this is unnecessary since we can already derive the presence of
service copy engines (and in turn the MEM_SET instruction) just by
looking at the platform's pre-fusing engine list. Utilizing the engine
list for this also avoids mistakes like the one we made on Xe2, where we
forgot to set the feature flag in the IP definition.
For clarity, "service copy" is a general term that covers any blitter
engines that support a limited subset of the overall blitter instruction
set (in practice this is any non-BCS0 blitter engine). The "link copy
engines" introduced on PVC and the "paging copy engine" present in Xe2
are both instances of service copy engines.
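The check then reduces to something like the following sketch (the mask
macro is an assumption; xe's engine enum does define BCS0-BCS8):

        /* Illustrative: BCS0..BCS8 occupy consecutive engine-mask bits */
        #define BCS_ENGINE_MASK GENMASK_ULL(XE_HW_ENGINE_BCS8, \
                                            XE_HW_ENGINE_BCS0)

        static bool gt_has_service_copy(const struct xe_gt *gt)
        {
                /*
                 * Any copy engine besides BCS0 in the pre-fusing engine
                 * list implies service copy engines, and with them the
                 * MEM_COPY/MEM_SET instructions.
                 */
                return gt->info.engine_mask & BCS_ENGINE_MASK &
                       ~BIT_ULL(XE_HW_ENGINE_BCS0);
        }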
drm/xe/irq: Clear GFX_MSTR_IRQ as part of IRQ reset
Starting with Xe_LP+, GFX_MSTR_IRQ contains status bits that have W1C
behavior. If we do not properly reset them, we would miss delivery of
interrupts if a pending bit is set when enabling IRQs.
As an example, the display part of our probe routine contains paths
where we wait for vblank interrupts. If a display interrupt was already
pending when enabling IRQs, we would time out waiting for the vblank.
That in fact happened recently when modprobing Xe on a Lunar Lake with a
specific configuration; and that's how we found out we were missing this
step in the IRQ enabling logic.
Fix the issue by clearing GFX_MSTR_IRQ as part of the IRQ reset (see
the sketch after the version notes below).
v2:
- Make resetting GFX_MSTR_IRQ be the last step to avoid bit
re-latching. (Ville)
v3:
- Swap nesting order: guard loop with the IP version check instead of
doing the check at each iteration. (Lucas)
v4:
- Add braces for the "if" statement guarding the loop to make the
compiler happy. (Gustavo)
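A sketch of the resulting ordering (helper names and the version check
are approximations; the real code is in xe_irq.c):

        static void xe_irq_reset(struct xe_device *xe)
        {
                struct xe_tile *tile;
                u8 id;

                /* ... mask/clear the per-unit GT and display IRQs first ... */

                /*
                 * Clear the top-level register last (v2) so a unit
                 * interrupt firing mid-reset cannot re-latch a W1C status
                 * bit behind our back. Per v3, the IP version check guards
                 * the whole loop.
                 */
                if (GRAPHICS_VERx100(xe) >= 1210) {     /* ~Xe_LP+ onwards */
                        for_each_tile(tile, xe, id)
                                xe_mmio_write32(tile->primary_gt,
                                                GFX_MSTR_IRQ, ~0);
                }
        }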
BSpec: 50875, 54028, 62357
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20230926221914.106843-2-gustavo.sousa@intel.com
Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Ensure alignment with PAGE_SIZE for the size parameter
passed to __xe_bo_create_locked()
v2: move size alignment under else condition (Lucas)
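The fix boils down to something like this sketch (the helper is
hypothetical; XE_BO_CREATE_VRAM_MASK is the flag family xe uses for VRAM
placements):

        static size_t xe_bo_aligned_size(size_t size, u32 bo_flags,
                                         u32 vram_min_page_size)
        {
                /* VRAM placements align to the region's min page size */
                if (bo_flags & XE_BO_CREATE_VRAM_MASK)
                        return ALIGN(size, vram_min_page_size);

                /* everything else rounds up to whole CPU pages (v2 else) */
                return ALIGN(size, PAGE_SIZE);
        }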
Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20230920213259.3458968-1-pallavi.mishra@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Lucas De Marchi [Fri, 22 Sep 2023 17:43:20 +0000 (10:43 -0700)]
drm/xe: Accept a const xe device
Depending on the context, it's preferred to have a const pointer to make
sure nothing is modified underneath. The assert macros only ever read
data from xe/tile/gt for printing, so they can be made const by default.
Use the newly added drm_print_memory_stats helper to show memory
utilisation of our objects in drm/driver specific fdinfo output.
To collect the stats we walk the per memory regions object lists
and accumulate object size into the respective drm_memory_stats
categories.
Objects with multiple possible placements are reported in multiple
regions for total and shared sizes, while other categories are
counted only for the currently active region.
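Roughly, per region (a sketch with assumed list/field names;
drm_print_memory_stats() and struct drm_memory_stats are the real DRM
core helpers):

        static void show_meminfo(struct xe_drm_client *client,
                                 struct drm_printer *p)
        {
                struct drm_memory_stats stats = {};
                struct xe_bo *bo;

                spin_lock(&client->bos_lock);
                list_for_each_entry(bo, &client->bos_list, client_link) {
                        if (bo->ttm.base.handle_count > 1)
                                stats.shared += bo->size;
                        else
                                stats.private += bo->size;
                        if (xe_bo_is_vram(bo))  /* resident in this region? */
                                stats.resident += bo->size;
                }
                spin_unlock(&client->bos_lock);

                drm_print_memory_stats(p, &stats, DRM_GEM_OBJECT_RESIDENT,
                                       "vram0");
        }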
V4:
- Remove rcu lock - Auld/Thomas
- take refcnt only if it's non-zero - Auld
- DMA_RESV_USAGE_BOOKKEEP covers all fences - Auld
- convert to xe_bo for public objects
V3:
- don't use xe_bo_get/put, not needed
- use designated initializer - Jani
- use list_for_each_entry_rcu
- Fix Checkpatch err - CI
V2:
- Use static initializer for mem_type - Himal/Jani
In order to show per-client memory consumption, we need tracking
support APIs called at every bo creation and removal. Add those APIs
here, with tracking calls added wherever applicable.
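A sketch of the pair (close in spirit to the real xe_drm_client.c,
which compiles these under CONFIG_PROC_FS):

        void xe_drm_client_add_bo(struct xe_drm_client *client,
                                  struct xe_bo *bo)
        {
                bo->client = xe_drm_client_get(client);

                spin_lock(&client->bos_lock);   /* plain spin_lock per V4 */
                list_add_tail_rcu(&bo->client_link, &client->bos_list);
                spin_unlock(&client->bos_lock);
        }

        void xe_drm_client_remove_bo(struct xe_bo *bo)  /* void per V2/V3 */
        {
                struct xe_drm_client *client = bo->client;

                spin_lock(&client->bos_lock);
                list_del_rcu(&bo->client_link);
                spin_unlock(&client->bos_lock);

                xe_drm_client_put(client);
        }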
V5:
- Rebase
V4:
- remove client bo before vm_put
- spin_lock_irqsave not required - Auld
V3:
- update .h so xe_drm_client_remove_bo returns void
- protect xe_drm_client_remove_bo under CONFIG_PROC_FS check - Himal
- Fixed Checkpatch error - CI
V2:
- make xe_drm_client_remove_bo return void - Himal
drm/xe: Interface xe drm client with fdinfo interface
The DRM core has recently introduced an fdinfo interface to show memory
stats of individual drm clients. Let's hook the xe drm client up to the
fdinfo interface.
V2:
- cover call to xe_drm_client_fdinfo under PROC_FS
drm/xe: Add child contexts to the GuC context lookup
The CAT_ERROR message from the GuC provides the guc id of the context
that caused the problem, which can be a child context. We therefore
need to be able to match that id to the exec_queue that owns it, which
we do by adding child contexts to the context lookup.
While at it, fix the error path of the guc id allocation code to
correctly free the ids allocated for parallel queues.
v2: rebase on s/XE_WARN_ON/xe_assert
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/590
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Wed, 13 Sep 2023 23:14:17 +0000 (16:14 -0700)]
drm/xe/wa: Apply tile workarounds at probe/resume
Although the vast majority of workarounds the driver needs to implement
are either GT-based or display-based, there are occasionally workarounds
that reside outside those parts of the hardware (i.e., they target
registers in the sgunit/soc); we can consider these to be "tile"
workarounds since there will be an instance of these registers per tile.
The registers in question should only lose their values during a
function-level reset, so they only need to be applied during probe and
resume; the registers will not be affected by GT/engine resets.
Tile workarounds are rare (there's only one, 22010954014, that's
relevant to Xe at the moment), so it's probably not worth updating the
xe_rtp design to handle tile-level workarounds yet, although we may want
to reconsider if/when more of these show up on future platforms.
Thomas Hellström [Wed, 20 Sep 2023 09:50:01 +0000 (11:50 +0200)]
drm/xe: Disallow pinning dma-bufs in VRAM
For now only support pinning in TT memory, for two reasons:
1) Avoid pinning in a placement not accessible to some importers.
2) Pinning in VRAM requires PIN accounting which is a to-do.
With the GPUVA conversion, the xe_bo::vmas member was replaced with
drm_gem_object::gpuva.list; however, a couple of usage instances were
left using the old member, most notably the pipelined fence
enable_signaling.
Remove the xe_bo::vmas member completely, fix the remaining usage
instances, and also enable the pipelined fence enable_signaling even
for faulting VMs, since we actually wait for bind fences to complete.
When testing a new binary and/or debugging binary-related issues, it is
useful to have the option to change which binary is loaded without
having to update and re-compile the kernel. To support this option, this
patch adds 2 new modparams to override the FW path for GuC and HuC. The
HuC modparam can also be set to an empty string to disable HuC loading.
Note that those modparams only take effect on platforms where we
already have a default FW, so we know FW loading is supported and the
kernel isn't going to attempt a load from an undefined path.
v2: simplify comment (John),
rebase on s/guc_submission_enabled/uc_enabled
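The overrides look roughly like this (modelled on xe_module.c; exact
names and permissions are assumptions):

        static struct {
                char *guc_firmware_path;
                char *huc_firmware_path;
        } xe_modparam;

        module_param_named_unsafe(guc_firmware_path,
                                  xe_modparam.guc_firmware_path, charp, 0400);
        MODULE_PARM_DESC(guc_firmware_path,
                         "GuC FW override path (platforms with a default GuC FW only)");

        module_param_named_unsafe(huc_firmware_path,
                                  xe_modparam.huc_firmware_path, charp, 0400);
        MODULE_PARM_DESC(huc_firmware_path,
                         "HuC FW override path (empty string disables HuC loading)");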
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
There are a few problems with how the firmware status is handled:
1) the HuC is moved to "disabled" instead of "not supported"
2) the status is left uninitialized instead of "disabled" when the
modparam is used to disable support
3) due to #1, a number of checks are done against "disabled" instead of
the appropriate status.
Address all of those by making sure to follow the appropriate state
transition and checking against the required state.
v2: rebase on s/guc_submission_enabled/uc_enabled/
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drm/xe/uc: Rename guc_submission_enabled() to uc_enabled()
The guc_submission_enabled() function is being used as a boolean toggle
for all firmwares and all related features, not just GuC submission. We
could add additional flags/functions to distinguish and allow different
use-cases (e.g. loading HuC but not using GuC submission), but given
that not using GuC is a debug-only scenario, having a global switch for
all FWs is enough. However, we want to make it clear that this switch
turns off everything, so rename it to uc_enabled().
v2: rebase on s/XE_WARN_ON/xe_assert
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
v2:
Store the last known value when the device is awake, return that while
the GT is suspended, and then update the driver copy when read while
awake.
v3:
1. drop init_samples, as storing counters before going to suspend should
be sufficient.
2. ported the "drm/i915/pmu: Make PMU sample array two-dimensional" and
dropped helpers to store and read samples.
3. use xe_device_mem_access_get_if_ongoing to check if device is active
before reading the OA registers.
4. dropped format attr as no longer needed
5. introduce xe_pmu_suspend to call engine_group_busyness_store
6. few other nits.
v4: minor nits.
v5: take forcewake when accessing the OAG registers
v6:
1. drop engine_busyness_sample_type
2. update UAPI documentation
v7:
1. update UAPI documentation
2. drop MEDIA_GT specific change for media busyness counter.
drm/xe: Use spinlock in forcewake instead of mutex
In PMU we need to access certain registers which fall under the GT
power domain, and for that we need to take forcewake. But the PMU runs
in atomic context and cannot make any sleeping calls.
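Schematically (a sketch; the real xe_force_wake code tracks multiple
domains and references, and the names below are illustrative):

        void xe_force_wake_get(struct xe_force_wake *fw)
        {
                unsigned long flags;

                spin_lock_irqsave(&fw->lock, flags); /* was: mutex_lock() */
                if (!fw->awake_ref++)
                        fw_domain_wake_and_wait(fw); /* polls ack, no sleep */
                spin_unlock_irqrestore(&fw->lock, flags);
        }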
drm/xe/guc: Switch to major-only GuC FW tracking for MTL
Newer HuC binaries for MTL (8.5.1+) require GuC 70.7 or newer, so we
need to move on from 70.6.4. Given that the MTL GuC uses major-only
version matching in i915, we can do the same here instead of just
bumping the version (and having to push the versioned binaries,
because they're not there already for i915).
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drm/xe: Use Xe assert macros instead of XE_WARN_ON macro
The XE_WARN_ON macro maps to WARN_ON, which is not justified in many
cases where only a simple debug check is needed. Replace the use of the
XE_WARN_ON macro with the new xe_assert macros, which rely on drm_*.
These take a struct drm_device argument, which is one of the main
changes in this commit. The other main change is that the condition is
reversed: with XE_WARN_ON a message is displayed if the condition is
true, whereas with xe_assert it is displayed if the condition is false.
v2:
- Rebase
- Keep WARN splats in xe_wopcm.c (Matt Roper)
Michal Wajdeczko [Tue, 12 Sep 2023 18:29:56 +0000 (20:29 +0200)]
drm/xe: Introduce Xe assert macros
As we are moving away from the controversial XE_BUG_ON macro,
relying just on WARN_ON or drm_err does not cover the cases
where we want to annotate functions with additional detailed
debug checks to assert that all prerequisites are satisfied,
without paying footprint or performance penalty on non-debug
builds, where all misuses introduced during code integration
were already fixed.
Introduce family of Xe assert macros that try to follow classic
assert() utility and can be compiled out on non-debug builds.
Macros are based on drm_WARN but, unlike their origin, disallow use in
expressions since we will compile that code out. As we are operating on
the xe pointers, we can print additional information about the device,
like the tile or GT identifier, that is not available from a generic
WARN report.
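The shape of the macros, as a sketch (the real ones in xe_assert.h
carry more device info, e.g. xe_gt_assert() prepends the tile/GT
identifier):

        #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
        #define xe_assert(xe, condition)                                \
                do {                                                    \
                        if (unlikely(!(condition)))                     \
                                drm_WARN(&(xe)->drm, true,              \
                                         "Assertion `%s` failed!\n",    \
                                         #condition);                   \
                } while (0)
        #else
        /* statement form on purpose: keeps it unusable in expressions */
        #define xe_assert(xe, condition) do { } while (0)
        #endif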
Matthew Brost [Mon, 11 Sep 2023 21:10:32 +0000 (14:10 -0700)]
drm/xe: Fix fence reservation accounting
Both execs and the preempt rebind worker can issue rebinds. Rebinds
require a fence, per tile, inserted into dma-resv slots of the VM and
BO (if external). The fence reservation accounting did not take into
account the number of fences required for rebinds; fix this.
v2: Rebase
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reported-by: Christopher Snowhill <kode54@gmail.com>
Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/518
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drm/xe: Convert remaining instances of ttm_eu_reserve_buffers to drm_exec
The VM_BIND functionality and vma destruction were locking potentially
multiple dma_resv objects using the ttm_eu_reserve_buffers() function.
Rework those to use the drm_exec helper, taking care that any calls to
xe_bo_validate() end up inside an unsealed locking transaction.
v4:
- Remove an unbalanced xe_bo_put() (igt and Matthew Brost)
v5:
- Rebase conflict
drm/xe: Rework xe_exec and the VM rebind worker to use the drm_exec helper
Replace the calls to ttm_eu_reserve_buffers() by using the drm_exec
helper instead. Also make sure the locking loop covers any calls to
xe_bo_validate() / ttm_bo_validate() so that these function calls may
easily benefit from being called from within an unsealed locking
transaction and may thus perform blocking dma_resv locks in the future.
For the unlock, we remove an assert that the vm->rebind_list is empty
when locks are released. Since that assert may no longer hold true if
the error path is hit with a partly locked list, we chose to remove it.
v3:
- Don't accept duplicate bo locks in the rebind worker.
v5:
- Loop over drm_exec objects in reverse when unlocking.
v6:
- We can't keep the WW ticket when retrying validation on OOM. Fix.
Lucas De Marchi [Fri, 8 Sep 2023 22:52:27 +0000 (15:52 -0700)]
drm/xe/mmio: Account for GSI offset when checking ranges
Change xe_mmio_in_range() to use the same logic to account for the GT's
adj_offset as the read and write functions. This is needed when checking
ranges for the MCR registers if the GT has an offset to adjust.
Rodrigo Vivi [Wed, 30 Aug 2023 21:47:15 +0000 (17:47 -0400)]
drm/xe/uapi: Remove useless max_page_size
The min_page_size is useful information to ensure alignment and is an
API actually in use. However, max_page_size doesn't bring any useful
information to userspace and hence is not used at all.
So let's remove it and only bring it back if it ever gets used.
Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
Lucas De Marchi [Wed, 6 Sep 2023 01:20:53 +0000 (18:20 -0700)]
drm/xe: Fix LRC workarounds
Fix 2 issues when writing LRC workarounds by copying the same handling
done when processing other RTP entries:
For masked registers, it was not correctly setting the upper 16 bits.
Differently from i915, the entry itself doesn't set the upper bits
for masked registers: this is done when applying them. Testing on ADL-P:
All of these registers are masked registers, so writing to them without
the relevant bits in the upper 16b doesn't have any effect.
Also, this adds support for regular registers; previously it was assumed
that LRC entries would only contain masked registers. However this is
not true: 0x6604 is not a masked register, but it is used in workarounds
for e.g. ADL-P. See commit 28cf243a341a ("drm/i915/gt: Fix context
workarounds with non-masked regs"). In the same test with ADL-P as
above:
Lucas De Marchi [Wed, 6 Sep 2023 01:20:50 +0000 (18:20 -0700)]
drm/xe/reg_sr: Simplify check for masked registers
For all RTP actions, clr_bits is a superset of the bits being modified.
That's also why the check for "changing all bits" can be done with
`clr_bits + 1`. So always use clr_bits for setting the upper bits of a
masked register.
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://lore.kernel.org/r/20230906012053.1733755-2-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Fri, 1 Sep 2023 14:28:26 +0000 (15:28 +0100)]
drm/xe/selftests: make eviction test tile centric
The concern here is that we may have platforms with a dedicated media
GT, and we anyway allocate the object on the tile, which just means
running the same test twice (i.e., primary vs media GT).
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Nirmoy Das <nirmoy.das@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Brost [Thu, 17 Aug 2023 03:15:38 +0000 (20:15 -0700)]
drm/xe: Fix array of binds
If multiple bind ops in an array of binds touch the same address range,
invalid GPUVA operations are generated, as each GPUVA operation is
generated based on the original GPUVA state. To fix this, after each
GPUVA operation is generated, commit it, updating the GPUVA state so
subsequent bind ops see a current GPUVA state.
Matthew Brost [Mon, 14 Aug 2023 03:19:20 +0000 (20:19 -0700)]
drm/xe: Fixup unwind on VM ops errors
Remap ops have 3 parts: unmap, prev, and next. The commit step can fail
on any of these. Add a flag for each of these so the unwind is only done
for the steps that have been committed.
Matthew Auld [Tue, 29 Aug 2023 16:28:43 +0000 (17:28 +0100)]
drm/xe: fix has_llc on rkl
Matches i915. Assumption going forward is that non-llc + igpu is only a
thing on MTL+ which should have explicit coherency pat_index settings
for COH_NONE, 1WAY and 2WAY.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Pallavi Mishra <pallavi.mishra@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Pallavi Mishra <pallavi.mishra@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Thu, 24 Aug 2023 16:04:45 +0000 (17:04 +0100)]
drm/xe: nuke GuC on unload
On PVC, unloading followed by reloading the module often results in a
completely dead machine (this seems to be plaguing CI). Resetting the
GuC like we do at load seems to cure it, at least in local testing.
v2:
- Move pc_fini into guc_fini. We want to do the GuC reset just after
calling pc_fini, otherwise we encounter communication failures. It
also seems like a good idea to do the reset before we start releasing
the various other GuC resources. In the case of pc_fini there is an
explicit stop, but for other stuff like logs, ads, ctb there is not.
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/542
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/597
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drm/xe/pvc: Use fast copy engines as migrate engine on PVC
Some copy hardware engine instances are faster than others on PVC. Use
a virtual engine of these plus the reserved instance for the migrate
engine on PVC. The idea is that if a fast instance is available it will
be used, and the throughput of kernel copies, clears, and pagefault
servicing will be higher.
v2: Use OOB WA, use all copy engines if no WA is required
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Wa_16017236439 requires that we update BCS_SWCTRL
(via indirect context batch buffer) to set 64B
transfers when running on an even-numbered BCS
engine and 256B on an odd-numbered BCS engine.
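A sketch of the selection (bit names are assumptions; the value is
emitted via the indirect context batch buffer as described above):

        static u32 bcs_swctrl_for_wa_16017236439(const struct xe_hw_engine *hwe)
        {
                /* BCS_SWCTRL is a masked register: mask in the upper 16 */
                u32 mask = BCS_SWCTRL_TRANSFER_SIZE_MASK << 16;

                /* even-numbered BCS engines get 64B transfers, odd 256B */
                return mask | (hwe->instance & 1 ? BCS_SWCTRL_TRANSFER_256B
                                                 : BCS_SWCTRL_TRANSFER_64B);
        }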
The only submission in the driver that currently doesn't use a vm is
the WA setup. We still pass a vm structure (the migration one), but we
don't actually use it at submission time; instead we have a hack to use
the GGTT for this particular engine.
Instead of special-casing the WA engine, we can skip providing a VM and
use that as selector for whether to use GGTT or PPGTT. As part of this
change, we can drop the special engine flag for the WA engine and switch
the WA submission to use the standard job functions instead of dedicated
ones.
If an engine is only destroyed on driver unload, we can skip its
clean-up steps with the GuC because the GuC is going to be turned off as
well, so it doesn't matter if we're in sync with it or not. Currently,
we apply this optimization to all engines marked as kernel, but this
stops us from supporting kernel engines that don't stick around until
unload. To remove this limitation, add a separate flag to indicate if
the engine is expected to only be destroyed on driver unload and use
that to trigger the optimization.
While at it, add a small comment to explain what each engine flag
represents.
Kernel queues can submit privileged batches directly in GGTT, so they
don't always need a vm. The submission front-end already supports
creating and submitting jobs without a vm, but some parts of the
back-end assume the vm is always there. Fix this by handling a lack of
vm in the back-end as well.
Matt Roper [Wed, 23 Aug 2023 00:33:14 +0000 (17:33 -0700)]
drm/xe: Drop xe_mmio_write64()
The only possible 64-bit register writes in the driver come from the
highly questionable MMIO ioctl. That ioctl's register write support
only operates for userspace running as root and cannot be used by any
real userspace; it exists solely to support the "xe_reg" debug tool in
IGT. Since the spec indicates that hardware does not officially support
64-bit register accesses, there's no reason to allow such 64-bit writes,
even for debugging.
Bspec: 60027
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://lore.kernel.org/r/20230823003312.1356779-4-matthew.d.roper@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Wed, 23 Aug 2023 00:33:13 +0000 (17:33 -0700)]
drm/xe: Avoid 64-bit register reads
Intel hardware officially only supports GTTMMADR register accesses of
32-bits or less (although 64-bit accesses to device memory and PTEs in
the GSM are fine). Even though we do usually seem to get back
reasonable values when performing readq() operations on registers in
BAR0, we shouldn't rely on this violation of the spec working
consistently. It's likely that even when we do get proper register
values back the hardware is internally satisfying the request via a
non-atomic sequence of two 32-bit reads, which can be problematic for
timestamps and counters if rollover of the lower bits is not considered.
Replace xe_mmio_read64() with xe_mmio_read64_2x32(), which implements
64-bit register reads as two 32-bit reads and attempts to ensure that
the upper dword has stabilized to avoid problematic rollovers for
counter and timestamp registers (see the sketch after the version notes
below).
v2:
- Move function from xe_mmio.h to xe_mmio.c. (Lucas)
- Convert comment to kerneldoc and note that it shouldn't be used on
registers where reads may trigger side effects. (Lucas)
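The stabilization scheme is essentially the classic two-by-32 read
(sketch, not the exact xe_mmio_read64_2x32() body, which may also bound
the number of retries):

        static u64 read64_2x32(void __iomem *base, u32 offset)
        {
                u32 oldudw, udw, ldw;

                udw = readl(base + offset + 4);
                do {
                        oldudw = udw;
                        ldw = readl(base + offset);
                        udw = readl(base + offset + 4);
                } while (udw != oldudw);  /* low dword rolled over: retry */

                return ((u64)udw << 32) | ldw;
        }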
Bspec: 60027
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://lore.kernel.org/r/20230823003312.1356779-3-matthew.d.roper@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:15 +0000 (09:06 -0700)]
drm/xe/xe2: Program GuC's MOCS on Xe2 and beyond
As with PVC, Xe2 platforms require that the index of an uncached MOCS
entry be programmed into the GUC_SHIM_CONTROL register. This will
likely be needed on future platforms as well.
Xe2 also extends the size of the MOCS index register field from two bits
to four bits. Since these extra bits were unused on PVC, it should be
safe to just increase the size of the mask.
Bspec: 60592
Cc: Haridhar Kalvala <haridhar.kalvala@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:13 +0000 (09:06 -0700)]
drm/xe/xe2: Track VA bits independently of max page table level
Starting with Xe2, a 5-level page table is always used, regardless of
the actual virtual address range supported by the platform. The two
values need to be tracked separately in the device descriptor since Xe2
platforms only have a 48 bit virtual address range.
Bspec: 59505, 65637, 70817
Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:11 +0000 (09:06 -0700)]
drm/xe/xe2: Define Xe2_LPG IP features
Define a common set of Xe2 graphics feature flags and definitions that
will be used for all platforms in this family.
Several of the feature flags are inherited unchanged from Xe_HP and/or
Xe_HPC platforms:
- dma_mask_size remains 46 (Bspec 70817)
- supports_usm=1 (Bspec 59651)
- has_flatccs=1 (Bspec 58797)
- has_asid=1 (Bspec 59654, 59265, 60288)
- has_range_tlb_invalidate=1 (Bspec 71126)
However some of them still need proper implementation in the driver to
be used, so they are disabled.
Notable Xe2-specific changes:
- All Xe2 platforms use a five-level page table, regardless of the
virtual address space for the platform. (Bspec 59505)
The graphics engine mask represents the Xe2 architecture engines (Bspec
60149), but individual platforms may have a reduced set of usable
engines, as reflected by their fusing.
Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:10 +0000 (09:06 -0700)]
drm/xe/xe2: AuxCCS is no longer used
Starting with Xe2, all platforms (including igpu platforms) use FlatCCS
compression rather than AuxCCS. Similar to PVC, any future platforms
that don't support FlatCCS should not attempt to fall back to AuxCCS
programming.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:07 +0000 (09:06 -0700)]
drm/xe/xe2: Add MCR register steering for media GT
Xe2 media has a few types of MCR registers, but all except for "GPMXMT"
can safely steer to instance 0,0. GPMXMT follows the same rules that
MTL's OADDRM ranges did, so it can re-use the same enum value.
Bspec: 71186
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:06 +0000 (09:06 -0700)]
drm/xe/xe2: Add MCR register steering for primary GT
Xe2 uses the same steering control register and steering semaphore
register as MTL. As with recent platforms, group/instance 0,0 is
sufficient to target a non-terminated instance for most classes of MCR
registers; the only types of ranges that need to consider platform
fusing to find a non-terminated instance are SLICE/DSS ranges and a new
SQIDI_PSMI type of range.
Note that the range of valid bits in XE2_NODE_ENABLE_MASK may be reduced
for some Xe2 SKUs. However the lowest bits are always valid and only
the lowest instance is obtained via __ffs(), so there's no need to
complicate the masking with extra platform/subplatform checks.
Also note that Wa_14017387313 suggests skipping MCR lock acquisition
around GAM and GAMWKR registers to prevent MCR register accesses in an
interrupt handler from deadlocking when the steering semaphore is
already held outside the interrupt context. At this time Xe never
issues MCR accesses from within an interrupt handler so the workaround
is not currently needed.
v2:
- [0x008700-0x0087FF] range to extend up to 0x887F (Matt Attwood)
- [0x00EF00-0x00F4FF] -> [0x00F000, 0xFFFF] to follow latest
bspec version (Bala)
Bspec: 71185
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Thu, 17 Aug 2023 23:04:12 +0000 (16:04 -0700)]
drm/xe: Stop tracking 4-tile support
The choice of Y-major tiling format (either the legacy "TileY" or the
newer "Tile4") is based on graphics IP version (12.50 and beyond have
Tile4, earlier platforms have TileY). The tracking in xe was originally
added to allow re-using display from i915. However as of i915 commit 4ebf43d0488f ("drm/i915: Eliminate has_4tile feature flag"), the display
code determines TileY vs Tile4 itself, so this can be removed from xe.
drm/xe: enable idle msg and set hysteresis for GSCCS
On MTL (and only on MTL) the GSCCS comes up with idle messaging
disabled. This means that, once awoken, the GSCCS will never signal its
idleness to the GT. To allow the GT to enter the proper low-power state,
we need therefore to turn idle messaging on. As part of this, we also
need to set a proper hysteresis value for the engine.
v2: use MEDIA_VERSION() and CLR() for the RTP rule and action, add reg
bit define in descending order (Matt)
Like the BCS, the GSCCS doesn't have any special HW that needs handling
when emitting commands, so we can re-use the same emit_job code. To make
it clear that this is now a shared low-level function, it has been
renamed to use the "simple" postfix, instead of "copy", to indicate that
it applies to all engines that don't need any additional engine-specific
handling.
The queue name assignment is identical in both GuC and execlists
backends, so we can move it to a common function. This will make adding
a new entry in the next patch slightly cleaner.
Oak Zeng [Fri, 14 Jul 2023 14:42:07 +0000 (10:42 -0400)]
drm/xe: Improve vram info debug printing
Print both device physical address range and CPU io range
of vram. Also print vram's actual size, usable size excluding
stolen memory, and CPU io accessible size.
V1:
- Add back small BAR device info (Matt)
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>