#
38f922a5 |
| 23-Nov-2023 |
Luben Tuikov <ltuikov89@gmail.com> |
drm/sched: Reverse run-queue priority enumeration
Reverse run-queue priority enumeration such that the highest priority is now 0, and for each consecutive integer the priority diminishes.
Run-queues correspond to priorities. To an external observer, a scheduler created with a single run-queue, and another created with DRM_SCHED_PRIORITY_COUNT run-queues, should always schedule sched->sched_rq[0] with the same "priority", since that run-queue index exists in both schedulers, whether a scheduler has one run-queue or many. This patch makes it so.
In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for any scheduler created with any allowable number of run-queues (priorities), 0 to DRM_SCHED_PRIORITY_COUNT.
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231124052752.6915-6-ltuikov89@gmail.com
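Together with the MIN-to-LOW rename below (fe375c74), the resulting priority enumeration in later kernels looks roughly like the following sketch; treat it as illustrative rather than a verbatim copy of gpu_scheduler.h:

    enum drm_sched_priority {
        DRM_SCHED_PRIORITY_KERNEL,   /* 0: highest priority */
        DRM_SCHED_PRIORITY_HIGH,
        DRM_SCHED_PRIORITY_NORMAL,
        DRM_SCHED_PRIORITY_LOW,      /* 3: lowest priority */

        DRM_SCHED_PRIORITY_COUNT
    };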
|
#
fe375c74 |
| 15-Nov-2023 |
Luben Tuikov <ltuikov89@gmail.com> |
drm/sched: Rename priority MIN to LOW
Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW.
This mirrors DRM_SCHED_PRIORITY_HIGH and yields the list of DRM scheduler priorities in ascending order: DRM_SCHED_PRIORITY_LOW, DRM_SCHED_PRIORITY_NORMAL, DRM_SCHED_PRIORITY_HIGH, DRM_SCHED_PRIORITY_KERNEL.
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231124052752.6915-5-ltuikov89@gmail.com
|
#
a78422e9 |
| 10-Nov-2023 |
Danilo Krummrich <dakr@redhat.com> |
drm/sched: implement dynamic job-flow control
Currently, job flow control is implemented simply by limiting the number of jobs in flight. Therefore, a scheduler is initialized with a credit limit that corresponds to the number of jobs which can be sent to the hardware.
This implies that for each job, drivers need to account for the maximum job size possible in order to not overflow the ring buffer.
However, there are drivers, such as Nouveau, where the job size has a rather large range. For such drivers it can easily happen that job submissions not even filling the ring by 1% can block subsequent submissions, which, in the worst case, can lead to the ring running dry.
In order to overcome this issue, allow for tracking the actual job size instead of the number of jobs. Therefore, add a field to track a job's credit count, which represents the number of credits a job contributes to the scheduler's credit limit.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231110001638.71750-1-dakr@redhat.com
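A hedged illustration of the credit idea: RING_CREDIT_BYTES and job_size_bytes are hypothetical driver-side names, while drm_sched_job_init() taking a credit count matches the API this patch introduces.

    /* Express the job's ring-buffer footprint in scheduler credits. */
    u32 credits = DIV_ROUND_UP(job_size_bytes, RING_CREDIT_BYTES);

    ret = drm_sched_job_init(&job->base, entity, credits, owner);
    if (ret)
        return ret;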
|
#
f3123c25 |
| 09-Nov-2023 |
Luben Tuikov <ltuikov89@gmail.com> |
drm/sched: Qualify drm_sched_wakeup() by drm_sched_entity_is_ready()
Don't "wake up" the GPU scheduler unless the entity is ready, as well as we can queue to the scheduler, i.e. there is no point in waking up the scheduler for the entity unless the entity is ready.
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Fixes: bc8d6a9df99038 ("drm/sched: Don't disturb the entity when in RR-mode scheduling")
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231110000123.72565-2-ltuikov89@gmail.com
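A minimal sketch of the qualified wakeup, assuming the pre-credit drm_sched_can_queue(sched) form (the later job-flow-control patch adds an entity parameter); not the verbatim patch:

    void drm_sched_wakeup(struct drm_gpu_scheduler *sched,
                          struct drm_sched_entity *entity)
    {
        /* Only kick the run-job work item for a ready entity. */
        if (drm_sched_entity_is_ready(entity))
            if (drm_sched_can_queue(sched))
                drm_sched_run_job_queue(sched);
    }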
|
#
f12af4c4 |
| 02-Nov-2023 |
Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
drm/sched: Drop suffix from drm_sched_wakeup_if_can_queue
Because a) the helper is exported to other parts of the scheduler and b) there isn't a plain drm_sched_wakeup to begin with, I think we can drop the suffix and, by doing so, separate the intimate knowledge between the scheduler components a bit better.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-6-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
|
#
3c6c7ca4 |
| 31-Oct-2023 |
Matthew Brost <matthew.brost@intel.com> |
drm/sched: Add a helper to queue TDR immediately
Add a helper whereby a driver can invoke TDR immediately.
v2:
- Drop timeout args, rename function, use mod delayed work (Luben)
v3:
- s/XE/Xe (Luben)
- present tense in commit message (Luben)
- Adjust comment for drm_sched_tdr_queue_imm (Luben)
v4:
- Adjust commit message (Luben)
Cc: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-6-matthew.brost@intel.com
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
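A hedged usage sketch: the hang-detection condition is hypothetical, while drm_sched_tdr_queue_imm() is the helper added here.

    /* A driver that learns of a hang out-of-band can fire the
     * timeout handler immediately instead of waiting it out. */
    if (firmware_reported_hang)
        drm_sched_tdr_queue_imm(&ring->sched);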
|
#
f7fe64ad |
| 31-Oct-2023 |
Matthew Brost <matthew.brost@intel.com> |
drm/sched: Split free_job into own work item
Rather than call free_job and run_job in the same work item, have a dedicated work item for each. This aligns with the design and intended use of work queues.
v2:
- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting timestamp in free_job() work item (Danilo)
v3:
- Drop forward dec of drm_sched_select_entity (Boris)
- Return in drm_sched_run_job_work if entity NULL (Boris)
v4:
- Replace dequeue with peek and invert logic (Luben)
- Wrap to 100 lines (Luben)
- Update comments for *_queue / *_queue_if_ready functions (Luben)
v5:
- Drop peek argument, blindly reinit idle (Luben)
- s/drm_sched_free_job_queue_if_ready/drm_sched_free_job_queue_if_done (Luben)
- Update work_run_job & work_free_job kernel doc (Luben)
v6:
- Do not move drm_sched_select_entity in file (Luben)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-4-matthew.brost@intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
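A hedged structural sketch of the split; the field names follow the patch description, and the surrounding struct members are elided:

    struct drm_gpu_scheduler {
        /* ... */
        struct work_struct work_run_job;   /* submits jobs via ops->run_job() */
        struct work_struct work_free_job;  /* cleans up via ops->free_job() */
        /* ... */
    };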
|
#
a6149f03 |
| 31-Oct-2023 |
Matthew Brost <matthew.brost@intel.com> |
drm/sched: Convert drm scheduler to use a work queue rather than kthread
In Xe, the new Intel GPU driver, a choice has been made to have a 1 to 1 mapping between a drm_gpu_scheduler and drm_sched_entity. At first this seems a bit odd, but let us explain the reasoning below.
1. In Xe the submission order from multiple drm_sched_entity is not guaranteed to match the completion order, even when targeting the same hardware engine. This is because in Xe we have a firmware scheduler, the GuC, which is allowed to reorder, timeslice, and preempt submissions. If a shared drm_gpu_scheduler is used across multiple drm_sched_entity, the TDR falls apart, as the TDR expects submission order == completion order. Using a dedicated drm_gpu_scheduler per drm_sched_entity solves this problem.
2. In Xe submissions are done via programming a ring buffer (circular buffer), and a drm_gpu_scheduler provides a limit on the number of jobs in flight; if that limit is set to RING_SIZE / MAX_SIZE_PER_JOB, we get flow control on the ring for free.
A problem with this design is that currently a drm_gpu_scheduler uses a kthread for submission / job cleanup. This doesn't scale if a large number of drm_gpu_scheduler instances are used. To work around the scaling issue, use a worker rather than a kthread for submission / job cleanup.
v2:
- (Rob Clark) Fix msm build
- Pass in run work queue
v3:
- (Boris) don't have loop in worker
v4:
- (Tvrtko) break out submit ready, stop, start helpers into own patch
v5:
- (Boris) default to ordered work queue
v6:
- (Luben / checkpatch) fix alignment in msm_ringbuffer.c
- (Luben) s/drm_sched_submit_queue/drm_sched_wqueue_enqueue
- (Luben) Update comment for drm_sched_wqueue_enqueue
- (Luben) Positive check for submit_wq in drm_sched_init
- (Luben) s/alloc_submit_wq/own_submit_wq
v7:
- (Luben) s/drm_sched_wqueue_enqueue/drm_sched_run_job_queue
v8:
- (Luben) Adjust var names / comments
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-3-matthew.brost@intel.com
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
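The point-2 arithmetic, as a hedged worked example with hypothetical sizes (not Xe's real values):

    #define RING_SIZE        (64 * 1024) /* 64 KiB ring buffer */
    #define MAX_SIZE_PER_JOB (4 * 1024)  /* worst-case per-job footprint */

    /* 64 KiB / 4 KiB = 16: at most 16 jobs can be in flight before
     * the ring could overflow, so cap the scheduler at 16 jobs. */
    #define JOB_LIMIT (RING_SIZE / MAX_SIZE_PER_JOB)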
|
#
35963cf2 |
| 31-Oct-2023 |
Matthew Brost <matthew.brost@intel.com> |
drm/sched: Add drm_sched_wqueue_* helpers
Add scheduler wqueue ready, stop, and start helpers to hide the implementation details of the scheduler from the drivers.
v2:
- s/sched_wqueue/sched_wqueue (Luben)
- Remove the extra white line after the return-statement (Luben)
- update drm_sched_wqueue_ready comment (Luben)
Cc: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-2-matthew.brost@intel.com
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
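A hedged sketch of a driver reset path using the helpers; ring->sched is a hypothetical embedding of the scheduler in a driver structure.

    /* Park submissions without touching scheduler internals. */
    if (drm_sched_wqueue_ready(&ring->sched))
        drm_sched_wqueue_stop(&ring->sched);

    /* ... reset the hardware ... */

    drm_sched_wqueue_start(&ring->sched);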
|
#
56e44960 |
| 15-Oct-2023 |
Luben Tuikov <luben.tuikov@amd.com> |
drm/sched: Convert the GPU scheduler to variable number of run-queues
The GPU scheduler now has a variable number of run-queues, which are set up at drm_sched_init() time. This way, each driver announces how many run-queues it requires (supports) for each GPU scheduler it creates. Note that run-queues correspond to scheduler "priorities", thus if the number of run-queues is set to 1 at drm_sched_init(), then that scheduler supports a single run-queue, i.e. a single "priority". If a driver further sets a single entity per run-queue, then this creates a 1-to-1 correspondence between a scheduler and a scheduled entity.
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Qiang Yu <yuq825@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Emma Anholt <emma@anholt.net>
Cc: etnaviv@lists.freedesktop.org
Cc: lima@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20231023032251.164775-1-luben.tuikov@amd.com
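A hedged initialization sketch for the 1-to-1 case, using roughly the post-series (v6.8-era) drm_sched_init() parameter order; the exact signature varies across kernel versions, and my_ops, credit_limit, and timeout_jiffies are placeholder names:

    ret = drm_sched_init(&sched, &my_ops,
                         NULL,          /* submit_wq: use an ordered default */
                         1,             /* num_rqs: a single run-queue */
                         credit_limit, 0, timeout_jiffies,
                         NULL, NULL, "my-sched", dev);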
|
#
fa8391ad |
| 17-Oct-2023 |
Luben Tuikov <luben.tuikov@amd.com> |
gpu/drm: Eliminate DRM_SCHED_PRIORITY_UNSET
Eliminate DRM_SCHED_PRIORITY_UNSET, value of -2, whose only user was amdgpu. Furthermore, eliminate an index bug: when amdgpu boots, it calls drm_sched_entity_init() with DRM_SCHED_PRIORITY_UNSET, which is then used to index sched->sched_rq[].
Cc: Alex Deucher <Alexander.Deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Acked-by: Alex Deucher <Alexander.Deucher@amd.com>
Link: https://lore.kernel.org/r/20231017035656.8211-2-luben.tuikov@amd.com
|
#
db8b4968 |
| 23-Jun-2023 |
Boris Brezillon <boris.brezillon@collabora.com> |
drm/sched: Call drm_sched_fence_set_parent() from drm_sched_fence_scheduled()
Drivers that can delegate waits to the firmware/GPU pass the scheduled fence to drm_sched_job_add_dependency(), and issue wait commands to the firmware/GPU at job submission time. For this to be possible, they need all their 'native' dependencies to have a valid parent, since this is where the actual HW fence information is encoded.
In drm_sched_main(), we currently call drm_sched_fence_set_parent() after drm_sched_fence_scheduled(), leaving a short period of time during which the job depending on this fence can be submitted.
Since setting parent and signaling the fence are two things that are kinda related (you can't have a parent if the job hasn't been scheduled), it probably makes sense to pass the parent fence to drm_sched_fence_scheduled() and let it call drm_sched_fence_set_parent() before it signals the scheduled fence.
Here is a detailed description of the race we are fixing here:
Thread A:
- calls drm_sched_fence_scheduled()
- signals s_fence->scheduled, which wakes up thread B

Thread B:
- entity dep signaled, checking the next dep
- no more deps waiting
- entity is picked for job submission by drm_gpu_scheduler
- run_job() is called
- run_job() tries to collect native fence info from s_fence->parent, but it's NULL => BOOM, we can't do our native wait

Thread A:
- calls drm_sched_fence_set_parent()
v2: * Fix commit message
v3:
* Add a detailed description of the race to the commit message
* Add Luben's R-b
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Frank Binns <frank.binns@imgtec.com>
Cc: Sarah Walker <sarah.walker@imgtec.com>
Cc: Donald Robson <donald.robson@imgtec.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230623075204.382350-1-boris.brezillon@collabora.com
|
#
3655c590 |
| 17-May-2023 |
Luben Tuikov <luben.tuikov@amd.com> |
drm/sched: Rename to drm_sched_wakeup_if_can_queue()
Rename drm_sched_wakeup() to drm_sched_wakeup_if_can_queue() since the former is misleading, as it wakes up the GPU scheduler _only if_ more jobs can be queued to the underlying hardware.
This distinction is important to make, since the wake conditional in the GPU scheduler thread wakes up when other conditions are also true, e.g. when there are jobs to be cleaned. For instance, a user might want to wake up the scheduler only because there are more jobs to clean, but whether we can queue more jobs is irrelevant.
v2: Separate "canqueue" to "can_queue". (Alex D.)
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <Alexander.Deucher@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20230517233550.377847-2-luben.tuikov@amd.com
Reviewed-by: Alex Deucher <Alexander.Deucher@amd.com>
|
#
70102d77 |
| 17-Apr-2023 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: add drm_sched_entity_error and use rcu for last_scheduled
Switch to using RCU handling for the last scheduled job, and add a function to return its error code.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230420115752.31470-2-christian.koenig@amd.com
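A hedged usage sketch of the new helper; the surrounding submission flow is hypothetical.

    /* Bail out early if the entity's last scheduled job failed. */
    int err = drm_sched_entity_error(entity);
    if (err)
        return err;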
|
#
539f9ee4 |
| 17-Apr-2023 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: properly forward fence errors
When a hw fence is signaled with an error, properly forward that to the finished fence.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230420115752.31470-1-christian.koenig@amd.com
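A hedged sketch of the forwarding idea, not the verbatim patch: propagate the hw fence's error onto the finished fence before signaling it.

    /* result is the error (if any) the hw fence was signaled with. */
    if (result)
        dma_fence_set_error(&s_fence->finished, result);
    dma_fence_signal(&s_fence->finished);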
|
#
baad1097 |
| 30-Mar-2023 |
Lucas Stach <l.stach@pengutronix.de> |
Revert "drm/scheduler: track GPU active time per entity"
This reverts commit df622729ddbf as it introduces a use-after-free, which isn't easy to fix without going back to the design drawing board.
Revert "drm/scheduler: track GPU active time per entity"
This reverts commit df622729ddbf as it introduces a use-after-free, which isn't easy to fix without going back to the design drawing board.
Reported-by: Danilo Krummrich <dakr@redhat.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
#
f3823da7 |
| 21-Sep-2021 |
Rob Clark <robdclark@gmail.com> |
drm/scheduler: Add fence deadline support
As the finished fence is the one that is exposed to userspace, and therefore the one that other operations, like atomic update, would block on, we need to propagate the deadline from the finished fence to the actual hw fence.
v2: Split into drm_sched_fence_set_parent() (ckoenig)
v3: Ensure a thread calling drm_sched_fence_set_deadline_finished() sees fence->parent set before drm_sched_fence_set_parent() does this test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Luben Tuikov <luben.tuikov@amd.com>
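A hedged sketch of the consumer side (the 16 ms figure is an arbitrary example): a compositor-facing path sets a deadline on the finished fence, and the scheduler propagates it to the hw fence once one exists.

    dma_fence_set_deadline(&job->s_fence->finished,
                           ktime_add_ms(ktime_get(), 16));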
|
#
c087bbb6 |
| 09-Feb-2023 |
Maíra Canal <mcanal@igalia.com> |
drm/sched: Create wrapper to add a syncobj dependency to job
In order to add a syncobj's fence as a dependency to a job, it is necessary to call drm_syncobj_find_fence() to find the fence and then add the dependency with drm_sched_job_add_dependency(). So, wrap these steps in a single function, drm_sched_job_add_syncobj_dependency().
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Maíra Canal <mcanal@igalia.com>
Signed-off-by: Maíra Canal <mairacanal@riseup.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20230209124447.467867-2-mcanal@igalia.com
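A hedged before/after sketch of the wrapper; the function names follow the commit, while handle and point are hypothetical values.

    /* Before: two explicit steps. */
    struct dma_fence *fence;
    int ret = drm_syncobj_find_fence(file_priv, handle, point, 0, &fence);
    if (ret)
        return ret;
    ret = drm_sched_job_add_dependency(job, fence);

    /* After: one call. */
    ret = drm_sched_job_add_syncobj_dependency(job, file_priv, handle, point);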
|
#
df622729 |
| 01-Feb-2023 |
Lucas Stach <l.stach@pengutronix.de> |
drm/scheduler: track GPU active time per entity
Track the accumulated time that jobs from this entity were active on the GPU. This allows drivers using the scheduler to trivially implement DRM fdinfo when the hardware doesn't provide more specific information than signalling job completion anyway.
[Bagas: Append missing colon to @elapsed_ns]
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
|
#
cb3076e9 |
| 26-Oct-2022 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: cleanup define
Remove some defines for functions that are not implemented.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221109095010.141189-4-christian.koenig@amd.com
|
#
06a2d7cc |
| 26-Oct-2022 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: revert "implement tdr advanced mode"
This reverts commit e6c6338f393b74ac0b303d567bb918b44ae7ad75.
This feature basically re-submits one job after another to figure out which one was the one causing a hang.
This is obviously incompatible with gang-submit which requires that multiple jobs run at the same time. It's also absolutely not helpful to crash the hardware multiple times if a clean recovery is desired.
For testing and debugging environments we should rather disable recovery altogether, to be able to inspect the state with a hw debugger.
In addition to that, the sw implementation is clearly buggy and causes reference count issues for the hardware fence.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
#
a82f30b0 |
| 29-Sep-2022 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: rename dependency callback into prepare_job
The new name now matches much better what the callback is doing.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014084641.128280-14-christian.koenig@amd.com
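For reference, a hedged sketch of the renamed callback as it appears in drm_sched_backend_ops; the comment wording is mine, not the kernel's.

    /* Called when the scheduler considers this job for execution;
     * return a fence to wait on first, or NULL when the job is ready. */
    struct dma_fence *(*prepare_job)(struct drm_sched_job *sched_job,
                                     struct drm_sched_entity *s_entity);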
|
#
2cf9886e |
| 29-Sep-2022 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: remove drm_sched_dependency_optimized
Not used any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014084641.128280-12-christian.koenig@amd.com
|
#
4d5230b5 |
| 28-Sep-2022 |
Christian König <christian.koenig@amd.com> |
drm/scheduler: add drm_sched_job_add_resv_dependencies
Add a new function to update job dependencies from a resv obj.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014084641.128280-3-christian.koenig@amd.com
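A hedged usage sketch (bo is a hypothetical buffer object): pull the relevant fences from a reservation object straight into the job's dependency list.

    ret = drm_sched_job_add_resv_dependencies(job, bo->resv,
                                              DMA_RESV_USAGE_WRITE);
    if (ret)
        return ret;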
|
#
7b476aff |
| 07-Oct-2022 |
Christian König <christian.koenig@amd.com> |
drm/sched: add DRM_SCHED_FENCE_DONT_PIPELINE flag
Setting this flag on a scheduler fence prevents pipelining of jobs depending on this fence. In other words, we always insert a full CPU round trip before dependent jobs are pushed to the pipeline.
Signed-off-by: Christian König <christian.koenig@amd.com>
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2113#note_1579296
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014081553.114899-1-christian.koenig@amd.com
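A hedged sketch of opting a fence out of pipelining; the fence variable is hypothetical, while the flag lives in the fence's flags bitmap.

    /* Force dependent jobs to take a full CPU round trip. */
    set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags);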
|