This is the follow-up work for the cluster scheduler. Previously we
added a cluster level to the scheduler [1] to spread tasks between
clusters, which brings more memory bandwidth and reduces cache
contention. But this may hurt workloads that are sensitive to
communication latency, as related tasks will be placed across clusters.
This series modifies select_idle_cpu() on the wake affine path so that
the wake affined task is more likely to be woken on the same cluster
as the waker. Latency decreases because a waker and wakee in the same
cluster can benefit from the hot L3 cache tag.
[1] https://lore.kernel.org/lkml/20210924085104.44806-1-21cnbao@gmail.com/
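To make the idea concrete, here is a small user-space C sketch of the
scan order this series aims for (not the actual kernel patch): the
idle-CPU scan starts from the first CPU of the waker's cluster and
wraps around the LLC span, so an idle CPU sharing the waker's cluster
is preferred when one exists. The cluster size, the idle bitmap and
the helper names below are assumptions made only for the illustration.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS		16	/* CPUs sharing one LLC, assumed for the example */
#define CLUSTER_SIZE	4	/* CPUs per cluster, e.g. 4 on Kunpeng 920 */

/* Hypothetical idle map: true means the CPU is idle. */
static bool cpu_idle[NR_CPUS];

/* First CPU of the cluster that @cpu belongs to. */
static int cluster_first_cpu(int cpu)
{
	return cpu - (cpu % CLUSTER_SIZE);
}

/*
 * Sketch of the packing idea: scan the LLC starting from the first CPU
 * of the waker's cluster, wrapping around, and return the first idle
 * CPU found, or -1 if there is none.
 */
static int scan_from_cluster(int waker_cpu)
{
	int start = cluster_first_cpu(waker_cpu);
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		int cpu = (start + i) % NR_CPUS;

		if (cpu_idle[cpu])
			return cpu;
	}
	return -1;
}

int main(void)
{
	/* CPU 5 wakes a task; CPUs 6 and 12 happen to be idle. */
	cpu_idle[6] = true;
	cpu_idle[12] = true;

	/* Picks CPU 6, which shares the waker's cluster (CPUs 4-7). */
	printf("wakee placed on CPU %d\n", scan_from_cluster(5));
	return 0;
}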
Hi Tim and Barry,
This is the modified patch for the packing path of the cluster
scheduler. Tests have been done on a Kunpeng 920 2-socket 4-NUMA
128-core platform, with 8 clusters in each NUMA node. The patches are
based on 5.15-rc1.
Barry Song (2):
sched: Add per_cpu cluster domain info
sched/fair: Scan from the first cpu of cluster if presents in
select_idle_cpu
include/linux/sched/sd_flags.h | 9 +++++++++
include/linux/sched/topology.h | 2 +-
kernel/sched/fair.c | 10 +++++++---
kernel/sched/sched.h | 1 +
kernel/sched/topology.c | 5 +++++
5 files changed, 23 insertions(+), 4 deletions(-)
--
2.33.0
Hi all,
I am recently getting questions about hooking the interconnect framework
to SCMI, so I am starting a discussion on this problem to see who might
be interested in it.
The SCMI spec contains various protocols like the "Performance domain
management protocol", but none of the protocols in the current spec
(3.0) seem to fit well into the concept we are using to scale
interconnect bandwidth in Linux. I see that people are working in this
area and there is already some support for clocks, resets, etc. I am
wondering what would be the right approach to also support interconnect
bus scaling via SCMI.
The interconnect framework is part of the Linux kernel and its goal
is to manage the hardware and tune it to the optimal power-performance
profile according to the aggregated bandwidth demand between the
various endpoints in the system (SoC). This is based on the requests
coming from consumer drivers.
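As a rough consumer-side sketch (using the existing in-kernel
interconnect API, not SCMI): a driver looks up a path between two
endpoints and votes for bandwidth on it, and the framework aggregates
the votes of all consumers. The device, the "dma-mem" path name and
the bandwidth numbers below are made up for illustration.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/*
 * Illustrative consumer: request bandwidth on the path between two
 * endpoints described in the device tree. The path name and the
 * bandwidth values are placeholders.
 */
static int example_request_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the path by the name given in the DT "interconnects" binding. */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Vote for average/peak bandwidth in kBps; the framework aggregates
	 * the requests of all consumers sharing the nodes on this path. */
	ret = icc_set_bw(path, 10000, 20000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	/* ... device does its work ... */

	/* Drop the vote and release the path when done. */
	icc_set_bw(path, 0, 0);
	icc_put(path);

	return 0;
}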
As interconnect scaling does not map directly to any of the currently
available protocols in the SCMI spec, I am curious whether there is
work in progress on some other protocol that could support managing
resources based on path endpoints (instead of a single ID). The
interconnect framework doesn't populate every possible path, but
it exposes endpoints to client drivers and the path lookup is dynamic,
based on what the clients request. Maybe the SCMI host could also expose
all possible endpoints and let the guest request a path from the host,
based on those endpoints.
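Purely as a strawman for that idea (nothing like this exists in the
SCMI 3.0 spec, and every name and field below is invented), the
guest-side request could look roughly like this: name two endpoints
advertised by the host and ask it to resolve and scale the path
between them.

#include <stdint.h>

/*
 * Hypothetical message payload for a path-based bandwidth request.
 * The endpoint IDs would come from a discovery command where the host
 * (SCMI platform) enumerates the endpoints it is willing to expose.
 */
struct scmi_icc_path_request {
	uint32_t src_endpoint_id;	/* initiator endpoint, as advertised by the host */
	uint32_t dst_endpoint_id;	/* target endpoint, as advertised by the host */
	uint32_t avg_bw_kbps;		/* requested average bandwidth */
	uint32_t peak_bw_kbps;		/* requested peak bandwidth */
};

/*
 * The host would aggregate such requests from all agents and program
 * the interconnect hardware itself, mirroring what the interconnect
 * framework does today inside the kernel.
 */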
There are already suggestions to create vendor-specific SCMI protocols
for this, but I fear that we may end up with more than one protocol for
the same thing, so it might be best to discuss it in public and reach a
common solution that works for everyone.
Thanks,
Georgi
Hi,
The OP-TEE Contributions (LOC) monthly meeting is planned for Thursday, Oct 28
@17.00 (UTC+2).
The following topics are on the agenda:
- Generic Clock Framework and Peripheral security - Clément Léger
- Discussion on device driver initialization/probing - Etienne Carriere
If you have any other topics you'd like to discuss, please let us know and
we can schedule them.
Meeting details:
---------------
Date/time: October 28 @ 17.00 (UTC+2)
https://everytimezone.com/s/3f83a9ab
Connection details: https://www.trustedfirmware.org/meetings/
Meeting notes: http://bit.ly/loc-notes
Regards,
Ruchika on behalf of the Linaro OP-TEE team
Hi All,
I think most of you may be busy with the Linux Plumbers Conference this
week. Please let me know if there is something to discuss or follow up
on next Monday.
Regards,
Jammy