Hi,
We are carrying out some NUMA-related test activities (some of which we
discussed on this forum) on the Taishan 2280 v2 server.
For those tests we need memory attached equally to all 4 NUMA nodes,
which is why we added 2 extra 32 GB memory modules to the existing 2.
The manual says that they should be placed into slots
000 (CPU A), 100 (CPU A), 020 (CPU B) and 120 (CPU B).
The BMC console as well as the startup logs show that the memory modules
are properly detected in these slots.
But after Linux has booted we don't see the expected 32 GB per NUMA node;
instead, nodes 1 & 3 have 64 GB each and nodes 0 & 2 have no memory.
We didn't find any BIOS options which could explain this memory
distribution.
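For reference, the per-node distribution and the DIMM population can be
checked with standard tools, e.g.:

  # Per-node memory as the kernel sees it
  numactl --hardware
  grep MemTotal /sys/devices/system/node/node*/meminfo

  # DIMM slot population as reported by firmware, to cross-check the slots
  dmidecode -t memory | grep -E 'Locator|Size'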
Do you have any hints why this is happening? Any help with this issue is
highly appreciated.
Thanks!
-- Dietmar
Hi all,
The slides and video for the meeting today have been published.
https://linaro.atlassian.net/wiki/spaces/LOD/pages/28585361507/2021-09-01+M…
Regards,
Jammy
On Tue, 31 Aug 2021 at 20:46, Jonathan Cameron via Linaro-open-discussions <
linaro-open-discussions(a)op-lists.linaro.org> wrote:
> On Thu, 26 Aug 2021 14:16:02 +0000
> Jonathan Cameron via Linaro-open-discussions <
> linaro-open-discussions(a)op-lists.linaro.org> wrote:
>
> > On Fri, 20 Aug 2021 11:48:41 +0100
> > Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
> >
> > > On Wed, Aug 18, 2021 at 08:05:18PM +0800, Jammy Zhou wrote:
> > > > If we move it to 1st or 2nd, is there any topic to discuss? Otherwise,
> > > > maybe we can cancel it for this month.
> > >
> > > I would be _extremely_ grateful if Jonathan could run a session on
> > > his series:
> > >
> > >
> > > https://lore.kernel.org/linux-pci/20210804161839.3492053-1-Jonathan.Cameron…
> > >
> > > in preparation for LPC.
> >
> > Sure - would be good to get my thoughts in order on this and doing it
> > for next week will stop me leaving it all to the last minute.
> >
> > > Actually, I wanted to ask if there is a kernel tree/branch I can pull
> > > in order to review those patches; I am struggling to find a commit
> > > base to apply them to.
> >
> > My bad. I got lazy in the build-up to vacation and didn't put one up
> > anywhere. The various trees involved are rather too dynamic for just
> > pointing at them and saying "apply these series" (which have merge
> > conflicts to resolve). Naughty me...
> >
> > Currently wading through the message backlog, but will aim to have a
> > branch up sometime tomorrow, plus ideally some more detailed
> > instructions on getting it up and running.
>
> Hi Lorenzo / All,
>
> https://github.com/hisilicon/kernel-dev/tree/doe-spdm-v1 rebased to 5.14-rc7
> https://github.com/hisilicon/qemu/tree/cxl-hacks rebased to qemu/master as of earlier today.
>
> For the qemu side of things you need to be running spdm_responder_emu
> --trans PCI_DOE from https://github.com/DMTF/spdm-emu first (it acts as
> the server, with qemu acting as the client). Various parameters allow
> you to change the advertised algorithms, and the kernel code should
> work for all the ones CMA mandates (but nothing beyond that for now).
>
> For the cxl device the snippet of qemu command line needed is:
> -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true
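> Roughly, the run order is the responder first, then qemu - a sketch
> only; the responder binary location and the rest of the qemu command
> line depend on your own build/setup:
>
>   # Terminal 1: start the SPDM responder (acts as the server)
>   ./spdm_responder_emu --trans PCI_DOE
>
>   # Terminal 2: launch qemu as usual with the cxl-type3 device added;
>   # it connects to the responder as the SPDM client
>   qemu-system-<arch> <usual options> \
>     -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true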
>
> Otherwise much the same as https://people.kernel.org/jic23/
>
> Build at least the cxl_pci driver as a module, as we need to poke the
> certificate into the keyring before the driver probes (the cert can be
> found in the spdm-emu tree). Instructions for doing that with keyctl
> and evmctl are in the cover letter of the patch series.
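> To give a feel for the shape of that step (the keyring name and cert
> path below are purely illustrative - use the exact commands from the
> cover letter):
>
>   # Create a keyring and load the DER certificate from spdm-emu into it
>   KEYRING=$(keyctl newring _cma @u)
>   keyctl padd asymmetric "" "$KEYRING" < /path/to/spdm-emu/<cert>.der
>   # Only then load the driver, so that probe can find the certificate
>   modprobe cxl_pci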
>
> Hopefully I'll find some time next week to put together some proper
> instructions and email them
> in reply to the original posting.
>
> Jonathan
>
>
>
> >
> > Jonathan
> >
> > >
> > > Thanks,
> > > Lorenzo
> > >
> > > > On Tue, 17 Aug 2021 at 21:42, Jonathan Cameron <
> Jonathan.Cameron(a)huawei.com>
> > > > wrote:
> > > >
> > > > On Tue, 17 Aug 2021 12:28:45 +0100
> > > > Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
> > > >
> > > > > On Mon, Aug 16, 2021 at 08:41:29AM +0000, Jonathan Cameron
> via
> > > > Linaro-open-discussions wrote:
> > > > > >
> > > > > >
> > > > > > Hi Jammy,
> > > > > >
> > > > > > I'll be away until the 26th, but obviously that should be no barrier
> > > > > > to topics I am less involved with as I can easily catch up later.
> > > > >
> > > > > Maybe it is best to reschedule it to early September since it is
> > > > > holiday season. I could do September 1st and 2nd and the following
> > > > > week.
> > > >
> > > > 1st and 2nd work for me. Early the following week is also fine.
> > > > Note Linaro Connect is 8-10th Sept, so we should avoid any clashes.
> > > >
> > > > Jonathan
> > > >
> > > >
> > > > >
> > > > > Please let me know what you think.
> > > > >
> > > > > Thanks,
> > > > > Lorenzo
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Jonathan
> > > > > >
> > > > > >
> > > > > > ________________________________
> > > > > >
> > > > > > Jonathan Cameron
> > > > > > Mobile: +44-7870588074
> > > > > > Email: jonathan.cameron(a)huawei.com
> > > > > > From:Jammy Zhou via Linaro-open-discussions <
> > > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > > To:Lorenzo Pieralisi via Linaro-open-discussions <
> > > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > > Date:2021-08-16 06:01:50
> > > > > > Subject:[Linaro-open-discussions] LOD Meeting Agenda for
> August 23
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > We're going to have the next monthly call for Linaro Open
> Discussions
> > > > in
> > > > > > one week. Please let me know if you have any topic for
> discussion.
> > > > > >
> > > > > > Thanks,
> > > > > > Jammy
> > > > > > --
> > > > > > Linaro-open-discussions mailing list
> > > > > >
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > > > > >
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
> > > >
> > > >
> >
>
> --
> Linaro-open-discussions mailing list
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
>
On Mon, Aug 16, 2021 at 08:41:29AM +0000, Jonathan Cameron via Linaro-open-discussions wrote:
>
>
> Hi Jammy,
>
> I'll be away until the 26th, but obviously that should be no barrier
> to topics I am less involved with as I can easily catch up later.
Maybe it is best to reschedule it to early September since it is holiday
season. I could do September 1st and 2nd and the following week.
Please let me know what you think.
Thanks,
Lorenzo
>
> Thanks
>
> Jonathan
>
>
> ________________________________
>
> Jonathan Cameron
> Mobile: +44-7870588074
> Email: jonathan.cameron(a)huawei.com
> From:Jammy Zhou via Linaro-open-discussions <linaro-open-discussions(a)op-lists.linaro.org>
> To:Lorenzo Pieralisi via Linaro-open-discussions <linaro-open-discussions(a)op-lists.linaro.org>
> Date:2021-08-16 06:01:50
> Subject:[Linaro-open-discussions] LOD Meeting Agenda for August 23
>
> Hi all,
>
> We're going to have the next monthly call for Linaro Open Discussions in
> one week. Please let me know if you have any topic for discussion.
>
> Thanks,
> Jammy
> --
> Linaro-open-discussions mailing list
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
Hi Jammy,
I'll be away until the 26th, but obviously that should be no barrier to topics I am less involved with as I can easily catch up later.
Thanks
Jonathan
________________________________
Jonathan Cameron
Mobile: +44-7870588074
Email: jonathan.cameron(a)huawei.com
From:Jammy Zhou via Linaro-open-discussions <linaro-open-discussions(a)op-lists.linaro.org>
To:Lorenzo Pieralisi via Linaro-open-discussions <linaro-open-discussions(a)op-lists.linaro.org>
Date:2021-08-16 06:01:50
Subject:[Linaro-open-discussions] LOD Meeting Agenda for August 23
Hi all,
We're going to have the next monthly call for Linaro Open Discussions in
one week. Please let me know if you have any topic for discussion.
Thanks,
Jammy
--
Linaro-open-discussions mailing list
https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
Hi all,
We're going to have the next monthly call for Linaro Open Discussions in
one week. Please let me know if you have any topic for discussion.
Thanks,
Jammy
Hi,
As agreed in the LOD call today, I asked Arunachalam to share his demo
set-up guidelines following his demo at LVC:
https://connect.linaro.org/resources/lvc21/lvc21-305/
He agreed to share the FVP-based set-up so that it can be
reused/refactored with QEMU by the Huawei developers who are
interested.
For any questions or queries, let's use this public thread.
Thank you very much.
Lorenzo
On 7/22/21 1:37 PM, Song Bao Hua (Barry Song) wrote:
> Regarding squashing the scheduler level, what you say is true. However,
> as long as we are changing code based on if (sched_cluster_present)
> in the wake_affine path, we need to care about this.
I'm wondering if we can use the SD_CLUSTER flag I introduced in my
patch 2 sent yesterday to determine whether we have a cluster sched
domain, instead of sched_cluster_present. Then if we squash the
cluster sched domain we'll skip the logic for that sched domain,
instead of relying on a global flag.
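As a quick sanity check of which domains actually get built on a given
system (assuming SCHED_DEBUG and debugfs are enabled; on kernels before
v5.13 the same information is under /proc/sys/kernel/sched_domain/):

  # List the per-CPU sched domain names and flags
  grep . /sys/kernel/debug/sched/domains/cpu0/domain*/name
  grep . /sys/kernel/debug/sched/domains/cpu0/domain*/flags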
>
>> If the CLS domain equals the MC domain, CLS will eventually stay and
>> MC will be destroyed; nothing needs to be done then. If the CLS domain
>> equals the SMT domain, SMT will stay and we need to modify
>> @sched_cluster_present then. But I doubt whether there is a real case
>> where CLS will be the same as SMT.
>
> It doesn't matter whether there is real hardware. What matters is how
> the topology patch (patch 1) will parse the ACPI table and whether
> ACPI has a particular flag for clusters.
>
> Thanks
> Barry
>
Barry & Yicong,
I've added this series to allow for run-time control of cluster
scheduling via /proc/sys/kernel/sched_cluster_enabled.
I've defaulted the setting to off as this is probably the safest
option and will encounter the least resistance.
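Toggling it at run time is just the usual sysctl write (assuming a
kernel with this series applied):

  # Enable cluster scheduling
  echo 1 > /proc/sys/kernel/sched_cluster_enabled
  # Disable it again (the default in this series)
  echo 0 > /proc/sys/kernel/sched_cluster_enabled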
I've also added an SD_CLUSTER flag in patch 2. It may be handy
if we want to do any cluster-specific scheduling operations in
a cluster sched domain.
This could be a follow-on after the main patchset is posted.
I've tested it on my x86 machine. I wonder if you can test it
on your ARM system to make sure it works properly there.
I will appreciate your feedback and review.
Tim
Tim Chen (3):
sched: Create SDTL_SKIP flag to skip topology level
sched: Add SD_CLUSTER topology flag to cluster sched domain
sched: Add runtime knob sysctl_sched_cluster_enabled
arch/x86/kernel/smpboot.c | 8 +++++
drivers/base/arch_topology.c | 7 ++++
include/linux/sched/sd_flags.h | 7 ++++
include/linux/sched/sysctl.h | 6 ++++
include/linux/sched/topology.h | 3 +-
include/linux/topology.h | 7 ++++
kernel/sched/core.c | 1 +
kernel/sched/sched.h | 6 ++++
kernel/sched/topology.c | 58 +++++++++++++++++++++++++++++++++-
kernel/sysctl.c | 11 +++++++
10 files changed, 112 insertions(+), 2 deletions(-)
--
2.20.1