[Linaro-open-discussions] Notes on Linaro Open Discussions meeting 4 Nov 2020

Jonathan Cameron Jonathan.Cameron at Huawei.com
Wed Nov 25 16:14:07 UTC 2020


On Wed, 25 Nov 2020 16:05:51 +0000
Mike Holmes <mike.holmes at linaro.org> wrote:

> On Wed, Nov 25, 2020 at 4:01 PM Lorenzo Pieralisi via
> Linaro-open-discussions <linaro-open-discussions at op-lists.linaro.org> wrote:
> 
> > On Wed, Nov 25, 2020 at 03:22:19PM +0000, Jonathan Cameron wrote:  
> > > On Wed, 25 Nov 2020 13:58:38 +0000
> > > Lorenzo Pieralisi <lorenzo.pieralisi at arm.com> wrote:
> > >  
> > > > On Wed, Nov 25, 2020 at 12:51:49PM +0000, Jonathan Cameron wrote:
> > > >
> > > > [...]
> > > >  
> > > > > > Is December 2nd confirmed as the next session? If possible, 3PM GMT
> > > > > > works better for me; the NUMA topic raised by Jonathan is another
> > > > > > interesting topic for debate.
> > > > >
> > > > > Just to check, which NUMA topic?  
> > > >
> > > > https://op-lists.linaro.org/pipermail/linaro-open-discussions/2020-November/000016.html
> > > >
> > > > It would be useful (for me certainly) if you could give an update on
> > > > what's still pending from the items in the discussion above (e.g. it is
> > > > not clear what the "PCI fix" is) and how it is linked to CXL.
> > >
> > > Sure, that should be a fairly short topic, but I'm happy to try to fill in
> > > the gaps.
> > > @Mike, can you add "Generic initiators - pending items" to the agenda?
> > >
> > > For the PCI 'fix', it is this one, which got applied and then reverted in
> > > the 4.20 timeframe:
> > > http://patchwork.ozlabs.org/project/linux-pci/patch/20180912152140.3676-2-Jonathan.Cameron@huawei.com/
> > > I've not rebased or tested it recently, so I'll check that it still appears
> > > to be right before the call.
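> > >
> > > Roughly, the idea was along these lines (a from-memory sketch only, not
> > > the actual patch; acpi_get_node() and set_dev_node() are existing kernel
> > > helpers, and the exact hook point into PCI enumeration is glossed over):
> > >
> > > #include <linux/acpi.h>
> > > #include <linux/numa.h>
> > > #include <linux/pci.h>
> > >
> > > /* Honour a device-specific _PXM, if firmware provides one, rather than
> > >  * always inheriting the NUMA node of the parent bus. */
> > > static void pci_acpi_set_dev_node(struct pci_dev *pdev,
> > >                                   struct acpi_device *adev)
> > > {
> > >         int node = acpi_get_node(adev->handle);
> > >
> > >         /* No _PXM anywhere up the tree: keep the bus's node. */
> > >         if (node == NUMA_NO_NODE)
> > >                 return;
> > >
> > >         set_dev_node(&pdev->dev, node);
> > > }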
> > >  
> > > >
> > > > The vCPU hotplug is also worth adding (even though I do not know what
> > > > was discussed at KVM forum).  
> > >
> > > I was just about to email about that one. From our side, all that is
> > > currently going on around vCPU hotplug is a rebase. At KVM Forum, Mark R
> > > kindly offered to see if he could find answers to a few open questions.
> > > Perhaps catch up with Mark, or see if he can make the call?
> >
> > Ok, will catch up with him; it is probably better if I have some time
> > to do it (i.e. postpone the call for a week).
> >  
> > > > Furthermore, I am keen on discussing this:
> > > >
> > > > https://lore.kernel.org/linux-arm-kernel/20201123065410.1915-1-lushenming@huawei.com
> > > >
> > > > if the submitters are available, it would help to get some context in
> > > > relation to upstream discussions.  
> > >
> > > I've messaged lushenming so hopefully we can sort out something on that
> > > front.
> > >
> > > Looks like finding a time next week is proving challenging.  If that
> > > fails, perhaps we should try for the week after?  
> >
> > Yes we can; it would be easier to prepare the topics and to find a suitable
> > day as well, since next week looks challenging. It would also give us some
> > time to extend the invite.
> >
> > Please let me know, and apologies to Mike for all these emails to get it
> > organized.
> >  
> 
> No problem. As soon as it is confirmed that we want to try for the following
> week, I can set up a poll to pick the day and hour.

Definitely sounds like a better plan. Fingers crossed people have slightly
more open schedules that week!

Thanks for sorting this out, Mike.

Jonathan

> 
> 
> >
> > Thanks,
> > Lorenzo
> >  
> > > Thanks,
> > >
> > > Jonathan
> > >
> > >  
> > > >
> > > > Thanks !
> > > > Lorenzo
> > > >  
> > > > > I've lost track of what we are talking about.
> > > > >  
> > > > > > Other than that we can slot in the topics that
> > > > > > weren't discussed last time:
> > > > > >
> > > > > >  
> > https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home  
> > > > > >
> > > > > > even though those require a bit of preparation, so the sooner we
> > > > > > finalize the schedule the better.
> > > > >
> > > > > Seems we have clashed with some internal Huawei activities, so some
> > > > > people who would normally be active are snowed under this week.
> > > > >
> > > > > Jonathan
> > > > >  
> > > > > >
> > > > > > Please let me know, thanks.
> > > > > >
> > > > > > Lorenzo
> > > > > >  
> > > > > > > * DT alignment. Don't want different solutions for each firmware
> > > > > > >   type.
> > > > > > > * Lorenzo / Sami to check IORT revision E is final.
> > > > > > >
> > > > > > > SVA
> > > > > > > ===
> > > > > > >
> > > > > > > Zhangfei gave a summary:
> > > > > > >  - Huawei has devices that are not PCIe but are presented as such.
> > > > > > >  - They support stall mode for SVA (spec violation).
> > > > > > >  - Resistance from kernel maintainers to maintaining a whitelist for
> > > > > > >    any quirk. Fine to fix it once (JPB), but not to keep doing so.
> > > > > > >  - Note that stall mode is not yet supported at all (JPB to send out
> > > > > > >    patches this cycle).
> > > > > > >  - If a longer-term fix can't be added via the PCI-SIG etc., then we
> > > > > > >    need to convince the PCI and SMMU maintainers. Noted that the
> > > > > > >    quirk is very little code (illustrative sketch below).
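> > > > > > >
> > > > > > >    (Sketch to illustrate the "very little code" point; the struct,
> > > > > > >    IDs and helper below are invented for illustration, not an
> > > > > > >    existing SMMU API:)
> > > > > > >
> > > > > > >    #include <linux/kernel.h>
> > > > > > >    #include <linux/types.h>
> > > > > > >
> > > > > > >    struct stall_quirk_id { u16 vendor; u16 device; };
> > > > > > >
> > > > > > >    static const struct stall_quirk_id stall_safe[] = {
> > > > > > >            { 0x19e5, 0xa250 },     /* example IDs, invented */
> > > > > > >            { 0x19e5, 0xa251 },
> > > > > > >    };
> > > > > > >
> > > > > > >    /* Allow stall only for devices known to implement it safely. */
> > > > > > >    static bool dev_is_stall_safe(u16 vendor, u16 device)
> > > > > > >    {
> > > > > > >            unsigned int i;
> > > > > > >
> > > > > > >            for (i = 0; i < ARRAY_SIZE(stall_safe); i++)
> > > > > > >                    if (stall_safe[i].vendor == vendor &&
> > > > > > >                        stall_safe[i].device == device)
> > > > > > >                            return true;
> > > > > > >            return false;
> > > > > > >    }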
> > > > > > >
> > > > > > > * Other SVA topics.
> > > > > > >   - Mentioned virtual SVA (no actual problems, just expressing
> > > > > > >     interest!)
> > > > > > >   - Would need Eric Auger; this wasn't on the topic list, so Eric
> > > > > > >     was not on the call.
> > > > > > >
> > > > > > > AI: Nothing planned until after JPB has upstreamed stall mode. Hard
> > > > > > >     to have a discussion before that.
> > > > > > >
> > > > > > > DVFS
> > > > > > > ====
> > > > > > >
> > > > > > > guohanjun
> > > > > > >
> > > > > > > Solutions exist for
> > > > > > > * CPU DVFS (voltage + frequency scaling)
> > > > > > > * PCIe device power states etc
> > > > > > >
> > > > > > > No standard way of controlling uncore voltage and frequency for
> > > > > > > ACPI-based systems.
> > > > > > >
> > > > > > > 3 options:
> > > > > > > 1. MMIO / kernel driver (rough sketch below).
> > > > > > > 2. PSCI via trusted firmware and the system management controller.
> > > > > > > 3. ACPI (wrapping up an OpRegion and SCMI).
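> > > > > > >
> > > > > > > (A minimal sketch of option 1 for concreteness; the register
> > > > > > > offsets and semantics below are entirely invented:)
> > > > > > >
> > > > > > > #include <linux/errno.h>
> > > > > > > #include <linux/io.h>
> > > > > > > #include <linux/types.h>
> > > > > > >
> > > > > > > #define UNCORE_FREQ_REQ 0x00    /* requested freq (MHz), RW */
> > > > > > > #define UNCORE_FREQ_CUR 0x04    /* granted freq (MHz), RO */
> > > > > > >
> > > > > > > static void __iomem *uncore_base;       /* ioremap()ed at probe */
> > > > > > >
> > > > > > > static int uncore_set_freq_mhz(u32 mhz)
> > > > > > > {
> > > > > > >         writel(mhz, uncore_base + UNCORE_FREQ_REQ);
> > > > > > >         /* Assume the hardware clamps to a supported point. */
> > > > > > >         return readl(uncore_base + UNCORE_FREQ_CUR) == mhz ?
> > > > > > >                 0 : -ERANGE;
> > > > > > > }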
> > > > > > >
> > > > > > > Clarifications / discussions.
> > > > > > >  * Vincent G: Are power states or voltage/frequency of interest?
> > > > > > >    Ans: voltage/frequency.
> > > > > > >  * Considered SCMI? Ans: Works only for DT, as SCMI under ACPI is
> > > > > > >    wrapped up in AML and so looks like an ACPI interface.
> > > > > > >  * Sudeep H: Necessary to trace CPU freq? Yes.
> > > > > > >  * Sudeep H: Why not do it entirely in firmware? Ans: It is not just
> > > > > > >    the CPU; for example, a PCI device accessing memory may well need
> > > > > > >    the ring bus to be fast.
> > > > > > >  * Vincent G: Is bandwidth affected? Yes. VG: mobile does this by
> > > > > > >    specifying a bandwidth requirement (via SCMI).
> > > > > > >  * Sudeep H: Observed the need to expose it via the ACPI spec
> > > > > > >    (option 3 above).
> > > > > > >  * Sudeep H: Does PCI also need fine-grained control? We might need
> > > > > > >    to add it to the spec.
> > > > > > >  * Sudeep H: What are the requirements? guohanjun: For now, just
> > > > > > >    frequency scaling.
> > > > > > >  * Jonathan C: Noted that PCI power state is not enough; it's
> > > > > > >    workload dependent.
> > > > > > >  * Sudeep H: We need to gather all the info and talk in the ASWG
> > > > > > >    about DVFS.
> > > > > > >  * Jonathan C: For now, direct control probably makes sense. Whilst
> > > > > > >    it would be nice to have a detailed enough system description, in
> > > > > > >    a standard way, to enable generic software, that is a big spec
> > > > > > >    job.
> > > > > > >  * Jonathan C: Seems like truly standard SW will not happen any time
> > > > > > >    soon.
> > > > > > >
> > > > > > > AI: RFC to linux-pm / linux-acpi (Rafael and those in this
> > > > > > >     discussion) to ask about interest in adding per-device DVFS to
> > > > > > >     the ACPI spec. Possibly pursue the code-first ACPI approach.
> > > > > > >
> > > > > > > If I've mislisted or "volunteered" anyone for AIs they didn't agree
> > > > > > > to, then please correct that.
> > > > > > >
> > > > > > > Thanks all for your contributions. I for one found it a very useful
> > > > > > > call!
> > > > > > >
> > > > > > > Jonathan
> > > > > > >
> > > > > > >
> > --
> > Linaro-open-discussions mailing list
> > https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions