Hi Tim, I'd like to introduce my colleague Yicong to you. Yicong is also working on the scheduler with me.
Hi Yicong and Tim, I am planning to send the normal (non-RFC) patchset after 5.14-rc1, provided this patchset from TianTao can be merged before that: https://lore.kernel.org/lkml/1622712162-7028-1-git-send-email-tiantao6@hisil...
But first, I will only send the patchset supporting the spreading path, as the packing path is quite tricky. If we put them together, it will be hard for maintainers to review. Since SCHED_CLUSTER is disabled by default in Kconfig, the lack of a packing path won't hurt those workloads which prefer packing.
The three patches in this email thread form the first patchset, with the spreading path only. Note patch 1/3 will be rebased against:
https://lore.kernel.org/lkml/1622712162-7028-1-git-send-email-tiantao6@hisil...
https://lore.kernel.org/lkml/20210611052249.25776-1-song.bao.hua@hisilicon.c...
In the commit log, I put some TODOs which might need some benchmark data from Tim and Yicong.
Hi Tim, we don't have a Jacobsville machine; I would appreciate it a lot if you could provide some benchmark data for x86. Would you please work on this?
Thanks
Barry
Barry Song (1):
  scheduler: add scheduler level for clusters

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die

Tim Chen (1):
  scheduler: Add cluster scheduler level for x86
 Documentation/admin-guide/cputopology.rst | 26 +++++++--
 arch/arm64/Kconfig                        |  7 +++
 arch/arm64/kernel/topology.c              |  2 +
 arch/x86/Kconfig                          |  8 +++
 arch/x86/include/asm/smp.h                |  7 +++
 arch/x86/include/asm/topology.h           |  3 +
 arch/x86/kernel/cpu/cacheinfo.c           |  1 +
 arch/x86/kernel/cpu/common.c              |  3 +
 arch/x86/kernel/smpboot.c                 | 44 ++++++++++++++-
 drivers/acpi/pptt.c                       | 67 +++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 +++++
 drivers/base/topology.c                   | 10 ++++
 include/linux/acpi.h                      |  5 ++
 include/linux/arch_topology.h             |  5 ++
 include/linux/sched/topology.h            |  7 +++
 include/linux/topology.h                  | 13 +++++
 kernel/sched/topology.c                   |  5 ++
 17 files changed, 223 insertions(+), 5 deletions(-)
From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Both ACPI and DT provide the ability to describe additional layers of topology between that of individual cores and higher level constructs such as the level at which the last level cache is shared. In ACPI this can be represented in PPTT as a Processor Hierarchy Node Structure [1] that is the parent of the CPU cores and in turn has a parent Processor Hierarchy Nodes Structure representing a higher level of topology.
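For reference, the PPTT processor hierarchy node being walked here is ACPICA's struct acpi_pptt_processor (include/acpi/actbl2.h); roughly:

/* Sketch of the PPTT processor hierarchy node (Type 0), per ACPICA */
struct acpi_pptt_processor {
	struct acpi_subtable_header header;
	u16 reserved;
	u32 flags;			/* e.g. ACPI_PPTT_ACPI_PROCESSOR_ID_VALID,
					 * ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD */
	u32 parent;			/* offset of the parent hierarchy node */
	u32 acpi_processor_id;
	u32 number_of_priv_resources;
};

The parent field is what lets us walk up one topology level from a CPU node to its cluster.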
For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each cluster has 4 CPUs. All clusters share L3 cache data, but each cluster has its own local L3 tag. On the other hand, each cluster shares some internal system bus.
+-----------------------------------+      +---------+
|  +------+  +------+               |      |         |
|  | CPU0 |  | CPU1 |  +---------+  |      |         |
|  +------+  +------+  |         |  |      |         |
|                      |   L3    +--+------+         |
|  +------+  +------+  |   tag   |  |      |         |
|  | CPU2 |  | CPU3 |  +---------+  |      |   L3    |
|  +------+  +------+     cluster   |      |  data   |
+-----------------------------------+      |         |
+-----------------------------------+      |         |
|  +------+  +------+               |      |         |
|  | CPU4 |  | CPU5 |  +---------+  |      |         |
|  +------+  +------+  |         |  |      |         |
|                      |   L3    +--+------+         |
|  +------+  +------+  |   tag   |  |      |         |
|  | CPU6 |  | CPU7 |  +---------+  |      |         |
|  +------+  +------+     cluster   |      |         |
+-----------------------------------+      |         |
     (6 or 8 clusters per NUMA node)       |         |
                                           +---------+
That means the cost to transfer ownership of a cacheline between CPUs within a cluster is lower than between CPUs in different clusters on the same die. Hence, it can make sense to tell the scheduler to use the cache affinity of the cluster to make better decision on thread migration.
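As an illustration only (not part of this series), a minimal cacheline ping-pong between two pinned threads can show this cost; the pinning choices below assume cpu0-cpu3 form one cluster, so compare e.g. "taskset -c 0,1" (same cluster) against "taskset -c 0,4" (cross cluster):

/* Minimal sketch: two threads bouncing one cacheline.
 * Build with: gcc -O2 -pthread pingpong.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ROUNDS 1000000

static _Atomic int turn;	/* the bounced cacheline */

static void *pong(void *arg)
{
	for (int i = 0; i < ROUNDS; i++) {
		while (atomic_load(&turn) != 1)
			;	/* wait for ping */
		atomic_store(&turn, 0);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, pong, NULL);
	for (int i = 0; i < ROUNDS; i++) {
		atomic_store(&turn, 1);
		while (atomic_load(&turn) != 0)
			;	/* wait for pong */
	}
	pthread_join(t, NULL);
	puts("done");	/* wrap the loops with clock_gettime() to measure */
	return 0;
}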
This patch simply exposes this information to userspace libraries like hwloc by providing cluster_cpus and related sysfs attributes. PoC of HWLOC support at [2].
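For example, a minimal userspace sketch reading the new attribute (the sysfs path is exactly what this patch adds; everything else is illustrative):

/* Sketch: read cpu0's cluster siblings from the new sysfs attribute.
 * On a kernel without this patch the file simply doesn't exist.
 */
#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list", "r");

	if (!f) {
		perror("cluster_cpus_list");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("cpu0 cluster: %s", buf);	/* e.g. "0-3" on Kunpeng 920 */
	fclose(f);
	return 0;
}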
Note this patch only handles the ACPI case.
Special consideration is needed for SMT processors, where it is necessary to move 2 levels up the hierarchy from the leaf nodes (thus skipping the processor core level).
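In pseudocode, condensed from the pptt.c change below (error handling omitted):

	/* The node above a CPU node is the cluster candidate; for an SMT
	 * leaf the node above the CPU is the core, so take one more
	 * ->parent step to reach the cluster candidate.
	 */
	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
	cluster_node = fetch_pptt_node(table, cpu_node->parent);
	if (cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD)
		cluster_node = fetch_pptt_node(table, cluster_node->parent);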
Note that arm64 / ACPI does not provide any means of identifying a die level in the topology, but that may be unrelated to the cluster level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
---
- difference with RFC v6: detect a valid cluster ID before using offset
  (done by Tiantao);
- TODO: rebase against Tiantao's new topology cpumap APIs and new ABI doc.
 Documentation/admin-guide/cputopology.rst | 26 +++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 67 +++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 +++++
 drivers/base/topology.c                   | 10 ++++
 include/linux/acpi.h                      |  5 ++
 include/linux/arch_topology.h             |  5 ++
 include/linux/topology.h                  |  6 ++
 8 files changed, 132 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafcc8237..f9d374560047 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.

+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:

 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")

+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:

 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::

 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:

 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU

 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 4dd14a6620c1..9ab78ad826e2 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id = -1;
 			cpu_topology[cpu].core_id   = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae93350b70d..7effa3176484 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -736,6 +736,73 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 					  ACPI_PPTT_PHYSICAL_PACKAGE);
 }

+/**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	if (cluster_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)
+		retval = cluster_node->acpi_processor_id;
+	else
+		retval = ACPI_PTR_DIFF(cluster_node, table);
+
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
 /**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index c1179edc0f3b..0e1070aec26c 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -583,6 +583,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -600,6 +605,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;

+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);

@@ -618,6 +628,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);

+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -633,6 +646,7 @@ void __init reset_cpu_topology(void)

 		cpu_topo->thread_id = -1;
 		cpu_topo->core_id = -1;
+		cpu_topo->cluster_id = -1;
 		cpu_topo->package_id = -1;
 		cpu_topo->llc_id = -1;

@@ -648,6 +662,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fcc93d1..7157ac08ff57 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);

+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);

@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);

+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index c60745f657e9..cae0403c6989 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1330,6 +1330,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1342,6 +1343,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index f180240dc95f..b97cea83b25e 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -62,10 +62,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };

@@ -73,13 +75,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];

 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd737061..80d27d717631 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)	((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)	((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)	((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)	cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)	cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)	cpumask_of(cpu)
 #endif
The ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each cluster has 4 CPUs. All clusters share L3 cache data, but each cluster has its own local L3 tag. On the other hand, each cluster shares some internal system bus. This means the cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds a sched_domain for clusters. On Kunpeng 920, without this patch, domain0 of cpu0 would be MC, covering cpu0~cpu23; with this patch, MC becomes domain1, and a new domain0 "CLS" covers cpu0-cpu3.
This will help spread tasks among clusters, thus decreasing contention and improving throughput. Verified by Mel's mm-tests, writing config files as below to define the number of threads for stream, e.g. configs/config-workload-stream-omp-4threads:

export STREAM_SIZE=$((1048576*512))
export STREAM_THREADS=4
export STREAM_METHOD=omp
export STREAM_ITERATIONS=5
export STREAM_BUILD_FLAGS="-lm -Ofast"
Ran the stream benchmark on kunpeng920 with 4 numa nodes (each node has 24 cores) by commands like:

numactl -N 0 -m 0 ./run-mmtests.sh -c \
	configs/config-workload-stream-omp-4threads tip-sched-core-4threads
and compared the cases between tip/sched/core and tip/sched/core with cluster scheduler. The result is as below:
4threads stream (on 1numa * 24cores = 24cores)
                       stream                 stream
                     4threads     4threads-cluster-scheduler
MB/sec copy     29929.64 (   0.00%)    32932.68 (  10.03%)
MB/sec scale    29861.10 (   0.00%)    32710.58 (   9.54%)
MB/sec add      27034.42 (   0.00%)    32400.68 (  19.85%)
MB/sec triad    27225.26 (   0.00%)    31965.36 (  17.41%)

6threads stream (on 1numa * 24cores = 24cores)
                       stream                 stream
                     6threads     6threads-cluster-scheduler
MB/sec copy     40330.24 (   0.00%)    42377.68 (   5.08%)
MB/sec scale    40196.42 (   0.00%)    42197.90 (   4.98%)
MB/sec add      37427.00 (   0.00%)    41960.78 (  12.11%)
MB/sec triad    37841.36 (   0.00%)    42513.64 (  12.35%)

12threads stream (on 1numa * 24cores = 24cores)
                       stream                 stream
                    12threads    12threads-cluster-scheduler
MB/sec copy     52639.82 (   0.00%)    53818.04 (   2.24%)
MB/sec scale    52350.30 (   0.00%)    53253.38 (   1.73%)
MB/sec add      53607.68 (   0.00%)    55198.82 (   2.97%)
MB/sec triad    54776.66 (   0.00%)    56360.40 (   2.89%)
The result was generated by commands like:

../../compare-kernels.sh --baseline tip-sched-core-4threads \
	--compare tip-sched-core-4threads-cluster-scheduler
Thus, it could help memory-bound workloads, especially under medium load. For example, running the mmtests configs/config-workload-lkp-compress benchmark on a 4numa*24cores=96 cores kunpeng920, the 12-, 21- and 30-thread cases show the best improvement:
lkp-pbzip2 (on 4numa * 24cores = 96cores)
                  lkp                      lkp
                  compress-w/o-cluster     compress-w/-cluster
Hmean tput-2    11062841.57 (   0.00%)    11341817.51 *   2.52%*
Hmean tput-5    26815503.70 (   0.00%)    27412872.65 *   2.23%*
Hmean tput-8    41873782.21 (   0.00%)    43326212.92 *   3.47%*
Hmean tput-12   61875980.48 (   0.00%)    64578337.51 *   4.37%*
Hmean tput-21  105814963.07 (   0.00%)   111381851.01 *   5.26%*
Hmean tput-30  150349470.98 (   0.00%)   156507070.73 *   4.10%*
Hmean tput-48  237195937.69 (   0.00%)   242353597.17 *   2.17%*
Hmean tput-79  360252509.37 (   0.00%)   362635169.23 *   0.66%*
Hmean tput-96  394571737.90 (   0.00%)   400952978.48 *   1.62%*
Ran the same benchmark with "numactl -N 0 -m 0" from 2 threads to 24 threads on numa node0 with 24 cores:
lkp-pbzip2 (on 1numa * 24cores = 24cores)
                  lkp                           lkp
                  compress-1numa-w/o-cluster    compress-1numa-w/-cluster
Hmean tput-2    11071705.49 (   0.00%)    11296869.10 *   2.03%*
Hmean tput-4    20782165.19 (   0.00%)    21949232.15 *   5.62%*
Hmean tput-6    30489565.14 (   0.00%)    33023026.96 *   8.31%*
Hmean tput-8    40376495.80 (   0.00%)    42779286.27 *   5.95%*
Hmean tput-12   61264033.85 (   0.00%)    62995632.78 *   2.83%*
Hmean tput-18   86697139.39 (   0.00%)    86461545.74 (  -0.27%)
Hmean tput-24  104854637.04 (   0.00%)   104522649.46 *  -0.32%*
In the case of 6 threads and 8 threads, we see the greatest performance improvement.
Similar improvement was seen on lkp-pixz though the improvement is smaller:
lkp-pixz (on 1numa * 24cores = 24cores)
                  lkp                           lkp
                  compress-1numa-w/o-cluster    compress-1numa-w/-cluster
Hmean tput-2     6486981.16 (   0.00%)     6561515.98 *   1.15%*
Hmean tput-4    11645766.38 (   0.00%)    11614628.43 (  -0.27%)
Hmean tput-6    15429943.96 (   0.00%)    15957350.76 *   3.42%*
Hmean tput-8    19974087.63 (   0.00%)    20413746.98 *   2.20%*
Hmean tput-12   28172068.18 (   0.00%)    28751997.06 *   2.06%*
Hmean tput-18   39413409.54 (   0.00%)    39896830.55 *   1.23%*
Hmean tput-24   49101815.85 (   0.00%)    49418141.47 *   0.64%*
On the other hand, it is slightly helpful to cpu-bound tasks. With configs/config-workload-kernbench like:

export KERNBENCH_ITERATIONS=3
export KERNBENCH_MIN_THREADS=$((NUMCPUS/4))
export KERNBENCH_MAX_THREADS=$((NUMCPUS))
export KERNBENCH_CONFIG=allmodconfig
export KERNBENCH_TARGETS=vmlinux,modules
export KERNBENCH_SKIP_WARMUP=yes
export MMTESTS_THREAD_CUTOFF=
export KERNBENCH_VERSION=5.3

Ran kernbench with 24, 48 and 96 threads to compile an entire kernel without numactl binding; each case ran 3 iterations.

24 threads w/o and w/ cluster-scheduler:
w/o  10:03.26  10:00.46  10:01.09
w/   10:01.11  10:00.83   9:58.64

48 threads w/o and w/ cluster-scheduler:
w/o   5:33.96   5:34.28   5:34.06
w/    5:32.65   5:32.57   5:33.25

96 threads w/o and w/ cluster-scheduler:
w/o   3:33.34   3:31.22   3:31.31
w/    3:32.22   3:30.47   3:32.69
kernbench (on 4numa * 24cores = 96cores)
                     kernbench                kernbench
                   w/o-cluster                w/-cluster
Min   user-24    12054.67 (   0.00%)    12024.19 (   0.25%)
Min   syst-24     1751.51 (   0.00%)     1731.68 (   1.13%)
Min   elsp-24      600.46 (   0.00%)      598.64 (   0.30%)
Min   user-48    12361.93 (   0.00%)    12315.32 (   0.38%)
Min   syst-48     1917.66 (   0.00%)     1892.73 (   1.30%)
Min   elsp-48      333.96 (   0.00%)      332.57 (   0.42%)
Min   user-96    12922.40 (   0.00%)    12921.17 (   0.01%)
Min   syst-96     2143.94 (   0.00%)     2110.39 (   1.56%)
Min   elsp-96      211.22 (   0.00%)      210.47 (   0.36%)
Amean user-24    12063.99 (   0.00%)    12030.78 *   0.28%*
Amean syst-24     1755.20 (   0.00%)     1735.53 *   1.12%*
Amean elsp-24      601.60 (   0.00%)      600.19 (   0.23%)
Amean user-48    12362.62 (   0.00%)    12315.56 *   0.38%*
Amean syst-48     1921.59 (   0.00%)     1894.95 *   1.39%*
Amean elsp-48      334.10 (   0.00%)      332.82 *   0.38%*
Amean user-96    12925.27 (   0.00%)    12922.63 (   0.02%)
Amean syst-96     2146.66 (   0.00%)     2122.20 *   1.14%*
Amean elsp-96      211.96 (   0.00%)      211.79 (   0.08%)
[ Hi Yicong, Is it possible for you to run similar SPECrate mcf test with Tim and get some supportive data here?
Thanks
Barry ]
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/Kconfig             | 7 +++++++
 include/linux/sched/topology.h | 7 +++++++
 include/linux/topology.h       | 7 +++++++
 kernel/sched/topology.c        | 5 +++++
 4 files changed, 26 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..3b54ea4e1bd7 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -999,6 +999,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.

+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or sharing LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778b7c91..2f9166f6dec8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif

+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d717631..0b3704ad13c8 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif

+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 55a0a243e871..c7523dc7aab7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1511,6 +1511,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
On 2021/6/16 17:36, Barry Song wrote:
> [...]
> [ Hi Yicong, Is it possible for you to run similar SPECrate mcf test
> with Tim and get some supportive data here?
For our Kunpeng 920, I first ran the whole intrate suite with 32 copies, without binding to a NUMA node. Here's the result:
                     Base          Base                 Base
Benchmarks          Copies       Run Time               Rate
---------------     ------    ----------------    ------------------
500.perlbench_r       32      w/o  580 (w  578)   w/o 87.8 (w 88.2)  *
502.gcc_r             32      w/o  500 (w  504)   w/o 90.5 (w 90.0)  *
505.mcf_r             32      w/o  764 (w  767)   w/o 67.7 (w 67.4)  *
520.omnetpp_r         32      w/o 1030 (w 1024)   w/o 40.7 (w 41.0)  *
523.xalancbmk_r       32      w/o  584 (w  584)   w/o 57.9 (w 57.8)  *
525.x264_r            32      w/o  285 (w  284)   w/o 196  (w 197)   *
531.deepsjeng_r       32      w/o  336 (w  338)   w/o 109  (w 108)   *
541.leela_r           32      w/o  570 (w  569)   w/o 93.0 (w 93.1)  *
548.exchange2_r       32      w/o  526 (w  532)   w/o 160  (w 157)   *
557.xz_r              32      w/o  538 (w  542)   w/o 64.2 (w 63.8)  *
 Est. SPECrate2017_int_base                       w/o 87.4 (w 87.2)
(w/o is without the patch; the bigger the rate, the better)
Then I tested mcf_r alone with different numbers of copies, bound to NUMA 0:
                  Base                  Base
                Run Time                Rate
                -------              ---------
 4 Copies    w/o 618 (w 580)      w/o 10.5 (w 11.1)
 8 Copies    w/o 645 (w 647)      w/o 20.0 (w 20)
16 Copies    w/o 849 (w 844)      w/o 30.4 (w 30.6)
As I checked with htop, the tasks running on the CPUs didn't spread rigidly across the clusters.
I didn't apply patch #3 as I met some conflicts and didn't try to resolve them. As we're testing on arm64, I think it's okay to test without patch #3.
The machine I tested has 128 cores in 2 sockets and 4 NUMA nodes with 32 cores each. Of course, still 4 cores per cluster. Below is the memory info per NUMA node:
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 257190 MB
node 0 free: 254203 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 1 size: 258005 MB
node 1 free: 257191 MB
node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 2 size: 96763 MB
node 2 free: 96158 MB
node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 3 size: 127540 MB
node 3 free: 126922 MB
node distances:
node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  22  24
  2:  20  22  10  12
  3:  22  24  12  10
Any comments? I noticed Tim observed that sleep and wakeup have some influence, so I wonder whether the SPEC CPU intrate test also suffers from this.
Thanks, Yicong
> Thanks
> Barry ]
On 6/17/21 4:33 AM, Yicong Yang wrote:
> (w/o is without the patch, the bigger the rate is the better)
>
> Then I test the mcf_r alone with different copies and bind to NUMA 0:
>
>                   Base                  Base
>                 Run Time                Rate
>                 -------              ---------
>  4 Copies    w/o 618 (w 580)      w/o 10.5 (w 11.1)
>  8 Copies    w/o 645 (w 647)      w/o 20.0 (w 20)
> 16 Copies    w/o 849 (w 844)      w/o 30.4 (w 30.6)
>
> As I checked from the htop, the tasks running on the cpu didn't
> spread through the clusters rigidly.
Looking at the code, it seems like the active load balance path should run to move a running task from a CPU in the overloaded cluster to the empty cluster in the 4 copies test.
I wonder whether in the meantime the task has slept, as we do active balance via cpu_stop, which takes some time to stop the CPUs; so we fail to move the task as it is no longer running.
I am wondering if we are incurring more active load balance cpu stop overhead without reaping the benefit of actually balancing the tasks.
Do you notice an increase in the rate of calls to active_load_balance_cpu_stop for the 4 copies case compared to the vanilla kernel?
> I didn't apply Patch #3 as I met some conflicts and didn't try to
> resolve it. As we're testing on arm64 I think it's okay to test
> without patch #3.
>
> The machine I have tested have 128 cores in 2 sockets and 4 numas
> with 32 cores each. Of course, still 4 cores in one cluster.
> [...]
>
> Any comments? I notice Tim observed that sleep and wakeup will have
> some influences. So I wonder whether the speccpu intrate test also
> suffers from this.
This could be the case. We should probably check whether a single copy of mcf sleeps or gets blocked frequently.
Tim
-----Original Message-----
From: yangyicong
Sent: Thursday, June 17, 2021 11:34 PM
To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; yangyicong <yangyicong@huawei.com>; tim.c.chen@linux.intel.com; linaro-open-discussions@op-lists.linaro.org
Cc: guodong.xu@linaro.org; tangchengchang <tangchengchang@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>; tiantao (H) <tiantao6@hisilicon.com>; Jonathan Cameron <jonathan.cameron@huawei.com>; Linuxarm <linuxarm@huawei.com>; yangyicong <yangyicong@huawei.com>
Subject: Re: [PATCH 2/3] scheduler: add scheduler level for clusters
> On 2021/6/16 17:36, Barry Song wrote:
>> [...]
> I didn't apply Patch #3 as I met some conflicts and didn't try to
> resolve it. As we're testing on arm64 I think it's okay to test
> without patch #3.
Patchset is based on tip/sched/core:

git log --oneline
fd0c52f scheduler: add scheduler level for clusters
515f6f7 topology: Represent clusters of CPUs within a die
1faa491 sched/debug: Remove obsolete init_schedstats()
a9e906b Merge branch 'sched/urgent' into sched/core, to pick up fixes
fcf6631 sched/pelt: Ensure that *_sum is always synced with *_avg
475ea6c sched: Don't defer CPU pick to migration_cpu_stop()
> The machine I have tested have 128 cores in 2 sockets and 4 numas
> with 32 cores each. Of course, still 4 cores in one cluster.
> [...]
> Any comments? I notice Tim observed that sleep and wakeup will have
> some influences. So I wonder whether the speccpu intrate test also
> suffers from this.
Firstly, we need to check if CLS has been enabled via SCHED_CLUSTER in menuconfig:

root@ubuntu:/sys/kernel/debug/sched/domains/cpu0# ls
domain0  domain1  domain2  domain3  domain4

root@ubuntu:/sys/kernel/debug/sched/domains/cpu0# cat domain0/name
CLS

root@ubuntu:/sys/devices/system/cpu/cpu0/topology# cat cluster_cpus_list
0-3
> Thanks,
> Yicong
Thanks
Barry
Hi Barry and Tim,
As Barry pointed out, I didn't enable CONFIG_SCHED_CLUSTER... I'd like to share some updated results with the config correctly enabled.
I re-ran mcf_r with 4, 8 and 16 copies on NUMA 0. The result is like:

                  Base                  Base
                Run Time                Rate
                -------              ---------
 4 Copies    w/o 580 (w 570)      w/o 11.1 (w 11.3)
 8 Copies    w/o 647 (w 605)      w/o 20.0 (w 21.4)
16 Copies    w/o 844 (w 844)      w/o 30.6 (w 30.6)
Seems there is a ~7% improvement for 8 copies but little change for 4 and 16 copies.
This time, from htop, the tasks are spread well through the clusters.
For the 4 copies case I used

perf stat -e probe:active_load_balance_cpu_stop -- ./bin/runcpu default.cfg 505.mcf_r

to check the difference in 'active_load_balance_cpu_stop' with and without the patch. There is no difference and the counts are both 0.
I also ran the whole intrate suite on another machine, same model as the one above. 32 copies are launched on the whole system without binding to a specific NUMA node, and there seem to be some positive results.
(x264_r is not included as there is a bug for x264 when compiling with gcc 10: https://www.spec.org/cpu2017/Docs/benchmarks/625.x264_s.html. I'll fix this in the following test.)
[w/o]
                     Base       Base        Base
Benchmarks          Copies    Run Time      Rate
---------------     ------    --------    --------
500.perlbench_r       32        584         87.2  *
502.gcc_r             32        503         90.2  *
505.mcf_r             32        745         69.4  *
520.omnetpp_r         32       1031         40.7  *
523.xalancbmk_r       32        597         56.6  *
525.x264_r             1         --           CE
531.deepsjeng_r       32        336        109    *
541.leela_r           32        556         95.4  *
548.exchange2_r       32        513        163    *
557.xz_r              32        530         65.2  *
 Est. SPECrate2017_int_base                 80.3

[w]
                     Base       Base        Base
Benchmarks          Copies    Run Time      Rate
---------------     ------    --------    --------
500.perlbench_r       32        580         87.8 (+0.688%)   *
502.gcc_r             32        477         95.1 (+5.432%)   *
505.mcf_r             32        644         80.3 (+13.574%)  *
520.omnetpp_r         32        942         44.6 (+9.58%)    *
523.xalancbmk_r       32        560         60.4 (+6.714%)   *
525.x264_r             1         --           CE
531.deepsjeng_r       32        337        109   (+0.000%)   *
541.leela_r           32        554         95.6 (+0.210%)   *
548.exchange2_r       32        515        163   (+0.000%)   *
557.xz_r              32        524         66.0 (+1.227%)   *
 Est. SPECrate2017_int_base                 83.7 (+4.062%)
The test ran only 1 iteration; I'm going to increase it to 5 to see the average result.
Thanks, Yicong
On 2021/6/17 19:33, Yicong Yang wrote:
On 2021/6/16 17:36, Barry Song wrote:
ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data, but each cluster has local L3 tag. On the other hand, each clusters will share some internal system bus. This means cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds the sched_domain for clusters. On kunpeng 920, without this patch, domain0 of cpu0 would be MC with cpu0~cpu23 with ; with this patch, MC becomes domain1, a new domain0 "CLS" including cpu0-cpu3.
This will help spread tasks among clusters, thus decrease the contention and improve the throughput. Verified by Mel's mm-tests by writing config files as below to define the number of threads for stream,e.g configs/config-workload-stream-omp-4threads: export STREAM_SIZE=$((1048576*512)) export STREAM_THREADS=4 export STREAM_METHOD=omp export STREAM_ITERATIONS=5 export STREAM_BUILD_FLAGS="-lm -Ofast"
Ran the stream benchmark on kunpeng920 with 4numa nodes and each node has 24core by commands like: numactl -N 0 -m 0 ./run-mmtests.sh -c \ configs/config-workload-stream-omp-4threads tip-sched-core-4threads
and compared the cases between tip/sched/core and tip/sched/core with cluster scheduler. The result is as below:
4threads stream (on 1numa * 24cores = 24cores) stream stream 4threads 4threads-cluster-scheduler MB/sec copy 29929.64 ( 0.00%) 32932.68 ( 10.03%) MB/sec scale 29861.10 ( 0.00%) 32710.58 ( 9.54%) MB/sec add 27034.42 ( 0.00%) 32400.68 ( 19.85%) MB/sec triad 27225.26 ( 0.00%) 31965.36 ( 17.41%)
6threads stream (on 1numa * 24cores = 24cores) stream stream 6threads 6threads-cluster-scheduler MB/sec copy 40330.24 ( 0.00%) 42377.68 ( 5.08%) MB/sec scale 40196.42 ( 0.00%) 42197.90 ( 4.98%) MB/sec add 37427.00 ( 0.00%) 41960.78 ( 12.11%) MB/sec triad 37841.36 ( 0.00%) 42513.64 ( 12.35%)
12threads stream (on 1numa * 24cores = 24cores) stream stream 12threads 12threads-cluster-scheduler MB/sec copy 52639.82 ( 0.00%) 53818.04 ( 2.24%) MB/sec scale 52350.30 ( 0.00%) 53253.38 ( 1.73%) MB/sec add 53607.68 ( 0.00%) 55198.82 ( 2.97%) MB/sec triad 54776.66 ( 0.00%) 56360.40 ( 2.89%)
The result was generated by commands like: ../../compare-kernels.sh --baseline tip-sched-core-4threads \ --compare tip-sched-core-4threads-cluster-scheduler
Thus, it could help memory-bound workload especially under medium load. For example, ran mmtests configs/config-workload-lkp-compress benchmark on 4numa*24cores=96 cores kunpeng920, 12,21,30 threads present the best improvement:
lkp-pbzip2 (on 4numa * 24cores = 96cores)
lkp lkp compress-w/o-cluster compress-w/-cluster
Hmean tput-2 11062841.57 ( 0.00%) 11341817.51 * 2.52%* Hmean tput-5 26815503.70 ( 0.00%) 27412872.65 * 2.23%* Hmean tput-8 41873782.21 ( 0.00%) 43326212.92 * 3.47%* Hmean tput-12 61875980.48 ( 0.00%) 64578337.51 * 4.37%* Hmean tput-21 105814963.07 ( 0.00%) 111381851.01 * 5.26%* Hmean tput-30 150349470.98 ( 0.00%) 156507070.73 * 4.10%* Hmean tput-48 237195937.69 ( 0.00%) 242353597.17 * 2.17%* Hmean tput-79 360252509.37 ( 0.00%) 362635169.23 * 0.66%* Hmean tput-96 394571737.90 ( 0.00%) 400952978.48 * 1.62%*
Ran the same benchmark by "numactl -N 0 -m 0" from 2 threads to 24 threads on numa node0 with 24 cores:
lkp-pbzip2 (on 1numa * 24cores = 24cores) lkp lkp compress-1numa-w/o-cluster compress-1numa-w/-cluster Hmean tput-2 11071705.49 ( 0.00%) 11296869.10 * 2.03%* Hmean tput-4 20782165.19 ( 0.00%) 21949232.15 * 5.62%* Hmean tput-6 30489565.14 ( 0.00%) 33023026.96 * 8.31%* Hmean tput-8 40376495.80 ( 0.00%) 42779286.27 * 5.95%* Hmean tput-12 61264033.85 ( 0.00%) 62995632.78 * 2.83%* Hmean tput-18 86697139.39 ( 0.00%) 86461545.74 ( -0.27%) Hmean tput-24 104854637.04 ( 0.00%) 104522649.46 * -0.32%*
In the case of 6 threads and 8 threads, we see the greatest performance improvement.
Similar improvement was seen on lkp-pixz, though the gain is smaller:
lkp-pixz (on 1numa * 24cores = 24cores)
                     lkp                          lkp
                     compress-1numa-w/o-cluster   compress-1numa-w/-cluster
Hmean tput-2      6486981.16 (   0.00%)    6561515.98 *   1.15%*
Hmean tput-4     11645766.38 (   0.00%)   11614628.43 (  -0.27%)
Hmean tput-6     15429943.96 (   0.00%)   15957350.76 *   3.42%*
Hmean tput-8     19974087.63 (   0.00%)   20413746.98 *   2.20%*
Hmean tput-12    28172068.18 (   0.00%)   28751997.06 *   2.06%*
Hmean tput-18    39413409.54 (   0.00%)   39896830.55 *   1.23%*
Hmean tput-24    49101815.85 (   0.00%)   49418141.47 *   0.64%*
On the other hand, it is slightly helpful to CPU-bound tasks. With configs/config-workload-kernbench like:

export KERNBENCH_ITERATIONS=3
export KERNBENCH_MIN_THREADS=$((NUMCPUS/4))
export KERNBENCH_MAX_THREADS=$((NUMCPUS))
export KERNBENCH_CONFIG=allmodconfig
export KERNBENCH_TARGETS=vmlinux,modules
export KERNBENCH_SKIP_WARMUP=yes
export MMTESTS_THREAD_CUTOFF=
export KERNBENCH_VERSION=5.3

ran kernbench with 24, 48 and 96 threads (NUMCPUS=96, so the sweep goes from 96/4=24 up to 96 threads) to compile an entire kernel without numactl binding; each case ran 3 iterations.

24 threads w/o and w/ cluster-scheduler:
w/o: 10:03.26 10:00.46 10:01.09
w/ : 10:01.11 10:00.83  9:58.64
48 threads w/o and w/ cluster-scheduler:
w/o: 5:33.96 5:34.28 5:34.06
w/ : 5:32.65 5:32.57 5:33.25
96 threads w/o and w/ cluster-scheduler:
w/o: 3:33.34 3:31.22 3:31.31
w/ : 3:32.22 3:30.47 3:32.69
kernbench (on 4numa * 24cores = 96cores)
                     kernbench              kernbench
                     w/o-cluster            w/-cluster
Min     user-24    12054.67 (   0.00%)    12024.19 (   0.25%)
Min     syst-24     1751.51 (   0.00%)     1731.68 (   1.13%)
Min     elsp-24      600.46 (   0.00%)      598.64 (   0.30%)
Min     user-48    12361.93 (   0.00%)    12315.32 (   0.38%)
Min     syst-48     1917.66 (   0.00%)     1892.73 (   1.30%)
Min     elsp-48      333.96 (   0.00%)      332.57 (   0.42%)
Min     user-96    12922.40 (   0.00%)    12921.17 (   0.01%)
Min     syst-96     2143.94 (   0.00%)     2110.39 (   1.56%)
Min     elsp-96      211.22 (   0.00%)      210.47 (   0.36%)
Amean   user-24    12063.99 (   0.00%)    12030.78 *   0.28%*
Amean   syst-24     1755.20 (   0.00%)     1735.53 *   1.12%*
Amean   elsp-24      601.60 (   0.00%)      600.19 (   0.23%)
Amean   user-48    12362.62 (   0.00%)    12315.56 *   0.38%*
Amean   syst-48     1921.59 (   0.00%)     1894.95 *   1.39%*
Amean   elsp-48      334.10 (   0.00%)      332.82 *   0.38%*
Amean   user-96    12925.27 (   0.00%)    12922.63 (   0.02%)
Amean   syst-96     2146.66 (   0.00%)     2122.20 *   1.14%*
Amean   elsp-96      211.96 (   0.00%)      211.79 (   0.08%)
[ Hi Yicong, is it possible for you to run a similar SPECrate mcf test to Tim's and get some supportive data here?
For our Kunpeng 920, I first ran the whole intrate suite with 32 copies, without binding to a NUMA node. Here's the result:
                                Base                 Base
Benchmarks            Copies    Run Time             Rate
--------------------- ------    ------------------   ------------------
500.perlbench_r           32    w/o  580 (w  578)    w/o 87.8 (w 88.2)  *
502.gcc_r                 32    w/o  500 (w  504)    w/o 90.5 (w 90.0)  *
505.mcf_r                 32    w/o  764 (w  767)    w/o 67.7 (w 67.4)  *
520.omnetpp_r             32    w/o 1030 (w 1024)    w/o 40.7 (w 41.0)  *
523.xalancbmk_r           32    w/o  584 (w  584)    w/o 57.9 (w 57.8)  *
525.x264_r                32    w/o  285 (w  284)    w/o 196  (w 197)   *
531.deepsjeng_r           32    w/o  336 (w  338)    w/o 109  (w 108)   *
541.leela_r               32    w/o  570 (w  569)    w/o 93.0 (w 93.1)  *
548.exchange2_r           32    w/o  526 (w  532)    w/o 160  (w 157)   *
557.xz_r                  32    w/o  538 (w  542)    w/o 64.2 (w 63.8)  *
 Est. SPECrate2017_int_base                          w/o 87.4 (w 87.2)
(w/o is without the patch; the bigger the rate, the better)
Then I tested mcf_r alone with different copies, bound to NUMA 0:
                Base                  Base
                Run Time              Rate
                -------------------   -------------------
 4 Copies       w/o 618 (w 580)       w/o 10.5 (w 11.1)
 8 Copies       w/o 645 (w 647)       w/o 20.0 (w 20.0)
16 Copies       w/o 849 (w 844)       w/o 30.4 (w 30.6)
From htop, the running tasks didn't spread strictly across the clusters.
I didn't apply patch #3 as I met some conflicts and didn't try to resolve them. As we're testing on arm64, I think it's okay to test without patch #3.
The machine I tested has 128 cores in 2 sockets and 4 NUMA nodes with 32 cores each; still 4 cores per cluster. Below is the memory info per NUMA node:
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 257190 MB
node 0 free: 254203 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 1 size: 258005 MB
node 1 free: 257191 MB
node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 2 size: 96763 MB
node 2 free: 96158 MB
node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 3 size: 127540 MB
node 3 free: 126922 MB
node distances:
node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  22  24
  2:  20  22  10  12
  3:  22  24  12  10
Any comments? I noticed Tim observed that sleep and wakeup have some influence, so I wonder whether the speccpu intrate test also suffers from this.
Thanks, Yicong
Thanks Barry ]
Signed-off-by: Barry Song song.bao.hua@hisilicon.com
 arch/arm64/Kconfig             | 7 +++++++
 include/linux/sched/topology.h | 7 +++++++
 include/linux/topology.h       | 7 +++++++
 kernel/sched/topology.c        | 5 +++++
 4 files changed, 26 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..3b54ea4e1bd7 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -999,6 +999,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing internal
+	  bus or sharing LLC cache tag). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778b7c91..2f9166f6dec8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d717631..0b3704ad13c8 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 55a0a243e871..c7523dc7aab7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1511,6 +1511,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
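A note for reviewers who don't have patch 1/3 at hand: the CLS level above consumes cpu_clustergroup_mask() and topology_cluster_cpumask() from Jonathan's topology patch. Only as a sketch from memory (the actual patch may differ in detail), those helpers look roughly like:

#include <linux/cpumask.h>

/* Sketch of the cluster fields/helpers added by patch 1/3 to
 * include/linux/arch_topology.h and drivers/base/arch_topology.c. */
struct cpu_topology {
	int thread_id;
	int core_id;
	int cluster_id;			/* new: which cluster this CPU sits in */
	int package_id;
	cpumask_t thread_sibling;
	cpumask_t core_sibling;
	cpumask_t cluster_sibling;	/* new: CPUs sharing this CPU's cluster */
};

extern struct cpu_topology cpu_topology[NR_CPUS];

/* arm64 maps the generic helper straight onto the cluster siblings */
#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)

const struct cpumask *cpu_clustergroup_mask(int cpu)
{
	return &cpu_topology[cpu].cluster_sibling;
}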
On 6/18/21 3:27 AM, Yicong Yang wrote:
Hi Barry and Tim,
As Barry pointed out, I hadn't enabled CONFIG_SCHED_CLUSTER... I'd like to share some updated results with the config correctly enabled.
I re-ran mcf_r with 4, 8 and 16 copies on NUMA 0; the result is:

                Base                  Base
                Run Time              Rate
                -------------------   -------------------
 4 Copies       w/o 580 (w 570)       w/o 11.1 (w 11.3)
 8 Copies       w/o 647 (w 605)       w/o 20.0 (w 21.4)
16 Copies       w/o 844 (w 844)       w/o 30.6 (w 30.6)
Seems there is a ~7% improvement for 8 copies but little change for 4 and 16 copies.
Thanks for the update. This looks pretty good. I expect the best improvement to be for the 8 copies case, as you have 8 clusters and the load balancing should lead to the best placement for that case.
Tim
This time from htop the tasks are spread through clusters well.
For the 4 copies I used

perf stat -e probe:active_load_balance_cpu_stop -- ./bin/runcpu default.cfg 505.mcf_r

to check the difference in 'active_load_balance_cpu_stop' with and without the patch. There is no difference and the counts are both 0.
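(For context: probe:active_load_balance_cpu_stop is a dynamic probe event, so I assume it was created beforehand with something along the lines of "perf probe -a active_load_balance_cpu_stop"; that setup step isn't shown in the thread.)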
I also ran the whole intrate suite on another machine, the same model as the one above. 32 copies are launched across the whole system without binding to a specific NUMA node, and it seems there are some positive results.
(x264_r is not included as there is a bug when compiling x264 with gcc 10, see https://www.spec.org/cpu2017/Docs/benchmarks/625.x264_s.html; I'll fix this in the following test)
[w/o]
                                Base        Base      Base
Benchmarks            Copies    Run Time    Rate
--------------------- ------    --------    ------
500.perlbench_r           32         584      87.2  *
502.gcc_r                 32         503      90.2  *
505.mcf_r                 32         745      69.4  *
520.omnetpp_r             32        1031      40.7  *
523.xalancbmk_r           32         597      56.6  *
525.x264_r                 1          --        CE
531.deepsjeng_r           32         336       109  *
541.leela_r               32         556      95.4  *
548.exchange2_r           32         513       163  *
557.xz_r                  32         530      65.2  *
 Est. SPECrate2017_int_base                   80.3
[w]
                                Base        Base                 Base
Benchmarks            Copies    Run Time    Rate
--------------------- ------    --------    ----------------
500.perlbench_r           32         580      87.8 (+0.688%)   *
502.gcc_r                 32         477      95.1 (+5.432%)   *
505.mcf_r                 32         644      80.3 (+13.574%)  *
520.omnetpp_r             32         942      44.6 (+9.58%)    *
523.xalancbmk_r           32         560      60.4 (+6.714%)   *
525.x264_r                 1          --        CE
531.deepsjeng_r           32         337       109 (+0.000%)   *
541.leela_r               32         554      95.6 (+0.210%)   *
548.exchange2_r           32         515       163 (+0.000%)   *
557.xz_r                  32         524      66.0 (+1.227%)   *
 Est. SPECrate2017_int_base                   83.7 (+4.062%)
The test ran for only 1 iteration; I'm going to increase it to 5 to see the average result.
Thanks, Yicong
On 2021/6/17 19:33, Yicong Yang wrote:
On 2021/6/16 17:36, Barry Song wrote:
[quoted patch description, benchmark data and diff snipped]
-----Original Message-----
From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
Sent: Saturday, June 19, 2021 4:22 AM
To: yangyicong <yangyicong@huawei.com>; Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; linaro-open-discussions@op-lists.linaro.org
Cc: guodong.xu@linaro.org; tangchengchang <tangchengchang@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>; tiantao (H) <tiantao6@hisilicon.com>; Jonathan Cameron <jonathan.cameron@huawei.com>; Linuxarm <linuxarm@huawei.com>
Subject: Re: [PATCH 2/3] scheduler: add scheduler level for clusters
On 6/18/21 3:27 AM, Yicong Yang wrote:
[quoted mcf_r results snipped]
Thanks for the update. This looks pretty good. I expect the best improvement to be for the 8 copies case, as you have 8 clusters and the load balancing should lead to the best placement for that case.
Cool. Tim, looking forward to your benchmark data :-)
BTW, I can't reproduce the problem you reported for Jacobsville on Kunpeng 920. There is no active balance at all. Tasks always go to sleep and wake up on the same CPU:
root@ubuntu:~# cat 1.c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/* initialized here; the original snippet left i and j uninitialized */
	int i = 0, j = 0;

	while (1) {
		i++;
		j += i;
		if (i % 100000000 == 0)
			usleep(5000);
	}
}
root@ubuntu:~# gcc 1.c
root@ubuntu:~# numactl -N 0 ./a.out&
[1] 47376
root@ubuntu:~# numactl -N 0 ./a.out&
[2] 47377
root@ubuntu:~# numactl -N 0 ./a.out&
[3] 47378
root@ubuntu:~# numactl -N 0 ./a.out&
[4] 47379
root@ubuntu:~# numactl -N 0 ./a.out&
[5] 47380
root@ubuntu:~# numactl -N 0 ./a.out&
[6] 47381
For example, 47381 is always on CPU2 according to tracepoints:
root@ubuntu:~# echo 47381 > /sys/kernel/debug/tracing/set_event_pid
root@ubuntu:~# echo 'sched_wakeup' >> /sys/kernel/debug/tracing/set_event
root@ubuntu:~# cd /sys/kernel/debug/tracing/
root@ubuntu:/sys/kernel/debug/tracing# cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 603/603   #P:96
#
#                            _-----=> irqs-off
#                           / _----=> need-resched
#                          | / _---=> hardirq/softirq
#                          || / _--=> preempt-depth
#                          ||| /     delay
#      TASK-PID     CPU#   ||||   TIMESTAMP  FUNCTION
#         | |         |    ||||      |         |
    a.out-47381   [002] dNs4 453068.007424: sched_wakeup: comm=kworker/2:1 pid=500 prio=120 target_cpu=002
   <idle>-0       [002] dNh4 453068.167195: sched_wakeup: comm=a.out pid=47381 prio=120 target_cpu=002
    a.out-47381   [002] dNs4 453068.455418: sched_wakeup: comm=kworker/2:1 pid=500 prio=120 target_cpu=002
    a.out-47381   [002] dNs4 453068.471418: sched_wakeup: comm=kworker/2:1 pid=500 prio=120 target_cpu=002
   <idle>-0       [002] dNh4 453068.514829: sched_wakeup: comm=a.out pid=47381 prio=120 target_cpu=002
   <idle>-0       [002] dNh4 453068.862448: sched_wakeup: comm=a.out pid=47381 prio=120 target_cpu=002
   <idle>-0       [002] dNh4 453069.209889: sched_wakeup: comm=a.out pid=47381 prio=120 target_cpu=002
[... remaining sched_wakeup events trimmed: every entry through 453082.402820 is a wakeup of a.out-47381 or kworker/2:1 with target_cpu=002 ...]
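(As an extra check, not among the commands above, cross-cluster moves could be made visible directly by also enabling the migration tracepoint, e.g.: echo 'sched:sched_migrate_task' >> /sys/kernel/debug/tracing/set_event)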
Tim

[rest of quoted message, patch description and diff snipped]
Hi Barry and Tim,
Some more updates for the speccpu tests. Most results are rather positive and as expected.
For mcf_r alone, bound to NUMA 0 (5 iterations included):
[w/o]
             min    max    aver    stddev
 4 Copies   10.7   11.2     11     0.187082869
 8 Copies   19.4   20.1    19.7    0.250998008
16 Copies   30.1   30.7    30.5    0.240831892

[w]
             min    max    aver    stddev         aver. enhancement
 4 Copies   11.3   11.4    11.3    0.054772256     2.73%
 8 Copies   21.1   21.4    21.3    0.114017543     8.12%
16 Copies   30.5   30.8    30.7    0.130384048     0.66%
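As a side note on the stddev column: my assumption is that these are sample standard deviations over the 5 iterations. The per-iteration rates aren't shown in the thread, but the hypothetical set below reproduces the 4-copy w/o row (aver 11, stddev 0.187082869) exactly:

#include <math.h>
#include <stdio.h>

/* Sample standard deviation (divides by n - 1). */
static double stddev(const double *x, int n)
{
	double mean = 0.0, ss = 0.0;
	int i;

	for (i = 0; i < n; i++)
		mean += x[i];
	mean /= n;
	for (i = 0; i < n; i++)
		ss += (x[i] - mean) * (x[i] - mean);
	return sqrt(ss / (n - 1));
}

int main(void)
{
	/* hypothetical per-iteration rates for the 4-copy w/o case */
	double rates[] = { 10.7, 11.0, 11.0, 11.1, 11.2 };

	printf("stddev = %.9f\n", stddev(rates, 5));	/* prints 0.187082869 */
	return 0;
}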
8 copies is the best case, as there are 8 clusters per NUMA node (32 cores / 4 cores per cluster) and the threads can spread across the clusters well. For 4, 8 and 16 copies we have less bouncing with the cluster scheduler level, as the standard deviation is smaller.
For the whole intrate suite without NUMA binding and with 32 copies (4 iterations included):
[w/o]
                    Min     Max     Aver      Stddev
500.perlbench_r      86    87.1     86.7      0.496655481
502.gcc_r          89.9      91    90.25      0.519615242
505.mcf_r          71.5    73.6   72.375      0.970824392
520.omnetpp_r      40.3    41.2     40.6      0.40824829
523.xalancbmk_r    57.4    59.2     58.1      0.836660027
525.x264_r          197     198    197.5      0.577350269
531.deepsjeng_r     109     110   109.25      0.5
541.leela_r        95.4    95.6   95.525      0.095742711
548.exchange2_r     163     163      163      0
557.xz_r           64.2    65.2    64.85      0.450924975

Est. SPECrate2017_int_base 88.1
[w]
                    Min     Max     Aver      Stddev     aver. enhancement
500.perlbench_r    87.3    87.9   87.625      0.2500      1.07%
502.gcc_r          93.1    95.3     94.5      0.9933      4.71%
505.mcf_r            77    81.4   79.075      2.3514      9.26%
520.omnetpp_r      42.8    43.5    43.15      0.3109      6.28%
523.xalancbmk_r    60.6      62    61.45      0.6455      5.77%
525.x264_r          197     198   197.75      0.5000      0.13%
531.deepsjeng_r     109     110   109.75      0.5000      0.46%
541.leela_r        95.5    95.6    95.55      0.0577      0.03%
548.exchange2_r     163     163      163      0.0000      0.00%
557.xz_r             66    66.5     66.3      0.2160      2.24%

Est. SPECrate2017_int_base 90.7 (+2.95%)
The mcf_r performs even better. Although the stddev is larger than in the w/o result, the minimum rate (77) is bigger than the maximum rate (73.6) of the w/o result.
Some benchmarks are not affected by the patch; I guess they're CPU bound, so we cannot increase the bandwidth available to them by placing the threads across clusters.
The test machines are both 2-socket with 128 cores: 32 cores per NUMA node and 4 cores per cluster.
Thanks, Yicong
On 2021/6/18 18:27, Yicong Yang wrote:
Hi Barry and Tim,
As Barry pointed I didn't enable the CONFIG_SCHED_CLUSTER... I'd like to share some updated results with the config correctly enabled.
I re-run the mcf_r with 4,8,16 copies on NUMA 0, the result is like: Base Base Run Time Rate ------- --------- 4 Copies w/o 580 (w 570) w/o 11.1 (w 11.3) 8 Copies w/o 647 (w 605) w/o 20.0 (w 21.4) 16 Copies w/o 844 (w 844) w/o 30.6 (w 30.6)
Seems there is a ~7% improvement for 8 Copies but little changed for 4 and 16 copies.
This time from htop the tasks are spread through clusters well.
For the 4 copies I use
perf stat -e probe:active_load_balance_cpu_stop -- ./bin/runcpu default.cfg 505.mcf_r
to check the different of 'active_load_balance_cpu_stop' with and without the patch. There is no difference and the counts are both 0.
I also run the whole intrate suite on another machine, same model as the one above. 32 Copies is lanuch in the whole system without binding to a specific NUMA node. And seems there is some positive results.
(x264_r is not included as there is a bug for x264 while compiling with gcc 10: https://www.spec.org/cpu2017/Docs/benchmarks/625.x264_s.html I'll fix this in the following test)
[w/o] Base Base Base Benchmarks Copies Run Time Rate
500.perlbench_r 32 584 87.2 * 502.gcc_r 32 503 90.2 * 505.mcf_r 32 745 69.4 * 520.omnetpp_r 32 1031 40.7 * 523.xalancbmk_r 32 597 56.6 * 525.x264_r 1 -- CE 531.deepsjeng_r 32 336 109 * 541.leela_r 32 556 95.4 * 548.exchange2_r 32 513 163 * 557.xz_r 32 530 65.2 * Est. SPECrate2017_int_base 80.3
[w] Base Base Base Benchmarks Copies Run Time Rate
500.perlbench_r 32 580 87.8 (+0.688%) * 502.gcc_r 32 477 95.1 (+5.432%) * 505.mcf_r 32 644 80.3 (+13.574%) * 520.omnetpp_r 32 942 44.6 (+9.58%) * 523.xalancbmk_r 32 560 60.4 (+6.714%%) * 525.x264_r 1 -- CE 531.deepsjeng_r 32 337 109 (+0.000%) * 541.leela_r 32 554 95.6 (+0.210%) * 548.exchange2_r 32 515 163 (+0.000%) * 557.xz_r 32 524 66.0 (+1.227%) * Est. SPECrate2017_int_base 83.7 (+4.062%)
The iteration of the test is 1, and I'm going to increase it to 5 to see the average result.
Thanks, Yicong
On 2021/6/17 19:33, Yicong Yang wrote:
On 2021/6/16 17:36, Barry Song wrote:
ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data, but each cluster has local L3 tag. On the other hand, each clusters will share some internal system bus. This means cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds the sched_domain for clusters. On kunpeng 920, without this patch, domain0 of cpu0 would be MC with cpu0~cpu23 with ; with this patch, MC becomes domain1, a new domain0 "CLS" including cpu0-cpu3.
This will help spread tasks among clusters, thus decrease the contention and improve the throughput. Verified by Mel's mm-tests by writing config files as below to define the number of threads for stream,e.g configs/config-workload-stream-omp-4threads: export STREAM_SIZE=$((1048576*512)) export STREAM_THREADS=4 export STREAM_METHOD=omp export STREAM_ITERATIONS=5 export STREAM_BUILD_FLAGS="-lm -Ofast"
Ran the stream benchmark on kunpeng920 with 4numa nodes and each node has 24core by commands like: numactl -N 0 -m 0 ./run-mmtests.sh -c \ configs/config-workload-stream-omp-4threads tip-sched-core-4threads
and compared the cases between tip/sched/core and tip/sched/core with cluster scheduler. The result is as below:
4threads stream (on 1numa * 24cores = 24cores) stream stream 4threads 4threads-cluster-scheduler MB/sec copy 29929.64 ( 0.00%) 32932.68 ( 10.03%) MB/sec scale 29861.10 ( 0.00%) 32710.58 ( 9.54%) MB/sec add 27034.42 ( 0.00%) 32400.68 ( 19.85%) MB/sec triad 27225.26 ( 0.00%) 31965.36 ( 17.41%)
6threads stream (on 1numa * 24cores = 24cores) stream stream 6threads 6threads-cluster-scheduler MB/sec copy 40330.24 ( 0.00%) 42377.68 ( 5.08%) MB/sec scale 40196.42 ( 0.00%) 42197.90 ( 4.98%) MB/sec add 37427.00 ( 0.00%) 41960.78 ( 12.11%) MB/sec triad 37841.36 ( 0.00%) 42513.64 ( 12.35%)
12threads stream (on 1numa * 24cores = 24cores) stream stream 12threads 12threads-cluster-scheduler MB/sec copy 52639.82 ( 0.00%) 53818.04 ( 2.24%) MB/sec scale 52350.30 ( 0.00%) 53253.38 ( 1.73%) MB/sec add 53607.68 ( 0.00%) 55198.82 ( 2.97%) MB/sec triad 54776.66 ( 0.00%) 56360.40 ( 2.89%)
The result was generated by commands like: ../../compare-kernels.sh --baseline tip-sched-core-4threads \ --compare tip-sched-core-4threads-cluster-scheduler
Thus, it could help memory-bound workload especially under medium load. For example, ran mmtests configs/config-workload-lkp-compress benchmark on 4numa*24cores=96 cores kunpeng920, 12,21,30 threads present the best improvement:
lkp-pbzip2 (on 4numa * 24cores = 96cores)
lkp lkp compress-w/o-cluster compress-w/-cluster
Hmean tput-2 11062841.57 ( 0.00%) 11341817.51 * 2.52%* Hmean tput-5 26815503.70 ( 0.00%) 27412872.65 * 2.23%* Hmean tput-8 41873782.21 ( 0.00%) 43326212.92 * 3.47%* Hmean tput-12 61875980.48 ( 0.00%) 64578337.51 * 4.37%* Hmean tput-21 105814963.07 ( 0.00%) 111381851.01 * 5.26%* Hmean tput-30 150349470.98 ( 0.00%) 156507070.73 * 4.10%* Hmean tput-48 237195937.69 ( 0.00%) 242353597.17 * 2.17%* Hmean tput-79 360252509.37 ( 0.00%) 362635169.23 * 0.66%* Hmean tput-96 394571737.90 ( 0.00%) 400952978.48 * 1.62%*
Ran the same benchmark by "numactl -N 0 -m 0" from 2 threads to 24 threads on numa node0 with 24 cores:
lkp-pbzip2 (on 1numa * 24cores = 24cores) lkp lkp compress-1numa-w/o-cluster compress-1numa-w/-cluster Hmean tput-2 11071705.49 ( 0.00%) 11296869.10 * 2.03%* Hmean tput-4 20782165.19 ( 0.00%) 21949232.15 * 5.62%* Hmean tput-6 30489565.14 ( 0.00%) 33023026.96 * 8.31%* Hmean tput-8 40376495.80 ( 0.00%) 42779286.27 * 5.95%* Hmean tput-12 61264033.85 ( 0.00%) 62995632.78 * 2.83%* Hmean tput-18 86697139.39 ( 0.00%) 86461545.74 ( -0.27%) Hmean tput-24 104854637.04 ( 0.00%) 104522649.46 * -0.32%*
In the case of 6 threads and 8 threads, we see the greatest performance improvement.
Similar improvement was seen on lkp-pixz though the improvement is smaller:
lkp-pixz (on 1numa * 24cores = 24cores) lkp lkp compress-1numa-w/o-cluster compress-1numa-w/-cluster Hmean tput-2 6486981.16 ( 0.00%) 6561515.98 * 1.15%* Hmean tput-4 11645766.38 ( 0.00%) 11614628.43 ( -0.27%) Hmean tput-6 15429943.96 ( 0.00%) 15957350.76 * 3.42%* Hmean tput-8 19974087.63 ( 0.00%) 20413746.98 * 2.20%* Hmean tput-12 28172068.18 ( 0.00%) 28751997.06 * 2.06%* Hmean tput-18 39413409.54 ( 0.00%) 39896830.55 * 1.23%* Hmean tput-24 49101815.85 ( 0.00%) 49418141.47 * 0.64%*
On the other hand, it is slightly helpful to cpu-bound tasks. With configs/config-workload-kernbench like: export KERNBENCH_ITERATIONS=3 export KERNBENCH_MIN_THREADS=$((NUMCPUS/4)) export KERNBENCH_MAX_THREADS=$((NUMCPUS)) export KERNBENCH_CONFIG=allmodconfig export KERNBENCH_TARGETS=vmlinux,modules export KERNBENCH_SKIP_WARMUP=yes export MMTESTS_THREAD_CUTOFF= export KERNBENCH_VERSION=5.3 Ran kernbench by 24,48,96 threads to compile an entire kernel without numactl binding, each case run 3 iterations: 24 threads w/o and w/ cluster-scheduler: w/o 10:03.26 10:00.46 10:01.09 w/ 10:01.11 10:00.83 9:58.64
48 threads w/o and w/ cluster-scheduler: w/o 5:33.96 5:34.28 5:34.06 w/ 5:32.65 5:32.57 5:33.25
96 threads w/o and w/ cluster-scheduler: w/o 3:33.34 3:31.22 3:31.31 w/ 3:32.22 3:30.47 3:32.69
kernbench (on 4numa * 24cores = 96cores) kernbench kernbench w/o-cluster w/-cluster Min user-24 12054.67 ( 0.00%) 12024.19 ( 0.25%) Min syst-24 1751.51 ( 0.00%) 1731.68 ( 1.13%) Min elsp-24 600.46 ( 0.00%) 598.64 ( 0.30%) Min user-48 12361.93 ( 0.00%) 12315.32 ( 0.38%) Min syst-48 1917.66 ( 0.00%) 1892.73 ( 1.30%) Min elsp-48 333.96 ( 0.00%) 332.57 ( 0.42%) Min user-96 12922.40 ( 0.00%) 12921.17 ( 0.01%) Min syst-96 2143.94 ( 0.00%) 2110.39 ( 1.56%) Min elsp-96 211.22 ( 0.00%) 210.47 ( 0.36%) Amean user-24 12063.99 ( 0.00%) 12030.78 * 0.28%* Amean syst-24 1755.20 ( 0.00%) 1735.53 * 1.12%* Amean elsp-24 601.60 ( 0.00%) 600.19 ( 0.23%) Amean user-48 12362.62 ( 0.00%) 12315.56 * 0.38%* Amean syst-48 1921.59 ( 0.00%) 1894.95 * 1.39%* Amean elsp-48 334.10 ( 0.00%) 332.82 * 0.38%* Amean user-96 12925.27 ( 0.00%) 12922.63 ( 0.02%) Amean syst-96 2146.66 ( 0.00%) 2122.20 * 1.14%* Amean elsp-96 211.96 ( 0.00%) 211.79 ( 0.08%)
[ Hi Yicong, Is it possible for you to run similar SPECrate mcf test with Tim and get some supportive data here?
For our Kunpeng 920, I run the whole intrate suite firstly with 32 copies, and didn't bind the NUMA. Here's the result
Base Base Base
Benchmarks Copies Run Time Rate
500.perlbench_r 32 w/o 580(w 578) w/o 87.8(w 88.2) * 502.gcc_r 32 w/o 500(w 504) w/o 90.5(w 90.0) * 505.mcf_r 32 w/o 764(w 767) w/o 67.7(w 67.4) * 520.omnetpp_r 32 w/o 1030(w 1024) w/o 40.7(w 41.0) * 523.xalancbmk_r 32 w/o 584(w 584) w/o 57.9(w 57.8) * 525.x264_r 32 w/o 285(w 284) w/o 196 (w 197) * 531.deepsjeng_r 32 w/o 336(w 338) w/o 109 (w 108) * 541.leela_r 32 w/o 570(w 569) w/o 93.0(w 93.1) * 548.exchange2_r 32 w/o 526(w 532) w/o 160 (w 157) * 557.xz_r 32 w/o 538(w 542) w/o 64.2(w 63.8) * Est. SPECrate2017_int_base w/o 87.4(w 87.2)
(w/o is without the patch, the bigger the rate is the better)
Then I test the mcf_r alone with different copies and bind to NUMA 0:
Base Base Run Time Rate ------- ---------
4 Copies w/o 618 (w 580) w/o 10.5 (w 11.1) 8 Copies w/o 645 (w 647) w/o 20.0 (w 20) 16 Copies w/o 849 (w 844) w/o 30.4 (w 30.6)
As I checked from the htop, the tasks running on the cpu didn't spread through the clusters rigidly.
I didn't apply Patch #3 as I met some conflicts and didn't try to resolve it. As we're testing on arm64 I think it's okay to test without patch #3.
The machine I have tested have 128 cores in 2 sockets and 4 numas with 32 cores each. Of course, still 4 cores in one cluster. Below are the memory info through numa:
available: 4 nodes (0-3) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 node 0 size: 257190 MB node 0 free: 254203 MB node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 node 1 size: 258005 MB node 1 free: 257191 MB node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 node 2 size: 96763 MB node 2 free: 96158 MB node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 node 3 size: 127540 MB node 3 free: 126922 MB node distances: node 0 1 2 3 0: 10 12 20 22 1: 12 10 22 24 2: 20 22 10 12 3: 22 24 12 10
Any comments? I notice Tim observed that sleep and wakeup will have some influences. So I wonder whether the speccpu intrate test also suffers from this.
Thanks, Yicong
Thanks Barry ]
Signed-off-by: Barry Song song.bao.hua@hisilicon.com
arch/arm64/Kconfig | 7 +++++++ include/linux/sched/topology.h | 7 +++++++ include/linux/topology.h | 7 +++++++ kernel/sched/topology.c | 5 +++++ 4 files changed, 26 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..3b54ea4e1bd7 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -999,6 +999,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing
+	  internal bus or sharing LLC cache tag). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778b7c91..2f9166f6dec8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d717631..0b3704ad13c8 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 55a0a243e871..c7523dc7aab7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1511,6 +1511,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
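For reference, the net effect of the hunk above is one extra domain level between SMT and MC. On a machine like Kunpeng 920 (4-CPU clusters, no SMT), each CPU should end up with a hierarchy along these lines (a sketch; the exact set of levels depends on what the firmware reports):

    CLS  - the 4 CPUs of one cluster (shared L3 tag / internal bus)
    MC   - the CPUs sharing LLC within one NUMA node
    NUMA - levels built from the node distance table

With CONFIG_SCHED_DEBUG=y, the levels actually built can be read back, e.g. from /proc/sys/kernel/sched_domain/cpu0/domain*/name (moved under /sys/kernel/debug/sched/ on recent kernels).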
On 6/18/21 3:27 AM, Yicong Yang wrote:
Hi Barry and Tim,
As Barry pointed out, I didn't enable CONFIG_SCHED_CLUSTER... I'd like to share some updated results with the config correctly enabled.
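(For anyone else reproducing this: the quickest way to catch that mistake is to check the running kernel's config before testing, e.g.

    grep CONFIG_SCHED_CLUSTER /boot/config-$(uname -r)
    zgrep CONFIG_SCHED_CLUSTER /proc/config.gz   # if CONFIG_IKCONFIG_PROC=y

assuming the usual config locations on the test box.)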
I re-ran mcf_r with 4, 8 and 16 copies on NUMA node 0; the results are:

             Base Run Time        Base Rate
             -------------        ---------
4 Copies     w/o 580 (w 570)      w/o 11.1 (w 11.3)
8 Copies     w/o 647 (w 605)      w/o 20.0 (w 21.4)
16 Copies    w/o 844 (w 844)      w/o 30.6 (w 30.6)
Barry and Yicong,
Our benchmark team helped me run mcf on a Jacobsville that has 24 Atom cores, arranged into 6 clusters of 4 cores each. The benchmark numbers from the benchmark team are as follows:
Improvement over baseline kernel for mcf_r:

copies    run time    base rate
   1        -0.1%       -0.2%
   6        25.1%       25.1%
  12        18.8%       19.0%
  24         0.3%        0.3%
So this looks pretty good. It is even better than what Yicong saw. I probed into their system's task distribution, and saw some pretty bad clumping for the vanilla kernel without the L2 cluster domain for the 6 and 12 copies case. With the extra domain for cluster, the load does get evened out between the clusters.
The load balancing helps a lot at moderate load point for mcf. As expected, there is little change in performance for the single copy case and the fully loaded 24 copies case.
It seems there is a ~7% improvement for 8 copies but little change for 4 and 16 copies.
This time, htop shows the tasks are spread across the clusters well.
For the 4 copies case I used

perf stat -e probe:active_load_balance_cpu_stop -- ./bin/runcpu default.cfg 505.mcf_r

to check the difference in 'active_load_balance_cpu_stop' counts with and without the patch. There is no difference; the counts are both 0.
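(For anyone repeating this: probe:active_load_balance_cpu_stop is a dynamic probe, so it has to be created before perf stat can count it, e.g.

    perf probe -a active_load_balance_cpu_stop
    perf stat -e probe:active_load_balance_cpu_stop -- <workload>
    perf probe -d active_load_balance_cpu_stop

A count of 0 means active balancing, which pulls a currently-running task, never triggered, i.e. the spreading happened through the normal idle/periodic balance paths.)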
I also ran the whole intrate suite on another machine, same model as the one above. 32 copies were launched across the whole system without binding to a specific NUMA node, and there are some positive results.
(x264_r is not included as there is a bug in x264 when compiling with gcc 10: https://www.spec.org/cpu2017/Docs/benchmarks/625.x264_s.html I'll fix this in a following test.)
[w/o]
                         Base     Base        Base
Benchmarks              Copies   Run Time     Rate
--------------------------------------------------
500.perlbench_r             32      584       87.2  *
502.gcc_r                   32      503       90.2  *
505.mcf_r                   32      745       69.4  *
520.omnetpp_r               32     1031       40.7  *
523.xalancbmk_r             32      597       56.6  *
525.x264_r                   1       --       CE
531.deepsjeng_r             32      336       109   *
541.leela_r                 32      556       95.4  *
548.exchange2_r             32      513       163   *
557.xz_r                    32      530       65.2  *
 Est. SPECrate2017_int_base                   80.3
[w]
                         Base     Base        Base
Benchmarks              Copies   Run Time     Rate
--------------------------------------------------
500.perlbench_r             32      580       87.8  (+0.688%)  *
502.gcc_r                   32      477       95.1  (+5.432%)  *
505.mcf_r                   32      644       80.3  (+13.574%) *
520.omnetpp_r               32      942       44.6  (+9.58%)   *
523.xalancbmk_r             32      560       60.4  (+6.714%)  *
525.x264_r                   1       --       CE
531.deepsjeng_r             32      337       109   (+0.000%)  *
541.leela_r                 32      554       95.6  (+0.210%)  *
548.exchange2_r             32      515       163   (+0.000%)  *
557.xz_r                    32      524       66.0  (+1.227%)  *
 Est. SPECrate2017_int_base                   83.7  (+4.062%)
You have 24 cores right? So this is the case with a little bit of overload? It is nice we are also seeing improvement here.
Tim
On 2021/6/25 1:56, Tim Chen wrote:
You have 24 cores right? So this is the case with a little bit of overload? It is nice we are also seeing improvement here.
On this machine there are 4 NUMA nodes with 32 cores each. We have two types of machines: Barry's result is from the 24-cores-per-node machine and mine is from the 32-cores-per-node machine.
On 6/24/21 6:16 PM, Yicong Yang wrote:
On this machine there are 4 NUMA nodes with 32 cores each. We have two types of machines: Barry's result is from the 24-cores-per-node machine and mine is from the 32-cores-per-node machine.
Then this is a bit odd, as we should have 1 task per core for both the vanilla and patched kernels, and the performance should be similar.
Tim
-----Original Message-----
From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
Sent: Friday, June 25, 2021 1:42 PM
To: yangyicong yangyicong@huawei.com; Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com; yangyicong yangyicong@huawei.com; linaro-open-discussions@op-lists.linaro.org
Cc: guodong.xu@linaro.org; tangchengchang tangchengchang@huawei.com; Zengtao (B) prime.zeng@hisilicon.com; tiantao (H) tiantao6@hisilicon.com; Jonathan Cameron jonathan.cameron@huawei.com; Linuxarm linuxarm@huawei.com
Subject: Re: [PATCH 2/3] scheduler: add scheduler level for clusters
Then this is a bit odd, as we should have 1 task per core for both the vanilla and patched kernels, and the performance should be similar.
Tim, we have two types of machines:
(A) 4 NUMA nodes * 24 cores
(B) 4 NUMA nodes * 32 cores

And two types of tests:
(1) numactl -N 0 -m 0 <bench>
    The workload runs on only one NUMA node, with 24 or 32 cores.
(2) <bench> without numactl
    The workload runs on all four NUMA nodes, with 96 or 128 cores.

Yicong's 32-copies result came from machine B and case 2.
The 8-copies result came from machine B and case 1.
Thanks Barry
Hi Tim,
On 2021/6/25 9:41, Tim Chen wrote:
Then this is a bit odd, as we should have 1 task per core for both the vanilla and patched kernels, and the performance should be similar.
It seems I didn't explain it clearly, sorry.
For the mcf_r 4, 8 and 16 copies tests, I bound them to NUMA node 0.

For the 32-copies intrate suite test, I didn't bind to a specific NUMA node, so the tasks run across the whole system. Ideally we'll have 1 task per cluster with the cluster scheduler, as there are 32 clusters (128 cores / 4 cores per cluster) in the whole system.
The machine info:

  CPU(s):               128
  Thread(s) per core:   1
  Core(s) per cluster:  4
  Core(s) per socket:   64
  Socket(s):            2
  NUMA node(s):         4
  NUMA node0 CPU(s):    0-31
  NUMA node1 CPU(s):    32-63
  NUMA node2 CPU(s):    64-95
  NUMA node3 CPU(s):    96-127
Thanks, Yicong
From: Tim Chen tim.c.chen@linux.intel.com
There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache is shared among a cluster of cores instead of being exclusive to a single core.
To prevent oversubscription of L2 cache, load should be balanced between such L2 clusters, especially for tasks with no shared data.
On benchmarks such as the SPECrate mcf test, this change provides a performance boost on a medium-load Jacobsville system. [ Hi Tim, could you please post some supporting data here?
Thanks Barry ]
Note that this added domain level will increase migrations between CPUs, so this is not necessarily a universal win if the migration cost of balancing L2 load outweighs the benefit from reduced L2 contention. This change tends to benefit CPU-bound threads that get moved around much less.
Signed-off-by: Tim Chen tim.c.chen@linux.intel.com
Signed-off-by: Barry Song song.bao.hua@hisilicon.com
---
 arch/x86/Kconfig                | 8 ++++++
 arch/x86/include/asm/smp.h      | 7 ++++++
 arch/x86/include/asm/topology.h | 3 +++
 arch/x86/kernel/cpu/cacheinfo.c | 1 +
 arch/x86/kernel/cpu/common.c    | 3 +++
 arch/x86/kernel/smpboot.c       | 44 ++++++++++++++++++++++++++++++++-
 6 files changed, 65 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0045e1b44190..5a9777707fbb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1007,6 +1007,14 @@ config NR_CPUS
 	  This is purely to save memory: each supported CPU adds about 8KB
 	  to the kernel image.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	default n
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters of CPUs
+	  sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
 	def_bool y if SMP
 
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index 630ff08532be..08b0e90623ad 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_map);
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 	return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399e5491..2548d824f103 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,17 +103,20 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include <asm-generic/topology.h>
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
 #define topology_logical_die_id(cpu)		(cpu_data(cpu).logical_die_id)
 #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
+#define topology_cluster_id(cpu)		(per_cpu(cpu_l2c_id, cpu))
 #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
 
 extern unsigned int __max_die_per_package;
 
 #ifdef CONFIG_SMP
 #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
+#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
 
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index d66af2950e06..3528987fef1d 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index a1b756c49a93..8bf1049ff614 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@ EXPORT_SYMBOL(smp_num_siblings);
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 7770245cc7fa..3162d0fc6b3c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -466,6 +468,21 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return false;
 }
 
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node.  If this happens, we will
@@ -525,7 +542,7 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
 
 
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -543,12 +560,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -559,6 +585,9 @@ static struct sched_domain_topology_level x86_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -586,6 +615,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -604,6 +634,9 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
 
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
+
 		if ((i == cpu) || (has_mp && match_die(c, o)))
 			link_mask(topology_die_cpumask, cpu, i);
 	}
@@ -651,6 +684,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1334,6 +1372,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 
 	/*
@@ -1558,7 +1597,10 @@ static void remove_siblinginfo(int cpu)
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
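As a side note, once patch 1/3 (the one extending cputopology.rst and drivers/base/topology.c per the cover letter's diffstat) is applied on top, the L2 mask wired up here should be visible from userspace. The attribute names below follow that patch and are worth double-checking against its final version:

    cat /sys/devices/system/cpu/cpu0/topology/cluster_id
    cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list

On Jacobsville the second file would be expected to list the 4 Atom cores sharing one L2.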
On 6/16/21 2:36 AM, Barry Song wrote:
Barry,
I did a quick test, not on SPECrate but on some simple tests. This patchset works okay for CPU-bound tasks. But for tasks that sleep and wake up, we want to find a CPU in the same cluster first; otherwise the wakeup will not respect the cluster.
Tim
-----Original Message-----
From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
Sent: Thursday, June 17, 2021 9:28 AM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com; yangyicong yangyicong@huawei.com; linaro-open-discussions@op-lists.linaro.org
Cc: guodong.xu@linaro.org; tangchengchang tangchengchang@huawei.com; Zengtao (B) prime.zeng@hisilicon.com; tiantao (H) tiantao6@hisilicon.com; Jonathan Cameron jonathan.cameron@huawei.com; Linuxarm linuxarm@huawei.com
Subject: Re: [PATCH 0/3] cluster-scheduler upstream plan
I did a quick test, not on SPECrate but on some simple tests. This patchset works okay for CPU-bound tasks. But for tasks that sleep and wake up, we want to find a CPU in the same cluster first; otherwise the wakeup will not respect the cluster.
Tim, thanks for your testing. I think this is as expected. My plan is to upstream the whole cluster scheduler in two stages:

1. a patchset to support the basic cluster topology and spreading tasks only;
2. a patchset to support the wake-up path for packing related tasks, after 1 is done.

So in 1, cluster_sched is disabled by default, and in the commit log we will mention that 2 is not supported yet. Since 2 is much more tricky than 1, I am worried that if 1 and 2 are put together, it will be difficult for maintainers to review.

So in the 1st patchset, we give some examples of workloads which like spreading, and also say that packing workloads won't benefit yet and will be supported in a separate patchset afterwards. Does it make sense to you?

In my 2/3, I tested stream, lkp-compress (lkp-pbzip2) and kernbench; they all improved.
Thanks Barry
Barry,
So in the 1st patchset, we give some examples of workloads which like spreading, and also say that packing workloads won't benefit yet and will be supported in a separate patchset afterwards. Does it make sense to you?
I agree.
In my 2/3, I tested stream, lkp-compress (lkp-pbzip2) and kernbench; they all improved.
I think my previous mail was not quite clear. Actually the issue I observed was with spreading, not packing. In my test there is no relationship between the running tasks; they just sleep for a short while and wake up. So I am just observing whether the spreading is done properly.
The simple test I did was the following:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int i = 0, j = 0;	/* initialized to avoid undefined behaviour */

	while (1) {
		i++;
		j += i;
		if (i % 100000000 == 0)
			usleep(5000);
	}
}
This will keep a CPU on my test system close to 99% busy. I ran multiple copies of this program to see if it gets spread out.
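One possible way to launch and watch the copies (a sketch; the exact commands used are not recorded here, and the file name follows Barry's reply below):

    gcc 1.c -o busy        # the usleep() keeps the loop from being optimized out
    for i in $(seq 6); do ./busy & done
    watch -n1 'ps -o pid,psr,comm -C busy'   # psr = CPU each copy currently runs on

With 4-CPU clusters, good spreading means the PIDs land in different clusters rather than piling into one or two of them.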
I observe that with this small sleep, I get some clumping within a cluster that's not spread out into an idle cluster. I wonder if you see the same.
Tim
-----Original Message-----
From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
Sent: Thursday, June 17, 2021 11:42 AM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com; yangyicong yangyicong@huawei.com; linaro-open-discussions@op-lists.linaro.org
Cc: guodong.xu@linaro.org; tangchengchang tangchengchang@huawei.com; Zengtao (B) prime.zeng@hisilicon.com; tiantao (H) tiantao6@hisilicon.com; Jonathan Cameron jonathan.cameron@huawei.com; Linuxarm linuxarm@huawei.com
Subject: Re: [PATCH 0/3] cluster-scheduler upstream plan
This will keep a CPU on my test system close to 99% busy. I ran multiple copies of this program to see if it gets spread out.
I observe that with this small sleep, I have some clumping within a cluster that's not spread out into an idle cluster. I wonder if you see the same.
I tried this case on a Kunpeng 920 with 4 NUMA nodes * 24 cores = 96 cores, by running 6 copies of a.out on NUMA node 0 as below:
root@ubuntu:~# cat 1.c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int i = 0, j = 0;

	while (1) {
		i++;
		j += i;
		if (i % 100000000 == 0)
			usleep(5000);
	}
}
root@ubuntu:~# gcc 1.c
root@ubuntu:~# numactl -N 0 ./a.out &
[1] 27321
root@ubuntu:~# numactl -N 0 ./a.out &
[2] 27322
root@ubuntu:~# numactl -N 0 ./a.out &
[3] 27323
root@ubuntu:~# numactl -N 0 ./a.out &
[4] 27324
root@ubuntu:~# numactl -N 0 ./a.out &
[5] 27325
root@ubuntu:~# numactl -N 0 ./a.out &
[6] 27326
I found they spread quite well; you can see the picture at: http://www.linuxep.com/patches/6-a.out.png
Each cluster of 4 cores gets one a.out.
Thanks Barry