Unverified Commit 1b709179 authored by openeuler-ci-bot, committed by Gitee

!229 Intel SPR: Add uncore PMU support and fix uprobes rbtree usage for OLK-5.10

Merge Pull Request from: @allen-shi 
 
This is a cherry-pick of [PR82](https://gitee.com/openeuler/kernel/pulls/82) and [PR120](https://gitee.com/openeuler/kernel/pulls/120) from the openEuler-22.09 branch.
[PR120](https://gitee.com/openeuler/kernel/pulls/120) fixes an issue in [PR82](https://gitee.com/openeuler/kernel/pulls/82).

### For [PR82](https://gitee.com/openeuler/kernel/pulls/82), the patch set adds uncore PMU support for the Intel Sapphire Rapids (SPR) platform.

It includes generic uncore discovery support and SPR-specific uncore event support.

Generic uncore discovery support contains:
a) Feature patches from upstream v5.13-rc1 (5 commits):
c4c55e36 perf/x86/intel/uncore: Generic support for the MMIO type of uncore blocks
42839ef4 perf/x86/intel/uncore: Generic support for the PCI type of uncore blocks
6477dc39 perf/x86/intel/uncore: Rename uncore_notifier to uncore_pci_sub_notifier
d6c75413 perf/x86/intel/uncore: Generic support for the MSR type of uncore blocks
edae1f06 perf/x86/intel/uncore: Parse uncore discovery tables

b) Rbtree helper patches to fix rb_find()/rb_add() implicit-declaration errors (v5.12-rc1, 7 commits):
798172b1 rbtree, timerqueue: Use rb_add_cached()
5a798725 rbtree, rtmutex: Use rb_add_cached()
a905e84e rbtree, uprobes: Use rbtree helpers
a3b89864 rbtree, perf: Use new rbtree helpers
8ecca394 rbtree, sched/deadline: Use rb_add_cached()
bf9be9a1 rbtree, sched/fair: Use rb_add_cached()
2d24dd57 rbtree: Add generic add and find helpers

c) Dependent patches to fix the build error "too few arguments to function 'uncore_pci_pmu_register'" (v5.12-rc1, 2 commits):
9a7832ce perf/x86/intel/uncore: With > 8 nodes, get pci bus die id from NUMA info
ba9506be perf/x86/intel/uncore: Store the logical die id instead of the physical die id.

SPR platform specific uncore support contains:
a) Upstream feature patches from mainline v5.15-rc1 (15 commits):
c76826a6 perf/x86/intel/uncore: Support IMC free-running counters on Sapphire Rapids server
0378c93a perf/x86/intel/uncore: Support IIO free-running counters on Sapphire Rapids server
1583971b perf/x86/intel/uncore: Factor out snr_uncore_mmio_map()
8053f2d7 perf/x86/intel/uncore: Add alias PMU name
0d771caf perf/x86/intel/uncore: Add Sapphire Rapids server MDF support
2a8e51ea perf/x86/intel/uncore: Add Sapphire Rapids server M3UPI support
da5a9156 perf/x86/intel/uncore: Add Sapphire Rapids server UPI support
f57191ed perf/x86/intel/uncore: Add Sapphire Rapids server M2M support
85f2e30f perf/x86/intel/uncore: Add Sapphire Rapids server IMC support
0654dfdc perf/x86/intel/uncore: Add Sapphire Rapids server PCU support
f85ef898 perf/x86/intel/uncore: Add Sapphire Rapids server M2PCIe support
e199eb51 perf/x86/intel/uncore: Add Sapphire Rapids server IRP support
3ba7095b perf/x86/intel/uncore: Add Sapphire Rapids server IIO support
949b1138 perf/x86/intel/uncore: Add Sapphire Rapids server CHA support
c54c53d9 perf/x86/intel/uncore: Add Sapphire Rapids server framework

b) Two SPR model-name changes so that the above patches apply cleanly (2 commits):
(5.14-rc2) 28188cc4 x86/cpu: Fix core name for Sapphire Rapids
(5.13-rc1) 53375a5a x86/cpu: Resort and comment Intel models

c) SPR uncore-related bug fixes (6 commits):
v5.16-rc1:
4034fb20 perf/x86/intel/uncore: Fix Intel SPR M3UPI event constraints
f01d7d55 perf/x86/intel/uncore: Fix Intel SPR M2PCIE event constraints
67c5d443 perf/x86/intel/uncore: Fix Intel SPR IIO event constraints
9d756e40 perf/x86/intel/uncore: Fix Intel SPR CHA event constraints
e2bb9fab perf/x86/intel/uncore: Fix invalid unit check
v5.13-rc6:
4a0e3ff3 perf/x86/intel/uncore: Fix a kernel WARNING triggered by maxcpus=1

 **Intel-kernel issue:** 
[#I5BECO](https://gitee.com/openeuler/intel-kernel/issues/I5BECO)

 **Test:** 
With this patch set, on SPR:

```
# cat /sys/devices/uncore_cha_1/alias
uncore_type_0_1
# perf stat -a -e uncore_imc_0/event=0x1/ -- sleep 1

 Performance counter stats for 'system wide':

     2,407,096,566      uncore_imc_0/event=0x1/

       1.002850766 seconds time elapsed
# perf stat -a -e uncore_imc_free_running_0/rpq_cycles/ -- sleep 1

 Performance counter stats for 'system wide':

        13,879,446      uncore_imc_free_running_0/rpq_cycles/

       1.002852701 seconds time elapsed
```

Without this patch set, devices such as "uncore_cha_1" are not present under /sys/devices, and uncore events like those above are reported as "not supported".

 **Known issue:** 
N/A

 **Default config change:** 
N/A



### For [PR120](https://gitee.com/openeuler/kernel/pulls/120), this PR cherry-picks the upstream fix for commit c6bc9bd06dff ("rbtree, uprobes: Use rbtree helpers")

 **BPFTrace issue:** 
[#I5RUM5](https://gitee.com/src-openeuler/bpftrace/issues/I5RUM5)

 **Tests:** 
1. Without the fix, running `bpftrace /usr/share/bpftrace/tools/bashreadline.bt` produces a core dump.
2. With the fix applied, the same command runs and the issue disappears.

 **Known issue:** 
N/A

 **Default config change:** 
N/A 
 
Link: https://gitee.com/openeuler/kernel/pulls/229

 
Reviewed-by: Jun Tian <jun.j.tian@intel.com>
Reviewed-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
parents fd124538 17f30168
+13 −0
What:		/sys/bus/event_source/devices/uncore_*/alias
Date:		June 2021
KernelVersion:	5.15
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Read-only.  An attribute to describe the alias name of
		the uncore PMU if an alias exists on some platforms.
		The 'perf(1)' tool should treat both names the same.
		They both can be used to access the uncore PMU.

		Example:

		$ cat /sys/devices/uncore_cha_2/alias
		uncore_type_0_2
+1 −1
@@ -3,6 +3,6 @@ obj-$(CONFIG_CPU_SUP_INTEL) += core.o bts.o
obj-$(CONFIG_CPU_SUP_INTEL)		+= ds.o knc.o
obj-$(CONFIG_CPU_SUP_INTEL)		+= lbr.o p4.o p6.o pt.o
obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE)	+= intel-uncore.o
intel-uncore-objs			:= uncore.o uncore_nhmex.o uncore_snb.o uncore_snbep.o
intel-uncore-objs			:= uncore.o uncore_nhmex.o uncore_snb.o uncore_snbep.o uncore_discovery.o
obj-$(CONFIG_PERF_EVENTS_INTEL_CSTATE)	+= intel-cstate.o
intel-cstate-objs			:= cstate.o
+205 −66
@@ -4,8 +4,13 @@
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include "uncore.h"
#include "uncore_discovery.h"

static struct intel_uncore_type *empty_uncore[] = { NULL, };
static bool uncore_no_discover;
module_param(uncore_no_discover, bool, 0);
MODULE_PARM_DESC(uncore_no_discover, "Don't enable the Intel uncore PerfMon discovery mechanism "
				     "(default: enable the discovery mechanism).");
struct intel_uncore_type *empty_uncore[] = { NULL, };
struct intel_uncore_type **uncore_msr_uncores = empty_uncore;
struct intel_uncore_type **uncore_pci_uncores = empty_uncore;
struct intel_uncore_type **uncore_mmio_uncores = empty_uncore;
@@ -31,21 +36,21 @@ struct event_constraint uncore_constraint_empty =

MODULE_LICENSE("GPL");

int uncore_pcibus_to_physid(struct pci_bus *bus)
int uncore_pcibus_to_dieid(struct pci_bus *bus)
{
	struct pci2phy_map *map;
	int phys_id = -1;
	int die_id = -1;

	raw_spin_lock(&pci2phy_map_lock);
	list_for_each_entry(map, &pci2phy_map_head, list) {
		if (map->segment == pci_domain_nr(bus)) {
			phys_id = map->pbus_to_physid[bus->number];
			die_id = map->pbus_to_dieid[bus->number];
			break;
		}
	}
	raw_spin_unlock(&pci2phy_map_lock);

	return phys_id;
	return die_id;
}

static void uncore_free_pcibus_map(void)
@@ -86,7 +91,7 @@ struct pci2phy_map *__find_pci2phy_map(int segment)
	alloc = NULL;
	map->segment = segment;
	for (i = 0; i < 256; i++)
		map->pbus_to_physid[i] = -1;
		map->pbus_to_dieid[i] = -1;
	list_add_tail(&map->list, &pci2phy_map_head);

end:
@@ -332,7 +337,6 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type,

	uncore_pmu_init_hrtimer(box);
	box->cpu = -1;
	box->pci_phys_id = -1;
	box->dieid = -1;

	/* set default hrtimer timeout */
@@ -830,6 +834,45 @@ static const struct attribute_group uncore_pmu_attr_group = {
	.attrs = uncore_pmu_attrs,
};

void uncore_get_alias_name(char *pmu_name, struct intel_uncore_pmu *pmu)
{
	struct intel_uncore_type *type = pmu->type;

	if (type->num_boxes == 1)
		sprintf(pmu_name, "uncore_type_%u", type->type_id);
	else {
		sprintf(pmu_name, "uncore_type_%u_%d",
			type->type_id, type->box_ids[pmu->pmu_idx]);
	}
}

static void uncore_get_pmu_name(struct intel_uncore_pmu *pmu)
{
	struct intel_uncore_type *type = pmu->type;

	/*
	 * No uncore block name in discovery table.
	 * Use uncore_type_&typeid_&boxid as name.
	 */
	if (!type->name) {
		uncore_get_alias_name(pmu->name, pmu);
		return;
	}

	if (type->num_boxes == 1) {
		if (strlen(type->name) > 0)
			sprintf(pmu->name, "uncore_%s", type->name);
		else
			sprintf(pmu->name, "uncore");
	} else {
		/*
		 * Use the box ID from the discovery table if applicable.
		 */
		sprintf(pmu->name, "uncore_%s_%d", type->name,
			type->box_ids ? type->box_ids[pmu->pmu_idx] : pmu->pmu_idx);
	}
}

static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
{
	int ret;
@@ -856,15 +899,7 @@ static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
		pmu->pmu.attr_update = pmu->type->attr_update;
	}

	if (pmu->type->num_boxes == 1) {
		if (strlen(pmu->type->name) > 0)
			sprintf(pmu->name, "uncore_%s", pmu->type->name);
		else
			sprintf(pmu->name, "uncore");
	} else {
		sprintf(pmu->name, "uncore_%s_%d", pmu->type->name,
			pmu->pmu_idx);
	}
	uncore_get_pmu_name(pmu);

	ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
	if (!ret)
@@ -905,6 +940,10 @@ static void uncore_type_exit(struct intel_uncore_type *type)
		kfree(type->pmus);
		type->pmus = NULL;
	}
	if (type->box_ids) {
		kfree(type->box_ids);
		type->box_ids = NULL;
	}
	kfree(type->events_group);
	type->events_group = NULL;
}
@@ -993,28 +1032,48 @@ uncore_types_init(struct intel_uncore_type **types, bool setid)
/*
 * Get the die information of a PCI device.
 * @pdev: The PCI device.
 * @phys_id: The physical socket id which the device maps to.
 * @die: The die id which the device maps to.
 */
static int uncore_pci_get_dev_die_info(struct pci_dev *pdev,
				       int *phys_id, int *die)
static int uncore_pci_get_dev_die_info(struct pci_dev *pdev, int *die)
{
	*phys_id = uncore_pcibus_to_physid(pdev->bus);
	if (*phys_id < 0)
		return -ENODEV;

	*die = (topology_max_die_per_package() > 1) ? *phys_id :
				topology_phys_to_logical_pkg(*phys_id);
	*die = uncore_pcibus_to_dieid(pdev->bus);
	if (*die < 0)
		return -EINVAL;

	return 0;
}

static struct intel_uncore_pmu *
uncore_pci_find_dev_pmu_from_types(struct pci_dev *pdev)
{
	struct intel_uncore_type **types = uncore_pci_uncores;
	struct intel_uncore_type *type;
	u64 box_ctl;
	int i, die;

	for (; *types; types++) {
		type = *types;
		for (die = 0; die < __uncore_max_dies; die++) {
			for (i = 0; i < type->num_boxes; i++) {
				if (!type->box_ctls[die])
					continue;
				box_ctl = type->box_ctls[die] + type->pci_offsets[i];
				if (pdev->devfn == UNCORE_DISCOVERY_PCI_DEVFN(box_ctl) &&
				    pdev->bus->number == UNCORE_DISCOVERY_PCI_BUS(box_ctl) &&
				    pci_domain_nr(pdev->bus) == UNCORE_DISCOVERY_PCI_DOMAIN(box_ctl))
					return &type->pmus[i];
			}
		}
	}

	return NULL;
}

/*
 * Find the PMU of a PCI device.
 * @pdev: The PCI device.
 * @ids: The ID table of the available PCI devices with a PMU.
 *       If NULL, search the whole uncore_pci_uncores.
 */
static struct intel_uncore_pmu *
uncore_pci_find_dev_pmu(struct pci_dev *pdev, const struct pci_device_id *ids)
@@ -1024,6 +1083,9 @@ uncore_pci_find_dev_pmu(struct pci_dev *pdev, const struct pci_device_id *ids)
	kernel_ulong_t data;
	unsigned int devfn;

	if (!ids)
		return uncore_pci_find_dev_pmu_from_types(pdev);

	while (ids && ids->vendor) {
		if ((ids->vendor == pdev->vendor) &&
		    (ids->device == pdev->device)) {
@@ -1046,13 +1108,12 @@ uncore_pci_find_dev_pmu(struct pci_dev *pdev, const struct pci_device_id *ids)
 * @pdev: The PCI device.
 * @type: The corresponding PMU type of the device.
 * @pmu: The corresponding PMU of the device.
 * @phys_id: The physical socket id which the device maps to.
 * @die: The die id which the device maps to.
 */
static int uncore_pci_pmu_register(struct pci_dev *pdev,
				   struct intel_uncore_type *type,
				   struct intel_uncore_pmu *pmu,
				   int phys_id, int die)
				   int die)
{
	struct intel_uncore_box *box;
	int ret;
@@ -1070,7 +1131,6 @@ static int uncore_pci_pmu_register(struct pci_dev *pdev,
		WARN_ON_ONCE(pmu->func_id != pdev->devfn);

	atomic_inc(&box->refcnt);
	box->pci_phys_id = phys_id;
	box->dieid = die;
	box->pci_dev = pdev;
	box->pmu = pmu;
@@ -1097,9 +1157,9 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
{
	struct intel_uncore_type *type;
	struct intel_uncore_pmu *pmu = NULL;
	int phys_id, die, ret;
	int die, ret;

	ret = uncore_pci_get_dev_die_info(pdev, &phys_id, &die);
	ret = uncore_pci_get_dev_die_info(pdev, &die);
	if (ret)
		return ret;

@@ -1132,7 +1192,7 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
		pmu = &type->pmus[UNCORE_PCI_DEV_IDX(id->driver_data)];
	}

	ret = uncore_pci_pmu_register(pdev, type, pmu, phys_id, die);
	ret = uncore_pci_pmu_register(pdev, type, pmu, die);

	pci_set_drvdata(pdev, pmu->boxes[die]);

@@ -1142,17 +1202,12 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
/*
 * Unregister the PMU of a PCI device
 * @pmu: The corresponding PMU is unregistered.
 * @phys_id: The physical socket id which the device maps to.
 * @die: The die id which the device maps to.
 */
static void uncore_pci_pmu_unregister(struct intel_uncore_pmu *pmu,
				      int phys_id, int die)
static void uncore_pci_pmu_unregister(struct intel_uncore_pmu *pmu, int die)
{
	struct intel_uncore_box *box = pmu->boxes[die];

	if (WARN_ON_ONCE(phys_id != box->pci_phys_id))
		return;

	pmu->boxes[die] = NULL;
	if (atomic_dec_return(&pmu->activeboxes) == 0)
		uncore_pmu_unregister(pmu);
@@ -1164,9 +1219,9 @@ static void uncore_pci_remove(struct pci_dev *pdev)
{
	struct intel_uncore_box *box;
	struct intel_uncore_pmu *pmu;
	int i, phys_id, die;
	int i, die;

	if (uncore_pci_get_dev_die_info(pdev, &phys_id, &die))
	if (uncore_pci_get_dev_die_info(pdev, &die))
		return;

	box = pci_get_drvdata(pdev);
@@ -1185,35 +1240,43 @@ static void uncore_pci_remove(struct pci_dev *pdev)

	pci_set_drvdata(pdev, NULL);

	uncore_pci_pmu_unregister(pmu, phys_id, die);
	uncore_pci_pmu_unregister(pmu, die);
}

static int uncore_bus_notify(struct notifier_block *nb,
			     unsigned long action, void *data)
			     unsigned long action, void *data,
			     const struct pci_device_id *ids)
{
	struct device *dev = data;
	struct pci_dev *pdev = to_pci_dev(dev);
	struct intel_uncore_pmu *pmu;
	int phys_id, die;
	int die;

	/* Unregister the PMU when the device is going to be deleted. */
	if (action != BUS_NOTIFY_DEL_DEVICE)
		return NOTIFY_DONE;

	pmu = uncore_pci_find_dev_pmu(pdev, uncore_pci_sub_driver->id_table);
	pmu = uncore_pci_find_dev_pmu(pdev, ids);
	if (!pmu)
		return NOTIFY_DONE;

	if (uncore_pci_get_dev_die_info(pdev, &phys_id, &die))
	if (uncore_pci_get_dev_die_info(pdev, &die))
		return NOTIFY_DONE;

	uncore_pci_pmu_unregister(pmu, phys_id, die);
	uncore_pci_pmu_unregister(pmu, die);

	return NOTIFY_OK;
}

static struct notifier_block uncore_notifier = {
	.notifier_call = uncore_bus_notify,
static int uncore_pci_sub_bus_notify(struct notifier_block *nb,
				     unsigned long action, void *data)
{
	return uncore_bus_notify(nb, action, data,
				 uncore_pci_sub_driver->id_table);
}

static struct notifier_block uncore_pci_sub_notifier = {
	.notifier_call = uncore_pci_sub_bus_notify,
};

static void uncore_pci_sub_driver_init(void)
@@ -1224,7 +1287,7 @@ static void uncore_pci_sub_driver_init(void)
	struct pci_dev *pci_sub_dev;
	bool notify = false;
	unsigned int devfn;
	int phys_id, die;
	int die;

	while (ids && ids->vendor) {
		pci_sub_dev = NULL;
@@ -1244,24 +1307,65 @@ static void uncore_pci_sub_driver_init(void)
			if (!pmu)
				continue;

			if (uncore_pci_get_dev_die_info(pci_sub_dev,
							&phys_id, &die))
			if (uncore_pci_get_dev_die_info(pci_sub_dev, &die))
				continue;

			if (!uncore_pci_pmu_register(pci_sub_dev, type, pmu,
						     phys_id, die))
						     die))
				notify = true;
		}
		ids++;
	}

	if (notify && bus_register_notifier(&pci_bus_type, &uncore_notifier))
	if (notify && bus_register_notifier(&pci_bus_type, &uncore_pci_sub_notifier))
		notify = false;

	if (!notify)
		uncore_pci_sub_driver = NULL;
}

static int uncore_pci_bus_notify(struct notifier_block *nb,
				     unsigned long action, void *data)
{
	return uncore_bus_notify(nb, action, data, NULL);
}

static struct notifier_block uncore_pci_notifier = {
	.notifier_call = uncore_pci_bus_notify,
};


static void uncore_pci_pmus_register(void)
{
	struct intel_uncore_type **types = uncore_pci_uncores;
	struct intel_uncore_type *type;
	struct intel_uncore_pmu *pmu;
	struct pci_dev *pdev;
	u64 box_ctl;
	int i, die;

	for (; *types; types++) {
		type = *types;
		for (die = 0; die < __uncore_max_dies; die++) {
			for (i = 0; i < type->num_boxes; i++) {
				if (!type->box_ctls[die])
					continue;
				box_ctl = type->box_ctls[die] + type->pci_offsets[i];
				pdev = pci_get_domain_bus_and_slot(UNCORE_DISCOVERY_PCI_DOMAIN(box_ctl),
								   UNCORE_DISCOVERY_PCI_BUS(box_ctl),
								   UNCORE_DISCOVERY_PCI_DEVFN(box_ctl));
				if (!pdev)
					continue;
				pmu = &type->pmus[i];

				uncore_pci_pmu_register(pdev, type, pmu, die);
			}
		}
	}

	bus_register_notifier(&pci_bus_type, &uncore_pci_notifier);
}

static int __init uncore_pci_init(void)
{
	size_t size;
@@ -1278,12 +1382,15 @@ static int __init uncore_pci_init(void)
	if (ret)
		goto errtype;

	if (uncore_pci_driver) {
		uncore_pci_driver->probe = uncore_pci_probe;
		uncore_pci_driver->remove = uncore_pci_remove;

		ret = pci_register_driver(uncore_pci_driver);
		if (ret)
			goto errtype;
	} else
		uncore_pci_pmus_register();

	if (uncore_pci_sub_driver)
		uncore_pci_sub_driver_init();
@@ -1306,8 +1413,11 @@ static void uncore_pci_exit(void)
	if (pcidrv_registered) {
		pcidrv_registered = false;
		if (uncore_pci_sub_driver)
			bus_unregister_notifier(&pci_bus_type, &uncore_notifier);
			bus_unregister_notifier(&pci_bus_type, &uncore_pci_sub_notifier);
		if (uncore_pci_driver)
			pci_unregister_driver(uncore_pci_driver);
		else
			bus_unregister_notifier(&pci_bus_type, &uncore_pci_notifier);
		uncore_types_exit(uncore_pci_uncores);
		kfree(uncore_extra_pci_dev);
		uncore_free_pcibus_map();
@@ -1556,6 +1666,7 @@ struct intel_uncore_init_fun {
	void	(*cpu_init)(void);
	int	(*pci_init)(void);
	void	(*mmio_init)(void);
	bool	use_discovery;
};

static const struct intel_uncore_init_fun nhm_uncore_init __initconst = {
@@ -1648,6 +1759,19 @@ static const struct intel_uncore_init_fun snr_uncore_init __initconst = {
	.mmio_init = snr_uncore_mmio_init,
};

static const struct intel_uncore_init_fun spr_uncore_init __initconst = {
	.cpu_init = spr_uncore_cpu_init,
	.pci_init = spr_uncore_pci_init,
	.mmio_init = spr_uncore_mmio_init,
	.use_discovery = true,
};

static const struct intel_uncore_init_fun generic_uncore_init __initconst = {
	.cpu_init = intel_uncore_generic_uncore_cpu_init,
	.pci_init = intel_uncore_generic_uncore_pci_init,
	.mmio_init = intel_uncore_generic_uncore_mmio_init,
};

static const struct x86_cpu_id intel_uncore_match[] __initconst = {
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM_EP,		&nhm_uncore_init),
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM,		&nhm_uncore_init),
@@ -1683,6 +1807,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X,		&icx_uncore_init),
	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L,		&tgl_l_uncore_init),
	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE,		&tgl_uncore_init),
	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X,	&spr_uncore_init),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D,	&snr_uncore_init),
	{},
};
@@ -1694,17 +1819,26 @@ static int __init intel_uncore_init(void)
	struct intel_uncore_init_fun *uncore_init;
	int pret = 0, cret = 0, mret = 0, ret;

	id = x86_match_cpu(intel_uncore_match);
	if (!id)
		return -ENODEV;

	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return -ENODEV;

	__uncore_max_dies =
		topology_max_packages() * topology_max_die_per_package();

	id = x86_match_cpu(intel_uncore_match);
	if (!id) {
		if (!uncore_no_discover && intel_uncore_has_discovery_tables())
			uncore_init = (struct intel_uncore_init_fun *)&generic_uncore_init;
		else
			return -ENODEV;
	} else {
		uncore_init = (struct intel_uncore_init_fun *)id->driver_data;
		if (uncore_no_discover && uncore_init->use_discovery)
			return -ENODEV;
		if (uncore_init->use_discovery && !intel_uncore_has_discovery_tables())
			return -ENODEV;
	}

	if (uncore_init->pci_init) {
		pret = uncore_init->pci_init();
		if (!pret)
@@ -1721,8 +1855,10 @@ static int __init intel_uncore_init(void)
		mret = uncore_mmio_init();
	}

	if (cret && pret && mret)
		return -ENODEV;
	if (cret && pret && mret) {
		ret = -ENODEV;
		goto free_discovery;
	}

	/* Install hotplug callbacks to setup the targets for each package */
	ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
@@ -1737,6 +1873,8 @@ static int __init intel_uncore_init(void)
	uncore_types_exit(uncore_msr_uncores);
	uncore_types_exit(uncore_mmio_uncores);
	uncore_pci_exit();
free_discovery:
	intel_uncore_clear_discovery_tables();
	return ret;
}
module_init(intel_uncore_init);
@@ -1747,5 +1885,6 @@ static void __exit intel_uncore_exit(void)
	uncore_types_exit(uncore_msr_uncores);
	uncore_types_exit(uncore_mmio_uncores);
	uncore_pci_exit();
	intel_uncore_clear_discovery_tables();
}
module_exit(intel_uncore_exit);
+15 −4
@@ -50,6 +50,7 @@ struct intel_uncore_type {
	int perf_ctr_bits;
	int fixed_ctr_bits;
	int num_freerunning_types;
	int type_id;
	unsigned perf_ctr;
	unsigned event_ctl;
	unsigned event_mask;
@@ -57,6 +58,7 @@ struct intel_uncore_type {
	unsigned fixed_ctr;
	unsigned fixed_ctl;
	unsigned box_ctl;
	u64 *box_ctls;	/* Unit ctrl addr of the first box of each die */
	union {
		unsigned msr_offset;
		unsigned mmio_offset;
@@ -65,7 +67,12 @@ struct intel_uncore_type {
	unsigned num_shared_regs:8;
	unsigned single_fixed:1;
	unsigned pair_ctr_ctl:1;
	union {
		unsigned *msr_offsets;
		unsigned *pci_offsets;
		unsigned *mmio_offsets;
	};
	unsigned *box_ids;
	struct event_constraint unconstrainted;
	struct event_constraint *constraints;
	struct intel_uncore_pmu *pmus;
@@ -124,7 +131,6 @@ struct intel_uncore_extra_reg {
};

struct intel_uncore_box {
	int pci_phys_id;
	int dieid;	/* Logical die ID */
	int n_active;	/* number of active events */
	int n_events;
@@ -173,11 +179,11 @@ struct freerunning_counters {
struct pci2phy_map {
	struct list_head list;
	int segment;
	int pbus_to_physid[256];
	int pbus_to_dieid[256];
};

struct pci2phy_map *__find_pci2phy_map(int segment);
int uncore_pcibus_to_physid(struct pci_bus *bus);
int uncore_pcibus_to_dieid(struct pci_bus *bus);

ssize_t uncore_event_show(struct device *dev,
			  struct device_attribute *attr, char *buf);
@@ -547,7 +553,9 @@ struct event_constraint *
uncore_get_constraint(struct intel_uncore_box *box, struct perf_event *event);
void uncore_put_constraint(struct intel_uncore_box *box, struct perf_event *event);
u64 uncore_shared_reg_config(struct intel_uncore_box *box, int idx);
void uncore_get_alias_name(char *pmu_name, struct intel_uncore_pmu *pmu);

extern struct intel_uncore_type *empty_uncore[];
extern struct intel_uncore_type **uncore_msr_uncores;
extern struct intel_uncore_type **uncore_pci_uncores;
extern struct intel_uncore_type **uncore_mmio_uncores;
@@ -592,6 +600,9 @@ void snr_uncore_mmio_init(void);
int icx_uncore_pci_init(void);
void icx_uncore_cpu_init(void);
void icx_uncore_mmio_init(void);
int spr_uncore_pci_init(void);
void spr_uncore_cpu_init(void);
void spr_uncore_mmio_init(void);

/* uncore_nhmex.c */
void nhmex_uncore_cpu_init(void);
+622 −0

File added.
