Merge tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:

 "Enumeration:
   - Consolidate _HPP/_HPX stuff in pci-acpi.c and simplify it (Krzysztof Wilczynski)
   - Fix incorrect PCIe device types and remove dev->has_secondary_link to simplify code that deals with upstream/downstream ports (Mika Westerberg)
   - After suspend, restore Resizable BAR size bits correctly for 1MB BARs (Sumit Saxena)
   - Enable PCI_MSI_IRQ_DOMAIN support for RISC-V (Wesley Terpstra)

  Virtualization:
   - Add ACS quirks for iProc PAXB (Abhinav Ratna), Amazon Annapurna Labs (Ali Saidi)
   - Move sysfs SR-IOV functions to iov.c (Kelsey Skunberg)
   - Remove group write permissions from sysfs sriov_numvfs, sriov_drivers_autoprobe (Kelsey Skunberg)

  Hotplug:
   - Simplify pciehp indicator control (Denis Efremov)

  Peer-to-peer DMA:
   - Allow P2P DMA between root ports for whitelisted bridges (Logan Gunthorpe)
   - Whitelist some Intel host bridges for P2P DMA (Logan Gunthorpe)
   - DMA map P2P DMA requests that traverse host bridge (Logan Gunthorpe)

  Amazon Annapurna Labs host bridge driver:
   - Add DT binding and controller driver (Jonathan Chocron)

  Hyper-V host bridge driver:
   - Fix hv_pci_dev->pci_slot use-after-free (Dexuan Cui)
   - Fix PCI domain number collisions (Haiyang Zhang)
   - Use instance ID bytes 4 & 5 as PCI domain numbers (Haiyang Zhang)
   - Fix build errors on non-SYSFS config (Randy Dunlap)

  i.MX6 host bridge driver:
   - Limit DBI register length (Stefan Agner)

  Intel VMD host bridge driver:
   - Fix config addressing issues (Jon Derrick)

  Layerscape host bridge driver:
   - Add bar_fixed_64bit property to endpoint driver (Xiaowei Bao)
   - Add CONFIG_PCI_LAYERSCAPE_EP to build EP/RC drivers separately (Xiaowei Bao)

  Mediatek host bridge driver:
   - Add MT7629 controller support (Jianjun Wang)

  Mobiveil host bridge driver:
   - Fix CPU base address setup (Hou Zhiqiang)
   - Make "num-lanes" property optional (Hou Zhiqiang)

  Tegra host bridge driver:
   - Fix OF node reference leak (Nishka Dasgupta)
   - Disable MSI for root ports to work around design problem (Vidya Sagar)
   - Add Tegra194 DT binding and controller support (Vidya Sagar)
   - Add support for sideband pins and slot regulators (Vidya Sagar)
   - Add PIPE2UPHY support (Vidya Sagar)

  Misc:
   - Remove unused pci_block_cfg_access() et al (Kelsey Skunberg)
   - Unexport pci_bus_get(), etc (Kelsey Skunberg)
   - Hide PM, VC, link speed, ATS, ECRC, PTM constants and interfaces in the PCI core (Kelsey Skunberg)
   - Clean up sysfs DEVICE_ATTR() usage (Kelsey Skunberg)
   - Mark expected switch fall-through (Gustavo A. R. Silva)
   - Propagate errors for optional regulators and PHYs (Thierry Reding)
   - Fix kernel command line resource_alignment parameter issues (Logan Gunthorpe)"

* tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (112 commits)
  PCI: Add pci_irq_vector() and other stubs when !CONFIG_PCI
  arm64: tegra: Add PCIe slot supply information in p2972-0000 platform
  arm64: tegra: Add configuration for PCIe C5 sideband signals
  PCI: tegra: Add support to enable slot regulators
  PCI: tegra: Add support to configure sideband pins
  PCI: vmd: Fix shadow offsets to reflect spec changes
  PCI: vmd: Fix config addressing when using bus offsets
  PCI: dwc: Add validation that PCIe core is set to correct mode
  PCI: dwc: al: Add Amazon Annapurna Labs PCIe controller driver
  dt-bindings: PCI: Add Amazon's Annapurna Labs PCIe host bridge binding
  PCI: Add quirk to disable MSI-X support for Amazon's Annapurna Labs Root Port
  PCI/VPD: Prevent VPD access for Amazon's Annapurna Labs Root Port
  PCI: Add ACS quirk for Amazon Annapurna Labs root ports
  PCI: Add Amazon's Annapurna Labs vendor ID
  MAINTAINERS: Add PCI native host/endpoint controllers designated reviewer
  PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain numbers
  dt-bindings: PCI: tegra: Add PCIe slot supplies regulator entries
  dt-bindings: PCI: tegra: Add sideband pins configuration entries
  PCI: tegra: Add Tegra194 PCIe support
  PCI: Get rid of dev->has_secondary_link flag
  ...
@@ -131,13 +131,29 @@ config PCI_KEYSTONE_EP
DesignWare core functions to implement the driver.

config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller"
bool "Freescale Layerscape PCIe controller - Host mode"
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on Layerscape SoCs.
Say Y here if you want to enable PCIe controller support on Layerscape
SoCs to work in Host mode.
This controller can work either as EP or RC. The RCW[HOST_AGT_PEX]
determines which PCIe controller works in EP mode and which PCIe
controller works in RC mode.

config PCI_LAYERSCAPE_EP
bool "Freescale Layerscape PCIe controller - Endpoint mode"
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
depends on PCI_ENDPOINT
select PCIE_DW_EP
help
Say Y here if you want to enable PCIe controller support on Layerscape
SoCs to work in Endpoint mode.
This controller can work either as EP or RC. The RCW[HOST_AGT_PEX]
determines which PCIe controller works in EP mode and which PCIe
controller works in RC mode.

config PCI_HISI
depends on OF && (ARM64 || COMPILE_TEST)
@@ -220,6 +236,16 @@ config PCI_MESON
and therefore the driver re-uses the DesignWare core functions to
implement the driver.

config PCIE_TEGRA194
tristate "NVIDIA Tegra194 (and later) PCIe controller"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PHY_TEGRA194_P2U
help
Say Y here if you want support for DesignWare core based PCIe host
controller found in NVIDIA Tegra194 SoC.

config PCIE_UNIPHIER
bool "Socionext UniPhier PCIe controllers"
depends on ARCH_UNIPHIER || COMPILE_TEST
@@ -230,4 +256,16 @@ config PCIE_UNIPHIER
Say Y here if you want PCIe controller support on UniPhier SoCs.
This driver supports LD20 and PXs3 SoCs.

config PCIE_AL
bool "Amazon Annapurna Labs PCIe controller"
depends on OF && (ARM64 || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
help
Say Y here to enable support of the Amazon's Annapurna Labs PCIe
controller IP on Amazon SoCs. The PCIe controller uses the DesignWare
core plus Annapurna Labs proprietary hardware wrappers. This is
required only for DT-based platforms. ACPI platforms with the
Annapurna Labs PCIe controller don't need to enable this.

endmenu
@@ -8,13 +8,15 @@ obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o pci-layerscape-ep.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
obj-$(CONFIG_PCI_MESON) += pci-meson.o
obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o

# The following drivers are for devices that use the generic ACPI
@@ -465,7 +465,7 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)

ep->phy = devm_of_phy_get(dev, np, NULL);
if (IS_ERR(ep->phy)) {
if (PTR_ERR(ep->phy) == -EPROBE_DEFER)
if (PTR_ERR(ep->phy) != -ENODEV)
return PTR_ERR(ep->phy);

ep->phy = NULL;
@@ -57,6 +57,7 @@ enum imx6_pcie_variants {
struct imx6_pcie_drvdata {
enum imx6_pcie_variants variant;
u32 flags;
int dbi_length;
};

struct imx6_pcie {
@@ -1173,8 +1174,8 @@ static int imx6_pcie_probe(struct platform_device *pdev)

imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
if (IS_ERR(imx6_pcie->vpcie)) {
if (PTR_ERR(imx6_pcie->vpcie) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(imx6_pcie->vpcie) != -ENODEV)
return PTR_ERR(imx6_pcie->vpcie);
imx6_pcie->vpcie = NULL;
}

@@ -1212,6 +1213,7 @@ static const struct imx6_pcie_drvdata drvdata[] = {
.variant = IMX6Q,
.flags = IMX6_PCIE_FLAG_IMX6_PHY |
IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
.dbi_length = 0x200,
},
[IMX6SX] = {
.variant = IMX6SX,
@@ -1254,6 +1256,37 @@ static struct platform_driver imx6_pcie_driver = {
.shutdown = imx6_pcie_shutdown,
};

static void imx6_pcie_quirk(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
struct pcie_port *pp = bus->sysdata;

/* Bus parent is the PCI bridge, its parent is this platform driver */
if (!bus->dev.parent || !bus->dev.parent->parent)
return;

/* Make sure we only quirk devices associated with this driver */
if (bus->dev.parent->parent->driver != &imx6_pcie_driver.driver)
return;

if (bus->number == pp->root_bus_nr) {
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);

/*
* Limit config length to avoid the kernel reading beyond
* the register set and causing an abort on i.MX 6Quad
*/
if (imx6_pcie->drvdata->dbi_length) {
dev->cfg_size = imx6_pcie->drvdata->dbi_length;
dev_info(&dev->dev, "Limiting cfg_size to %d\n",
dev->cfg_size);
}
}
}
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, 0xabcd,
PCI_CLASS_BRIDGE_PCI, 8, imx6_pcie_quirk);

static int __init imx6_pcie_init(void)
{
#ifdef CONFIG_ARM
@@ -44,6 +44,7 @@ static const struct pci_epc_features ls_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
.bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4),
};

static const struct pci_epc_features*
@@ -91,3 +91,368 @@ struct pci_ecam_ops al_pcie_ops = {
|
||||
};
|
||||
|
||||
#endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */
|
||||
|
||||
#ifdef CONFIG_PCIE_AL
|
||||
|
||||
#include <linux/of_pci.h>
|
||||
#include "pcie-designware.h"
|
||||
|
||||
#define AL_PCIE_REV_ID_2 2
|
||||
#define AL_PCIE_REV_ID_3 3
|
||||
#define AL_PCIE_REV_ID_4 4
|
||||
|
||||
#define AXI_BASE_OFFSET 0x0
|
||||
|
||||
#define DEVICE_ID_OFFSET 0x16c
|
||||
|
||||
#define DEVICE_REV_ID 0x0
|
||||
#define DEVICE_REV_ID_DEV_ID_MASK GENMASK(31, 16)
|
||||
|
||||
#define DEVICE_REV_ID_DEV_ID_X4 0
|
||||
#define DEVICE_REV_ID_DEV_ID_X8 2
|
||||
#define DEVICE_REV_ID_DEV_ID_X16 4
|
||||
|
||||
#define OB_CTRL_REV1_2_OFFSET 0x0040
|
||||
#define OB_CTRL_REV3_5_OFFSET 0x0030
|
||||
|
||||
#define CFG_TARGET_BUS 0x0
|
||||
#define CFG_TARGET_BUS_MASK_MASK GENMASK(7, 0)
|
||||
#define CFG_TARGET_BUS_BUSNUM_MASK GENMASK(15, 8)
|
||||
|
||||
#define CFG_CONTROL 0x4
|
||||
#define CFG_CONTROL_SUBBUS_MASK GENMASK(15, 8)
|
||||
#define CFG_CONTROL_SEC_BUS_MASK GENMASK(23, 16)
|
||||
|
||||
struct al_pcie_reg_offsets {
|
||||
unsigned int ob_ctrl;
|
||||
};
|
||||
|
||||
struct al_pcie_target_bus_cfg {
|
||||
u8 reg_val;
|
||||
u8 reg_mask;
|
||||
u8 ecam_mask;
|
||||
};
|
||||
|
||||
struct al_pcie {
|
||||
struct dw_pcie *pci;
|
||||
void __iomem *controller_base; /* base of PCIe unit (not DW core) */
|
||||
struct device *dev;
|
||||
resource_size_t ecam_size;
|
||||
unsigned int controller_rev_id;
|
||||
struct al_pcie_reg_offsets reg_offsets;
|
||||
struct al_pcie_target_bus_cfg target_bus_cfg;
|
||||
};
|
||||
|
||||
#define PCIE_ECAM_DEVFN(x) (((x) & 0xff) << 12)
|
||||
|
||||
#define to_al_pcie(x) dev_get_drvdata((x)->dev)
|
||||
|
||||
static inline u32 al_pcie_controller_readl(struct al_pcie *pcie, u32 offset)
|
||||
{
|
||||
return readl_relaxed(pcie->controller_base + offset);
|
||||
}
|
||||
|
||||
static inline void al_pcie_controller_writel(struct al_pcie *pcie, u32 offset,
|
||||
u32 val)
|
||||
{
|
||||
writel_relaxed(val, pcie->controller_base + offset);
|
||||
}
|
||||
|
||||
static int al_pcie_rev_id_get(struct al_pcie *pcie, unsigned int *rev_id)
|
||||
{
|
||||
u32 dev_rev_id_val;
|
||||
u32 dev_id_val;
|
||||
|
||||
dev_rev_id_val = al_pcie_controller_readl(pcie, AXI_BASE_OFFSET +
|
||||
DEVICE_ID_OFFSET +
|
||||
DEVICE_REV_ID);
|
||||
dev_id_val = FIELD_GET(DEVICE_REV_ID_DEV_ID_MASK, dev_rev_id_val);
|
||||
|
||||
switch (dev_id_val) {
|
||||
case DEVICE_REV_ID_DEV_ID_X4:
|
||||
*rev_id = AL_PCIE_REV_ID_2;
|
||||
break;
|
||||
case DEVICE_REV_ID_DEV_ID_X8:
|
||||
*rev_id = AL_PCIE_REV_ID_3;
|
||||
break;
|
||||
case DEVICE_REV_ID_DEV_ID_X16:
|
||||
*rev_id = AL_PCIE_REV_ID_4;
|
||||
break;
|
||||
default:
|
||||
dev_err(pcie->dev, "Unsupported dev_id_val (0x%x)\n",
|
||||
dev_id_val);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
dev_dbg(pcie->dev, "dev_id_val: 0x%x\n", dev_id_val);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int al_pcie_reg_offsets_set(struct al_pcie *pcie)
|
||||
{
|
||||
switch (pcie->controller_rev_id) {
|
||||
case AL_PCIE_REV_ID_2:
|
||||
pcie->reg_offsets.ob_ctrl = OB_CTRL_REV1_2_OFFSET;
|
||||
break;
|
||||
case AL_PCIE_REV_ID_3:
|
||||
case AL_PCIE_REV_ID_4:
|
||||
pcie->reg_offsets.ob_ctrl = OB_CTRL_REV3_5_OFFSET;
|
||||
break;
|
||||
default:
|
||||
dev_err(pcie->dev, "Unsupported controller rev_id: 0x%x\n",
|
||||
pcie->controller_rev_id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void al_pcie_target_bus_set(struct al_pcie *pcie,
|
||||
u8 target_bus,
|
||||
u8 mask_target_bus)
|
||||
{
|
||||
u32 reg;
|
||||
|
||||
reg = FIELD_PREP(CFG_TARGET_BUS_MASK_MASK, mask_target_bus) |
|
||||
FIELD_PREP(CFG_TARGET_BUS_BUSNUM_MASK, target_bus);
|
||||
|
||||
al_pcie_controller_writel(pcie, AXI_BASE_OFFSET +
|
||||
pcie->reg_offsets.ob_ctrl + CFG_TARGET_BUS,
|
||||
reg);
|
||||
}
|
||||
|
||||
static void __iomem *al_pcie_conf_addr_map(struct al_pcie *pcie,
|
||||
unsigned int busnr,
|
||||
unsigned int devfn)
|
||||
{
|
||||
struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg;
|
||||
unsigned int busnr_ecam = busnr & target_bus_cfg->ecam_mask;
|
||||
unsigned int busnr_reg = busnr & target_bus_cfg->reg_mask;
|
||||
struct pcie_port *pp = &pcie->pci->pp;
|
||||
void __iomem *pci_base_addr;
|
||||
|
||||
pci_base_addr = (void __iomem *)((uintptr_t)pp->va_cfg0_base +
|
||||
(busnr_ecam << 20) +
|
||||
PCIE_ECAM_DEVFN(devfn));
|
||||
|
||||
if (busnr_reg != target_bus_cfg->reg_val) {
|
||||
dev_dbg(pcie->pci->dev, "Changing target bus busnum val from 0x%x to 0x%x\n",
|
||||
target_bus_cfg->reg_val, busnr_reg);
|
||||
target_bus_cfg->reg_val = busnr_reg;
|
||||
al_pcie_target_bus_set(pcie,
|
||||
target_bus_cfg->reg_val,
|
||||
target_bus_cfg->reg_mask);
|
||||
}
|
||||
|
||||
return pci_base_addr;
|
||||
}
|
||||
|
||||
static int al_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
|
||||
unsigned int devfn, int where, int size,
|
||||
u32 *val)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct al_pcie *pcie = to_al_pcie(pci);
|
||||
unsigned int busnr = bus->number;
|
||||
void __iomem *pci_addr;
|
||||
int rc;
|
||||
|
||||
pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
|
||||
|
||||
rc = dw_pcie_read(pci_addr + where, size, val);
|
||||
|
||||
dev_dbg(pci->dev, "%d-byte config read from %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
|
||||
size, pci_domain_nr(bus), bus->number,
|
||||
PCI_SLOT(devfn), PCI_FUNC(devfn), where,
|
||||
(pci_addr + where), *val);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int al_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
|
||||
unsigned int devfn, int where, int size,
|
||||
u32 val)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct al_pcie *pcie = to_al_pcie(pci);
|
||||
unsigned int busnr = bus->number;
|
||||
void __iomem *pci_addr;
|
||||
int rc;
|
||||
|
||||
pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
|
||||
|
||||
rc = dw_pcie_write(pci_addr + where, size, val);
|
||||
|
||||
dev_dbg(pci->dev, "%d-byte config write to %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
|
||||
size, pci_domain_nr(bus), bus->number,
|
||||
PCI_SLOT(devfn), PCI_FUNC(devfn), where,
|
||||
(pci_addr + where), val);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static void al_pcie_config_prepare(struct al_pcie *pcie)
|
||||
{
|
||||
struct al_pcie_target_bus_cfg *target_bus_cfg;
|
||||
struct pcie_port *pp = &pcie->pci->pp;
|
||||
unsigned int ecam_bus_mask;
|
||||
u32 cfg_control_offset;
|
||||
u8 subordinate_bus;
|
||||
u8 secondary_bus;
|
||||
u32 cfg_control;
|
||||
u32 reg;
|
||||
|
||||
target_bus_cfg = &pcie->target_bus_cfg;
|
||||
|
||||
ecam_bus_mask = (pcie->ecam_size >> 20) - 1;
|
||||
if (ecam_bus_mask > 255) {
|
||||
dev_warn(pcie->dev, "ECAM window size is larger than 256MB. Cutting off at 256\n");
|
||||
ecam_bus_mask = 255;
|
||||
}
|
||||
|
||||
/* This portion is taken from the transaction address */
|
||||
target_bus_cfg->ecam_mask = ecam_bus_mask;
|
||||
/* This portion is taken from the cfg_target_bus reg */
|
||||
target_bus_cfg->reg_mask = ~target_bus_cfg->ecam_mask;
|
||||
target_bus_cfg->reg_val = pp->busn->start & target_bus_cfg->reg_mask;
|
||||
|
||||
al_pcie_target_bus_set(pcie, target_bus_cfg->reg_val,
|
||||
target_bus_cfg->reg_mask);
|
||||
|
||||
secondary_bus = pp->busn->start + 1;
|
||||
subordinate_bus = pp->busn->end;
|
||||
|
||||
/* Set the valid values of secondary and subordinate buses */
|
||||
cfg_control_offset = AXI_BASE_OFFSET + pcie->reg_offsets.ob_ctrl +
|
||||
CFG_CONTROL;
|
||||
|
||||
cfg_control = al_pcie_controller_readl(pcie, cfg_control_offset);
|
||||
|
||||
reg = cfg_control &
|
||||
~(CFG_CONTROL_SEC_BUS_MASK | CFG_CONTROL_SUBBUS_MASK);
|
||||
|
||||
reg |= FIELD_PREP(CFG_CONTROL_SUBBUS_MASK, subordinate_bus) |
|
||||
FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus);
|
||||
|
||||
al_pcie_controller_writel(pcie, cfg_control_offset, reg);
|
||||
}
|
||||
|
||||
static int al_pcie_host_init(struct pcie_port *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct al_pcie *pcie = to_al_pcie(pci);
|
||||
int rc;
|
||||
|
||||
rc = al_pcie_rev_id_get(pcie, &pcie->controller_rev_id);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
rc = al_pcie_reg_offsets_set(pcie);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
al_pcie_config_prepare(pcie);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct dw_pcie_host_ops al_pcie_host_ops = {
|
||||
.rd_other_conf = al_pcie_rd_other_conf,
|
||||
.wr_other_conf = al_pcie_wr_other_conf,
|
||||
.host_init = al_pcie_host_init,
|
||||
};
|
||||
|
||||
static int al_add_pcie_port(struct pcie_port *pp,
|
||||
struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
int ret;
|
||||
|
||||
pp->ops = &al_pcie_host_ops;
|
||||
|
||||
ret = dw_pcie_host_init(pp);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to initialize host\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct dw_pcie_ops dw_pcie_ops = {
|
||||
};
|
||||
|
||||
static int al_pcie_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct resource *controller_res;
|
||||
struct resource *ecam_res;
|
||||
struct resource *dbi_res;
|
||||
struct al_pcie *al_pcie;
|
||||
struct dw_pcie *pci;
|
||||
|
||||
al_pcie = devm_kzalloc(dev, sizeof(*al_pcie), GFP_KERNEL);
|
||||
if (!al_pcie)
|
||||
return -ENOMEM;
|
||||
|
||||
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
|
||||
if (!pci)
|
||||
return -ENOMEM;
|
||||
|
||||
pci->dev = dev;
|
||||
pci->ops = &dw_pcie_ops;
|
||||
|
||||
al_pcie->pci = pci;
|
||||
al_pcie->dev = dev;
|
||||
|
||||
dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
|
||||
pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_res);
|
||||
if (IS_ERR(pci->dbi_base)) {
|
||||
dev_err(dev, "couldn't remap dbi base %pR\n", dbi_res);
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
}
|
||||
|
||||
ecam_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
|
||||
if (!ecam_res) {
|
||||
dev_err(dev, "couldn't find 'config' reg in DT\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
al_pcie->ecam_size = resource_size(ecam_res);
|
||||
|
||||
controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"controller");
|
||||
al_pcie->controller_base = devm_ioremap_resource(dev, controller_res);
|
||||
if (IS_ERR(al_pcie->controller_base)) {
|
||||
dev_err(dev, "couldn't remap controller base %pR\n",
|
||||
controller_res);
|
||||
return PTR_ERR(al_pcie->controller_base);
|
||||
}
|
||||
|
||||
dev_dbg(dev, "From DT: dbi_base: %pR, controller_base: %pR\n",
|
||||
dbi_res, controller_res);
|
||||
|
||||
platform_set_drvdata(pdev, al_pcie);
|
||||
|
||||
return al_add_pcie_port(&pci->pp, pdev);
|
||||
}
|
||||
|
||||
static const struct of_device_id al_pcie_of_match[] = {
|
||||
{ .compatible = "amazon,al-alpine-v2-pcie",
|
||||
},
|
||||
{ .compatible = "amazon,al-alpine-v3-pcie",
|
||||
},
|
||||
{},
|
||||
};
|
||||
|
||||
static struct platform_driver al_pcie_driver = {
|
||||
.driver = {
|
||||
.name = "al-pcie",
|
||||
.of_match_table = al_pcie_of_match,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = al_pcie_probe,
|
||||
};
|
||||
builtin_platform_driver(al_pcie_driver);
|
||||
|
||||
#endif /* CONFIG_PCIE_AL*/
|
||||
|
@@ -118,11 +118,10 @@ static int armada8k_pcie_setup_phys(struct armada8k_pcie *pcie)

for (i = 0; i < ARMADA8K_PCIE_MAX_LANES; i++) {
pcie->phy[i] = devm_of_phy_get_by_index(dev, node, i);
if (IS_ERR(pcie->phy[i]) &&
(PTR_ERR(pcie->phy[i]) == -EPROBE_DEFER))
return PTR_ERR(pcie->phy[i]);

if (IS_ERR(pcie->phy[i])) {
if (PTR_ERR(pcie->phy[i]) != -ENODEV)
return PTR_ERR(pcie->phy[i]);

pcie->phy[i] = NULL;
continue;
}
@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
|
||||
__dw_pcie_ep_reset_bar(pci, bar, 0);
|
||||
}
|
||||
|
||||
static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
|
||||
u8 cap)
|
||||
{
|
||||
u8 cap_id, next_cap_ptr;
|
||||
u16 reg;
|
||||
|
||||
if (!cap_ptr)
|
||||
return 0;
|
||||
|
||||
reg = dw_pcie_readw_dbi(pci, cap_ptr);
|
||||
cap_id = (reg & 0x00ff);
|
||||
|
||||
if (cap_id > PCI_CAP_ID_MAX)
|
||||
return 0;
|
||||
|
||||
if (cap_id == cap)
|
||||
return cap_ptr;
|
||||
|
||||
next_cap_ptr = (reg & 0xff00) >> 8;
|
||||
return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
|
||||
}
|
||||
|
||||
static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
|
||||
{
|
||||
u8 next_cap_ptr;
|
||||
u16 reg;
|
||||
|
||||
reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
|
||||
next_cap_ptr = (reg & 0x00ff);
|
||||
|
||||
return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
|
||||
}
|
||||
|
||||
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
|
||||
struct pci_epf_header *hdr)
|
||||
{
|
||||
@@ -531,6 +498,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
|
||||
int ret;
|
||||
u32 reg;
|
||||
void *addr;
|
||||
u8 hdr_type;
|
||||
unsigned int nbars;
|
||||
unsigned int offset;
|
||||
struct pci_epc *epc;
|
||||
@@ -595,6 +563,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
|
||||
if (ep->ops->ep_init)
|
||||
ep->ops->ep_init(ep);
|
||||
|
||||
hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
|
||||
if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
|
||||
dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
|
||||
hdr_type);
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
|
||||
if (ret < 0)
|
||||
epc->max_functions = 1;
|
||||
@@ -612,9 +587,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
|
||||
dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
|
||||
ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
|
||||
|
||||
ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
|
||||
ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
|
||||
|
||||
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
|
||||
if (offset) {
|
||||
|
@@ -323,6 +323,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
|
||||
struct pci_bus *child;
|
||||
struct pci_host_bridge *bridge;
|
||||
struct resource *cfg_res;
|
||||
u32 hdr_type;
|
||||
int ret;
|
||||
|
||||
raw_spin_lock_init(&pci->pp.lock);
|
||||
@@ -464,6 +465,21 @@ int dw_pcie_host_init(struct pcie_port *pp)
|
||||
goto err_free_msi;
|
||||
}
|
||||
|
||||
ret = dw_pcie_rd_own_conf(pp, PCI_HEADER_TYPE, 1, &hdr_type);
|
||||
if (ret != PCIBIOS_SUCCESSFUL) {
|
||||
dev_err(pci->dev, "Failed reading PCI_HEADER_TYPE cfg space reg (ret: 0x%x)\n",
|
||||
ret);
|
||||
ret = pcibios_err_to_errno(ret);
|
||||
goto err_free_msi;
|
||||
}
|
||||
if (hdr_type != PCI_HEADER_TYPE_BRIDGE) {
|
||||
dev_err(pci->dev,
|
||||
"PCIe controller is not set to bridge type (hdr_type: 0x%x)!\n",
|
||||
hdr_type);
|
||||
ret = -EIO;
|
||||
goto err_free_msi;
|
||||
}
|
||||
|
||||
pp->root_bus_nr = pp->busn->start;
|
||||
|
||||
bridge->dev.parent = dev;
|
||||
@@ -628,6 +644,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
|
||||
u32 val, ctrl, num_ctrls;
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
|
||||
/*
|
||||
* Enable DBI read-only registers for writing/updating configuration.
|
||||
* Write permission gets disabled towards the end of this function.
|
||||
*/
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
|
||||
dw_pcie_setup(pci);
|
||||
|
||||
if (!pp->ops->msi_host_init) {
|
||||
@@ -650,12 +672,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
|
||||
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
|
||||
|
||||
/* Setup interrupt pins */
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
|
||||
val &= 0xffff00ff;
|
||||
val |= 0x00000100;
|
||||
dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
|
||||
/* Setup bus numbers */
|
||||
val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
|
||||
@@ -687,15 +707,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
|
||||
|
||||
dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);
|
||||
|
||||
/* Enable write permission for the DBI read-only register */
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
/* Program correct class for RC */
|
||||
dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
|
||||
/* Better disable write permission right after the update */
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
|
||||
dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
|
||||
val |= PORT_LOGIC_SPEED_CHANGE;
|
||||
dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
|
||||
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
|
||||
|
@@ -14,6 +14,86 @@
|
||||
|
||||
#include "pcie-designware.h"
|
||||
|
||||
/*
|
||||
* These interfaces resemble the pci_find_*capability() interfaces, but these
|
||||
* are for configuring host controllers, which are bridges *to* PCI devices but
|
||||
* are not PCI devices themselves.
|
||||
*/
|
||||
static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
|
||||
u8 cap)
|
||||
{
|
||||
u8 cap_id, next_cap_ptr;
|
||||
u16 reg;
|
||||
|
||||
if (!cap_ptr)
|
||||
return 0;
|
||||
|
||||
reg = dw_pcie_readw_dbi(pci, cap_ptr);
|
||||
cap_id = (reg & 0x00ff);
|
||||
|
||||
if (cap_id > PCI_CAP_ID_MAX)
|
||||
return 0;
|
||||
|
||||
if (cap_id == cap)
|
||||
return cap_ptr;
|
||||
|
||||
next_cap_ptr = (reg & 0xff00) >> 8;
|
||||
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
|
||||
}
|
||||
|
||||
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
|
||||
{
|
||||
u8 next_cap_ptr;
|
||||
u16 reg;
|
||||
|
||||
reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
|
||||
next_cap_ptr = (reg & 0x00ff);
|
||||
|
||||
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
|
||||
|
||||
static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
|
||||
u8 cap)
|
||||
{
|
||||
u32 header;
|
||||
int ttl;
|
||||
int pos = PCI_CFG_SPACE_SIZE;
|
||||
|
||||
/* minimum 8 bytes per capability */
|
||||
ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
|
||||
|
||||
if (start)
|
||||
pos = start;
|
||||
|
||||
header = dw_pcie_readl_dbi(pci, pos);
|
||||
/*
|
||||
* If we have no capabilities, this is indicated by cap ID,
|
||||
* cap version and next pointer all being 0.
|
||||
*/
|
||||
if (header == 0)
|
||||
return 0;
|
||||
|
||||
while (ttl-- > 0) {
|
||||
if (PCI_EXT_CAP_ID(header) == cap && pos != start)
|
||||
return pos;
|
||||
|
||||
pos = PCI_EXT_CAP_NEXT(header);
|
||||
if (pos < PCI_CFG_SPACE_SIZE)
|
||||
break;
|
||||
|
||||
header = dw_pcie_readl_dbi(pci, pos);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
|
||||
{
|
||||
return dw_pcie_find_next_ext_capability(pci, 0, cap);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
|
||||
|
||||
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
|
||||
{
|
||||
if (!IS_ALIGNED((uintptr_t)addr, size)) {
|
||||
@@ -376,10 +456,11 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
|
||||
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
|
||||
}
|
||||
|
||||
dev_err(pci->dev, "Phy link never came up\n");
|
||||
dev_info(pci->dev, "Phy link never came up\n");
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link);
|
||||
|
||||
int dw_pcie_link_up(struct dw_pcie *pci)
|
||||
{
|
||||
@@ -423,8 +504,10 @@ void dw_pcie_setup(struct dw_pcie *pci)
|
||||
|
||||
|
||||
ret = of_property_read_u32(np, "num-lanes", &lanes);
|
||||
if (ret)
|
||||
lanes = 0;
|
||||
if (ret) {
|
||||
dev_dbg(pci->dev, "property num-lanes isn't found\n");
|
||||
return;
|
||||
}
|
||||
|
||||
/* Set the number of lanes */
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
|
||||
@@ -466,4 +549,11 @@ void dw_pcie_setup(struct dw_pcie *pci)
|
||||
break;
|
||||
}
|
||||
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
|
||||
|
||||
if (of_property_read_bool(np, "snps,enable-cdm-check")) {
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
|
||||
val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
|
||||
PCIE_PL_CHK_REG_CHK_REG_START;
|
||||
dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
|
||||
}
|
||||
}
|
||||
|
@@ -86,6 +86,15 @@
|
||||
#define PCIE_MISC_CONTROL_1_OFF 0x8BC
|
||||
#define PCIE_DBI_RO_WR_EN BIT(0)
|
||||
|
||||
#define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20
|
||||
#define PCIE_PL_CHK_REG_CHK_REG_START BIT(0)
|
||||
#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1)
|
||||
#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR BIT(16)
|
||||
#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR BIT(17)
|
||||
#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE BIT(18)
|
||||
|
||||
#define PCIE_PL_CHK_REG_ERR_ADDR 0xB28
|
||||
|
||||
/*
|
||||
* iATU Unroll-specific register definitions
|
||||
* From 4.80 core version the address translation will be made by unroll
|
||||
@@ -251,6 +260,9 @@ struct dw_pcie {
|
||||
#define to_dw_pcie_from_ep(endpoint) \
|
||||
container_of((endpoint), struct dw_pcie, ep)
|
||||
|
||||
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
|
||||
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
|
||||
|
||||
int dw_pcie_read(void __iomem *addr, int size, u32 *val);
|
||||
int dw_pcie_write(void __iomem *addr, int size, u32 val);
|
||||
|
||||
|
@@ -340,8 +340,8 @@ static int histb_pcie_probe(struct platform_device *pdev)

hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie");
if (IS_ERR(hipcie->vpcie)) {
if (PTR_ERR(hipcie->vpcie) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(hipcie->vpcie) != -ENODEV)
return PTR_ERR(hipcie->vpcie);
hipcie->vpcie = NULL;
}
@@ -436,7 +436,7 @@ static int kirin_pcie_host_init(struct pcie_port *pp)

return 0;
}

static struct dw_pcie_ops kirin_dw_pcie_ops = {
static const struct dw_pcie_ops kirin_dw_pcie_ops = {
.read_dbi = kirin_pcie_read_dbi,
.write_dbi = kirin_pcie_write_dbi,
.link_up = kirin_pcie_link_up,
drivers/pci/controller/dwc/pcie-tegra194.c | 1732 lines (new file; diff suppressed because it is too large)
@@ -43,9 +43,8 @@ static struct pci_config_window *gen_pci_init(struct device *dev,
goto err_out;
}

err = devm_add_action(dev, gen_pci_unmap_cfg, cfg);
err = devm_add_action_or_reset(dev, gen_pci_unmap_cfg, cfg);
if (err) {
gen_pci_unmap_cfg(cfg);
goto err_out;
}
return cfg;
@@ -2809,6 +2809,48 @@ static void put_hvpcibus(struct hv_pcibus_device *hbus)
|
||||
complete(&hbus->remove_event);
|
||||
}
|
||||
|
||||
#define HVPCI_DOM_MAP_SIZE (64 * 1024)
|
||||
static DECLARE_BITMAP(hvpci_dom_map, HVPCI_DOM_MAP_SIZE);
|
||||
|
||||
/*
|
||||
* PCI domain number 0 is used by emulated devices on Gen1 VMs, so define 0
|
||||
* as invalid for passthrough PCI devices of this driver.
|
||||
*/
|
||||
#define HVPCI_DOM_INVALID 0
|
||||
|
||||
/**
|
||||
* hv_get_dom_num() - Get a valid PCI domain number
|
||||
* Check if the PCI domain number is in use, and return another number if
|
||||
* it is in use.
|
||||
*
|
||||
* @dom: Requested domain number
|
||||
*
|
||||
* return: domain number on success, HVPCI_DOM_INVALID on failure
|
||||
*/
|
||||
static u16 hv_get_dom_num(u16 dom)
|
||||
{
|
||||
unsigned int i;
|
||||
|
||||
if (test_and_set_bit(dom, hvpci_dom_map) == 0)
|
||||
return dom;
|
||||
|
||||
for_each_clear_bit(i, hvpci_dom_map, HVPCI_DOM_MAP_SIZE) {
|
||||
if (test_and_set_bit(i, hvpci_dom_map) == 0)
|
||||
return i;
|
||||
}
|
||||
|
||||
return HVPCI_DOM_INVALID;
|
||||
}
|
||||
|
||||
/**
|
||||
* hv_put_dom_num() - Mark the PCI domain number as free
|
||||
* @dom: Domain number to be freed
|
||||
*/
|
||||
static void hv_put_dom_num(u16 dom)
|
||||
{
|
||||
clear_bit(dom, hvpci_dom_map);
|
||||
}
|
||||
|
||||
/**
|
||||
* hv_pci_probe() - New VMBus channel probe, for a root PCI bus
|
||||
* @hdev: VMBus's tracking struct for this root PCI bus
|
||||
@@ -2820,6 +2862,7 @@ static int hv_pci_probe(struct hv_device *hdev,
|
||||
const struct hv_vmbus_device_id *dev_id)
|
||||
{
|
||||
struct hv_pcibus_device *hbus;
|
||||
u16 dom_req, dom;
|
||||
char *name;
|
||||
int ret;
|
||||
|
||||
@@ -2835,19 +2878,34 @@ static int hv_pci_probe(struct hv_device *hdev,
|
||||
hbus->state = hv_pcibus_init;
|
||||
|
||||
/*
|
||||
* The PCI bus "domain" is what is called "segment" in ACPI and
|
||||
* other specs. Pull it from the instance ID, to get something
|
||||
* unique. Bytes 8 and 9 are what is used in Windows guests, so
|
||||
* do the same thing for consistency. Note that, since this code
|
||||
* only runs in a Hyper-V VM, Hyper-V can (and does) guarantee
|
||||
* that (1) the only domain in use for something that looks like
|
||||
* a physical PCI bus (which is actually emulated by the
|
||||
* hypervisor) is domain 0 and (2) there will be no overlap
|
||||
* between domains derived from these instance IDs in the same
|
||||
* VM.
|
||||
* The PCI bus "domain" is what is called "segment" in ACPI and other
|
||||
* specs. Pull it from the instance ID, to get something usually
|
||||
* unique. In rare cases of collision, we will find out another number
|
||||
* not in use.
|
||||
*
|
||||
* Note that, since this code only runs in a Hyper-V VM, Hyper-V
|
||||
* together with this guest driver can guarantee that (1) The only
|
||||
* domain used by Gen1 VMs for something that looks like a physical
|
||||
* PCI bus (which is actually emulated by the hypervisor) is domain 0.
|
||||
* (2) There will be no overlap between domains (after fixing possible
|
||||
* collisions) in the same VM.
|
||||
*/
|
||||
hbus->sysdata.domain = hdev->dev_instance.b[9] |
|
||||
hdev->dev_instance.b[8] << 8;
|
||||
dom_req = hdev->dev_instance.b[5] << 8 | hdev->dev_instance.b[4];
|
||||
dom = hv_get_dom_num(dom_req);
|
||||
|
||||
if (dom == HVPCI_DOM_INVALID) {
|
||||
dev_err(&hdev->device,
|
||||
"Unable to use dom# 0x%hx or other numbers", dom_req);
|
||||
ret = -EINVAL;
|
||||
goto free_bus;
|
||||
}
|
||||
|
||||
if (dom != dom_req)
|
||||
dev_info(&hdev->device,
|
||||
"PCI dom# 0x%hx has collision, using 0x%hx",
|
||||
dom_req, dom);
|
||||
|
||||
hbus->sysdata.domain = dom;
|
||||
|
||||
hbus->hdev = hdev;
|
||||
refcount_set(&hbus->remove_lock, 1);
|
||||
@@ -2862,7 +2920,7 @@ static int hv_pci_probe(struct hv_device *hdev,
|
||||
hbus->sysdata.domain);
|
||||
if (!hbus->wq) {
|
||||
ret = -ENOMEM;
|
||||
goto free_bus;
|
||||
goto free_dom;
|
||||
}
|
||||
|
||||
ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0,
|
||||
@@ -2946,6 +3004,8 @@ close:
|
||||
vmbus_close(hdev->channel);
|
||||
destroy_wq:
|
||||
destroy_workqueue(hbus->wq);
|
||||
free_dom:
|
||||
hv_put_dom_num(hbus->sysdata.domain);
|
||||
free_bus:
|
||||
free_page((unsigned long)hbus);
|
||||
return ret;
|
||||
@@ -3008,8 +3068,8 @@ static int hv_pci_remove(struct hv_device *hdev)
|
||||
/* Remove the bus from PCI's point of view. */
|
||||
pci_lock_rescan_remove();
|
||||
pci_stop_root_bus(hbus->pci_bus);
|
||||
pci_remove_root_bus(hbus->pci_bus);
|
||||
hv_pci_remove_slots(hbus);
|
||||
pci_remove_root_bus(hbus->pci_bus);
|
||||
pci_unlock_rescan_remove();
|
||||
hbus->state = hv_pcibus_removed;
|
||||
}
|
||||
@@ -3027,6 +3087,9 @@ static int hv_pci_remove(struct hv_device *hdev)
|
||||
put_hvpcibus(hbus);
|
||||
wait_for_completion(&hbus->remove_event);
|
||||
destroy_workqueue(hbus->wq);
|
||||
|
||||
hv_put_dom_num(hbus->sysdata.domain);
|
||||
|
||||
free_page((unsigned long)hbus);
|
||||
return 0;
|
||||
}
|
||||
@@ -3058,6 +3121,9 @@ static void __exit exit_hv_pci_drv(void)
|
||||
|
||||
static int __init init_hv_pci_drv(void)
|
||||
{
|
||||
/* Set the invalid domain number's bit, so it will not be used */
|
||||
set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
|
||||
|
||||
/* Initialize PCI block r/w interface */
|
||||
hvpci_block_ops.read_block = hv_read_config_block;
|
||||
hvpci_block_ops.write_block = hv_write_config_block;
|
||||
|
@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
|
||||
err = of_pci_get_devfn(port);
|
||||
if (err < 0) {
|
||||
dev_err(dev, "failed to parse address: %d\n", err);
|
||||
return err;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
index = PCI_SLOT(err);
|
||||
|
||||
if (index < 1 || index > soc->num_ports) {
|
||||
dev_err(dev, "invalid port number: %d\n", index);
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
index--;
|
||||
@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
|
||||
if (err < 0) {
|
||||
dev_err(dev, "failed to parse # of lanes: %d\n",
|
||||
err);
|
||||
return err;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
if (value > 16) {
|
||||
dev_err(dev, "invalid # of lanes: %u\n", value);
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
lanes |= value << (index << 3);
|
||||
@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
|
||||
lane += value;
|
||||
|
||||
rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL);
|
||||
if (!rp)
|
||||
return -ENOMEM;
|
||||
if (!rp) {
|
||||
err = -ENOMEM;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
err = of_address_to_resource(port, 0, &rp->regs);
|
||||
if (err < 0) {
|
||||
dev_err(dev, "failed to parse address: %d\n", err);
|
||||
return err;
|
||||
goto err_node_put;
|
||||
}
|
||||
|
||||
INIT_LIST_HEAD(&rp->list);
|
||||
@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
|
||||
return err;
|
||||
|
||||
return 0;
|
||||
|
||||
err_node_put:
|
||||
of_node_put(port);
|
||||
return err;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@@ -93,12 +93,9 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)

pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges");

/* PHY use is optional */
pcie->phy = devm_phy_get(dev, "pcie-phy");
if (IS_ERR(pcie->phy)) {
if (PTR_ERR(pcie->phy) == -EPROBE_DEFER)
return -EPROBE_DEFER;
pcie->phy = NULL;
}
pcie->phy = devm_phy_optional_get(dev, "pcie-phy");
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);

ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources,
&iobase);
@@ -73,6 +73,7 @@
|
||||
#define PCIE_MSI_VECTOR 0x0c0
|
||||
|
||||
#define PCIE_CONF_VEND_ID 0x100
|
||||
#define PCIE_CONF_DEVICE_ID 0x102
|
||||
#define PCIE_CONF_CLASS_ID 0x106
|
||||
|
||||
#define PCIE_INT_MASK 0x420
|
||||
@@ -141,12 +142,16 @@ struct mtk_pcie_port;
|
||||
/**
|
||||
* struct mtk_pcie_soc - differentiate between host generations
|
||||
* @need_fix_class_id: whether this host's class ID needed to be fixed or not
|
||||
* @need_fix_device_id: whether this host's device ID needed to be fixed or not
|
||||
* @device_id: device ID which this host need to be fixed
|
||||
* @ops: pointer to configuration access functions
|
||||
* @startup: pointer to controller setting functions
|
||||
* @setup_irq: pointer to initialize IRQ functions
|
||||
*/
|
||||
struct mtk_pcie_soc {
|
||||
bool need_fix_class_id;
|
||||
bool need_fix_device_id;
|
||||
unsigned int device_id;
|
||||
struct pci_ops *ops;
|
||||
int (*startup)(struct mtk_pcie_port *port);
|
||||
int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node);
|
||||
@@ -630,8 +635,6 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
|
||||
}
|
||||
|
||||
chained_irq_exit(irqchip, desc);
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
|
||||
@@ -696,6 +699,9 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
|
||||
writew(val, port->base + PCIE_CONF_CLASS_ID);
|
||||
}
|
||||
|
||||
if (soc->need_fix_device_id)
|
||||
writew(soc->device_id, port->base + PCIE_CONF_DEVICE_ID);
|
||||
|
||||
/* 100ms timeout value should be enough for Gen1/2 training */
|
||||
err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
|
||||
!!(val & PCIE_PORT_LINKUP_V2), 20,
|
||||
@@ -1216,11 +1222,21 @@ static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
|
||||
.setup_irq = mtk_pcie_setup_irq,
|
||||
};
|
||||
|
||||
static const struct mtk_pcie_soc mtk_pcie_soc_mt7629 = {
|
||||
.need_fix_class_id = true,
|
||||
.need_fix_device_id = true,
|
||||
.device_id = PCI_DEVICE_ID_MEDIATEK_7629,
|
||||
.ops = &mtk_pcie_ops_v2,
|
||||
.startup = mtk_pcie_startup_port_v2,
|
||||
.setup_irq = mtk_pcie_setup_irq,
|
||||
};
|
||||
|
||||
static const struct of_device_id mtk_pcie_ids[] = {
|
||||
{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
|
||||
{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
|
||||
{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 },
|
||||
{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 },
|
||||
{ .compatible = "mediatek,mt7629-pcie", .data = &mtk_pcie_soc_mt7629 },
|
||||
{},
|
||||
};
|
||||
|
||||
|
@@ -88,6 +88,7 @@
|
||||
#define AMAP_CTRL_TYPE_MASK 3
|
||||
|
||||
#define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win)
|
||||
#define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win)
|
||||
#define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win)
|
||||
#define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win)
|
||||
#define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win)
|
||||
@@ -462,7 +463,7 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
|
||||
}
|
||||
|
||||
static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
|
||||
u64 pci_addr, u32 type, u64 size)
|
||||
u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
|
||||
{
|
||||
u32 value;
|
||||
u64 size64 = ~(size - 1);
|
||||
@@ -482,7 +483,10 @@ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
|
||||
csr_writel(pcie, upper_32_bits(size64),
|
||||
PAB_EXT_PEX_AMAP_SIZEN(win_num));
|
||||
|
||||
csr_writel(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num));
|
||||
csr_writel(pcie, lower_32_bits(cpu_addr),
|
||||
PAB_PEX_AMAP_AXI_WIN(win_num));
|
||||
csr_writel(pcie, upper_32_bits(cpu_addr),
|
||||
PAB_EXT_PEX_AMAP_AXI_WIN(win_num));
|
||||
|
||||
csr_writel(pcie, lower_32_bits(pci_addr),
|
||||
PAB_PEX_AMAP_PEX_WIN_L(win_num));
|
||||
@@ -624,7 +628,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
|
||||
CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res));
|
||||
|
||||
/* memory inbound translation window */
|
||||
program_ib_windows(pcie, WIN_NUM_0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
|
||||
program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
|
||||
|
||||
/* Get the I/O and memory ranges from DT */
|
||||
resource_list_for_each_entry(win, &pcie->resources) {
|
||||
|
@@ -608,29 +608,29 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)

rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
if (IS_ERR(rockchip->vpcie12v)) {
if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
return PTR_ERR(rockchip->vpcie12v);
dev_info(dev, "no vpcie12v regulator found\n");
}

rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
if (IS_ERR(rockchip->vpcie3v3)) {
if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV)
return PTR_ERR(rockchip->vpcie3v3);
dev_info(dev, "no vpcie3v3 regulator found\n");
}

rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8");
if (IS_ERR(rockchip->vpcie1v8)) {
if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(rockchip->vpcie1v8) != -ENODEV)
return PTR_ERR(rockchip->vpcie1v8);
dev_info(dev, "no vpcie1v8 regulator found\n");
}

rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9");
if (IS_ERR(rockchip->vpcie0v9)) {
if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(rockchip->vpcie0v9) != -ENODEV)
return PTR_ERR(rockchip->vpcie0v9);
dev_info(dev, "no vpcie0v9 regulator found\n");
}
@@ -31,6 +31,9 @@
|
||||
#define PCI_REG_VMLOCK 0x70
|
||||
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
|
||||
|
||||
#define MB2_SHADOW_OFFSET 0x2000
|
||||
#define MB2_SHADOW_SIZE 16
|
||||
|
||||
enum vmd_features {
|
||||
/*
|
||||
* Device may contain registers which hint the physical location of the
|
||||
@@ -94,6 +97,7 @@ struct vmd_dev {
|
||||
struct resource resources[3];
|
||||
struct irq_domain *irq_domain;
|
||||
struct pci_bus *bus;
|
||||
u8 busn_start;
|
||||
|
||||
struct dma_map_ops dma_ops;
|
||||
struct dma_domain dma_domain;
|
||||
@@ -440,7 +444,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
|
||||
unsigned int devfn, int reg, int len)
|
||||
{
|
||||
char __iomem *addr = vmd->cfgbar +
|
||||
(bus->number << 20) + (devfn << 12) + reg;
|
||||
((bus->number - vmd->busn_start) << 20) +
|
||||
(devfn << 12) + reg;
|
||||
|
||||
if ((addr - vmd->cfgbar) + len >=
|
||||
resource_size(&vmd->dev->resource[VMD_CFGBAR]))
|
||||
@@ -563,7 +568,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
|
||||
unsigned long flags;
|
||||
LIST_HEAD(resources);
|
||||
resource_size_t offset[2] = {0};
|
||||
resource_size_t membar2_offset = 0x2000, busn_start = 0;
|
||||
resource_size_t membar2_offset = 0x2000;
|
||||
struct pci_bus *child;
|
||||
|
||||
/*
|
||||
@@ -576,7 +581,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
|
||||
u32 vmlock;
|
||||
int ret;
|
||||
|
||||
membar2_offset = 0x2018;
|
||||
membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE;
|
||||
ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock);
|
||||
if (ret || vmlock == ~0)
|
||||
return -ENODEV;
|
||||
@@ -588,9 +593,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
|
||||
if (!membar2)
|
||||
return -ENOMEM;
|
||||
offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
|
||||
readq(membar2 + 0x2008);
|
||||
readq(membar2 + MB2_SHADOW_OFFSET);
|
||||
offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
|
||||
readq(membar2 + 0x2010);
|
||||
readq(membar2 + MB2_SHADOW_OFFSET + 8);
|
||||
pci_iounmap(vmd->dev, membar2);
|
||||
}
|
||||
}
|
||||
@@ -606,14 +611,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
|
||||
pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig);
|
||||
if (BUS_RESTRICT_CAP(vmcap) &&
|
||||
(BUS_RESTRICT_CFG(vmconfig) == 0x1))
|
||||
busn_start = 128;
|
||||
vmd->busn_start = 128;
|
||||
}
|
||||
|
||||
res = &vmd->dev->resource[VMD_CFGBAR];
|
||||
vmd->resources[0] = (struct resource) {
|
||||
.name = "VMD CFGBAR",
|
||||
.start = busn_start,
|
||||
.end = busn_start + (resource_size(res) >> 20) - 1,
|
||||
.start = vmd->busn_start,
|
||||
.end = vmd->busn_start + (resource_size(res) >> 20) - 1,
|
||||
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
|
||||
};
|
||||
|
||||
@@ -681,8 +686,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
|
||||
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
|
||||
pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
|
||||
|
||||
vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops,
|
||||
sd, &resources);
|
||||
vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
|
||||
&vmd_ops, sd, &resources);
|
||||
if (!vmd->bus) {
|
||||
pci_free_resource_list(&resources);
|
||||
irq_domain_remove(vmd->irq_domain);
|
||||
|