Merge tag 'pci-v4.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:

 - Decode AER errors with names similar to "lspci" (Tyler Baicar)
 - Expose AER statistics in sysfs (Rajat Jain)
 - Clear AER status bits selectively based on the type of recovery (Oza Pawandeep)
 - Honor "pcie_ports=native" even if HEST sets FIRMWARE_FIRST (Alexandru Gagniuc)
 - Don't clear AER status bits if we're using the "Firmware-First" strategy where firmware owns the registers (Alexandru Gagniuc)
 - Use sysfs_match_string() to simplify ASPM sysfs parsing (Andy Shevchenko)
 - Remove unnecessary includes of <linux/pci-aspm.h> (Bjorn Helgaas)
 - Defer DPC event handling to work queue (Keith Busch)
 - Use threaded IRQ for DPC bottom half (Keith Busch)
 - Print AER status while handling DPC events (Keith Busch)
 - Work around IDT switch ACS Source Validation erratum (James Puthukattukaran)
 - Emit diagnostics for all cases of PCIe Link downtraining (Links operating slower than they're capable of) (Alexandru Gagniuc)
 - Skip VFs when configuring Max Payload Size (Myron Stowe)
 - Reduce Root Port Max Payload Size if necessary when hot-adding a device below it (Myron Stowe)
 - Simplify SHPC existence/permission checks (Bjorn Helgaas)
 - Remove hotplug sample skeleton driver (Lukas Wunner)
 - Convert pciehp to threaded IRQ handling (Lukas Wunner)
 - Improve pciehp tolerance of missed events and initially unstable links (Lukas Wunner)
 - Clear spurious pciehp events on resume (Lukas Wunner)
 - Add pciehp runtime PM support, including for Thunderbolt controllers (Lukas Wunner)
 - Support interrupts from pciehp bridges in D3hot (Lukas Wunner)
 - Mark fall-through switch cases before enabling -Wimplicit-fallthrough (Gustavo A. R. Silva)
 - Move DMA-debug PCI init from arch code to PCI core (Christoph Hellwig)
 - Fix pci_request_irq() usage of IRQF_ONESHOT when no handler is supplied (Heiner Kallweit)
 - Unify PCI and DMA direction #defines (Shunyong Yang)
 - Add PCI_DEVICE_DATA() macro (Andy Shevchenko)
 - Check for VPD completion before checking for timeout (Bert Kenward)
 - Limit Netronome NFP5000 config space size to work around erratum (Jakub Kicinski)
 - Set IRQCHIP_ONESHOT_SAFE for PCI MSI irqchips (Heiner Kallweit)
 - Document ACPI description of PCI host bridges (Bjorn Helgaas)
 - Add "pci=disable_acs_redir=" parameter to disable ACS redirection for peer-to-peer DMA support (we don't have the peer-to-peer support yet; this is just one piece) (Logan Gunthorpe)
 - Clean up devm_of_pci_get_host_bridge_resources() resource allocation (Jan Kiszka)
 - Fixup resizable BARs after suspend/resume (Christian König)
 - Make "pci=earlydump" generic (Sinan Kaya)
 - Fix ROM BAR access routines to stay in bounds and check for signature correctly (Rex Zhu)
 - Add DMA alias quirk for Microsemi Switchtec NTB (Doug Meyer)
 - Expand documentation for pci_add_dma_alias() (Logan Gunthorpe)
 - To avoid bus errors, enable PASID only if entire path supports End-End TLP prefixes (Sinan Kaya)
 - Unify slot and bus reset functions and remove hotplug knowledge from callers (Sinan Kaya)
 - Add Function-Level Reset quirks for Intel and Samsung NVMe devices to fix guest reboot issues (Alex Williamson)
 - Add function 1 DMA alias quirk for Marvell 88SS9183 PCIe SSD Controller (Bjorn Helgaas)
 - Remove Xilinx AXI-PCIe host bridge arch dependency (Palmer Dabbelt)
 - Remove Aardvark outbound window configuration (Evan Wang)
 - Fix Aardvark bridge window sizing issue (Zachary Zhang)
 - Convert Aardvark to use pci_host_probe() to reduce code duplication (Thomas Petazzoni)
 - Correct the Cadence cdns_pcie_writel() signature (Alan Douglas)
 - Add Cadence support for optional generic PHYs (Alan Douglas)
 - Add Cadence power management ops (Alan Douglas)
 - Remove redundant variable from Cadence driver (Colin Ian King)
 - Add Kirin MSI support (Xiaowei Song)
 - Drop unnecessary root_bus_nr setting from exynos, imx6, keystone, armada8k, artpec6, designware-plat, histb, qcom, spear13xx (Shawn Guo)
 - Move link notification settings from DesignWare core to individual drivers (Gustavo Pimentel)
 - Add endpoint library MSI-X interfaces (Gustavo Pimentel)
 - Correct signature of endpoint library IRQ interfaces (Gustavo Pimentel)
 - Add DesignWare endpoint library MSI-X callbacks (Gustavo Pimentel)
 - Add endpoint library MSI-X test support (Gustavo Pimentel)
 - Remove unnecessary GFP_ATOMIC from Hyper-V "new child" allocation (Jia-Ju Bai)
 - Add more devices to Broadcom PAXC quirk (Ray Jui)
 - Work around corrupted Broadcom PAXC config space to enable SMMU and GICv3 ITS (Ray Jui)
 - Disable MSI parsing to work around broken Broadcom PAXC logic in some devices (Ray Jui)
 - Hide unconfigured functions to work around a Broadcom PAXC defect (Ray Jui)
 - Lower iproc log level to reduce console output during boot (Ray Jui)
 - Fix mobiveil iomem/phys_addr_t type usage (Lorenzo Pieralisi)
 - Fix mobiveil missing include file (Lorenzo Pieralisi)
 - Add mobiveil Kconfig/Makefile support (Lorenzo Pieralisi)
 - Fix mvebu I/O space remapping issues (Thomas Petazzoni)
 - Use generic pci_host_bridge in mvebu instead of ARM-specific API (Thomas Petazzoni)
 - Whitelist VMD devices with fast interrupt handlers to avoid sharing vectors with slow handlers (Keith Busch)

* tag 'pci-v4.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (153 commits)
  PCI/AER: Don't clear AER bits if error handling is Firmware-First
  PCI: Limit config space size for Netronome NFP5000
  PCI/MSI: Set IRQCHIP_ONESHOT_SAFE for PCI-MSI irqchips
  PCI/VPD: Check for VPD access completion before checking for timeout
  PCI: Add PCI_DEVICE_DATA() macro to fully describe device ID entry
  PCI: Match Root Port's MPS to endpoint's MPSS as necessary
  PCI: Skip MPS logic for Virtual Functions (VFs)
  PCI: Add function 1 DMA alias quirk for Marvell 88SS9183
  PCI: Check for PCIe Link downtraining
  PCI: Add ACS Redirect disable quirk for Intel Sunrise Point
  PCI: Add device-specific ACS Redirect disable infrastructure
  PCI: Convert device-specific ACS quirks from NULL termination to ARRAY_SIZE
  PCI: Add "pci=disable_acs_redir=" parameter for peer-to-peer support
  PCI: Allow specifying devices using a base bus and path of devfns
  PCI: Make specifying PCI devices in kernel parameters reusable
  PCI: Hide ACS quirk declarations inside PCI core
  PCI: Delay after FLR of Intel DC P3700 NVMe
  PCI: Disable Samsung SM961/PM961 NVMe before FLR
  PCI: Export pcie_has_flr()
  PCI: mvebu: Drop bogus comment above mvebu_pcie_map_registers()
  ...
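The PCI_DEVICE_DATA() item above is easiest to see with a short sketch.
The macro fills a pci_device_id entry, including its driver_data, in one
step; everything below except the macro itself (driver name, chip enum,
choice of device) is hypothetical, not taken from this series:

    /* Hedged sketch of PCI_DEVICE_DATA() usage; names are made up. */
    #include <linux/module.h>
    #include <linux/pci.h>

    enum my_chip { MY_CHIP_A, MY_CHIP_B };

    static const struct pci_device_id my_pci_ids[] = {
	    /* expands to vendor/device IDs plus .driver_data = MY_CHIP_A */
	    { PCI_DEVICE_DATA(REALTEK, 8139, MY_CHIP_A) },
	    { }	/* terminating entry */
    };
    MODULE_DEVICE_TABLE(pci, my_pci_ids);

    static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
	    /* retrieve the per-chip value stashed by PCI_DEVICE_DATA() */
	    enum my_chip chip = id->driver_data;

	    return chip == MY_CHIP_A ? 0 : -ENODEV;
    }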
@@ -102,7 +102,7 @@ config PCI_HOST_GENERIC
 
 config PCIE_XILINX
 	bool "Xilinx AXI PCIe host bridge support"
-	depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) || COMPILE_TEST
+	depends on OF || COMPILE_TEST
 	help
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.
@@ -239,6 +239,16 @@ config PCIE_MEDIATEK
 	  Say Y here if you want to enable PCIe controller support on
 	  MediaTek SoCs.
 
+config PCIE_MOBIVEIL
+	bool "Mobiveil AXI PCIe controller"
+	depends on ARCH_ZYNQMP || COMPILE_TEST
+	depends on OF
+	depends on PCI_MSI_IRQ_DOMAIN
+	help
+	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
+	  Soft IP. It has up to 8 outbound and inbound windows
+	  for address translation and it is a PCIe Gen4 IP.
+
 config PCIE_TANGO_SMP8759
 	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
 	depends on ARCH_TANGO && PCI_MSI && OF
@@ -26,6 +26,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
 obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
+obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
 obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
 obj-$(CONFIG_VMD) += vmd.o
 
 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
@@ -370,7 +370,7 @@ static void dra7xx_pcie_raise_msi_irq(struct dra7xx_pcie *dra7xx,
 }
 
 static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
-				 enum pci_epc_irq_type type, u8 interrupt_num)
+				 enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
@@ -421,7 +421,6 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &exynos_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -667,7 +667,6 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &imx6_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -347,7 +347,6 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &keystone_pcie_host_ops;
 	ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np);
 	if (ret) {
@@ -172,7 +172,6 @@ static int armada8k_add_pcie_port(struct armada8k_pcie *pcie,
 	struct device *dev = &pdev->dev;
 	int ret;
 
-	pp->root_bus_nr = -1;
 	pp->ops = &armada8k_pcie_host_ops;
 
 	pp->irq = platform_get_irq(pdev, 0);
@@ -399,7 +399,6 @@ static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
 		}
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &artpec6_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -428,7 +427,7 @@ static void artpec6_pcie_ep_init(struct dw_pcie_ep *ep)
 }
 
 static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
-				  enum pci_epc_irq_type type, u8 interrupt_num)
+				  enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
@@ -40,6 +40,39 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 		__dw_pcie_ep_reset_bar(pci, bar, 0);
 }
 
+static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
+				     u8 cap)
+{
+	u8 cap_id, next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, cap_ptr);
+	next_cap_ptr = (reg & 0xff00) >> 8;
+	cap_id = (reg & 0x00ff);
+
+	if (!next_cap_ptr || cap_id > PCI_CAP_ID_MAX)
+		return 0;
+
+	if (cap_id == cap)
+		return cap_ptr;
+
+	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
+}
+
+static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
+{
+	u8 next_cap_ptr;
+	u16 reg;
+
+	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
+	next_cap_ptr = (reg & 0x00ff);
+
+	if (!next_cap_ptr)
+		return 0;
+
+	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
+}
+
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
 				   struct pci_epf_header *hdr)
 {
@@ -213,36 +246,84 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no,
 
 static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no)
 {
-	int val;
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
 
-	val = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
-	if (!(val & MSI_CAP_MSI_EN_MASK))
+	if (!ep->msi_cap)
 		return -EINVAL;
 
-	val = (val & MSI_CAP_MME_MASK) >> MSI_CAP_MME_SHIFT;
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	if (!(val & PCI_MSI_FLAGS_ENABLE))
+		return -EINVAL;
+
+	val = (val & PCI_MSI_FLAGS_QSIZE) >> 4;
 
 	return val;
 }
 
-static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 encode_int)
+static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
 {
-	int val;
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
 
-	val = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
-	val &= ~MSI_CAP_MMC_MASK;
-	val |= (encode_int << MSI_CAP_MMC_SHIFT) & MSI_CAP_MMC_MASK;
+	if (!ep->msi_cap)
+		return -EINVAL;
+
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	val &= ~PCI_MSI_FLAGS_QMASK;
+	val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK;
 	dw_pcie_dbi_ro_wr_en(pci);
-	dw_pcie_writew_dbi(pci, MSI_MESSAGE_CONTROL, val);
+	dw_pcie_writew_dbi(pci, reg, val);
 	dw_pcie_dbi_ro_wr_dis(pci);
 
 	return 0;
 }
 
+static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
+{
+	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
+
+	if (!ep->msix_cap)
+		return -EINVAL;
+
+	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	if (!(val & PCI_MSIX_FLAGS_ENABLE))
+		return -EINVAL;
+
+	val &= PCI_MSIX_FLAGS_QSIZE;
+
+	return val;
+}
+
+static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
+{
+	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	u32 val, reg;
+
+	if (!ep->msix_cap)
+		return -EINVAL;
+
+	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	val = dw_pcie_readw_dbi(pci, reg);
+	val &= ~PCI_MSIX_FLAGS_QSIZE;
+	val |= interrupts;
+	dw_pcie_dbi_ro_wr_en(pci);
+	dw_pcie_writew_dbi(pci, reg, val);
+	dw_pcie_dbi_ro_wr_dis(pci);
+
+	return 0;
+}
+
 static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no,
-				enum pci_epc_irq_type type, u8 interrupt_num)
+				enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 
@@ -282,32 +363,52 @@ static const struct pci_epc_ops epc_ops = {
 	.unmap_addr = dw_pcie_ep_unmap_addr,
 	.set_msi = dw_pcie_ep_set_msi,
 	.get_msi = dw_pcie_ep_get_msi,
+	.set_msix = dw_pcie_ep_set_msix,
+	.get_msix = dw_pcie_ep_get_msix,
 	.raise_irq = dw_pcie_ep_raise_irq,
 	.start = dw_pcie_ep_start,
 	.stop = dw_pcie_ep_stop,
 };
 
+int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct device *dev = pci->dev;
+
+	dev_err(dev, "EP cannot trigger legacy IRQs\n");
+
+	return -EINVAL;
+}
+
 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 			     u8 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct pci_epc *epc = ep->epc;
 	u16 msg_ctrl, msg_data;
-	u32 msg_addr_lower, msg_addr_upper;
+	u32 msg_addr_lower, msg_addr_upper, reg;
 	u64 msg_addr;
 	bool has_upper;
 	int ret;
 
+	if (!ep->msi_cap)
+		return -EINVAL;
+
 	/* Raise MSI per the PCI Local Bus Specification Revision 3.0, 6.8.1. */
-	msg_ctrl = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
+	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	msg_ctrl = dw_pcie_readw_dbi(pci, reg);
 	has_upper = !!(msg_ctrl & PCI_MSI_FLAGS_64BIT);
-	msg_addr_lower = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_L32);
+	reg = ep->msi_cap + PCI_MSI_ADDRESS_LO;
+	msg_addr_lower = dw_pcie_readl_dbi(pci, reg);
 	if (has_upper) {
-		msg_addr_upper = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_U32);
-		msg_data = dw_pcie_readw_dbi(pci, MSI_MESSAGE_DATA_64);
+		reg = ep->msi_cap + PCI_MSI_ADDRESS_HI;
+		msg_addr_upper = dw_pcie_readl_dbi(pci, reg);
+		reg = ep->msi_cap + PCI_MSI_DATA_64;
+		msg_data = dw_pcie_readw_dbi(pci, reg);
 	} else {
 		msg_addr_upper = 0;
-		msg_data = dw_pcie_readw_dbi(pci, MSI_MESSAGE_DATA_32);
+		reg = ep->msi_cap + PCI_MSI_DATA_32;
+		msg_data = dw_pcie_readw_dbi(pci, reg);
 	}
 	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
@@ -322,6 +423,64 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 	return 0;
 }
 
+int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+			      u16 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct pci_epc *epc = ep->epc;
+	u16 tbl_offset, bir;
+	u32 bar_addr_upper, bar_addr_lower;
+	u32 msg_addr_upper, msg_addr_lower;
+	u32 reg, msg_data, vec_ctrl;
+	u64 tbl_addr, msg_addr, reg_u64;
+	void __iomem *msix_tbl;
+	int ret;
+
+	reg = ep->msix_cap + PCI_MSIX_TABLE;
+	tbl_offset = dw_pcie_readl_dbi(pci, reg);
+	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
+	tbl_offset &= PCI_MSIX_TABLE_OFFSET;
+	tbl_offset >>= 3;
+
+	reg = PCI_BASE_ADDRESS_0 + (4 * bir);
+	bar_addr_upper = 0;
+	bar_addr_lower = dw_pcie_readl_dbi(pci, reg);
+	reg_u64 = (bar_addr_lower & PCI_BASE_ADDRESS_MEM_TYPE_MASK);
+	if (reg_u64 == PCI_BASE_ADDRESS_MEM_TYPE_64)
+		bar_addr_upper = dw_pcie_readl_dbi(pci, reg + 4);
+
+	tbl_addr = ((u64) bar_addr_upper) << 32 | bar_addr_lower;
+	tbl_addr += (tbl_offset + ((interrupt_num - 1) * PCI_MSIX_ENTRY_SIZE));
+	tbl_addr &= PCI_BASE_ADDRESS_MEM_MASK;
+
+	msix_tbl = ioremap_nocache(ep->phys_base + tbl_addr,
+				   PCI_MSIX_ENTRY_SIZE);
+	if (!msix_tbl)
+		return -EINVAL;
+
+	msg_addr_lower = readl(msix_tbl + PCI_MSIX_ENTRY_LOWER_ADDR);
+	msg_addr_upper = readl(msix_tbl + PCI_MSIX_ENTRY_UPPER_ADDR);
+	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
+	msg_data = readl(msix_tbl + PCI_MSIX_ENTRY_DATA);
+	vec_ctrl = readl(msix_tbl + PCI_MSIX_ENTRY_VECTOR_CTRL);
+
+	iounmap(msix_tbl);
+
+	if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT)
+		return -EPERM;
+
+	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
+				  epc->mem->page_size);
+	if (ret)
+		return ret;
+
+	writel(msg_data, ep->msi_mem);
+
+	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
+
+	return 0;
+}
+
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
 	struct pci_epc *epc = ep->epc;
@@ -386,15 +545,18 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		return -ENOMEM;
 	ep->outbound_addr = addr;
 
-	if (ep->ops->ep_init)
-		ep->ops->ep_init(ep);
-
 	epc = devm_pci_epc_create(dev, &epc_ops);
 	if (IS_ERR(epc)) {
 		dev_err(dev, "Failed to create epc device\n");
 		return PTR_ERR(epc);
 	}
 
+	ep->epc = epc;
+	epc_set_drvdata(epc, ep);
+
+	if (ep->ops->ep_init)
+		ep->ops->ep_init(ep);
+
 	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
 	if (ret < 0)
 		epc->max_functions = 1;
@@ -409,15 +571,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
 					     epc->mem->page_size);
 	if (!ep->msi_mem) {
-		dev_err(dev, "Failed to reserve memory for MSI\n");
+		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
+	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
 
-	epc->features = EPC_FEATURE_NO_LINKUP_NOTIFIER;
-	EPC_FEATURE_SET_BAR(epc->features, BAR_0);
+	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
 
-	ep->epc = epc;
-	epc_set_drvdata(epc, ep);
 	dw_pcie_setup(pci);
 
 	return 0;
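The MSI-X callbacks above plug into the endpoint-core interfaces added
in the same series (pci_epc_get_msix(), pci_epc_set_msix(), and the
pci_epc_raise_irq() signature widened to u16). A rough sketch of how an
endpoint function driver might drive them follows; epf_signal_host()
and its error policy are illustrative only, not code from this series:

    /* Sketch only: raise MSI-X vector 'vector' (1-based) toward the host. */
    #include <linux/pci-epc.h>
    #include <linux/pci-epf.h>

    static int epf_signal_host(struct pci_epf *epf, u16 vector)
    {
	    struct pci_epc *epc = epf->epc;
	    int nr;

	    /* how many MSI-X vectors did the host enable for this function? */
	    nr = pci_epc_get_msix(epc, epf->func_no);
	    if (nr <= 0 || vector > nr)
		    return -EINVAL;

	    /* ends up in the controller's raise_irq op, e.g.
	     * dw_pcie_ep_raise_msix_irq() above */
	    return pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, vector);
    }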
@@ -70,24 +70,29 @@ static const struct dw_pcie_ops dw_pcie_ops = {
 static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct pci_epc *epc = ep->epc;
 	enum pci_barno bar;
 
 	for (bar = BAR_0; bar <= BAR_5; bar++)
 		dw_pcie_ep_reset_bar(pci, bar);
+
+	epc->features |= EPC_FEATURE_NO_LINKUP_NOTIFIER;
+	epc->features |= EPC_FEATURE_MSIX_AVAILABLE;
 }
 
 static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 				     enum pci_epc_irq_type type,
-				     u8 interrupt_num)
+				     u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
 	switch (type) {
 	case PCI_EPC_IRQ_LEGACY:
-		dev_err(pci->dev, "EP cannot trigger legacy IRQs\n");
-		return -EINVAL;
+		return dw_pcie_ep_raise_legacy_irq(ep, func_no);
 	case PCI_EPC_IRQ_MSI:
 		return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+	case PCI_EPC_IRQ_MSIX:
+		return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
 	default:
 		dev_err(pci->dev, "UNKNOWN IRQ type\n");
 	}
@@ -118,7 +123,6 @@ static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie,
 		return pp->msi_irq;
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &dw_plat_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -96,17 +96,6 @@
 #define PCIE_GET_ATU_INB_UNR_REG_OFFSET(region) \
 		((0x3 << 20) | ((region) << 9) | (0x1 << 8))
 
-#define MSI_MESSAGE_CONTROL	0x52
-#define MSI_CAP_MMC_SHIFT	1
-#define MSI_CAP_MMC_MASK	(7 << MSI_CAP_MMC_SHIFT)
-#define MSI_CAP_MME_SHIFT	4
-#define MSI_CAP_MSI_EN_MASK	0x1
-#define MSI_CAP_MME_MASK	(7 << MSI_CAP_MME_SHIFT)
-#define MSI_MESSAGE_ADDR_L32	0x54
-#define MSI_MESSAGE_ADDR_U32	0x58
-#define MSI_MESSAGE_DATA_32	0x58
-#define MSI_MESSAGE_DATA_64	0x5C
-
 #define MAX_MSI_IRQS		256
 #define MAX_MSI_IRQS_PER_CTRL	32
 #define MAX_MSI_CTRLS		(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
@@ -191,7 +180,7 @@ enum dw_pcie_as_type {
 struct dw_pcie_ep_ops {
 	void	(*ep_init)(struct dw_pcie_ep *ep);
 	int	(*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
-			     enum pci_epc_irq_type type, u8 interrupt_num);
+			     enum pci_epc_irq_type type, u16 interrupt_num);
 };
 
 struct dw_pcie_ep {
@@ -208,6 +197,8 @@ struct dw_pcie_ep {
 	u32			num_ob_windows;
 	void __iomem		*msi_mem;
 	phys_addr_t		msi_mem_phys;
+	u8			msi_cap;	/* MSI capability offset */
+	u8			msix_cap;	/* MSI-X capability offset */
 };
 
 struct dw_pcie_ops {
@@ -357,8 +348,11 @@ static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
 int dw_pcie_ep_init(struct dw_pcie_ep *ep);
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
+int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no);
 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 			     u8 interrupt_num);
+int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+			      u16 interrupt_num);
 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
 #else
 static inline void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
@@ -374,12 +368,23 @@ static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
 }
 
+static inline int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no)
+{
+	return 0;
+}
+
 static inline int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 					   u8 interrupt_num)
 {
 	return 0;
 }
 
+static inline int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+					    u16 interrupt_num)
+{
+	return 0;
+}
+
 static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 {
 }
@@ -420,7 +420,6 @@ static int histb_pcie_probe(struct platform_device *pdev)
 		phy_init(hipcie->phy);
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &histb_pcie_host_ops;
 
 	platform_set_drvdata(pdev, hipcie);
@@ -430,6 +430,9 @@ static int kirin_pcie_host_init(struct pcie_port *pp)
 {
 	kirin_pcie_establish_link(pp);
 
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		dw_pcie_msi_init(pp);
+
 	return 0;
 }
 
@@ -445,9 +448,34 @@ static const struct dw_pcie_host_ops kirin_pcie_host_ops = {
 	.host_init = kirin_pcie_host_init,
 };
 
+static int kirin_pcie_add_msi(struct dw_pcie *pci,
+			      struct platform_device *pdev)
+{
+	int irq;
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		irq = platform_get_irq(pdev, 0);
+		if (irq < 0) {
+			dev_err(&pdev->dev,
+				"failed to get MSI IRQ (%d)\n", irq);
+			return irq;
+		}
+
+		pci->pp.msi_irq = irq;
+	}
+
+	return 0;
+}
+
 static int __init kirin_add_pcie_port(struct dw_pcie *pci,
 				      struct platform_device *pdev)
 {
+	int ret;
+
+	ret = kirin_pcie_add_msi(pci, pdev);
+	if (ret)
+		return ret;
+
 	pci->pp.ops = &kirin_pcie_host_ops;
 
 	return dw_pcie_host_init(&pci->pp);
@@ -1251,7 +1251,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
-	pp->root_bus_nr = -1;
 	pp->ops = &qcom_pcie_dw_ops;
 
 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
@@ -210,7 +210,6 @@ static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie,
 		return ret;
 	}
 
-	pp->root_bus_nr = -1;
 	pp->ops = &spear13xx_pcie_host_ops;
 
 	ret = dw_pcie_host_init(pp);
@@ -111,24 +111,6 @@
 #define PCIE_MSI_MASK_REG		(CONTROL_BASE_ADDR + 0x5C)
 #define PCIE_MSI_PAYLOAD_REG		(CONTROL_BASE_ADDR + 0x9C)
 
-/* PCIe window configuration */
-#define OB_WIN_BASE_ADDR		0x4c00
-#define OB_WIN_BLOCK_SIZE		0x20
-#define OB_WIN_REG_ADDR(win, offset)	(OB_WIN_BASE_ADDR + \
-					 OB_WIN_BLOCK_SIZE * (win) + \
-					 (offset))
-#define OB_WIN_MATCH_LS(win)		OB_WIN_REG_ADDR(win, 0x00)
-#define OB_WIN_MATCH_MS(win)		OB_WIN_REG_ADDR(win, 0x04)
-#define OB_WIN_REMAP_LS(win)		OB_WIN_REG_ADDR(win, 0x08)
-#define OB_WIN_REMAP_MS(win)		OB_WIN_REG_ADDR(win, 0x0c)
-#define OB_WIN_MASK_LS(win)		OB_WIN_REG_ADDR(win, 0x10)
-#define OB_WIN_MASK_MS(win)		OB_WIN_REG_ADDR(win, 0x14)
-#define OB_WIN_ACTIONS(win)		OB_WIN_REG_ADDR(win, 0x18)
-
-/* PCIe window types */
-#define OB_PCIE_MEM			0x0
-#define OB_PCIE_IO			0x4
-
 /* LMI registers base address and register offsets */
 #define LMI_BASE_ADDR			0x6000
 #define CFG_REG				(LMI_BASE_ADDR + 0x0)
@@ -247,34 +229,9 @@ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
 	return -ETIMEDOUT;
 }
 
-/*
- * Set PCIe address window register which could be used for memory
- * mapping.
- */
-static void advk_pcie_set_ob_win(struct advk_pcie *pcie,
-				 u32 win_num, u32 match_ms,
-				 u32 match_ls, u32 mask_ms,
-				 u32 mask_ls, u32 remap_ms,
-				 u32 remap_ls, u32 action)
-{
-	advk_writel(pcie, match_ls, OB_WIN_MATCH_LS(win_num));
-	advk_writel(pcie, match_ms, OB_WIN_MATCH_MS(win_num));
-	advk_writel(pcie, mask_ms, OB_WIN_MASK_MS(win_num));
-	advk_writel(pcie, mask_ls, OB_WIN_MASK_LS(win_num));
-	advk_writel(pcie, remap_ms, OB_WIN_REMAP_MS(win_num));
-	advk_writel(pcie, remap_ls, OB_WIN_REMAP_LS(win_num));
-	advk_writel(pcie, action, OB_WIN_ACTIONS(win_num));
-	advk_writel(pcie, match_ls | BIT(0), OB_WIN_MATCH_LS(win_num));
-}
-
 static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 {
 	u32 reg;
-	int i;
-
-	/* Point PCIe unit MBUS decode windows to DRAM space */
-	for (i = 0; i < 8; i++)
-		advk_pcie_set_ob_win(pcie, i, 0, 0, 0, 0, 0, 0, 0);
 
 	/* Set to Direct mode */
 	reg = advk_readl(pcie, CTRL_CONFIG_REG);
@@ -433,6 +390,15 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
 	return -ETIMEDOUT;
 }
 
+static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+				   int devfn)
+{
+	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0)
+		return false;
+
+	return true;
+}
+
 static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 			     int where, int size, u32 *val)
 {
@@ -440,7 +406,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 	u32 reg;
 	int ret;
 
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) {
+	if (!advk_pcie_valid_device(pcie, bus, devfn)) {
 		*val = 0xffffffff;
 		return PCIBIOS_DEVICE_NOT_FOUND;
 	}
@@ -494,7 +460,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 	int offset;
 	int ret;
 
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0)
+	if (!advk_pcie_valid_device(pcie, bus, devfn))
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
 	if (where % size)
@@ -843,12 +809,6 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
 
 		switch (resource_type(res)) {
 		case IORESOURCE_IO:
-			advk_pcie_set_ob_win(pcie, 1,
-					     upper_32_bits(res->start),
-					     lower_32_bits(res->start),
-					     0, 0xF8000000, 0,
-					     lower_32_bits(res->start),
-					     OB_PCIE_IO);
 			err = devm_pci_remap_iospace(dev, res, iobase);
 			if (err) {
 				dev_warn(dev, "error %d: failed to map resource %pR\n",
@@ -857,12 +817,6 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
 			}
 			break;
 		case IORESOURCE_MEM:
-			advk_pcie_set_ob_win(pcie, 0,
-					     upper_32_bits(res->start),
-					     lower_32_bits(res->start),
-					     0x0, 0xF8000000, 0,
-					     lower_32_bits(res->start),
-					     (2 << 20) | OB_PCIE_MEM);
 			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
 			break;
 		case IORESOURCE_BUS:
@@ -889,7 +843,6 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct advk_pcie *pcie;
 	struct resource *res;
-	struct pci_bus *bus, *child;
 	struct pci_host_bridge *bridge;
 	int ret, irq;
 
@@ -943,21 +896,13 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	bridge->map_irq = of_irq_parse_and_map_pci;
 	bridge->swizzle_irq = pci_common_swizzle;
 
-	ret = pci_scan_root_bus_bridge(bridge);
+	ret = pci_host_probe(bridge);
 	if (ret < 0) {
 		advk_pcie_remove_msi_irq_domain(pcie);
 		advk_pcie_remove_irq_domain(pcie);
 		return ret;
 	}
 
-	bus = bridge->bus;
-
-	pci_bus_assign_resources(bus);
-
-	list_for_each_entry(child, &bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(bus);
 	return 0;
 }
 
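For reference, pci_host_probe() bundles exactly the scan/assign/add
boilerplate deleted above. The sketch below reconstructs the equivalent
sequence from the removed lines; the real helper in drivers/pci/probe.c
may differ in details, so treat this as an approximation only:

    /* Approximation of what pci_host_probe(bridge) replaces here,
     * derived from the code removed above; not the verbatim core code. */
    static int host_probe_like(struct pci_host_bridge *bridge)
    {
	    struct pci_bus *bus, *child;
	    int ret;

	    ret = pci_scan_root_bus_bridge(bridge);
	    if (ret < 0)
		    return ret;

	    bus = bridge->bus;
	    pci_bus_assign_resources(bus);
	    list_for_each_entry(child, &bus->children, node)
		    pcie_bus_configure_settings(child);
	    pci_bus_add_devices(bus);

	    return 0;
    }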
@@ -1546,7 +1546,7 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
 	unsigned long flags;
 	int ret;
 
-	hpdev = kzalloc(sizeof(*hpdev), GFP_ATOMIC);
+	hpdev = kzalloc(sizeof(*hpdev), GFP_KERNEL);
 	if (!hpdev)
 		return NULL;
 
@@ -125,6 +125,7 @@ struct mvebu_pcie {
 	struct platform_device *pdev;
 	struct mvebu_pcie_port *ports;
 	struct msi_controller *msi;
+	struct list_head resources;
 	struct resource io;
 	struct resource realio;
 	struct resource mem;
@@ -800,7 +801,7 @@ static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie,
 static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 			      int where, int size, u32 val)
 {
-	struct mvebu_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
 	int ret;
 
@@ -826,7 +827,7 @@ static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
 			      int size, u32 *val)
 {
-	struct mvebu_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
 	int ret;
 
@@ -857,36 +858,6 @@ static struct pci_ops mvebu_pcie_ops = {
 	.write = mvebu_pcie_wr_conf,
 };
 
-static int mvebu_pcie_setup(int nr, struct pci_sys_data *sys)
-{
-	struct mvebu_pcie *pcie = sys_to_pcie(sys);
-	int err, i;
-
-	pcie->mem.name = "PCI MEM";
-	pcie->realio.name = "PCI I/O";
-
-	if (resource_size(&pcie->realio) != 0)
-		pci_add_resource_offset(&sys->resources, &pcie->realio,
-					sys->io_offset);
-
-	pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
-	pci_add_resource(&sys->resources, &pcie->busn);
-
-	err = devm_request_pci_bus_resources(&pcie->pdev->dev, &sys->resources);
-	if (err)
-		return 0;
-
-	for (i = 0; i < pcie->nports; i++) {
-		struct mvebu_pcie_port *port = &pcie->ports[i];
-
-		if (!port->base)
-			continue;
-		mvebu_pcie_setup_hw(port);
-	}
-
-	return 1;
-}
-
 static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
 						 const struct resource *res,
 						 resource_size_t start,
@@ -917,31 +888,6 @@ static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
 	return start;
 }
 
-static void mvebu_pcie_enable(struct mvebu_pcie *pcie)
-{
-	struct hw_pci hw;
-
-	memset(&hw, 0, sizeof(hw));
-
-#ifdef CONFIG_PCI_MSI
-	hw.msi_ctrl = pcie->msi;
-#endif
-
-	hw.nr_controllers = 1;
-	hw.private_data = (void **)&pcie;
-	hw.setup = mvebu_pcie_setup;
-	hw.map_irq = of_irq_parse_and_map_pci;
-	hw.ops = &mvebu_pcie_ops;
-	hw.align_resource = mvebu_pcie_align_resource;
-
-	pci_common_init_dev(&pcie->pdev->dev, &hw);
-}
-
 /*
  * Looks up the list of register addresses encoded into the reg =
  * <...> property for one that matches the given port/lane. Once
 * found, maps it.
 */
 static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
 					      struct device_node *np,
 					      struct mvebu_pcie_port *port)
@@ -1190,38 +1136,19 @@ static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port)
 	clk_disable_unprepare(port->clk);
 }
 
-static int mvebu_pcie_probe(struct platform_device *pdev)
+/*
+ * We can't use devm_of_pci_get_host_bridge_resources() because we
+ * need to parse our special DT properties encoding the MEM and IO
+ * apertures.
+ */
+static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie)
 {
-	struct device *dev = &pdev->dev;
-	struct mvebu_pcie *pcie;
+	struct device *dev = &pcie->pdev->dev;
 	struct device_node *np = dev->of_node;
-	struct device_node *child;
-	int num, i, ret;
+	unsigned int i;
+	int ret;
 
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
-		return -ENOMEM;
-
-	pcie->pdev = pdev;
-	platform_set_drvdata(pdev, pcie);
-
-	/* Get the PCIe memory and I/O aperture */
-	mvebu_mbus_get_pcie_mem_aperture(&pcie->mem);
-	if (resource_size(&pcie->mem) == 0) {
-		dev_err(dev, "invalid memory aperture size\n");
-		return -EINVAL;
-	}
-
-	mvebu_mbus_get_pcie_io_aperture(&pcie->io);
-
-	if (resource_size(&pcie->io) != 0) {
-		pcie->realio.flags = pcie->io.flags;
-		pcie->realio.start = PCIBIOS_MIN_IO;
-		pcie->realio.end = min_t(resource_size_t,
-					 IO_SPACE_LIMIT,
-					 resource_size(&pcie->io));
-	} else
-		pcie->realio = pcie->io;
+	INIT_LIST_HEAD(&pcie->resources);
 
 	/* Get the bus range */
 	ret = of_pci_parse_bus_range(np, &pcie->busn);
@@ -1229,6 +1156,58 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 		dev_err(dev, "failed to parse bus-range property: %d\n", ret);
 		return ret;
 	}
+	pci_add_resource(&pcie->resources, &pcie->busn);
+
+	/* Get the PCIe memory aperture */
+	mvebu_mbus_get_pcie_mem_aperture(&pcie->mem);
+	if (resource_size(&pcie->mem) == 0) {
+		dev_err(dev, "invalid memory aperture size\n");
+		return -EINVAL;
+	}
+
+	pcie->mem.name = "PCI MEM";
+	pci_add_resource(&pcie->resources, &pcie->mem);
+
+	/* Get the PCIe IO aperture */
+	mvebu_mbus_get_pcie_io_aperture(&pcie->io);
+
+	if (resource_size(&pcie->io) != 0) {
+		pcie->realio.flags = pcie->io.flags;
+		pcie->realio.start = PCIBIOS_MIN_IO;
+		pcie->realio.end = min_t(resource_size_t,
+					 IO_SPACE_LIMIT - SZ_64K,
+					 resource_size(&pcie->io) - 1);
+		pcie->realio.name = "PCI I/O";
+
+		for (i = 0; i < resource_size(&pcie->realio); i += SZ_64K)
+			pci_ioremap_io(i, pcie->io.start + i);
+
+		pci_add_resource(&pcie->resources, &pcie->realio);
+	}
+
+	return devm_request_pci_bus_resources(dev, &pcie->resources);
+}
+
+static int mvebu_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct mvebu_pcie *pcie;
+	struct pci_host_bridge *bridge;
+	struct device_node *np = dev->of_node;
+	struct device_node *child;
+	int num, i, ret;
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct mvebu_pcie));
+	if (!bridge)
+		return -ENOMEM;
+
+	pcie = pci_host_bridge_priv(bridge);
+	pcie->pdev = pdev;
+	platform_set_drvdata(pdev, pcie);
+
+	ret = mvebu_pcie_parse_request_resources(pcie);
+	if (ret)
+		return ret;
 
 	num = of_get_available_child_count(np);
 
@@ -1272,20 +1251,24 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 			continue;
 		}
 
+		mvebu_pcie_setup_hw(port);
 		mvebu_pcie_set_local_dev_nr(port, 1);
 		mvebu_sw_pci_bridge_init(port);
 	}
 
 	pcie->nports = i;
 
-	for (i = 0; i < (IO_SPACE_LIMIT - SZ_64K); i += SZ_64K)
-		pci_ioremap_io(i, pcie->io.start + i);
+	list_splice_init(&pcie->resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = pcie;
+	bridge->busnr = 0;
+	bridge->ops = &mvebu_pcie_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
+	bridge->align_resource = mvebu_pcie_align_resource;
+	bridge->msi = pcie->msi;
 
-	mvebu_pcie_enable(pcie);
-
-	platform_set_drvdata(pdev, pcie);
-
-	return 0;
+	return pci_host_probe(bridge);
 }
 
 static const struct of_device_id mvebu_pcie_of_match_table[] = {
@@ -238,7 +238,7 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
-	u16 flags, mmc, mme;
+	u16 flags, mme;
 
 	/* Validate that the MSI feature is actually enabled. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
@@ -249,7 +249,6 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
 	 * Get the Multiple Message Enable bitfield from the Message Control
 	 * register.
 	 */
-	mmc = (flags & PCI_MSI_FLAGS_QMASK) >> 1;
 	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
 
 	return mme;
@@ -363,7 +362,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
 }
 
 static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
-				  enum pci_epc_irq_type type, u8 interrupt_num)
+				  enum pci_epc_irq_type type,
+				  u16 interrupt_num)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 
@@ -439,6 +439,7 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 	struct pci_epc *epc;
 	struct resource *res;
 	int ret;
+	int phy_count;
 
 	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
 	if (!ep)
@@ -473,6 +474,12 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 	if (!ep->ob_addr)
 		return -ENOMEM;
 
+	ret = cdns_pcie_init_phy(dev, pcie);
+	if (ret) {
+		dev_err(dev, "failed to init phy\n");
+		return ret;
+	}
+	platform_set_drvdata(pdev, pcie);
 	pm_runtime_enable(dev);
 	ret = pm_runtime_get_sync(dev);
 	if (ret < 0) {
@@ -521,6 +528,10 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 
  err_get_sync:
 	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+	phy_count = pcie->phy_count;
+	while (phy_count--)
+		device_link_del(pcie->link[phy_count]);
 
 	return ret;
 }
@@ -528,6 +539,7 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
 static void cdns_pcie_ep_shutdown(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
 	int ret;
 
 	ret = pm_runtime_put_sync(dev);
@@ -536,13 +548,14 @@ static void cdns_pcie_ep_shutdown(struct platform_device *pdev)
 
 	pm_runtime_disable(dev);
 
-	/* The PCIe controller can't be disabled. */
+	cdns_pcie_disable_phy(pcie);
 }
 
 static struct platform_driver cdns_pcie_ep_driver = {
 	.driver = {
 		.name = "cdns-pcie-ep",
 		.of_match_table = cdns_pcie_ep_of_match,
+		.pm = &cdns_pcie_pm_ops,
 	},
 	.probe = cdns_pcie_ep_probe,
 	.shutdown = cdns_pcie_ep_shutdown,
@@ -58,6 +58,11 @@ static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
 
 		return pcie->reg_base + (where & 0xfff);
 	}
+	/* Check that the link is up */
+	if (!(cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE) & 0x1))
+		return NULL;
+	/* Clear AXI link-down status */
+	cdns_pcie_writel(pcie, CDNS_PCIE_AT_LINKDOWN, 0x0);
 
 	/* Update Output registers for AXI region 0. */
 	addr0 = CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
@@ -239,6 +244,7 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 	struct cdns_pcie *pcie;
 	struct resource *res;
 	int ret;
+	int phy_count;
 
 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
 	if (!bridge)
@@ -290,6 +296,13 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 	}
 	pcie->mem_res = res;
 
+	ret = cdns_pcie_init_phy(dev, pcie);
+	if (ret) {
+		dev_err(dev, "failed to init phy\n");
+		return ret;
+	}
+	platform_set_drvdata(pdev, pcie);
+
 	pm_runtime_enable(dev);
 	ret = pm_runtime_get_sync(dev);
 	if (ret < 0) {
@@ -322,15 +335,35 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
 
  err_get_sync:
 	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+	phy_count = pcie->phy_count;
+	while (phy_count--)
+		device_link_del(pcie->link[phy_count]);
 
 	return ret;
 }
 
+static void cdns_pcie_shutdown(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	ret = pm_runtime_put_sync(dev);
+	if (ret < 0)
+		dev_dbg(dev, "pm_runtime_put_sync failed\n");
+
+	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(pcie);
+}
+
 static struct platform_driver cdns_pcie_host_driver = {
 	.driver = {
 		.name = "cdns-pcie-host",
 		.of_match_table = cdns_pcie_host_of_match,
+		.pm = &cdns_pcie_pm_ops,
 	},
 	.probe = cdns_pcie_host_probe,
+	.shutdown = cdns_pcie_shutdown,
 };
 builtin_platform_driver(cdns_pcie_host_driver);
@@ -124,3 +124,126 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
 }
+
+void cdns_pcie_disable_phy(struct cdns_pcie *pcie)
+{
+	int i = pcie->phy_count;
+
+	while (i--) {
+		phy_power_off(pcie->phy[i]);
+		phy_exit(pcie->phy[i]);
+	}
+}
+
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie)
+{
+	int ret;
+	int i;
+
+	for (i = 0; i < pcie->phy_count; i++) {
+		ret = phy_init(pcie->phy[i]);
+		if (ret < 0)
+			goto err_phy;
+
+		ret = phy_power_on(pcie->phy[i]);
+		if (ret < 0) {
+			phy_exit(pcie->phy[i]);
+			goto err_phy;
+		}
+	}
+
+	return 0;
+
+err_phy:
+	while (--i >= 0) {
+		phy_power_off(pcie->phy[i]);
+		phy_exit(pcie->phy[i]);
+	}
+
+	return ret;
+}
+
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
+{
+	struct device_node *np = dev->of_node;
+	int phy_count;
+	struct phy **phy;
+	struct device_link **link;
+	int i;
+	int ret;
+	const char *name;
+
+	phy_count = of_property_count_strings(np, "phy-names");
+	if (phy_count < 1) {
+		dev_err(dev, "no phy-names.  PHY will not be initialized\n");
+		pcie->phy_count = 0;
+		return 0;
+	}
+
+	phy = devm_kzalloc(dev, sizeof(*phy) * phy_count, GFP_KERNEL);
+	if (!phy)
+		return -ENOMEM;
+
+	link = devm_kzalloc(dev, sizeof(*link) * phy_count, GFP_KERNEL);
+	if (!link)
+		return -ENOMEM;
+
+	for (i = 0; i < phy_count; i++) {
+		of_property_read_string_index(np, "phy-names", i, &name);
+		phy[i] = devm_phy_optional_get(dev, name);
+		if (IS_ERR(phy))
+			return PTR_ERR(phy);
+
+		link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
+		if (!link[i]) {
+			ret = -EINVAL;
+			goto err_link;
+		}
+	}
+
+	pcie->phy_count = phy_count;
+	pcie->phy = phy;
+	pcie->link = link;
+
+	ret = cdns_pcie_enable_phy(pcie);
+	if (ret)
+		goto err_link;
+
+	return 0;
+
+err_link:
+	while (--i >= 0)
+		device_link_del(link[i]);
+
+	return ret;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int cdns_pcie_suspend_noirq(struct device *dev)
+{
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+
+	cdns_pcie_disable_phy(pcie);
+
+	return 0;
+}
+
+static int cdns_pcie_resume_noirq(struct device *dev)
+{
+	struct cdns_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	ret = cdns_pcie_enable_phy(pcie);
+	if (ret) {
+		dev_err(dev, "failed to enable phy\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+const struct dev_pm_ops cdns_pcie_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
+				      cdns_pcie_resume_noirq)
+};
@@ -8,6 +8,7 @@
 
 #include <linux/kernel.h>
 #include <linux/pci.h>
+#include <linux/phy/phy.h>
 
 /*
  * Local Management Registers
@@ -165,6 +166,9 @@
 #define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
 	(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
 
+/* AXI link down register */
+#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
+
 enum cdns_pcie_rp_bar {
 	RP_BAR0,
 	RP_BAR1,
@@ -229,6 +233,9 @@ struct cdns_pcie {
 	struct resource		*mem_res;
 	bool			is_rc;
 	u8			bus;
+	int			phy_count;
+	struct phy		**phy;
+	struct device_link	**link;
 };
 
 /* Register access */
@@ -279,7 +286,7 @@ static inline void cdns_pcie_ep_fn_writew(struct cdns_pcie *pcie, u8 fn,
 }
 
 static inline void cdns_pcie_ep_fn_writel(struct cdns_pcie *pcie, u8 fn,
-					  u32 reg, u16 value)
+					  u32 reg, u32 value)
 {
 	writel(value, pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg);
 }
@@ -307,5 +314,9 @@ void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, u8 fn,
 				  u32 r, u64 cpu_addr);
 
 void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
+void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+extern const struct dev_pm_ops cdns_pcie_pm_ops;
 
 #endif /* _PCIE_CADENCE_H */
@@ -85,6 +85,8 @@
 #define IMAP_VALID_SHIFT	0
 #define IMAP_VALID		BIT(IMAP_VALID_SHIFT)
 
+#define IPROC_PCI_PM_CAP	0x48
+#define IPROC_PCI_PM_CAP_MASK	0xffff
 #define IPROC_PCI_EXP_CAP	0xac
 
 #define IPROC_PCIE_REG_INVALID	0xffff
@@ -375,6 +377,17 @@ static const u16 iproc_pcie_reg_paxc_v2[] = {
 	[IPROC_PCIE_CFG_DATA]		= 0x1fc,
 };
 
+/*
+ * List of device IDs of controllers that have corrupted capability list that
+ * require SW fixup
+ */
+static const u16 iproc_pcie_corrupt_cap_did[] = {
+	0x16cd,
+	0x16f0,
+	0xd802,
+	0xd804
+};
+
 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus)
 {
 	struct iproc_pcie *pcie = bus->sysdata;
@@ -495,6 +508,49 @@ static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p)
 	return data;
 }
 
+static void iproc_pcie_fix_cap(struct iproc_pcie *pcie, int where, u32 *val)
+{
+	u32 i, dev_id;
+
+	switch (where & ~0x3) {
+	case PCI_VENDOR_ID:
+		dev_id = *val >> 16;
+
+		/*
+		 * Activate fixup for those controllers that have corrupted
+		 * capability list registers
+		 */
+		for (i = 0; i < ARRAY_SIZE(iproc_pcie_corrupt_cap_did); i++)
+			if (dev_id == iproc_pcie_corrupt_cap_did[i])
+				pcie->fix_paxc_cap = true;
+		break;
+
+	case IPROC_PCI_PM_CAP:
+		if (pcie->fix_paxc_cap) {
+			/* advertise PM, force next capability to PCIe */
+			*val &= ~IPROC_PCI_PM_CAP_MASK;
+			*val |= IPROC_PCI_EXP_CAP << 8 | PCI_CAP_ID_PM;
+		}
+		break;
+
+	case IPROC_PCI_EXP_CAP:
+		if (pcie->fix_paxc_cap) {
+			/* advertise root port, version 2, terminate here */
+			*val = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2) << 16 |
+				PCI_CAP_ID_EXP;
+		}
+		break;
+
+	case IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL:
+		/* Don't advertise CRS SV support */
+		*val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);
+		break;
+
+	default:
+		break;
+	}
+}
+
 static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 				  int where, int size, u32 *val)
 {
@@ -509,13 +565,10 @@ static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 	/* root complex access */
 	if (busno == 0) {
 		ret = pci_generic_config_read32(bus, devfn, where, size, val);
-		if (ret != PCIBIOS_SUCCESSFUL)
-			return ret;
+		if (ret == PCIBIOS_SUCCESSFUL)
+			iproc_pcie_fix_cap(pcie, where, val);
 
-		/* Don't advertise CRS SV support */
-		if ((where & ~0x3) == IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL)
-			*val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);
-		return PCIBIOS_SUCCESSFUL;
+		return ret;
 	}
 
 	cfg_data_p = iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where);
@@ -529,6 +582,25 @@ static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 	if (size <= 2)
 		*val = (data >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);
 
+	/*
+	 * For PAXC and PAXCv2, the total number of PFs that one can enumerate
+	 * depends on the firmware configuration. Unfortunately, due to an ASIC
+	 * bug, unconfigured PFs cannot be properly hidden from the root
+	 * complex. As a result, write access to these PFs will cause bus lock
+	 * up on the embedded processor
+	 *
+	 * Since all unconfigured PFs are left with an incorrect, staled device
+	 * ID of 0x168e (PCI_DEVICE_ID_NX2_57810), we try to catch those access
+	 * early here and reject them all
+	 */
+#define DEVICE_ID_MASK     0xffff0000
+#define DEVICE_ID_SHIFT    16
+	if (pcie->rej_unconfig_pf &&
+	    (where & CFG_ADDR_REG_NUM_MASK) == PCI_VENDOR_ID)
+		if ((*val & DEVICE_ID_MASK) ==
+		    (PCI_DEVICE_ID_NX2_57810 << DEVICE_ID_SHIFT))
+			return PCIBIOS_FUNC_NOT_SUPPORTED;
+
 	return PCIBIOS_SUCCESSFUL;
 }
 
@@ -628,7 +700,7 @@ static int iproc_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
 	struct iproc_pcie *pcie = iproc_data(bus);
 
 	iproc_pcie_apb_err_disable(bus, true);
-	if (pcie->type == IPROC_PCIE_PAXB_V2)
+	if (pcie->iproc_cfg_read)
 		ret = iproc_pcie_config_read(bus, devfn, where, size, val);
 	else
 		ret = pci_generic_config_read32(bus, devfn, where, size, val);
@@ -808,14 +880,14 @@ static inline int iproc_pcie_ob_write(struct iproc_pcie *pcie, int window_idx,
 	writel(lower_32_bits(pci_addr), pcie->base + omap_offset);
 	writel(upper_32_bits(pci_addr), pcie->base + omap_offset + 4);
 
-	dev_info(dev, "ob window [%d]: offset 0x%x axi %pap pci %pap\n",
-		 window_idx, oarr_offset, &axi_addr, &pci_addr);
-	dev_info(dev, "oarr lo 0x%x oarr hi 0x%x\n",
-		 readl(pcie->base + oarr_offset),
-		 readl(pcie->base + oarr_offset + 4));
-	dev_info(dev, "omap lo 0x%x omap hi 0x%x\n",
-		 readl(pcie->base + omap_offset),
-		 readl(pcie->base + omap_offset + 4));
+	dev_dbg(dev, "ob window [%d]: offset 0x%x axi %pap pci %pap\n",
+		window_idx, oarr_offset, &axi_addr, &pci_addr);
+	dev_dbg(dev, "oarr lo 0x%x oarr hi 0x%x\n",
+		readl(pcie->base + oarr_offset),
+		readl(pcie->base + oarr_offset + 4));
+	dev_dbg(dev, "omap lo 0x%x omap hi 0x%x\n",
+		readl(pcie->base + omap_offset),
+		readl(pcie->base + omap_offset + 4));
 
 	return 0;
 }
@@ -982,8 +1054,8 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx,
 	    iproc_pcie_reg_is_invalid(imap_offset))
 		return -EINVAL;
 
-	dev_info(dev, "ib region [%d]: offset 0x%x axi %pap pci %pap\n",
-		 region_idx, iarr_offset, &axi_addr, &pci_addr);
+	dev_dbg(dev, "ib region [%d]: offset 0x%x axi %pap pci %pap\n",
+		region_idx, iarr_offset, &axi_addr, &pci_addr);
 
 	/*
 	 * Program the IARR registers. The upper 32-bit IARR register is
@@ -993,9 +1065,9 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx,
 	       pcie->base + iarr_offset);
 	writel(upper_32_bits(pci_addr), pcie->base + iarr_offset + 4);
 
-	dev_info(dev, "iarr lo 0x%x iarr hi 0x%x\n",
-		 readl(pcie->base + iarr_offset),
-		 readl(pcie->base + iarr_offset + 4));
+	dev_dbg(dev, "iarr lo 0x%x iarr hi 0x%x\n",
+		readl(pcie->base + iarr_offset),
+		readl(pcie->base + iarr_offset + 4));
 
 	/*
 	 * Now program the IMAP registers. Each IARR region may have one or
@@ -1009,10 +1081,10 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx,
 		writel(upper_32_bits(axi_addr),
 		       pcie->base + imap_offset + ib_map->imap_addr_offset);
 
-		dev_info(dev, "imap window [%d] lo 0x%x hi 0x%x\n",
-			 window_idx, readl(pcie->base + imap_offset),
-			 readl(pcie->base + imap_offset +
-			       ib_map->imap_addr_offset));
+		dev_dbg(dev, "imap window [%d] lo 0x%x hi 0x%x\n",
+			window_idx, readl(pcie->base + imap_offset),
+			readl(pcie->base + imap_offset +
+			      ib_map->imap_addr_offset));
 
 		imap_offset += ib_map->imap_window_offset;
 		axi_addr += size;
@@ -1144,10 +1216,22 @@ static int iproc_pcie_paxb_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr)
 	return ret;
 }
 
-static void iproc_pcie_paxc_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr)
+static void iproc_pcie_paxc_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr,
+					 bool enable)
 {
 	u32 val;
 
+	if (!enable) {
+		/*
+		 * Disable PAXC MSI steering. All write transfers will be
+		 * treated as non-MSI transfers
+		 */
+		val = iproc_pcie_read_reg(pcie, IPROC_PCIE_MSI_EN_CFG);
+		val &= ~MSI_ENABLE_CFG;
+		iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_EN_CFG, val);
+		return;
+	}
+
 	/*
 	 * Program bits [43:13] of address of GITS_TRANSLATER register into
 	 * bits [30:0] of the MSI base address register. In fact, in all iProc
@@ -1201,7 +1285,7 @@ static int iproc_pcie_msi_steer(struct iproc_pcie *pcie,
 			return ret;
 		break;
 	case IPROC_PCIE_PAXC_V2:
-		iproc_pcie_paxc_v2_msi_steer(pcie, msi_addr);
+		iproc_pcie_paxc_v2_msi_steer(pcie, msi_addr, true);
 		break;
 	default:
 		return -EINVAL;
@@ -1271,6 +1355,7 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 		break;
 	case IPROC_PCIE_PAXB:
 		regs = iproc_pcie_reg_paxb;
+		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
 			pcie->ob_map = paxb_ob_map;
@@ -1293,10 +1378,14 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 	case IPROC_PCIE_PAXC:
 		regs = iproc_pcie_reg_paxc;
 		pcie->ep_is_internal = true;
+		pcie->iproc_cfg_read = true;
+		pcie->rej_unconfig_pf = true;
 		break;
 	case IPROC_PCIE_PAXC_V2:
 		regs = iproc_pcie_reg_paxc_v2;
 		pcie->ep_is_internal = true;
+		pcie->iproc_cfg_read = true;
+		pcie->rej_unconfig_pf = true;
 		pcie->need_msi_steer = true;
 		break;
 	default:
@@ -1427,6 +1516,24 @@ int iproc_pcie_remove(struct iproc_pcie *pcie)
 }
 EXPORT_SYMBOL(iproc_pcie_remove);
 
+/*
+ * The MSI parsing logic in certain revisions of Broadcom PAXC based root
+ * complex does not work and needs to be disabled
+ */
+static void quirk_paxc_disable_msi_parsing(struct pci_dev *pdev)
+{
+	struct iproc_pcie *pcie = iproc_data(pdev->bus);
+
+	if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
+		iproc_pcie_paxc_v2_msi_steer(pcie, 0, false);
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0,
+			quirk_paxc_disable_msi_parsing);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd802,
+			quirk_paxc_disable_msi_parsing);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0xd804,
+			quirk_paxc_disable_msi_parsing);
+
 MODULE_AUTHOR("Ray Jui <rjui@broadcom.com>");
 MODULE_DESCRIPTION("Broadcom iPROC PCIe common driver");
 MODULE_LICENSE("GPL v2");
@@ -58,8 +58,13 @@ struct iproc_msi;
 * @phy: optional PHY device that controls the Serdes
 * @map_irq: function callback to map interrupts
 * @ep_is_internal: indicates an internal emulated endpoint device is connected
+ * @iproc_cfg_read: indicates the iProc config read function should be used
+ * @rej_unconfig_pf: indicates the root complex needs to detect and reject
+ * enumeration against unconfigured physical functions emulated in the ASIC
 * @has_apb_err_disable: indicates the controller can be configured to prevent
 * unsupported request from being forwarded as an APB bus error
+ * @fix_paxc_cap: indicates the controller has corrupted capability list in its
+ * config space registers and requires SW based fixup
 *
 * @need_ob_cfg: indicates SW needs to configure the outbound mapping window
 * @ob: outbound mapping related parameters
@@ -84,7 +89,10 @@ struct iproc_pcie {
 	struct phy *phy;
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	bool ep_is_internal;
+	bool iproc_cfg_read;
+	bool rej_unconfig_pf;
 	bool has_apb_err_disable;
+	bool fix_paxc_cap;
 
 	bool need_ob_cfg;
 	struct iproc_pcie_ob ob;
@@ -23,6 +23,8 @@
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 
+#include "../pci.h"
+
 /* register offsets and bit positions */
 
 /*
@@ -130,7 +132,7 @@ struct mobiveil_pcie {
 	void __iomem *config_axi_slave_base;	/* endpoint config base */
 	void __iomem *csr_axi_slave_base;	/* root port config base */
 	void __iomem *apb_csr_base;		/* MSI register base */
-	void __iomem *pcie_reg_base;		/* Physical PCIe Controller Base */
+	phys_addr_t pcie_reg_base;		/* Physical PCIe Controller Base */
 	struct irq_domain *intx_domain;
 	raw_spinlock_t intx_mask_lock;
 	int irq;
@@ -472,7 +472,7 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
 
 static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
 				      enum pci_epc_irq_type type,
-				      u8 interrupt_num)
+				      u16 interrupt_num)
 {
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 
@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
 	int i, best = 1;
 	unsigned long flags;
 
-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
+	if (vmd->msix_count == 1)
 		return &vmd->irqs[0];
 
+	/*
+	 * White list for fast-interrupt handlers. All others will share the
+	 * "slow" interrupt vector.
+	 */
+	switch (msi_desc_to_pci_dev(desc)->class) {
+	case PCI_CLASS_STORAGE_EXPRESS:
+		break;
+	default:
+		return &vmd->irqs[0];
+	}
+
 	raw_spin_lock_irqsave(&list_lock, flags);
 	for (i = 1; i < vmd->msix_count; i++)
 		if (vmd->irqs[i].count < vmd->irqs[best].count)