net: documentation: build a directory structure for drivers

Documentation/networking/ is full of cryptically named files with
driver documentation.  This makes finding interesting information
at a glance really hard.  Move all those files into a directory
called device_drivers (since not all drivers are for devices) and
fix up references.

RFC v0.1 -> RFC v1:
 - also add .txt suffix to the files which are missing it (Quentin)

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: David Ahern <dsahern@gmail.com>
Acked-by: Henrik Austad <henrik@austad.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski
2018-12-03 17:43:28 -08:00
committed by David S. Miller
parent a74f0fa082
commit b255e500c8
58 changed files with 71 additions and 66 deletions

View File

@@ -0,0 +1,260 @@
The QorIQ DPAA Ethernet Driver
==============================
Authors:
Madalin Bucur <madalin.bucur@nxp.com>
Camelia Groza <camelia.groza@nxp.com>
Contents
========
- DPAA Ethernet Overview
- DPAA Ethernet Supported SoCs
- Configuring DPAA Ethernet in your kernel
- DPAA Ethernet Frame Processing
- DPAA Ethernet Features
- DPAA IRQ Affinity and Receive Side Scaling
- Debugging
DPAA Ethernet Overview
======================
DPAA stands for Data Path Acceleration Architecture and it is a
set of networking acceleration IPs that are available on several
generations of SoCs, both on PowerPC and ARM64.
The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:
- Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
drivers/iommu/fsl_*
- Frame Manager (FMan)
drivers/net/ethernet/freescale/fman
- Queue Manager (QMan), Buffer Manager (BMan)
drivers/soc/fsl/qbman
A simplified view of the dpaa_eth interfaces mapped to FMan MACs:
  dpaa_eth       /eth0\     ...       /ethN\
  driver        |     |             |     |
  -------------   ----   ---------   ----   -------------
       -Ports  / Tx  Rx \    ...   / Tx  Rx \
  FMan        |          |        |          |
       -MACs  |   MAC0   |        |   MACN   |
             /  dtsec0  \   ...  /  dtsecN  \  (or tgec)
            /            \      /            \ (or memac)
  ---------  ------------  ---- ------------  ---------
       FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
       FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------
The dpaa_eth relation to the QMan, BMan and FMan:
               ________________________________
  dpaa_eth    /            eth0                \
  driver     /                                  \
  ---------    -^-    -^-    -^-    ---   ---------
  QMan driver /   \  /   \  /   \  \   /  |  BMan   |
             |Rx |  |Rx |  |Tx |  |Tx |   | driver  |
  ---------  |Dfl|  |Err|  |Cnf|  |FQs|   |         |
  QMan HW    |FQ |  |FQ |  |FQs|  |   |   |         |
             \   /  \   /  \   /  /   \   |         |
  ---------    ---    ---    ---    -v-   ---------
             |           FMan QMI         |
             | FMan HW         FMan BMI   | BMan HW |
              ----------------------------  --------
where the acronyms used above (and in the code) are:
DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffers Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan
FQ = QMan Frame Queue
Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQs
Tx FQs = transmission frame queues
dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000)
DPAA Ethernet Supported SoCs
============================
The DPAA drivers enable the Ethernet controllers present on the following SoCs:
# PPC
P1023
P2041
P3041
P4080
P5020
P5040
T1023
T1024
T1040
T1042
T2080
T4240
B4860
# ARM
LS1043A
LS1046A
Configuring DPAA Ethernet in your kernel
========================================
To enable the DPAA Ethernet driver, the following Kconfig options are required:
# common for arch/arm64 and arch/powerpc platforms
CONFIG_FSL_DPAA=y
CONFIG_FSL_FMAN=y
CONFIG_FSL_DPAA_ETH=y
CONFIG_FSL_XGMAC_MDIO=y
# for arch/powerpc only
CONFIG_FSL_PAMU=y
# common options needed for the PHYs used on the RDBs
CONFIG_VITESSE_PHY=y
CONFIG_REALTEK_PHY=y
CONFIG_AQUANTIA_PHY=y
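A quick way to confirm that these options made it into the build is to
grep the generated kernel config; a sketch, assuming an in-tree build:

# grep -E 'CONFIG_FSL_(DPAA|DPAA_ETH|FMAN|XGMAC_MDIO|PAMU)=' .config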
DPAA Ethernet Frame Processing
==============================
On Rx, buffers for the incoming frames are retrieved from one of the three
existing buffer pools. The driver initializes and seeds these, each with
buffers of different sizes: 1KB, 2KB and 4KB.
On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. In order to do this properly, a backpointer is added to the buffer
before transmission that points to the skb. When the buffer returns to the
driver on a confirmation FQ, the skb can be correctly consumed.
DPAA Ethernet Features
======================
Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. The support for advanced features will be added
gradually.
The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool. Also, rx-flow-hash and rx-hashing were added. The addition of RSS
provides a big performance boost for forwarding scenarios, allowing
different traffic flows received by one interface to be processed by different
CPUs in parallel.
The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPU TX queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. For example, the tc command shown after the priority mapping below
enables all four traffic classes on an interface. skb priority levels are
mapped to traffic classes as follows:
* priorities 0 to 3 - traffic class 0 (low priority)
* priorities 4 to 7 - traffic class 1 (medium-low priority)
* priorities 8 to 11 - traffic class 2 (medium-high priority)
* priorities 12 to 15 - traffic class 3 (high priority)
tc qdisc add dev <int> root handle 1: \
mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
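Once applied, the resulting traffic class setup can be verified with tc,
e.g. (using the fm1-mac9 interface name from the ethtool examples below):

# tc qdisc show dev fm1-mac9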
DPAA IRQ Affinity and Receive Side Scaling
==========================================
Traffic coming on the DPAA Rx queues or on the DPAA Tx confirmation
queues is seen by the CPU as ingress traffic on a certain portal.
The DPAA QMan portal interrupts are each affined to a certain CPU.
The same portal interrupt services all the QMan portal consumers.
By default the DPAA Ethernet driver enables RSS, making use of the
DPAA FMan Parser and Keygen blocks to distribute traffic on 128
hardware frame queues using a hash on IP v4/v6 source and destination
and L4 source and destination ports, if present in the received frame.
When RSS is disabled, all traffic received by a certain interface is
received on the default Rx frame queue. The default DPAA Rx frame
queues are configured to put the received traffic into a pool channel
that allows any available CPU portal to dequeue the ingress traffic.
The default frame queues have the HOLDACTIVE option set, ensuring that
traffic bursts from a certain queue are serviced by the same CPU.
This ensures a very low rate of frame reordering. A drawback of this
is that only one CPU at a time can service the traffic received by a
certain interface when RSS is not enabled.
To implement RSS, the DPAA Ethernet driver allocates an extra set of
128 Rx frame queues that are configured to dedicated channels, in a
round-robin manner. The mapping of the frame queues to CPUs is now
hardcoded; there is no indirection table to move traffic for a certain
FQ (hash result) to another CPU. The ingress traffic arriving on one
of these frame queues will arrive at the same portal and will always
be processed by the same CPU. This ensures intra-flow order preservation
and workload distribution for multiple traffic flows.
RSS can be turned off for a certain interface using ethtool, e.g.:
# ethtool -N fm1-mac9 rx-flow-hash tcp4 ""
To turn it back on, one needs to set rx-flow-hash for tcp4/6 or udp4/6:
# ethtool -N fm1-mac9 rx-flow-hash udp4 sfdn
There is no independent control for individual protocols, any command
run for one of tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 is
going to control the rx-flow-hashing for all protocols on that interface.
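The hash fields currently configured for a given protocol can be queried
with ethtool's -n option, e.g.:

# ethtool -n fm1-mac9 rx-flow-hash tcp4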
Besides using the FMan Keygen computed hash for spreading traffic on the
128 Rx FQs, the DPAA Ethernet driver also sets the skb hash value when
the NETIF_F_RXHASH feature is on (active by default). This can be turned
on or off through ethtool, e.g.:
# ethtool -K fm1-mac9 rx-hashing off
# ethtool -k fm1-mac9 | grep hash
receive-hashing: off
# ethtool -K fm1-mac9 rx-hashing on
Actual changes:
receive-hashing: on
# ethtool -k fm1-mac9 | grep hash
receive-hashing: on
Please note that Rx hashing depends upon the rx-flow-hashing being on
for that interface - turning off rx-flow-hashing will also disable the
rx-hashing (without ethtool reporting it as off as that depends on the
NETIF_F_RXHASH feature flag).
Debugging
=========
The following statistics are exported for each interface through ethtool:
- interrupt count per CPU
- Rx packets count per CPU
- Tx packets count per CPU
- Tx confirmed packets count per CPU
- Tx S/G frames count per CPU
- Tx error count per CPU
- Rx error count per CPU
- Rx error count per type
- congestion related statistics:
- congestion status
- time spent in congestion
- number of times the device entered congestion
- dropped packets count per cause
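All of the above can be dumped together with the standard ethtool
statistics command, e.g.:

# ethtool -S fm1-mac9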
The driver also exports the following information in sysfs:
- the FQ IDs for each FQ type
/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
- the IDs of the buffer pools in use
/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
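Both attributes are plain text files and can be read directly, e.g.
(assuming the fm1-mac9 interface name used in the examples above):

# cat /sys/devices/platform/dpaa-ethernet.0/net/fm1-mac9/fqids
# cat /sys/devices/platform/dpaa-ethernet.0/net/fm1-mac9/bpids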

View File

@@ -0,0 +1,158 @@
.. include:: <isonum.txt>
DPAA2 DPIO (Data Path I/O) Overview
===================================
:Copyright: |copy| 2016-2018 NXP
This document provides an overview of the Freescale DPAA2 DPIO
drivers.
Introduction
============
A DPAA2 DPIO (Data Path I/O) is a hardware object that provides
interfaces to enqueue and dequeue frames to/from network interfaces
and other accelerators. A DPIO also provides hardware buffer
pool management for network interfaces.
This document provides an overview of the Linux DPIO driver, its
subcomponents, and its APIs.
See Documentation/networking/device_drivers/freescale/dpaa2/overview.rst for
a general overview of DPAA2 and the general DPAA2 driver architecture in Linux.
Driver Overview
---------------
The DPIO driver is bound to DPIO objects discovered on the fsl-mc bus and
provides services that:
A) allow other drivers, such as the Ethernet driver, to enqueue and dequeue
frames for their respective objects
B) allow drivers to register callbacks for data availability notifications
when data becomes available on a queue or channel
C) allow drivers to manage hardware buffer pools
The Linux DPIO driver consists of 3 primary components--
DPIO object driver-- fsl-mc driver that manages the DPIO object
DPIO service-- provides APIs to other Linux drivers for services
QBman portal interface-- sends portal commands, gets responses
::
          fsl-mc          other
           bus           drivers
            |               |
     +---+----+      +------+-----+
     |DPIO obj|      |DPIO service|
     | driver |------|   (DPIO)   |
     +--------+      +------+-----+
                            |
                     +------+-----+
                     |    QBman   |
                     | portal i/f |
                     +------------+
                            |
                        hardware
The diagram below shows how the DPIO driver components fit with the other
DPAA2 Linux driver components::
          +------------+
          | OS Network |
          |   Stack    |
          +------------+    +------------+
          | Allocator  |. . |  Ethernet  |
          |(DPMCP,DPBP)|    |   (DPNI)   |
          +-.----------+    +---+---+----+
           .          .         ^    |
           .          .         |    |
           .   <data avail,     |    | <enqueue,
           .    tx confirm>     |    |  dequeue>
    +-------------+  .          |    |
    | DPRC driver |  .      +--------+   +------------+
    |   (DPRC)    |  . . . .|DPIO obj|   |DPIO service|
    +----------+--+         | driver |---|   (DPIO)   |
               |            +--------+   +------+-----+
               |<dev add/remove>                |
               |                         +------+-----+
          +----+--------------+          |    QBman   |
          |   MC-bus driver   |          | portal i/f |
          |                   |          +------------+
          |    /soc/fsl-mc    |                 |
          +-------------------+                 |
                                                |
    ============================================|============================
                                     +-----DPIO-+-----------+
                                     |          |           |
                                     |      QBman Portal    |
                                     +----------------------+
    ========================================================================
DPIO Object Driver (dpio-driver.c)
----------------------------------
The dpio-driver component registers with the fsl-mc bus to handle objects of
type "dpio". The implementation of probe() handles basic initialization
of the DPIO including mapping of the DPIO regions (the QBman SW portal)
and initializing interrupts and registering irq handlers. The dpio-driver
registers the probed DPIO with dpio-service.
DPIO service (dpio-service.c, dpaa2-io.h)
------------------------------------------
The dpio service component provides queuing, notification, and buffer
management services to DPAA2 drivers, such as the Ethernet driver. A system
will typically allocate 1 DPIO object per CPU to allow queuing operations
to happen simultaneously across all CPUs.
Notification handling
    dpaa2_io_service_register()
    dpaa2_io_service_deregister()
    dpaa2_io_service_rearm()

Queuing
    dpaa2_io_service_pull_fq()
    dpaa2_io_service_pull_channel()
    dpaa2_io_service_enqueue_fq()
    dpaa2_io_service_enqueue_qd()
    dpaa2_io_store_create()
    dpaa2_io_store_destroy()
    dpaa2_io_store_next()

Buffer pool management
    dpaa2_io_service_release()
    dpaa2_io_service_acquire()
QBman portal interface (qbman-portal.c)
---------------------------------------
The qbman-portal component provides APIs to do the low level hardware
bit twiddling for operations such as:
- initializing Qman software portals
- building and sending portal commands
- portal interrupt configuration and processing
The qbman-portal APIs are not public to other drivers, and are
only used by dpio-service.
Other (dpaa2-fd.h, dpaa2-global.h)
----------------------------------
Frame descriptor and scatter-gather definitions and the APIs used to
manipulate them are defined in dpaa2-fd.h.
Dequeue result struct and parsing APIs are defined in dpaa2-global.h.

View File

@@ -0,0 +1,185 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
===============================
DPAA2 Ethernet driver
===============================
:Copyright: |copy| 2017-2018 NXP
This file provides documentation for the Freescale DPAA2 Ethernet driver.
Supported Platforms
===================
This driver provides networking support for Freescale DPAA2 SoCs, e.g.
LS2080A, LS2088A, LS1088A.
Architecture Overview
=====================
Unlike regular NICs, in the DPAA2 architecture there is no single hardware block
representing network interfaces; instead, several separate hardware resources
concur to provide the networking functionality:
- network interfaces
- queues, channels
- buffer pools
- MAC/PHY
All hardware resources are allocated and configured through the Management
Complex (MC) portals. MC abstracts most of these resources as DPAA2 objects
and exposes ABIs through which they can be configured and controlled. A few
hardware resources, like queues, do not have a corresponding MC object and
are treated as internal resources of other objects.
For a more detailed description of the DPAA2 architecture and its object
abstractions see *Documentation/networking/device_drivers/freescale/dpaa2/overview.rst*.
Each Linux net device is built on top of a Datapath Network Interface (DPNI)
object and uses Buffer Pools (DPBPs), I/O Portals (DPIOs) and Concentrators
(DPCONs).
Configuration interface::
      -----------------------
     | DPAA2 Ethernet Driver |
      -----------------------
           .          .          .
           .          .          .
      . . . .         .          . . . . . .
      .               .                    .
      .               .                    .
   ----------     ----------         -----------
  | DPBP API |   | DPNI API |       | DPCON API |
   ----------     ----------         -----------
      .               .                    .             software
  ======= . ========== . ============ . ===================
      .               .                    .             hardware
   ------------------------------------------
  |           MC hardware portals            |
   ------------------------------------------
      .               .                    .
    ------          ------             -------
   | DPBP |        | DPNI |           | DPCON |
    ------          ------             -------
The DPNIs are network interfaces without a direct one-on-one mapping to PHYs.
DPBPs represent hardware buffer pools. Packet I/O is performed in the context
of DPCON objects, using DPIO portals for managing and communicating with the
hardware resources.
Datapath (I/O) interface::
   -----------------------------------------------
  |            DPAA2 Ethernet Driver              |
   -----------------------------------------------
      |          ^        ^         |         |
      |          |        |         |         |
   enqueue    dequeue    data     dequeue    seed
    (Tx)     (Rx, TxC)  avail.    request   buffers
      |          |      notify       |         |
      |          |        |          |         |
      V          |        |          V         V
   -----------------------------------------------
  |                  DPIO Driver                  |
   -----------------------------------------------
      |          |        |          |         |      software
      |          |        |          |         |   ==============
      |          |        |          |         |      hardware
   -----------------------------------------------
  |             I/O hardware portals              |
   -----------------------------------------------
      |          ^        ^          |         |
      |          |        |          |         |
      |          |        |          V         |
      V          |        |          |         V
    ----------------------           |     -------------
  queues ----------------------      |    | Buffer pool |
    ----------------------           |     -------------
           =======================
                   Channel
Datapath I/O (DPIO) portals provide enqueue and dequeue services, data
availability notifications and buffer pool management. DPIOs are shared between
all DPAA2 objects (and implicitly all DPAA2 kernel drivers) that work with data
frames, but must be affine to the CPUs for the purpose of traffic distribution.
Frames are transmitted and received through hardware frame queues, which can be
grouped in channels for the purpose of hardware scheduling. The Ethernet driver
enqueues TX frames on egress queues and after transmission is complete a TX
confirmation frame is sent back to the CPU.
When frames are available on ingress queues, a data availability notification
is sent to the CPU; notifications are raised per channel, so even if multiple
queues in the same channel have available frames, only one notification is sent.
After a channel fires a notification, it must be explicitly rearmed.
Each network interface can have multiple Rx, Tx and confirmation queues affined
to CPUs, and one channel (DPCON) for each CPU that services at least one queue.
DPCONs are used to distribute ingress traffic to different CPUs via the cores'
affine DPIOs.
The role of hardware buffer pools is storage of ingress frame data. Each network
interface has a privately owned buffer pool which it seeds with kernel allocated
buffers.
DPNIs are decoupled from PHYs; a DPNI can be connected to a PHY through a DPMAC
object or to another DPNI through an internal link, but the connection is
managed by MC and completely transparent to the Ethernet driver.
::
     ---------       ---------       ---------
    | eth if1 |     | eth if2 |     | eth ifn |
     ---------       ---------       ---------
          .              .               .
          .              .               .
          .              .               .
       ---------------------------
      |   DPAA2 Ethernet Driver   |
       ---------------------------
          .            .            .
          .            .            .
          .            .            .
       ------       ------       ------       -------
      | DPNI |     | DPNI |     | DPNI |     | DPMAC |----+
       ------       ------       ------       -------     |
         |             |            |            |        |
         |             |            |            |      -----
          =============              ============      | PHY |
                                                         -----
Creating a Network Interface
============================
A net device is created for each DPNI object probed on the MC bus. Each DPNI has
a number of properties which determine the network interface configuration
options and associated hardware resources.
DPNI objects (and the other DPAA2 objects needed for a network interface) can be
added to a container on the MC bus in one of two ways: statically, through a
Datapath Layout Binary file (DPL) that is parsed by MC at boot time; or created
dynamically at runtime, via the DPAA2 objects APIs.
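In the dynamic case, the DPAA2 objects can be created from userspace with
NXP's restool utility. The sketch below is illustrative only; the object
and container names are placeholders and the exact syntax depends on the
restool and MC firmware versions::

    $ restool dpni create
    $ restool dprc assign dprc.1 --object=dpni.1 --plugged=1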
Features & Offloads
===================
Hardware checksum offloading is supported for TCP and UDP over IPv4/6 frames.
The checksum offloads can be independently configured on RX and TX through
ethtool.
Hardware offload of unicast and multicast MAC filtering is supported on the
ingress path and permanently enabled.
Scatter-gather frames are supported on both RX and TX paths. On TX, SG support
is configurable via ethtool; on RX it is always enabled.
The DPAA2 hardware can process jumbo Ethernet frames of up to 10K bytes.
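As an illustration, the configurable offloads and the MTU can be driven
with standard tools; a sketch, with a placeholder interface name::

    # ethtool -K eth0 rx on tx on    # Rx/Tx checksum offload
    # ethtool -K eth0 sg on          # Tx scatter-gather
    # ip link set dev eth0 mtu 9000  # jumbo frames (up to 10K supported)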
The Ethernet driver defines a static flow hashing scheme that distributes
traffic based on a 5-tuple key: src IP, dst IP, IP proto, L4 src port,
L4 dst port. No user configuration is supported for now.
Hardware specific statistics for the network interface as well as some
non-standard driver stats can be consulted through the ethtool -S option.
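For example (the interface name is a placeholder)::

    # ethtool -S eth0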

View File

@@ -0,0 +1,10 @@
===================
DPAA2 Documentation
===================
.. toctree::
:maxdepth: 1
overview
dpio-driver
ethernet-driver

View File

@@ -0,0 +1,405 @@
.. include:: <isonum.txt>
=========================================================
DPAA2 (Data Path Acceleration Architecture Gen2) Overview
=========================================================
:Copyright: |copy| 2015 Freescale Semiconductor Inc.
:Copyright: |copy| 2018 NXP
This document provides an overview of the Freescale DPAA2 architecture
and how it is integrated into the Linux kernel.
Introduction
============
DPAA2 is a hardware architecture designed for high-speed network
packet processing. DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.
A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources. The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.
The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.
The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.
The diagram below shows an overview of the DPAA2 resource management
architecture::
     +--------------------------------------+
     |                  OS                  |
     |             DPAA2 drivers            |
     |                  |                   |
     +------------------|-------------------+
                        |
                        | (create,discover,connect
                        |  config,use,destroy)
                        |
        DPAA2           |
     +------------------| mc portal |-------+
     |                  |                   |
     |   +- - - - - - - V- - - - - - - -+   |
     |   |                              |   |
     |   |   Management Complex (MC)    |   |
     |   |                              |   |
     |   +- - - - - - - - - - - - - - - +   |
     |                                      |
     |   Hardware             Hardware      |
     |   Resources            Objects       |
     |   ---------            -------       |
     |   -queues              -DPRC         |
     |   -buffer pools        -DPMCP        |
     |   -Eth MACs/ports      -DPIO         |
     |   -network interface   -DPNI         |
     |    profiles            -DPMAC        |
     |   -queue portals       -DPBP         |
     |   -MC portals          ...           |
     |   ...                                |
     |                                      |
     +--------------------------------------+
The MC mediates operations such as create, discover,
connect, configure, and destroy. Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory mapped regions in
DPIO objects.
Overview of DPAA2 Objects
=========================
This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described, illustrating the objects involved
in creating a network interface.
DPRC (Datapath Resource Container)
----------------------------------
A DPRC is a container object that holds all the other
types of DPAA2 objects. In the example diagram below there
are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
in the container.
::
  +---------------------------------------------------------+
  |                           DPRC                          |
  |                                                         |
  |  +-------+  +-------+  +-------+  +-------+  +-------+  |
  |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |  |
  |  +-------+  +-------+  +-------+  +---+---+  +---+---+  |
  |  | DPMCP |  | DPIO  |                                   |
  |  +-------+  +-------+                                   |
  |  | DPMCP |                                              |
  |  +-------+                                              |
  |                                                         |
  +---------------------------------------------------------+
From the point of view of an OS, a DPRC behaves similarly to a plug and
play bus, like PCI. DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).
::
        DPRC.1 (bus)
          |
          +--+--------+-------+-------+-------+
             |        |       |       |       |
           DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
           DPMCP.2  DPIO.2
           DPMCP.3
Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.
A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands. It has an interrupt for status events (like
hotplug).
All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.
DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts it.
DPAA2 Objects for an Ethernet Network Interface
-----------------------------------------------
A typical Ethernet NIC is monolithic-- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts. DPAA2 uses a more granular approach
utilizing multiple hardware objects. Each object provides specialized
functions. Groups of these objects are used by software to provide
Ethernet network interface functionality. This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.
The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs.
::
            +---+---+  +---+---+
               CPU0       CPU1
            +---+---+  +---+---+
                |          |
            +---+---+  +---+---+
               DPIO       DPIO
            +---+---+  +---+---+
                  \        /
                   \      /
                    \    /
                  +---+---+
                     DPNI   --- DPBP,DPMCP
                  +---+---+
                      |
                      |
                  +---+---+
                    DPMAC
                  +---+---+
                      |
                   port/PHY
The objects are described below. For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of key resources of the object (MMIO regions
and IRQs).
DPMAC (Datapath Ethernet MAC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Represents an Ethernet MAC, a hardware device that connects to an Ethernet
PHY and allows physical transmission and reception of Ethernet frames.
- MMIO regions: none
- IRQs: DPNI link change
- commands: set link up/down, link config, get stats,
IRQ config, enable, reset
DPNI (Datapath Network Interface)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contains TX/RX queues, network interface configuration, and RX buffer pool
configuration mechanisms. The TX/RX queues are in memory and are identified
by queue number.
- MMIO regions: none
- IRQs: link state
- commands: port config, offload config, queue config,
parse/classify config, IRQ config, enable, reset
DPIO (Datapath I/O)
~~~~~~~~~~~~~~~~~~~
Provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations. The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves. The DPIO provides an MMIO interface to
enqueue/dequeue packets. To enqueue something a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU. This allows all
CPUs to simultaneously perform enqueue/dequeue operations. DPIOs are
expected to be shared by different DPAA2 drivers.
- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer
pool depletion
- commands: IRQ config, enable, reset
DPBP (Datapath Buffer Pool)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Represents a hardware buffer pool.
- MMIO regions: none
- IRQs: none
- commands: enable, reset
DPMCP (Datapath MC Portal)
~~~~~~~~~~~~~~~~~~~~~~~~~~
Provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.
- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset
Object Connections
==================
Some objects have explicit relationships that must
be configured:
- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port
A DPNI must be connected to something such as a DPMAC,
another DPNI, or L2 switch port. The DPNI connection
is made via a DPRC command.
::
    +-------+  +-------+
    | DPNI  |  | DPMAC |
    +---+---+  +---+---+
        |          |
        +==========+
- DPNI <--> DPBP
A network interface requires a 'buffer pool' (DPBP
object) which provides a list of pointers to memory
where received Ethernet data is to be copied. The
Ethernet driver configures the DPBPs associated with
the network interface.
Interrupts
==========
All interrupts generated by DPAA2 objects are message
interrupts. At the hardware level message interrupts
generated by devices will normally have 3 components--
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.
In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.
DPAA2 Linux Drivers Overview
============================
This section provides an overview of the Linux kernel drivers for
DPAA2-- 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).
As described previously, a DPRC is a container that holds the other
types of DPAA2 objects. It is functionally similar to a plug-and-play
bus controller.
Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the Linux drivers involved in a networking
scenario and the objects bound to each driver. A brief description
of each driver follows.
::
          +------------+
          | OS Network |
          |   Stack    |
          +------------+    +------------+
          | Allocator  |. . |  Ethernet  |
          |(DPMCP,DPBP)|    |   (DPNI)   |
          +-.----------+    +---+---+----+
           .          .         ^    |
           .          .         |    |
           .   <data avail,     |    | <enqueue,
           .    tx confirm>     |    |  dequeue>
    +-------------+  .          |    |
    | DPRC driver |  .      +---+---V----+    +---------+
    |   (DPRC)    |  . . . .| DPIO driver|    |   MAC   |
    +----------+--+         |   (DPIO)   |    | (DPMAC) |
               |            +------+-----+    +-----+---+
               |<dev add/remove>   |                |
               |                   |                |
      +--------+----------+        |             +--+---+
      |   MC-bus driver   |        |             | PHY  |
      |                   |        |             |driver|
      |    /bus/fsl-mc    |        |             +--+---+
      +-------------------+        |                |
                                   |                |
    ===================== HARDWARE =====|===========|==============
                                       DPIO         |
                                        |           |
                                   DPNI---DPBP      |
                                        |           |
                                       DPMAC        |
                                        |           |
                                       PHY ---------+
    ================================================================
MC-bus driver
-------------
The MC-bus driver is a platform driver and is probed from a
node in the device tree (compatible "fsl,qoriq-mc") passed in by boot
firmware. It is responsible for bootstrapping the DPAA2 kernel
infrastructure.
Key functions include:
- registering a new bus type named "fsl-mc" with the kernel,
and implementing bus call-backs (e.g. match/uevent/dev_groups)
- implementing APIs for DPAA2 driver registration and for device
add/remove
- creating an MSI IRQ domain
- doing a 'device add' to expose the 'root' DPRC, in turn triggering
a bind of the root DPRC to the DPRC driver
The binding for the MC-bus device-tree node can be consulted at
*Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt*.
The sysfs bind/unbind interfaces for the MC-bus can be consulted at
*Documentation/ABI/testing/sysfs-bus-fsl-mc*.
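As an illustration, an object can be rebound manually like on any other
Linux bus; the driver directory name below is an assumption, check
/sys/bus/fsl-mc/drivers on the target system::

    # echo dpni.1 > /sys/bus/fsl-mc/drivers/fsl_dpaa2_eth/unbind
    # echo dpni.1 > /sys/bus/fsl-mc/drivers/fsl_dpaa2_eth/bind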
DPRC driver
-----------
The DPRC driver is bound to DPRC objects and does runtime management
of a bus instance. It performs the initial bus scan of the DPRC
and handles interrupts for container events such as hot plug by
re-scanning the DPRC.
Allocator
---------
Certain objects such as DPMCP and DPBP are generic and fungible,
and are intended to be used by other drivers. For example,
the DPAA2 Ethernet driver needs:
- DPMCPs to send MC commands, to configure network interfaces
- DPBPs for network buffer pools
The allocator driver registers for these allocatable object types
and those objects are bound to the allocator when the bus is probed.
The allocator maintains a pool of objects that are available for
allocation by other DPAA2 drivers.
DPIO driver
-----------
The DPIO driver is bound to DPIO objects and provides services that allow
other drivers such as the Ethernet driver to enqueue and dequeue data for
their respective objects.
Key services include:
- data availability notifications
- hardware queuing operations (enqueue and dequeue of data)
- hardware buffer pool management
To transmit a packet the Ethernet driver puts data on a queue and
invokes a DPIO API. For receive, the Ethernet driver registers
a data availability notification callback. To dequeue a packet
a DPIO API is used.
There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.
The DPIO driver operates on behalf of all DPAA2 drivers
active in the kernel-- Ethernet, crypto, compression,
etc.
Ethernet driver
---------------
The Ethernet driver is bound to a DPNI and implements the kernel
interfaces needed to connect the DPAA2 network interface to
the network stack.
Each DPNI corresponds to a Linux network interface.
MAC driver
----------
An Ethernet PHY is an off-chip, board specific component and is managed
by the appropriate PHY driver via an mdio bus. The MAC driver
acts as a proxy between the PHY driver and the
MC. It does this via MC commands to a DPMAC object.
If the PHY driver signals a link change, the MAC driver notifies
the MC via a DPMAC command. If a network interface is brought
up or down, the MC notifies the DPMAC driver via an interrupt and
the driver can take appropriate action.

View File

@@ -0,0 +1,42 @@
The Gianfar Ethernet Driver
Author: Andy Fleming <afleming@freescale.com>
Updated: 2005-07-28
CHECKSUM OFFLOADING
The eTSEC controller (first included in parts from late 2005 like
the 8548) has the ability to perform TCP, UDP, and IP checksums
in hardware. The Linux kernel only offloads the TCP and UDP
checksums (and always performs the pseudo header checksums), so
the driver only supports checksumming for TCP/IP and UDP/IP
packets. Use ethtool to enable or disable this feature for RX
and TX.
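For example, to toggle both directions with ethtool (the interface
name is a placeholder):

  # ethtool -K eth0 rx on tx on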
VLAN
In order to use VLAN, please consult Linux documentation on
configuring VLANs. The gianfar driver supports hardware insertion and
extraction of VLAN headers, but not filtering. Filtering will be
done by the kernel.
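For example, a VLAN can be configured with the standard ip tool
(names are placeholders):

  # ip link add link eth0 name eth0.100 type vlan id 100
  # ip link set dev eth0.100 up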
MULTICASTING
The gianfar driver supports using the group hash table on the
TSEC (and the extended hash table on the eTSEC) for multicast
filtering. On the eTSEC, the exact-match MAC registers are used
before the hash tables. See Linux documentation on how to join
multicast groups.
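As an illustration, a static link-layer multicast membership can be
added with the ip tool (addresses are placeholders):

  # ip maddress add 01:00:5e:01:02:03 dev eth0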
PADDING
The gianfar driver supports padding received frames with 2 bytes
to align the IP header to a 16-byte boundary, when supported by
hardware.
ETHTOOL
The gianfar driver supports the use of ethtool for many
configuration options. You must run ethtool only on currently
open interfaces. See ethtool documentation for details.