Merge tag 'v5.5-rc4' into locking/kcsan, to resolve conflicts

Conflicts:
	init/main.c
	lib/Kconfig.debug

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Documentation/dev-tools/index.rst
@@ -25,6 +25,7 @@ whole; patches welcome!
    gdb-kernel-debugging
    kgdb
    kselftest
+   kunit/index
 
 .. only:: subproject and html
Documentation/dev-tools/kasan.rst
@@ -218,3 +218,66 @@ brk handler is used to print bug reports.

A potential expansion of this mode is a hardware tag-based mode, which would
use hardware memory tagging support instead of compiler instrumentation and
manual shadow memory manipulation.

What memory accesses are sanitised by KASAN?
--------------------------------------------

The kernel maps memory in a number of different parts of the address
space. This poses something of a problem for KASAN, which requires
that all addresses accessed by instrumented code have a valid shadow
region.
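The shadow lookup itself is cheap: an address is translated to its shadow byte
with a shift and an add. As a point of reference, here is a sketch of the
generic translation helper (it mirrors ``kasan_mem_to_shadow()`` from
``include/linux/kasan.h``; treat it as an illustration rather than the exact
code):

.. code-block:: c

    /*
     * Each shadow byte tracks KASAN_SHADOW_SCALE_SIZE (typically 8) bytes
     * of real memory, so a full shadow for the whole address space would
     * be 1/8th of that space - hence the need to be selective about which
     * regions get real shadow pages.
     */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
            + KASAN_SHADOW_OFFSET;
    }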
The range of kernel virtual addresses is large: there is not enough
real memory to support a real shadow region for every address that
could be accessed by the kernel.

By default
~~~~~~~~~~

By default, architectures only map real memory over the shadow region
for the linear mapping (and potentially other small areas). For all
other areas - such as vmalloc and vmemmap space - a single read-only
page is mapped over the shadow area. This read-only shadow page
declares all memory accesses as permitted.

This presents a problem for modules: they do not live in the linear
mapping, but in a dedicated module space. By hooking into the module
allocator, KASAN can temporarily map real shadow memory to cover
them. This allows detection of invalid accesses to module globals, for
example.

This also creates an incompatibility with ``VMAP_STACK``: if the stack
lives in vmalloc space, it will be shadowed by the read-only page, and
the kernel will fault when trying to set up the shadow data for stack
variables.

CONFIG_KASAN_VMALLOC
~~~~~~~~~~~~~~~~~~~~

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently this is only supported on x86.

This works by hooking into vmalloc and vmap, and dynamically
allocating real shadow memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.

Instead, we share backing space across multiple mappings. We allocate
a backing page when a mapping in vmalloc space uses a particular page
of the shadow region. This page can be shared by other vmalloc
mappings later on.

We hook into the vmap infrastructure to lazily clean up unused shadow
memory.

To avoid the difficulties around swapping mappings, we expect
that the part of the shadow region that covers the vmalloc space will
not be covered by the early shadow page, but will be left
unmapped. This will require changes in arch-specific code.

This allows ``VMAP_STACK`` support on x86, and can simplify support of
architectures that do not have a fixed module region.
Documentation/dev-tools/kcov.rst
@@ -34,6 +34,7 @@ Profiling data will only become accessible once debugfs has been mounted::

Coverage collection
-------------------

The following program demonstrates coverage collection from within a test
program using kcov:

@@ -128,6 +129,7 @@ only need to enable coverage (disable happens automatically on thread end).

Comparison operands collection
------------------------------

Comparison operands collection is similar to coverage collection:

.. code-block:: c

@@ -202,3 +204,130 @@ Comparison operands collection is similar to coverage collection:

Note that the kcov modes (coverage collection or comparison operands) are
mutually exclusive.

Remote coverage collection
--------------------------

With KCOV_ENABLE coverage is collected only for syscalls that are issued
from the current process. With KCOV_REMOTE_ENABLE it's possible to collect
coverage for arbitrary parts of the kernel code, provided that those parts
are annotated with kcov_remote_start()/kcov_remote_stop().
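As a concrete illustration, this is roughly what such an annotation looks like
in a background worker (the worker and the handle constant here are
hypothetical; real users such as the USB hub_event() worker compute their
handles from the layout described below):

.. code-block:: c

    #include <linux/kcov.h>

    static void example_worker(struct work_struct *work)
    {
        /*
         * Hypothetical handle: subsystem id 0x01 in the top byte,
         * instance id 1 in the lower 4 bytes (see the handle layout
         * described below).
         */
        kcov_remote_start(0x0100000000000001ull);

        /* ... do the work whose coverage should be collected ... */

        kcov_remote_stop();
    }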
This makes it possible to collect coverage from two types of kernel background
threads: global ones, which are spawned during kernel boot in a limited
number of instances (e.g. one USB hub_event() worker thread is spawned per
USB HCD); and local ones, which are spawned when a user interacts with
some kernel interface (e.g. vhost workers).

To enable collecting coverage from a global background thread, a unique
global handle must be assigned and passed to the corresponding
kcov_remote_start() call. Then a userspace process can pass a list of such
handles to the KCOV_REMOTE_ENABLE ioctl in the handles array field of the
kcov_remote_arg struct. This will attach the kcov device in use to the code
sections that are referenced by those handles.

Since there might be many local background threads spawned from different
userspace processes, we can't use a single global handle per annotation.
Instead, the userspace process passes a non-zero handle through the
common_handle field of the kcov_remote_arg struct. This common handle gets
saved to the kcov_handle field in the current task_struct and needs to be
passed to the newly spawned threads via custom annotations. Those threads
should in turn be annotated with kcov_remote_start()/kcov_remote_stop().

Internally kcov stores handles as u64 integers. The top byte of a handle
is used to denote the id of a subsystem that this handle belongs to, and
the lower 4 bytes are used to denote the id of a thread instance within
that subsystem. A reserved value 0 is used as a subsystem id for common
handles as they don't belong to a particular subsystem. The bytes 4-7 are
currently reserved and must be zero. In the future the number of bytes
used for the subsystem or handle ids might be increased.

When a particular userspace process collects coverage via a common
handle, kcov will collect coverage for each code section that is annotated
to use the common handle obtained as kcov_handle from the current
task_struct. Non-common handles, however, allow coverage to be collected
selectively from different subsystems.

.. code-block:: c

    struct kcov_remote_arg {
        unsigned    trace_mode;
        unsigned    area_size;
        unsigned    num_handles;
        uint64_t    common_handle;
        uint64_t    handles[0];
    };

    #define KCOV_INIT_TRACE     _IOR('c', 1, unsigned long)
    #define KCOV_DISABLE        _IO('c', 101)
    #define KCOV_REMOTE_ENABLE  _IOW('c', 102, struct kcov_remote_arg)

    #define COVER_SIZE  (64 << 10)

    #define KCOV_TRACE_PC   0

    #define KCOV_SUBSYSTEM_COMMON   (0x00ull << 56)
    #define KCOV_SUBSYSTEM_USB      (0x01ull << 56)

    #define KCOV_SUBSYSTEM_MASK     (0xffull << 56)
    #define KCOV_INSTANCE_MASK      (0xffffffffull)

    static inline __u64 kcov_remote_handle(__u64 subsys, __u64 inst)
    {
        if (subsys & ~KCOV_SUBSYSTEM_MASK || inst & ~KCOV_INSTANCE_MASK)
            return 0;
        return subsys | inst;
    }

    #define KCOV_COMMON_ID      0x42
    #define KCOV_USB_BUS_NUM    1

    int main(int argc, char **argv)
    {
        int fd;
        unsigned long *cover, n, i;
        struct kcov_remote_arg *arg;

        fd = open("/sys/kernel/debug/kcov", O_RDWR);
        if (fd == -1)
            perror("open"), exit(1);
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
            perror("ioctl"), exit(1);
        cover = (unsigned long *)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if ((void *)cover == MAP_FAILED)
            perror("mmap"), exit(1);

        /* Enable coverage collection via common handle and from USB bus #1. */
        arg = calloc(1, sizeof(*arg) + sizeof(uint64_t));
        if (!arg)
            perror("calloc"), exit(1);
        arg->trace_mode = KCOV_TRACE_PC;
        arg->area_size = COVER_SIZE;
        arg->num_handles = 1;
        arg->common_handle = kcov_remote_handle(KCOV_SUBSYSTEM_COMMON,
                                                KCOV_COMMON_ID);
        arg->handles[0] = kcov_remote_handle(KCOV_SUBSYSTEM_USB,
                                             KCOV_USB_BUS_NUM);
        if (ioctl(fd, KCOV_REMOTE_ENABLE, arg))
            perror("ioctl"), free(arg), exit(1);
        free(arg);

        /*
         * Here the user needs to trigger execution of a kernel code section
         * that is either annotated with the common handle, or to trigger some
         * activity on USB bus #1.
         */
        sleep(2);

        n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
        for (i = 0; i < n; i++)
            printf("0x%lx\n", cover[i + 1]);
        if (ioctl(fd, KCOV_DISABLE, 0))
            perror("ioctl"), exit(1);
        if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
            perror("munmap"), exit(1);
        if (close(fd))
            perror("close"), exit(1);
        return 0;
    }
Documentation/dev-tools/kmemleak.rst
@@ -69,7 +69,7 @@ the kernel command line.
 
 Memory may be allocated or freed before kmemleak is initialised and
 these actions are stored in an early log buffer. The size of this buffer
-is configured via the CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE option.
+is configured via the CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE option.
 
 If CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is enabled, kmemleak is
 disabled by default. Passing ``kmemleak=on`` on the kernel command
Documentation/dev-tools/kselftest.rst
@@ -203,12 +203,12 @@ Test Module
 
 Kselftest tests the kernel from userspace. Sometimes things need
 testing from within the kernel; one method of doing this is to create a
 test module. We can tie the module into the kselftest framework by
-using a shell script test runner. ``kselftest_module.sh`` is designed
+using a shell script test runner. ``kselftest/module.sh`` is designed
 to facilitate this process. There is also a header file provided to
 assist writing kernel modules that are for use with kselftest:
 
 - ``tools/testing/kselftest/kselftest_module.h``
-- ``tools/testing/kselftest/kselftest_module.sh``
+- ``tools/testing/kselftest/kselftest/module.sh``
 
 How to use
 ----------

@@ -247,7 +247,7 @@ A bare bones test module might look like this:
 
     #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
-    #include "../tools/testing/selftests/kselftest_module.h"
+    #include "../tools/testing/selftests/kselftest/module.h"
 
     KSTM_MODULE_GLOBALS();

@@ -276,7 +276,7 @@ Example test script
 
     #!/bin/bash
     # SPDX-License-Identifier: GPL-2.0+
-    $(dirname $0)/../kselftest_module.sh "foo" test_foo
+    $(dirname $0)/../kselftest/module.sh "foo" test_foo
 
 
 Test Harness
Documentation/dev-tools/kunit/api/index.rst (new file)
@@ -0,0 +1,16 @@
.. SPDX-License-Identifier: GPL-2.0

=============
API Reference
=============
.. toctree::

	test

This section documents the KUnit kernel testing API. It is divided into the
following sections:

================================= ==============================================
:doc:`test`                       documents all of the standard testing API
                                  excluding mocking or mocking related features.
================================= ==============================================
Documentation/dev-tools/kunit/api/test.rst (new file)
@@ -0,0 +1,11 @@
.. SPDX-License-Identifier: GPL-2.0

========
Test API
========

This file documents all of the standard testing API excluding mocking or mocking
related features.

.. kernel-doc:: include/kunit/test.h
   :internal:
Documentation/dev-tools/kunit/faq.rst (new file)
@@ -0,0 +1,62 @@
.. SPDX-License-Identifier: GPL-2.0

==========================
Frequently Asked Questions
==========================

How is this different from Autotest, kselftest, etc?
=====================================================
KUnit is a unit testing framework. Autotest, kselftest (and some others) are
not.

A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
test a single unit of code in isolation, hence the name. A unit test should be
the finest granularity of testing and as such should allow all possible code
paths to be tested in the code under test; this is only possible if the code
under test is very small and does not have any external dependencies outside of
the test's control like hardware.

There are no testing frameworks currently available for the kernel that do not
require installing the kernel on a test machine or in a VM and all require
tests to be written in userspace and run on the kernel under test; this is true
for Autotest, kselftest, and some others, disqualifying any of them from being
considered unit testing frameworks.

Does KUnit support running on architectures other than UML?
============================================================

Yes, well, mostly.

For the most part, the KUnit core framework (what you use to write the tests)
can compile to any architecture; it compiles like just another part of the
kernel and runs when the kernel boots. However, there is some infrastructure,
like the KUnit Wrapper (``tools/testing/kunit/kunit.py``) that does not support
other architectures.

In short, this means that, yes, you can run KUnit on other architectures, but
it might require more work than using KUnit on UML.

For more information, see :ref:`kunit-on-non-uml`.

What is the difference between a unit test and these other kinds of tests?
===========================================================================
Most existing tests for the Linux kernel would be categorized as an integration
test, or an end-to-end test.

- A unit test is supposed to test a single unit of code in isolation, hence the
  name. A unit test should be the finest granularity of testing and as such
  should allow all possible code paths to be tested in the code under test; this
  is only possible if the code under test is very small and does not have any
  external dependencies outside of the test's control like hardware.
- An integration test tests the interaction between a minimal set of components,
  usually just two or three. For example, someone might write an integration
  test to test the interaction between a driver and a piece of hardware, or to
  test the interaction between the userspace libraries the kernel provides and
  the kernel itself; however, one of these tests would probably not test the
  entire kernel along with hardware interactions and interactions with the
  userspace.
- An end-to-end test usually tests the entire system from the perspective of the
  code under test. For example, someone might write an end-to-end test for the
  kernel by installing a production configuration of the kernel on production
  hardware with a production userspace and then trying to exercise some behavior
  that depends on interactions between the hardware, the kernel, and userspace.
Documentation/dev-tools/kunit/index.rst (new file)
@@ -0,0 +1,80 @@
.. SPDX-License-Identifier: GPL-2.0

=========================================
KUnit - Unit Testing for the Linux Kernel
=========================================

.. toctree::
   :maxdepth: 2

   start
   usage
   kunit-tool
   api/index
   faq

What is KUnit?
==============

KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
These tests are able to be run locally on a developer's workstation without a VM
or special hardware.

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
cases, grouping related test cases into test suites, providing common
infrastructure for running tests, and much more.

Get started now: :doc:`start`

Why KUnit?
==========

A unit test is supposed to test a single unit of code in isolation, hence the
name. A unit test should be the finest granularity of testing and as such should
allow all possible code paths to be tested in the code under test; this is only
possible if the code under test is very small and does not have any external
dependencies outside of the test's control like hardware.

Outside of KUnit, there are no testing frameworks currently
available for the kernel that do not require installing the kernel on a test
machine or in a VM and all require tests to be written in userspace and run on
the kernel under test; this is true for Autotest and kselftest, disqualifying
either of them from being considered a unit testing framework.

KUnit addresses the problem of being able to run tests without needing a virtual
machine or actual hardware by using User Mode Linux. User Mode Linux is a Linux
architecture, like ARM or x86; however, unlike other architectures it compiles
to a standalone program that can be run like any other program directly inside
of a host operating system; to be clear, it does not require any virtualization
support; it is just a regular program.

KUnit is fast. Excluding build time, from invocation to completion KUnit can run
several dozen tests in only 10 to 20 seconds; this might not sound like a big
deal to some people, but having such fast and easy to run tests fundamentally
changes the way you go about testing and even writing code in the first place.
Linus himself said in his `git talk at Google
<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:

   "... a lot of people seem to think that performance is about doing the
   same thing, just doing it faster, and that is not true. That is not what
   performance is all about. If you can do something really fast, really
   well, people will start using it differently."

In this context Linus was talking about branching and merging,
but this point also applies to testing. If your tests are slow, unreliable,
difficult to write, and require a special setup or special hardware to run,
then you wait a lot longer to write tests, and you wait a lot longer to run
tests; this means that tests are likely to break, unlikely to test a lot of
things, and are unlikely to be rerun once they pass. If your tests are really
fast, you run them all the time, every time you make a change, and every time
someone sends you some code. Why trust that someone ran all their tests
correctly on every change when you can just run them yourself in less time than
it takes to read their test log?

How do I use it?
================

* :doc:`start` - for new users of KUnit
* :doc:`usage` - for a more detailed explanation of KUnit features
* :doc:`api/index` - for the list of KUnit APIs used for testing
Documentation/dev-tools/kunit/kunit-tool.rst (new file)
@@ -0,0 +1,57 @@
.. SPDX-License-Identifier: GPL-2.0

=================
kunit_tool How-To
=================

What is kunit_tool?
===================

kunit_tool is a script (``tools/testing/kunit/kunit.py``) that aids in building
the Linux kernel as UML (`User Mode Linux
<http://user-mode-linux.sourceforge.net/>`_), running KUnit tests, parsing
the test results and displaying them in a user friendly manner.

What is a kunitconfig?
======================

It's just a defconfig that kunit_tool looks for in the base directory.
kunit_tool uses it to generate a .config as you might expect. In addition, it
verifies that the generated .config contains the CONFIG options in the
kunitconfig; it does this to make it easy to be sure that a CONFIG option
that enables a test actually ends up in the .config.
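For example, a minimal kunitconfig enabling the framework and its example test
(the same options used elsewhere in these docs) would be:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=y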
How do I use kunit_tool?
========================

If a kunitconfig is present at the root directory, all you have to do is:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run

However, you most likely want to use it with the following options:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run --timeout=30 --jobs=`nproc --all`

- ``--timeout`` sets a maximum amount of time to allow tests to run.
- ``--jobs`` sets the number of threads to use to build the kernel.

If you just want to use the defconfig that ships with the kernel, you can
append the ``--defconfig`` flag as well:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run --timeout=30 --jobs=`nproc --all` --defconfig

.. note::
   This command is particularly helpful for getting started because it
   just works. No kunitconfig needs to be present.

For a list of all the flags supported by kunit_tool, you can run:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run --help
Documentation/dev-tools/kunit/start.rst (new file)
@@ -0,0 +1,180 @@
.. SPDX-License-Identifier: GPL-2.0

===============
Getting Started
===============

Installing dependencies
=======================
KUnit has the same dependencies as the Linux kernel. As long as you can build
the kernel, you can run KUnit.

KUnit Wrapper
=============
Included with KUnit is a simple Python wrapper that helps format the output to
easily use and read KUnit output. It handles building and running the kernel, as
well as formatting the output.

The wrapper can be run with:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run --defconfig

For more information on this wrapper (also called kunit_tool) check out the
:doc:`kunit-tool` page.

Creating a .kunitconfig
=======================
The Python script is a thin wrapper around Kbuild. As such, it needs to be
configured with a ``.kunitconfig`` file. This file essentially contains the
regular kernel config, with the specific test targets as well.

.. code-block:: bash

    cd $PATH_TO_LINUX_REPO
    cp arch/um/configs/kunit_defconfig .kunitconfig

Verifying KUnit Works
---------------------

To make sure that everything is set up correctly, simply invoke the Python
wrapper from your kernel repo:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run

.. note::
   You may want to run ``make mrproper`` first.

If everything worked correctly, you should see the following:

.. code-block:: bash

    Generating .config ...
    Building KUnit Kernel ...
    Starting KUnit Kernel ...

followed by a list of tests that are run. All of them should be passing.

.. note::
   Because it is building a lot of sources for the first time, the
   ``Building KUnit kernel`` step may take a while.

Writing your first test
=======================

In your kernel repo let's add some code that we can test. Create a file
``drivers/misc/example.h`` with the contents:

.. code-block:: c

    int misc_example_add(int left, int right);

Create a file ``drivers/misc/example.c`` with:

.. code-block:: c

    #include <linux/errno.h>

    #include "example.h"

    int misc_example_add(int left, int right)
    {
        return left + right;
    }

Now add the following lines to ``drivers/misc/Kconfig``:

.. code-block:: kconfig

    config MISC_EXAMPLE
        bool "My example"

and the following lines to ``drivers/misc/Makefile``:

.. code-block:: make

    obj-$(CONFIG_MISC_EXAMPLE) += example.o

Now we are ready to write the test. The test will be in
``drivers/misc/example-test.c``:

.. code-block:: c

    #include <kunit/test.h>
    #include "example.h"

    /* Define the test cases. */

    static void misc_example_add_test_basic(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
        KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
        KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
        KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
        KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
    }

    static void misc_example_test_failure(struct kunit *test)
    {
        KUNIT_FAIL(test, "This test never passes.");
    }

    static struct kunit_case misc_example_test_cases[] = {
        KUNIT_CASE(misc_example_add_test_basic),
        KUNIT_CASE(misc_example_test_failure),
        {}
    };

    static struct kunit_suite misc_example_test_suite = {
        .name = "misc-example",
        .test_cases = misc_example_test_cases,
    };
    kunit_test_suite(misc_example_test_suite);

Now add the following to ``drivers/misc/Kconfig``:

.. code-block:: kconfig

    config MISC_EXAMPLE_TEST
        bool "Test for my example"
        depends on MISC_EXAMPLE && KUNIT

and the following to ``drivers/misc/Makefile``:

.. code-block:: make

    obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o

Now add it to your ``.kunitconfig``:

.. code-block:: none

    CONFIG_MISC_EXAMPLE=y
    CONFIG_MISC_EXAMPLE_TEST=y

Now you can run the test:

.. code-block:: bash

    ./tools/testing/kunit/kunit.py run

You should see the following failure:

.. code-block:: none

    ...
    [16:08:57] [PASSED] misc-example:misc_example_add_test_basic
    [16:08:57] [FAILED] misc-example:misc_example_test_failure
    [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
    [16:08:57]     This test never passes.
    ...

Congrats! You just wrote your first KUnit test!

Next Steps
==========
* Check out the :doc:`usage` page for a more in-depth explanation of KUnit.
Documentation/dev-tools/kunit/usage.rst (new file)
@@ -0,0 +1,576 @@
.. SPDX-License-Identifier: GPL-2.0

===========
Using KUnit
===========

The purpose of this document is to describe what KUnit is, how it works, how it
is intended to be used, and all the concepts and terminology that are needed to
understand it. This guide assumes a working knowledge of the Linux kernel and
some basic knowledge of testing.

For a high level introduction to KUnit, including setting up KUnit for your
project, see :doc:`start`.

Organization of this document
=============================

This document is organized into two main sections: Testing and Isolating
Behavior. The first covers what unit tests are and how to use KUnit to write
them. The second covers how to use KUnit to isolate code and make it possible
to unit test code that was otherwise un-unit-testable.

Testing
=======

What is KUnit?
--------------

"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
Framework." KUnit is intended first and foremost for writing unit tests; it is
general enough that it can be used to write integration tests; however, this is
a secondary goal. KUnit has no ambition of being the only testing framework for
the kernel; for example, it does not intend to be an end-to-end testing
framework.

What is Unit Testing?
---------------------

A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
tests code at the smallest possible scope, a *unit* of code. In the C
programming language that's a function.

Unit tests should be written for all the publicly exposed functions in a
compilation unit; that is, all functions that are exported as part of a
*class* (defined below) and all functions which are **not** static.

Writing Tests
-------------

Test Cases
~~~~~~~~~~

The fundamental unit in KUnit is the test case. A test case is a function with
the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
and then sets *expectations* for what should happen. For example:

.. code-block:: c

    void example_test_success(struct kunit *test)
    {
    }

    void example_test_failure(struct kunit *test)
    {
        KUNIT_FAIL(test, "This test never passes.");
    }

In the above example ``example_test_success`` always passes because it does
nothing; no expectations are set, so all expectations pass. On the other hand
``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
a special expectation that logs a message and causes the test case to fail.

Expectations
~~~~~~~~~~~~
An *expectation* is a way to specify that you expect a piece of code to do
something in a test. An expectation is called like a function. A test is made
by setting expectations about the behavior of a piece of code under test; when
one or more of the expectations fail, the test case fails and information about
the failure is logged. For example:

.. code-block:: c

    void add_test_basic(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 1, add(1, 0));
        KUNIT_EXPECT_EQ(test, 2, add(1, 1));
    }

In the above example ``add_test_basic`` makes a number of expectations about the
behavior of a function called ``add``; the first parameter is always of type
``struct kunit *``, which contains information about the current test context;
the second parameter, in this case, is what the value is expected to be; the
last value is what the value actually is. If ``add`` passes all of these
expectations, the test case, ``add_test_basic``, will pass; if any one of these
expectations fails, the test case will fail.

It is important to understand that a test case *fails* when any expectation is
violated; however, the test will continue running, potentially trying other
expectations until the test case ends or is otherwise terminated. This is as
opposed to *assertions* which are discussed later.

To learn about more expectations supported by KUnit, see :doc:`api/test`.
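For reference, a few of the other expectation forms declared in
``include/kunit/test.h`` (the values being checked here are made up for
illustration):

.. code-block:: c

    KUNIT_EXPECT_TRUE(test, queue_is_empty(&q));  /* boolean condition */
    KUNIT_EXPECT_STREQ(test, "hello", greeting);  /* NUL-terminated strings */
    KUNIT_EXPECT_PTR_EQ(test, expected_ptr, ptr); /* pointer equality */
    KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr);      /* neither NULL nor an ERR_PTR */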
.. note::
   A single test case should be pretty short, pretty easy to understand, and
   focused on a single behavior.

For example, if we wanted to properly test the add function above, we would
create additional test cases which would each test a different property that an
add function should have like this:

.. code-block:: c

    void add_test_basic(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 1, add(1, 0));
        KUNIT_EXPECT_EQ(test, 2, add(1, 1));
    }

    void add_test_negative(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
    }

    void add_test_max(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
        KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
    }

    void add_test_overflow(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
    }

Notice how it is immediately obvious what properties we are testing for.

Assertions
~~~~~~~~~~

KUnit also has the concept of an *assertion*. An assertion is just like an
expectation except the assertion immediately terminates the test case if it is
not satisfied.

For example:

.. code-block:: c

    static void mock_test_do_expect_default_return(struct kunit *test)
    {
        struct mock_test_context *ctx = test->priv;
        struct mock *mock = ctx->mock;
        int param0 = 5, param1 = -5;
        const char *two_param_types[] = {"int", "int"};
        const void *two_params[] = {&param0, &param1};
        const void *ret;

        ret = mock->do_expect(mock,
                              "test_printk", test_printk,
                              two_param_types, two_params,
                              ARRAY_SIZE(two_params));
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
        KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
    }

In this example, the method under test should return a pointer to a value, so
if the pointer returned by the method is null or an errno, we don't want to
bother continuing the test since the following expectation could crash the test
case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
case if the appropriate conditions have not been satisfied to complete the test.

Test Suites
~~~~~~~~~~~

Now obviously one unit test isn't very helpful; the power comes from having
many test cases covering all of a unit's behaviors. Consequently it is common
to have many *similar* tests; in order to reduce duplication in these closely
related tests most unit testing frameworks - including KUnit - provide the
concept of a *test suite*. A *test suite* is just a collection of test cases
for a unit of code with a set up function that gets invoked before every test
case and then a tear down function that gets invoked after every test case
completes.

Example:

.. code-block:: c

    static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_test_foo),
        KUNIT_CASE(example_test_bar),
        KUNIT_CASE(example_test_baz),
        {}
    };

    static struct kunit_suite example_test_suite = {
        .name = "example",
        .init = example_test_init,
        .exit = example_test_exit,
        .test_cases = example_test_cases,
    };
    kunit_test_suite(example_test_suite);

In the above example the test suite, ``example_test_suite``, would run the test
cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
each would have ``example_test_init`` called immediately before it and would
have ``example_test_exit`` called immediately after it.
``kunit_test_suite(example_test_suite)`` registers the test suite with the
KUnit test framework.

.. note::
   A test case will only be run if it is associated with a test suite.

For more information on these types of things, see :doc:`api/test`.
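The ``init`` and ``exit`` functions referenced above are plain functions; a
minimal sketch of what they might look like (the context struct and its use of
``test->priv`` are our assumption, mirroring the fixture pattern used in the
EEPROM example later in this document):

.. code-block:: c

    struct example_ctx {
        /* Per-test-case state goes here. */
    };

    static int example_test_init(struct kunit *test)
    {
        /*
         * Runs before every test case. Memory from kunit_kzalloc() is
         * cleaned up automatically when the test case completes.
         */
        test->priv = kunit_kzalloc(test, sizeof(struct example_ctx),
                                   GFP_KERNEL);
        return test->priv ? 0 : -ENOMEM;
    }

    static void example_test_exit(struct kunit *test)
    {
        /* Runs after every test case; release any unmanaged resources. */
    }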
Isolating Behavior
==================

The most important aspect of unit testing that other forms of testing do not
provide is the ability to limit the amount of code under test to a single unit.
In practice, this is only possible by being able to control what code gets run
when the unit under test calls a function; this is usually accomplished
through some sort of indirection where a function is exposed as part of an API
such that the definition of that function can be changed without affecting the
rest of the code base. In the kernel this primarily comes from two constructs:
classes, structs that contain function pointers that are provided by the
implementer, and architecture-specific functions, which have definitions
selected at compile time.

Classes
-------

Classes are not a construct that is built into the C programming language;
however, it is an easily derived concept. Accordingly, pretty much every project
that does not use a standardized object oriented library (like GNOME's GObject)
has its own slightly different way of doing object oriented programming; the
Linux kernel is no exception.

The central concept in kernel object oriented programming is the class. In the
kernel, a *class* is a struct that contains function pointers. This creates a
contract between *implementers* and *users* since it forces them to use the
same function signature without having to call the function directly. In order
for it to truly be a class, the function pointers must specify that a pointer
to the class, known as a *class handle*, be one of the parameters; this makes
it possible for the member functions (also known as *methods*) to have access
to member variables (more commonly known as *fields*) allowing the same
implementation to have multiple *instances*.

Typically a class can be *overridden* by *child classes* by embedding the
*parent class* in the child class. Then when a method provided by the child
class is called, the child implementation knows that the pointer passed to it is
of a parent contained within the child; because of this, the child can compute
the pointer to itself because the pointer to the parent is always a fixed offset
from the pointer to the child; this offset is the offset of the parent contained
in the child struct. For example:

.. code-block:: c

    struct shape {
        int (*area)(struct shape *this);
    };

    struct rectangle {
        struct shape parent;
        int length;
        int width;
    };

    int rectangle_area(struct shape *this)
    {
        struct rectangle *self = container_of(this, struct rectangle, parent);

        return self->length * self->width;
    };

    void rectangle_new(struct rectangle *self, int length, int width)
    {
        self->parent.area = rectangle_area;
        self->length = length;
        self->width = width;
    }

In this example (as in most kernel code) the operation of computing the pointer
to the child from the pointer to the parent is done by ``container_of``.
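To see the virtual dispatch in action, a short usage sketch (our own
illustration, not taken from kernel code):

.. code-block:: c

    struct rectangle r;
    struct shape *shape;

    rectangle_new(&r, 3, 4);
    /* Callers only need the abstract class handle. */
    shape = &r.parent;
    shape->area(shape); /* dispatches to rectangle_area(), returns 12 */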
Faking Classes
~~~~~~~~~~~~~~

In order to unit test a piece of code that calls a method in a class, the
behavior of the method must be controllable, otherwise the test ceases to be a
unit test and becomes an integration test.

A fake just provides an implementation of a piece of code that is different than
what runs in a production instance, but behaves identically from the standpoint
of the callers; this is usually done to replace a dependency that is hard to
deal with, or is slow.

A good example for this might be implementing a fake EEPROM that just stores the
"contents" in an internal buffer. For example, let's assume we have a class that
represents an EEPROM:

.. code-block:: c

    struct eeprom {
        ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
        ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
    };

And we want to test some code that buffers writes to the EEPROM:

.. code-block:: c

    struct eeprom_buffer {
        ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
        int (*flush)(struct eeprom_buffer *this);
        size_t flush_count; /* Flushes when buffer exceeds flush_count. */
    };

    struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
    void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);

We can easily test this code by *faking out* the underlying EEPROM:

.. code-block:: c

    struct fake_eeprom {
        struct eeprom parent;
        char contents[FAKE_EEPROM_CONTENTS_SIZE];
    };

    ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
    {
        struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

        count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
        memcpy(buffer, this->contents + offset, count);

        return count;
    }

    ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
    {
        struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

        count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
        memcpy(this->contents + offset, buffer, count);

        return count;
    }

    void fake_eeprom_init(struct fake_eeprom *this)
    {
        this->parent.read = fake_eeprom_read;
        this->parent.write = fake_eeprom_write;
        memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
    }

We can now use it to test ``struct eeprom_buffer``:

.. code-block:: c

    struct eeprom_buffer_test {
        struct fake_eeprom *fake_eeprom;
        struct eeprom_buffer *eeprom_buffer;
    };

    static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
    {
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff};

        eeprom_buffer->flush_count = SIZE_MAX;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);

        eeprom_buffer->flush(eeprom_buffer);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
    }

    static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
    {
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff};

        eeprom_buffer->flush_count = 2;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
    }

    static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
    {
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff, 0xff};

        eeprom_buffer->flush_count = 2;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 2);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
        /* Should have only flushed the first two bytes. */
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
    }

    static int eeprom_buffer_test_init(struct kunit *test)
    {
        struct eeprom_buffer_test *ctx;

        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

        ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
        fake_eeprom_init(ctx->fake_eeprom);

        ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);

        test->priv = ctx;

        return 0;
    }

    static void eeprom_buffer_test_exit(struct kunit *test)
    {
        struct eeprom_buffer_test *ctx = test->priv;

        destroy_eeprom_buffer(ctx->eeprom_buffer);
    }

.. _kunit-on-non-uml:

KUnit on non-UML architectures
==============================

By default KUnit uses UML as a way to provide dependencies for code under test.
Under most circumstances KUnit's usage of UML should be treated as an
implementation detail of how KUnit works under the hood. Nevertheless, there
are instances where being able to run architecture-specific code or test
against real hardware is desirable. For these reasons KUnit supports running on
other architectures.

Running existing KUnit tests on non-UML architectures
-----------------------------------------------------

There are some special considerations when running existing KUnit tests on
non-UML architectures:

* Hardware may not be deterministic, so a test that always passes or fails
  when run under UML may not always do so on real hardware.
* Hardware and VM environments may not be hermetic. KUnit tries its best to
  provide a hermetic environment to run tests; however, it cannot manage state
  that it doesn't know about outside of the kernel. Consequently, tests that
  may be hermetic on UML may not be hermetic on other architectures.
* Some features and tooling may not be supported outside of UML.
* Hardware and VMs are slower than UML.

None of these are reasons not to run your KUnit tests on real hardware; they are
only things to be aware of when doing so.

The biggest impediment will likely be that certain KUnit features and
infrastructure may not support your target environment. For example, at this
time the KUnit Wrapper (``tools/testing/kunit/kunit.py``) does not work outside
of UML. Unfortunately, there is no way around this. Using UML (or even just a
particular architecture) allows us to make a lot of assumptions that make it
possible to do things which might otherwise be impossible.

Nevertheless, all core KUnit framework features are fully supported on all
architectures, and using them is straightforward: all you need to do is to take
your kunitconfig, your Kconfig options for the tests you would like to run, and
merge them into whatever config you are using for your platform. That's it!
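One way to perform that merge (an assumption on our part - any method of
combining the config fragments works) is the kernel's own
``scripts/kconfig/merge_config.sh`` helper:

.. code-block:: bash

    # Merge the KUnit fragment into the platform .config, then let
    # Kconfig fill in any missing defaults.
    ./scripts/kconfig/merge_config.sh -m .config kunitconfig
    make ARCH=x86 olddefconfig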
For example, let's say you have the following kunitconfig:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=y

If you wanted to run this test on an x86 VM, you might add the following config
options to your ``.config``:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=y
    CONFIG_SERIAL_8250=y
    CONFIG_SERIAL_8250_CONSOLE=y

All these new options do is enable support for a common serial console needed
for logging.

Next, you could build a kernel with these tests as follows:

.. code-block:: bash

    make ARCH=x86 olddefconfig
    make ARCH=x86

Once you have built a kernel, you could run it on QEMU as follows:

.. code-block:: bash

    qemu-system-x86_64 -enable-kvm \
        -m 1024 \
        -kernel arch/x86/boot/bzImage \
        -append 'console=ttyS0' \
        --nographic

Interspersed in the kernel logs you might see the following:

.. code-block:: none

    TAP version 14
    # Subtest: example
    1..1
    # example_simple_test: initializing
    ok 1 - example_simple_test
    ok 1 - example

Congratulations, you just ran a KUnit test on the x86 architecture!

Writing new tests for other architectures
-----------------------------------------

The first thing you must do is ask yourself whether it is necessary to write a
KUnit test for a specific architecture, and then whether it is necessary to
write that test for a particular piece of hardware. In general, writing a test
that depends on having access to a particular piece of hardware or software (not
included in the Linux source repo) should be avoided at all costs.

Even if you only ever plan on running your KUnit test on your hardware
configuration, other people may want to run your tests and may not have access
to your hardware. If you write your test to run on UML, then anyone can run your
tests without knowing anything about your particular setup, and you can still
run your tests on your hardware setup just by compiling for your architecture.

.. important::
   Always prefer tests that run on UML to tests that only run under a particular
   architecture, and always prefer tests that run under QEMU or another easily
   obtained (and monetarily free) software environment to tests that require a
   specific piece of hardware.

Nevertheless, there are still valid reasons to write an architecture or hardware
specific test: for example, you might want to test some code that really belongs
in ``arch/some-arch/*``. Even so, try your best to write the test so that it
does not depend on physical hardware: if some of your test cases don't need the
hardware, only require the hardware for tests that actually need it.

Now that you have narrowed down exactly what bits are hardware specific, the
actual procedure for writing and running the tests is pretty much the same as
writing normal KUnit tests. One special caveat is that you have to reset
hardware state in between test cases; if this is not possible, you may only be
able to run one test case per invocation.

.. TODO(brendanhiggins@google.com): Add an actual example of an architecture
   dependent KUnit test.