A package for running `@testitem`s in parallel.

Wrap your tests in the `@testitem` macro, place them in a file named `*_tests.jl`, and use `runtests` to run them:
```julia
# test/arithmetic_tests.jl
@testitem "addition" begin
    @test 1 + 2 == 3
    @test 0 + 2 == 2
    @test -1 + 2 == 1
end

@testitem "multiplication" begin
    @test 1 * 2 == 2
    @test 0 * 2 == 0
    @test -1 * 2 == -2
end
```
```julia
julia> using ReTestItems

julia> runtests("test/arithmetic_tests.jl")
```
Run test-items in parallel on multiple processes by passing `nworkers`:

```julia
julia> runtests("test/arithmetic_tests.jl"; nworkers=2)
```
You can run tests using the `runtests` function, which will run all tests for the current active project:

```julia
julia> using ReTestItems

julia> runtests()
```
Test-items must be in files named with the suffix `_test.jl` or `_tests.jl`.
ReTestItems uses these file suffixes to identify which files are "test files";
all other files will be ignored by `runtests`.
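As an illustration (the file names here are hypothetical), in a layout like the following the first two files would be picked up by `runtests` and the last would be ignored:

```
test/
├── arithmetic_tests.jl   # run: ends in `_tests.jl`
├── parsing_test.jl       # run: ends in `_test.jl`
└── helpers.jl            # ignored: no `_test.jl`/`_tests.jl` suffix
```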
`runtests` allows you to run a subset of tests by passing the directory or file path(s) you want to run:

```julia
julia> runtests(
           "test/Database/physical_representation_tests.jl",
           "test/PhysicalRepresentation/",
       )
```
For interactive sessions, all logs from the tests are printed in the REPL by default.
You can disable this by passing `logs=:issues`, in which case logs from a test-item are only printed if that test-item errors or fails.
`logs=:issues` is also the default for non-interactive sessions.

```julia
julia> runtests("test/Database/"; logs=:issues)
```
You can use the `name` keyword to select test-items by name.
Pass a string to select a test-item by its exact name,
or pass a regular expression (regex) to match multiple test-item names.

```julia
julia> runtests("test/Database/"; name="issue-123")

julia> runtests("test/Database/"; name=r"^issue")
```
You can pass `tags` to select test-items by tag.
When passing multiple tags, a test-item is only run if it has all the requested tags.

```julia
# Run tests that are tagged as both `regression` and `fast`
julia> runtests("test/Database/"; tags=[:regression, :fast])
```
Filtering by `name` and `tags` can be combined to run only test-items that match both the name and the tags.

```julia
# Run tests named `issue*` which also have tag `regression`.
julia> runtests("test/Database/"; tags=:regression, name=r"^issue")
```
You can run tests in parallel on multiple worker processes using the `nworkers` keyword, which specifies the number of worker processes to use:

- `nworkers=0` (the default) runs tests sequentially on the current process.
- `nworkers=1` runs tests sequentially in a new process.
- `nworkers=2` runs tests in parallel on 2 new processes (and so on for `nworkers=3`, `nworkers=4`, ...).
All new worker processes are started before any tests are run. Each worker runs the `worker_init_expr`
(if given), and then selects test-items to run from a global queue in a work-stealing manner until all test-items have been run.
The number of threads each worker process should have is specified by the `nworker_threads` keyword.
For example, `nworker_threads=2` will start each worker process with 2 threads, and
`nworker_threads="2,1"` will start each worker with 2 default threads and 1 interactive thread
(see the Julia manual on threadpools).
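As a sketch of combining these two keywords (the test path is hypothetical):

```julia
# Hypothetical: run tests on 4 worker processes, each started with
# 2 default threads and 1 interactive thread.
runtests("test/"; nworkers=4, nworker_threads="2,1")
```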
Note ReTestItems.jl uses distributed parallelism, not multi-threading, to run test-items in parallel.
You can set `runtests` to stop on the first test-item failure by passing `failfast=true`.

Note that `failfast` prevents any new test-items from starting after the first test-item failure, but
test-items that were already running on another worker in parallel with the failing test will complete and appear in the test report.
Tests that were not run will not appear in the test report.
This may be improved in a future version of ReTestItems.jl, so that all test-items running in parallel are stopped when one test-item fails.

If you want individual test-items to stop on their first test failure, but not stop the whole `runtests` call early, you can instead pass just `testitem_failfast=true` to `runtests`.
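A minimal sketch of the two options (the path is hypothetical):

```julia
# Stop scheduling new test-items after the first test-item fails:
runtests("test/"; failfast=true)

# Let all test-items run, but have each one stop at its own first
# failing `@test`:
runtests("test/"; testitem_failfast=true)
```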
Tests must be wrapped in a `@testitem`.
In most cases, a `@testitem` can just be used instead of a `@testset`, wrapping together a bunch of related tests:

```julia
@testitem "min/max" begin
    @test min(1, 2) == 1
    @test max(1, 2) == 2
end
```
The test-item's code is evaluated as top-level code in a new module,
so it can include imports, define new structs or helper functions, as well as declare `@test`s and `@testset`s.

```julia
@testitem "Do cool stuff" begin
    using MyPkgDep

    function really_cool_stuff()
        # ...
    end

    @testset "Cool stuff doing" begin
        @test really_cool_stuff()
    end
end
```
By default, `Test` and the package being tested will be imported into the `@testitem` automatically.

Since a `@testitem` is the block of code that will be executed, `@testitem`s cannot be nested.
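For example, assuming a package `MyPackage` with an exported function `greet` is under test (both names are hypothetical), a test-item can use `@test` and the package's exported names without any explicit imports:

```julia
@testitem "automatic imports" begin
    # No `using Test` or `using MyPackage` needed here:
    # both are imported into the test-item automatically.
    @test greet() isa String
end
```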
If some test-specific code needs to be shared by multiple `@testitem`s, this code can be placed in a `module` marked with `@testsetup`, and the `@testitem`s can depend on it via the `setup` keyword.

```julia
@testsetup module TestIrrationals
    export PI, area
    const PI = 3.14159
    area(radius) = PI * radius^2
end

@testitem "Arithmetic" setup=[TestIrrationals] begin
    @test 1 / PI ≈ 0.31831 atol=1e-6
end

@testitem "Geometry" setup=[TestIrrationals] begin
    @test area(1) ≈ PI
end
```
The setup is run once on each worker process that requires it;
it is not re-run before every `@testitem` that depends on it.
The `skip` keyword can be used to skip a `@testitem`, meaning no code inside that test-item will run.
A skipped test-item logs that it is being skipped and records a single "skipped" test result, similar to `@test_skip`.

```julia
@testitem "skipped" skip=true begin
    @test false
end
```
If `skip` is given as an `Expr`, it must return a `Bool` indicating whether or not to skip the test-item.
This expression is run in a new module, similar to a test-item, immediately before the test-item would otherwise run.

```julia
# Don't run "orc v1" tests if we don't have orc v1
@testitem "orc v1" skip=:(using LLVM; !LLVM.has_orc_v1()) begin
    # tests
end
```
The `skip` keyword allows you to define the condition under which a test needs to be skipped,
for example if it can only be run on a certain platform.
See filtering tests for controlling which tests run in a particular `runtests` call.
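For example, a platform-based skip condition might look like this (a sketch; the test name is hypothetical):

```julia
# Skip this test-item everywhere except Linux.
@testitem "linux-only behavior" skip=:(!Sys.islinux()) begin
    @test Sys.islinux()
end
```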
Each test-item can control whether or not it stops on its first failure using the `failfast` keyword.

```julia
@testitem "stops on first failure" failfast=true begin
    @test 1 + 1 == 3
    @test 2 * 2 == 4
end
```
If a test-item sets `failfast`, then that value takes precedence over the `testitem_failfast` keyword passed to `runtests`.
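For instance, a test-item can opt out of fail-fast behavior even when `runtests` was called with `testitem_failfast=true` (a sketch):

```julia
# `failfast=false` on the test-item overrides `testitem_failfast=true`
# at the `runtests` level, so both `@test`s run even though the
# first one fails.
@testitem "runs all tests" failfast=false begin
    @test 1 + 1 == 3
    @test 2 * 2 == 4
end
```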
If there is something that should be checked after every single `@testitem`, you can pass an expression to `runtests` using the `test_end_expr` keyword.
This expression will be run immediately after each `@testitem`.

```julia
test_end_expr = quote
    @testset "global Foo unchanged" begin
        foo = get_global_foo()
        @test foo.changes == 0
    end
end

runtests("frozzle_tests.jl"; test_end_expr)
```
If there is some set-up that should be done on each worker process before it is used to evaluate test-items, you can pass an expression to `runtests` via the `worker_init_expr` keyword.
This expression will be run on each worker process as soon as it is started.

```julia
nworkers = 3
worker_init_expr = quote
    set_global_foo_memory_limit!(Sys.total_memory()/nworkers)
end

runtests("frobble_tests.jl"; nworkers, worker_init_expr)
```
- Write tests inside of an `@testitem` block.
  - These are like an `@testset`, except that they must contain all the code they need to run; any imports or definitions required for the tests must be inside the `@testitem`.
  - A `@testset` can still be used, but all `@testset`s must be inside an `@testitem`. These nested `@testset`s can add structure to your test code and to the summary of results printed out at the end of the tests, but serve no other functional purpose.
  - Tests that might previously have had imports and `struct` or `function` definitions outside of an `@testset` should instead now declare these inside of a `@testitem`.
  - `@testitem`s can be run in parallel by setting the `nworkers` keyword.
- Write shared/re-used testing code in a `@testsetup module`.
  - If you want to split tests up into multiple `@testitem`s (so they can run in parallel), but also want to share common helper functions, types, or constants, then put the shared helper code in a module marked with `@testsetup`.
  - Each `@testsetup` will only be evaluated once per worker process that requires it.
  - A `@testsetup module` is recommended for sharing helper definitions or shared immutable data; not for initializing shared global state that is meant to be mutated (like starting a server). For example, a server should be explicitly started and stopped as needed in a `@testitem`, not started within a `@testsetup module`.
- Write tests in files named `*_test.jl` or `*_tests.jl`.
  - ReTestItems scans the directory tree for any file with the correct naming scheme and automatically schedules the `@testitem`s they contain for evaluation.
  - Files without this naming convention will not be run.
  - Test files can reside in either the `src/` or `test/` directory, so long as they are named like `src/sorted_set_tests.jl` (note the `_tests.jl` suffix).
  - No explicit `include` of these files is required.
  - Files containing only `@testsetup`s can be named `*_testsetup.jl` or `*_testsetups.jl`, and these files will always be included.
  - Note that `test/runtests.jl` does not meet the naming convention, and should not itself contain `@testitem`s.
- Make sure your `test/runtests.jl` script calls `runtests`.
  - `test/runtests.jl` is the script run when you call `Pkg.test()` or `] test` at the REPL.
  - This script can have ReTestItems.jl run tests by calling `runtests`, for example:

    ```julia
    # test/runtests.jl
    using ReTestItems, MyPackage
    runtests(MyPackage)
    ```

  - Pass to `runtests` any configuration you want your tests to run with, such as `retries`, `failfast`, `testitem_failfast`, `testitem_timeout`, `nworkers`, `nworker_threads`, `worker_init_expr`, `test_end_expr`. See the `runtests` docstring for details.
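For example, a `test/runtests.jl` passing a few of these options might look like the following (the option values are illustrative, not recommendations; check the `runtests` docstring for the exact semantics and units of each keyword):

```julia
# test/runtests.jl
using ReTestItems, MyPackage

# Hypothetical configuration: retry failing test-items up to twice,
# and run them in parallel on 4 worker processes.
runtests(MyPackage; retries=2, nworkers=4)
```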
Issues and pull requests are welcome! New contributors should make sure to read the ColPrac Contributor Guide. For significant changes please open an issue for discussion before opening a PR. Information on adding tests is in the test/README.md.