Benchmark tests for Clarabel solver
Function pointers for all problems in the collection are available in `ClarabelBenchmarks.PROBLEMS`. This is a `Dict` object with keys matching the problem classes defined in `problem_sets`, e.g. `ClarabelBenchmarks.PROBLEMS["maros"]` gives a `Dict` of function pointers for the Maros problem set. Individual problems can be solved like so:
```julia
using ClarabelBenchmarks, Clarabel, JuMP

# solve a single problem from the "maros" set
model = ClarabelBenchmarks.PROBLEMS["maros"]["AUG2D"](Clarabel.Optimizer)
println("solve time: ", solve_time(model))
```
It is also possible to pass a JuMP model object to these functions, e.g. to solve using non-standard settings. To solve the same problem with a different iteration limit:
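A minimal sketch, assuming the problem functions accept a prebuilt JuMP model and that the iteration limit is exposed through Clarabel's `max_iter` setting:

```julia
using ClarabelBenchmarks, Clarabel, JuMP

# build a JuMP model with a custom iteration limit
# ("max_iter" is assumed to be Clarabel's iteration-limit setting)
model = JuMP.Model(Clarabel.Optimizer)
set_optimizer_attribute(model, "max_iter", 50)

# pass the preconfigured model to the problem function
# (assumes the problem functions accept a JuMP model, per the text above)
ClarabelBenchmarks.PROBLEMS["maros"]["AUG2D"](model)
println("solve time: ", solve_time(model))
```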
To run one of the benchmark tests you should execute the appropriate script in `./src/benchmarks`.

If the benchmark has previously been run then the script will load results from `./src/benchmarks/results`. If it has not been run before, or if new solvers have been added to the list of benchmarks for the test, then the data will be generated by running each solver that has not already been tested. Be warned that some of the benchmark collections take a very long time to run, particularly when multiple solvers are tested.
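For example, from the repository root (the script name below is a hypothetical placeholder; use whichever script in `./src/benchmarks` matches the problem set you want to benchmark):

```julia
# hypothetical script name; substitute the actual file in ./src/benchmarks
include("./src/benchmarks/benchmark_maros.jl")
```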
The Rust implementation comes with an (undocumented) interface to Julia for benchmarking. It can be built via `cargo` in the Rust repository using

```sh
cargo build --release --features julia
```

If you want to benchmark SDP problems, you will need to specify an appropriate SDP feature with BLAS support, e.g. on OSX you can use the native Accelerate framework:

```sh
cargo build --release --features julia,sdp-accelerate
```
You will also need to manually add the `ClarabelRs` package to your Julia environment. It lives in the Clarabel.rs GitHub repository at `<repo>/src/julia/ClarabelRs`.
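One way to do this is to add it as a development dependency (a sketch; the path is a placeholder for wherever your local clone of Clarabel.rs lives):

```julia
using Pkg
# point this at your local clone of the Clarabel.rs repository
Pkg.develop(path = "/path/to/Clarabel.rs/src/julia/ClarabelRs")
```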
Once you have done both of the above, you can call the Rust version of the solver in the same way as the Julia version, e.g.

```julia
using JuMP, ClarabelRs
model = Model(ClarabelRs.Optimizer)
.....
```