SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2020

QF_LRA (Model Validation Track)

Competition results for the QF_LRA division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 378
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance, but no winner is declared.
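
In this track a solver must not only report sat or unsat within the limits above but, for satisfiable benchmarks, also emit a model that an external validator checks against the input formula. As a rough illustration, a toy QF_LRA query in standard SMT-LIB 2 syntax might look as follows (an invented example for exposition, not one of the 378 competition benchmarks):

    (set-option :produce-models true)
    (set-logic QF_LRA)
    (declare-const x Real)
    (declare-const y Real)
    (assert (> (+ x y) 1))    ; linear constraints over the reals
    (assert (< (- x y) 0))
    (check-sat)
    (get-model)
    ; a conforming solver answers "sat" followed by a model such as
    ;   ((define-fun x () Real 0.0)
    ;    (define-fun y () Real 2.0))

The footnoted error scores in the tables below stem from how models are printed, not from incorrect answers.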

Sequential Performance

Solver                           Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
OpenSMT                          0            366            23196.707       23147.053        0          12       0
Yices2 Model Validation          0            361            27231.252       27220.592        0          17       0
Yices2-fixed Model Validationⁿ   0            361            27293.683       27273.164        0          17       0
z3ⁿ                              0            361            30902.311       30890.314        0          17       0
CVC4-mv                          0            360            36177.219       36183.088        0          18       0
SMTInterpol-fixedⁿ               0            351            54948.511       52217.779        0          27       0
SMTInterpol                      2*           350            54559.699       51766.013        0          26       0
MathSAT5-mvⁿ                     2*           345            53776.173       53764.771        0          31       0

Parallel Performance

Solver                           Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
OpenSMT                          0            366            23197.627       23146.593        0          12       0
Yices2 Model Validation          0            361            27233.072       27220.212        0          17       0
Yices2-fixed Model Validationⁿ   0            361            27294.953       27272.744        0          17       0
z3ⁿ                              0            361            30904.961       30889.864        0          17       0
CVC4-mv                          0            360            36181.659       36182.578        0          18       0
SMTInterpol-fixedⁿ               0            351            54948.511       52217.779        0          27       0
SMTInterpol                      2*           350            54559.699       51766.013        0          26       0
MathSAT5-mvⁿ                     2*           345            53783.173       53763.351        0          31       0

ⁿ Non-competing.
Abstained: the total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.
* The error score for SMTInterpol is caused by it giving values to defined functions (a syntactic problem); it does not indicate unsoundness.
* The error score for MathSAT is caused by it not giving full models (a syntactic problem); it does not indicate unsoundness.