SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2020

QF_LRA (Model Validation Track)

Competition results for the QF_LRA division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 378
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance, but no winner is selected.
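
In the Model Validation Track, a solver must not only answer sat on these satisfiable QF_LRA benchmarks but also print a model, which is then checked by substituting the reported values back into the assertions. The sketch below illustrates that checking step on a single toy constraint set using exact rational arithmetic; it is only an illustration of the idea, not the competition's actual validator, and the constraint data and helper names are invented for this example.

```python
from fractions import Fraction

# Toy QF_LRA benchmark with two linear assertions over x and y:
#   (assert (<= (+ x (* 2 y)) 7))
#   (assert (>  (- x y) 1))
# Each constraint is stored as (coefficients, comparison, constant).
constraints = [
    ({"x": Fraction(1), "y": Fraction(2)}, "<=", Fraction(7)),
    ({"x": Fraction(1), "y": Fraction(-1)}, ">", Fraction(1)),
]

# A candidate model as a solver might report it, parsed into exact rationals.
model = {"x": Fraction(3), "y": Fraction(3, 2)}

def holds(coeffs, op, const, assignment):
    """Evaluate one linear constraint under the assignment, exactly."""
    lhs = sum(c * assignment[var] for var, c in coeffs.items())
    return {"<=": lhs <= const, "<": lhs < const,
            ">=": lhs >= const, ">": lhs > const,
            "=": lhs == const}[op]

def validate(constraints, assignment):
    """True iff every assertion evaluates to true under the assignment."""
    return all(holds(c, op, k, assignment) for c, op, k in constraints)

print(validate(constraints, model))  # True: the reported model satisfies both assertions
```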

Sequential Performance

Solver                            Error Score  Correct Score  CPU Time Score  Wall Time Score  Timeout  Memout
OpenSMT                           0            366            23196.707       23147.053        12       0
Yices2 Model Validation           0            361            27231.252       27220.592        17       0
Yices2-fixed Model Validationⁿ    0            361            27293.683       27273.164        17       0
z3ⁿ                               0            361            30902.311       30890.314        17       0
CVC4-mv                           0            360            36177.219       36183.088        18       0
SMTInterpol-fixedⁿ                0            351            54948.511       52217.779        27       0
SMTInterpol                       2*           350            54559.699       51766.013        26       0
MathSAT5-mvⁿ                      2*           345            53776.173       53764.771        31       0

Parallel Performance

Solver                            Error Score  Correct Score  CPU Time Score  Wall Time Score  Timeout  Memout
OpenSMT                           0            366            23197.627       23146.593        12       0
Yices2 Model Validation           0            361            27233.072       27220.212        17       0
Yices2-fixed Model Validationⁿ    0            361            27294.953       27272.744        17       0
z3ⁿ                               0            361            30904.961       30889.864        17       0
CVC4-mv                           0            360            36181.659       36182.578        18       0
SMTInterpol-fixedⁿ                0            351            54948.511       52217.779        27       0
SMTInterpol                       2*           350            54559.699       51766.013        26       0
MathSAT5-mvⁿ                      2*           345            53783.173       53763.351        31       0

ⁿ Non-competing.
* The SMTInterpol error score is caused by the solver giving values to defined functions (a syntactic problem); it does not indicate unsoundness.
* The MathSAT5-mv error score is caused by the solver not giving full models (a syntactic problem); it does not indicate unsoundness.
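
For readers unfamiliar with the column headings, the sketch below shows one way per-benchmark outcomes could be rolled up into the error, correct, time, timeout, and memout figures reported above. It is only illustrative bookkeeping; the authoritative scoring definitions are in the SMT-COMP 2020 rules, and the record fields and the decision to count a timed-out run at the full 1200-second limit are assumptions of this sketch.

```python
from dataclasses import dataclass

TIME_LIMIT = 1200.0  # seconds, per the division's resource limits

@dataclass
class Run:
    """Outcome of one solver on one benchmark (fields are assumptions of this sketch)."""
    valid_model: bool  # solver answered sat and its model validated
    erroneous: bool    # solver gave a wrong answer or an invalid model
    timeout: bool
    memout: bool
    cpu_time: float    # seconds
    wall_time: float   # seconds

def aggregate(runs):
    """Roll per-benchmark runs up into division-level scores (illustrative only)."""
    return {
        "error_score":     sum(r.erroneous for r in runs),
        "correct_score":   sum(r.valid_model for r in runs),
        "cpu_time_score":  sum(TIME_LIMIT if r.timeout else r.cpu_time for r in runs),
        "wall_time_score": sum(TIME_LIMIT if r.timeout else r.wall_time for r in runs),
        "timeout":         sum(r.timeout for r in runs),
        "memout":          sum(r.memout for r in runs),
    }

# Example: two solved benchmarks and one timeout.
runs = [
    Run(True, False, False, False, 3.2, 3.1),
    Run(True, False, False, False, 0.7, 0.7),
    Run(False, False, True, False, 1200.0, 1200.0),
]
print(aggregate(runs))  # correct_score 2, timeout 1, time scores include the timed-out run
```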