SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2020

QF_IDL (Model Validation Track)

Competition results for the QF_IDL division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 512
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance, but no winner is declared.

Sequential Performance

Solver                            Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
Yices2 Model Validation                     0            487       35389.528        35376.161          0       25       0
Yices2-fixed Model Validationⁿ              0            487       35405.088        35393.349          0       25       0
z3ⁿ                                         0            478       44659.544        44679.086          0       29       0
CVC4-mv                                     0            441      130578.806       130542.963          0       71       0
SMTInterpol-fixedⁿ                          0            320      268434.789       264935.925          0      192       0
SMTInterpol                                 0            319      268666.114       265253.85           0      193       0
MathSAT5-mvⁿ                               5*            334      238095.162       238118.56           0      173       0

Parallel Performance

Solver                            Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
Yices2 Model Validation                     0            487       35391.708        35375.521          0       25       0
Yices2-fixed Model Validationⁿ              0            487       35407.618        35392.469          0       25       0
z3ⁿ                                         0            478       44663.604        44678.176          0       29       0
CVC4-mv                                     0            441      130590.026       130540.473          0       71       0
SMTInterpol-fixedⁿ                          0            321      268441.109       264925.145          0      191       0
SMTInterpol                                 0            320      268671.694       265242.24           0      192       0
MathSAT5-mvⁿ                               5*            334      238118.942       238111.74           0      173       0

ⁿ Non-competing.
Abstained: total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.
* The error score is caused by MathSAT not producing full models (syntactic problems); it does not indicate unsoundness.
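
To illustrate what the track checks, below is a minimal, hypothetical QF_IDL script (not one of the 512 competition benchmarks) together with the kind of full model a solver is expected to print. A response that omits a definition for a declared symbol is rejected by the model validator even if the solver's answer is sound, which is the situation described in the MathSAT5 footnote above.

  ; Hypothetical QF_IDL instance, for illustration only.
  (set-option :produce-models true)
  (set-logic QF_IDL)
  (declare-fun x () Int)
  (declare-fun y () Int)
  (assert (<= (- x y) 3))
  (assert (>= (- x y) 1))
  (check-sat)
  ; expected solver output: sat
  (get-model)
  ; expected solver output: a full model defining every declared symbol, e.g.
  ;   (
  ;     (define-fun x () Int 1)
  ;     (define-fun y () Int 0)
  ;   )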