SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2020


QF_IDL (Model Validation Track)

Competition results for the QF_IDL division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 512
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance, but no winner is declared.
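For context, in the Model Validation Track a solver must output a full model for each satisfiable instance, and an external validator then checks that model against the benchmark. The essence of such a check for QF_IDL (quantifier-free integer difference logic) can be sketched as follows; the constraint and model representations here are illustrative assumptions, not the competition's actual validator:

```python
# Illustrative sketch: QF_IDL atoms can be normalized to the form
# x - y <= c. A model assigns an integer to every variable, and
# validation just evaluates each atom under that assignment.
def validate_model(constraints, model):
    """Return True iff every difference constraint holds under the model.

    constraints: list of (x, y, c) triples meaning  x - y <= c
    model:       dict mapping variable names to integers
    """
    return all(model[x] - model[y] <= c for x, y, c in constraints)

# Example instance: x - y <= 3, y - z <= -1, z - x <= -1
constraints = [("x", "y", 3), ("y", "z", -1), ("z", "x", -1)]
good_model = {"x": 2, "y": 0, "z": 1}  # satisfies all three atoms
bad_model = {"x": 0, "y": 0, "z": 0}   # violates y - z <= -1

print(validate_model(constraints, good_model))  # True
print(validate_model(constraints, bad_model))   # False
```

A solver is penalized in this track only when it claims sat but its reported model fails a check like this one.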

Sequential Performance

| Solver | Error Score | Correct Score | CPU Time Score | Wall Time Score | Timeout | Memout |
| --- | --- | --- | --- | --- | --- | --- |
| Yices2 Model Validation | 0 | 487 | 35389.528 | 35376.161 | 25 | 0 |
| Yices2-fixed Model Validationⁿ | 0 | 487 | 35405.088 | 35393.349 | 25 | 0 |
| z3ⁿ | 0 | 478 | 44659.544 | 44679.086 | 29 | 0 |
| CVC4-mv | 0 | 441 | 130578.806 | 130542.963 | 71 | 0 |
| SMTInterpol-fixedⁿ | 0 | 320 | 268434.789 | 264935.925 | 192 | 0 |
| SMTInterpol | 0 | 319 | 268666.114 | 265253.85 | 193 | 0 |
| MathSAT5-mvⁿ | 5* | 334 | 238095.162 | 238118.56 | 173 | 0 |

Parallel Performance

| Solver | Error Score | Correct Score | CPU Time Score | Wall Time Score | Timeout | Memout |
| --- | --- | --- | --- | --- | --- | --- |
| Yices2 Model Validation | 0 | 487 | 35391.708 | 35375.521 | 25 | 0 |
| Yices2-fixed Model Validationⁿ | 0 | 487 | 35407.618 | 35392.469 | 25 | 0 |
| z3ⁿ | 0 | 478 | 44663.604 | 44678.176 | 29 | 0 |
| CVC4-mv | 0 | 441 | 130590.026 | 130540.473 | 71 | 0 |
| SMTInterpol-fixedⁿ | 0 | 321 | 268441.109 | 264925.145 | 191 | 0 |
| SMTInterpol | 0 | 320 | 268671.694 | 265242.24 | 192 | 0 |
| MathSAT5-mvⁿ | 5* | 334 | 238118.942 | 238111.74 | 173 | 0 |

ⁿ Non-competing.
* The error score is caused by MathSAT not emitting full models (syntactic problems); it does not indicate unsoundness.