SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2020

QF_LIA (Model Validation Track)

Competition results for the QF_LIA division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 1532
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance only, and no winner is selected.
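
For context, a model-validation run requires a solver not only to answer "sat" but also to produce a model that makes every assertion true; the model is then checked independently. The toy sketch below illustrates that idea on a small QF_LIA formula using the z3 Python bindings (an illustration under stated assumptions, not the competition's validation tooling).

```python
# Minimal sketch of the model-validation idea on a toy QF_LIA formula.
# Assumes the z3-solver Python package; this is an illustration only,
# not the SMT-COMP validation infrastructure.
from z3 import Ints, Solver, sat, is_true

x, y = Ints("x y")
constraints = [3 * x + 2 * y == 11, x >= 0, y >= 0]

solver = Solver()
solver.add(*constraints)

if solver.check() == sat:
    model = solver.model()
    # Model validation: every assertion must evaluate to true under the model.
    valid = all(is_true(model.evaluate(c, model_completion=True))
                for c in constraints)
    print("model:", model, "validates:", valid)
```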

Sequential Performance

Solver                             Error Score   Correct Score   CPU Time Score   Wall Time Score   Abstained   Timeout   Memout
z3 (n)                             0             1461            188635.889       188590.757        0           70        1
CVC4-mv                            0             1413            171782.588       171520.789        0           116       0
Yices2 Model Validation            0             1293            318640.804       318586.617        0           239       0
Yices2-fixed Model Validation (n)  0             1292            318817.204       318835.381        0           240       0
SMTInterpol-fixed (n)              0             1273            404680.659       389779.138        0           259       0
SMTInterpol                        0             1272            404643.459       389786.135        0           260       0
MathSAT5-mv (n)                    14*           1440            176506.816       176433.952        0           78        0

Parallel Performance

Solver                             Error Score   Correct Score   CPU Time Score   Wall Time Score   Abstained   Timeout   Memout
z3 (n)                             0             1461            188641.519       188588.467        0           70        1
CVC4-mv                            0             1413            171792.048       171517.539        0           116       0
Yices2 Model Validation            0             1293            318653.504       318579.807        0           239       0
Yices2-fixed Model Validation (n)  0             1292            318829.934       318829.741        0           240       0
SMTInterpol-fixed (n)              0             1275            404712.029       389760.068        0           257       0
SMTInterpol                        0             1274            404650.729       389736.485        0           258       0
MathSAT5-mv (n)                    14*           1440            176516.546       176431.022        0           78        0

(n) Non-competing.
Abstained: total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also counts benchmarks whose SAT/UNSAT status is not known.
* The error score is caused by MathSAT not emitting full models (a syntactic issue); it does not indicate unsoundness.
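
As a reading aid, the count columns should roughly partition the 1532 benchmarks: errors, correct answers, timeouts, and memouts account for almost every run, and any remainder corresponds to runs not counted in these columns (for example, "unknown" answers). The short check below recomputes that arithmetic from the sequential figures copied out of the table above (a sketch for illustration, not competition tooling).

```python
# Sanity-check the sequential count columns against the 1532 benchmarks.
# Figures are copied from the Sequential Performance table above.
TOTAL = 1532

sequential = {
    # solver: (error, correct, timeout, memout)
    "z3":                            (0, 1461, 70, 1),
    "CVC4-mv":                       (0, 1413, 116, 0),
    "Yices2 Model Validation":       (0, 1293, 239, 0),
    "Yices2-fixed Model Validation": (0, 1292, 240, 0),
    "SMTInterpol-fixed":             (0, 1273, 259, 0),
    "SMTInterpol":                   (0, 1272, 260, 0),
    "MathSAT5-mv":                   (14, 1440, 78, 0),
}

for name, (err, correct, timeout, memout) in sequential.items():
    accounted = err + correct + timeout + memout
    # "other" covers runs with no counted outcome, e.g. "unknown" answers.
    print(f"{name:32s} accounted {accounted:4d} / {TOTAL} "
          f"(other: {TOTAL - accounted})")
```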