SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2020

QF_LIA (Model Validation Track)

Competition results for the QF_LIA division in the Model Validation Track.

Benchmarks: 1532
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance, but no winner is declared.
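
In this track a solver must not only answer `sat` but also print a model, which a validator then checks against the benchmark's assertions. As a rough illustration only (a hypothetical toy query, not one of the competition benchmarks), a QF_LIA model-validation job looks like this:

```smt2
; Hypothetical QF_LIA example (not an actual competition benchmark).
; The solver is expected to answer "sat" and then print a full model;
; the validator re-evaluates the assertions under that model.
(set-logic QF_LIA)
(declare-const x Int)
(declare-const y Int)
(assert (and (<= 0 x) (< x 10)))
(assert (= (+ x (* 2 y)) 7))
(check-sat)   ; expected: sat
(get-model)   ; e.g. x = 1, y = 3 satisfies both assertions
```

Roughly speaking, an answer contributes to the Correct Score only if the printed model can be validated against all assertions; this is why incomplete models are counted as errors in the tables below even when the sat/unsat answer itself is right.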

Sequential Performance

| Solver | Error Score | Correct Score | CPU Time Score (s) | Wall Time Score (s) | Timeout | Memout |
| --- | --- | --- | --- | --- | --- | --- |
| z3ⁿ | 0 | 1461 | 188635.889 | 188590.757 | 70 | 1 |
| CVC4-mv | 0 | 1413 | 171782.588 | 171520.789 | 116 | 0 |
| Yices2 Model Validation | 0 | 1293 | 318640.804 | 318586.617 | 239 | 0 |
| Yices2-fixed Model Validationⁿ | 0 | 1292 | 318817.204 | 318835.381 | 240 | 0 |
| SMTInterpol-fixedⁿ | 0 | 1273 | 404680.659 | 389779.138 | 259 | 0 |
| SMTInterpol | 0 | 1272 | 404643.459 | 389786.135 | 260 | 0 |
| MathSAT5-mvⁿ | 14* | 1440 | 176506.816 | 176433.952 | 78 | 0 |

Parallel Performance

| Solver | Error Score | Correct Score | CPU Time Score (s) | Wall Time Score (s) | Timeout | Memout |
| --- | --- | --- | --- | --- | --- | --- |
| z3ⁿ | 0 | 1461 | 188641.519 | 188588.467 | 70 | 1 |
| CVC4-mv | 0 | 1413 | 171792.048 | 171517.539 | 116 | 0 |
| Yices2 Model Validation | 0 | 1293 | 318653.504 | 318579.807 | 239 | 0 |
| Yices2-fixed Model Validationⁿ | 0 | 1292 | 318829.934 | 318829.741 | 240 | 0 |
| SMTInterpol-fixedⁿ | 0 | 1275 | 404712.029 | 389760.068 | 257 | 0 |
| SMTInterpol | 0 | 1274 | 404650.729 | 389736.485 | 258 | 0 |
| MathSAT5-mvⁿ | 14* | 1440 | 176516.546 | 176431.022 | 78 | 0 |

ⁿ Non-competing.
* The error score stems from MathSAT not printing full models (a syntactic problem); it does not indicate unsoundness.