SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2021

QF_LIA (Model Validation Track)

Competition results for the QF_LIA logic in the Model Validation Track.

Page generated on 2021-07-18 17:31:50 +0000

Benchmarks: 1923
Time Limit: 1200 seconds
Memory Limit: 60 GB

Winners

Sequential Performance: cvc5-mv
Parallel Performance: cvc5-mv

Sequential Performance

Solver                                  Error Score  Correct Score  CPU Time Score  Wall Time Score  Timeout  Memout
MathSAT5 (n)                            0            1830           204232.357      204190.134       80       0
2020-z3 (n)                             0            1819           239179.029      239036.978       93       0
cvc5-mv                                 0            1754           297407.626      297371.91        169      0
z3-mv (n)                               0            1706           364901.925      364851.103       217      0
2020-Yices2-fixed Model Validation (n)  0            1638           376806.041      376819.762       285      0
2020-Yices2 Model Validation (n)        0            1638           376813.696      376796.183       285      0
Yices2 model-validation                 0            1634           380956.556      380978.7         289      0
SMTInterpol                             0            1604           499491.927      479261.96        319      0
OpenSMT                                 0            1222           871292.562      976940.2         228      0

Parallel Performance

Solver                                  Error Score  Correct Score  CPU Time Score  Wall Time Score  Timeout  Memout
MathSAT5 (n)                            0            1830           204241.777      204187.064       80       0
2020-z3 (n)                             0            1819           239192.309      239033.678       93       0
cvc5-mv                                 0            1754           297421.676      297367.13        169      0
z3-mv (n)                               0            1706           364931.845      364842.353       217      0
2020-Yices2 Model Validation (n)        0            1638           376836.096      376786.983       285      0
2020-Yices2-fixed Model Validation (n)  0            1638           376827.861      376810.512       285      0
Yices2 model-validation                 0            1634           380977.076      380972.16        289      0
SMTInterpol                             0            1605           500189.947      479097.87        318      0
OpenSMT                                 0            1222           897206.746      976930.21        228      0

(n) Non-competing.
N/A: Benchmarks not known to be SAT/UNSAT.
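
A note on the winners: cvc5-mv places below MathSAT5 and 2020-z3 in the tables above, yet it is listed as the winner because those higher-ranked entries are non-competing. The following Python sketch (not the official SMT-COMP scoring code) re-derives the sequential winner from the table above, assuming the usual lexicographic ordering by error score, correct score, and CPU time score.

# Sketch only: assumes lexicographic ranking (fewest errors, most correct
# answers, lowest CPU time score) after excluding non-competing entries.
# (solver, non_competing, error_score, correct_score, cpu_time_score)
sequential = [
    ("MathSAT5",                           True,  0, 1830, 204232.357),
    ("2020-z3",                            True,  0, 1819, 239179.029),
    ("cvc5-mv",                            False, 0, 1754, 297407.626),
    ("z3-mv",                              True,  0, 1706, 364901.925),
    ("2020-Yices2-fixed Model Validation", True,  0, 1638, 376806.041),
    ("2020-Yices2 Model Validation",       True,  0, 1638, 376813.696),
    ("Yices2 model-validation",            False, 0, 1634, 380956.556),
    ("SMTInterpol",                        False, 0, 1604, 499491.927),
    ("OpenSMT",                            False, 0, 1222, 871292.562),
]

competing = [row for row in sequential if not row[1]]
ranked = sorted(competing, key=lambda r: (r[2], -r[3], r[4]))
print(ranked[0][0])  # -> cvc5-mv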