SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2020


QF_LIRA (Model Validation Track)

Competition results for the QF_LIRA division in the Model Validation Track.

Page generated on 2020-07-04 11:50:14 +0000

Benchmarks: 1
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance only, and no winner is declared.
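In the Model Validation Track, a solver must not only report satisfiability but also print a full model (an assignment to every variable), which an external validator then checks against the benchmark's assertions. The following is a minimal illustrative sketch of such a check in Python; the benchmark and candidate model are invented toy examples, not taken from the competition:

```python
# Toy illustration of model validation for a QF_LIRA-style benchmark.
# QF_LIRA mixes quantifier-free linear integer and real arithmetic.
# Illustrative benchmark (not a real competition instance):
#   (declare-const x Int) (declare-const y Real)
#   (assert (> (+ x y) 2.5))
#   (assert (<= x 3))
assertions = [
    lambda m: m["x"] + m["y"] > 2.5,
    lambda m: m["x"] <= 3,
]

def validate(model, assertions):
    """Return True iff every assertion evaluates to true under the model."""
    return all(a(model) for a in assertions)

# Hypothetical model as a solver might print it: x = 3, y = 0.5.
candidate = {"x": 3, "y": 0.5}
print(validate(candidate, assertions))  # -> True
```

The validator never reruns the solver; it only re-evaluates the (quantifier-free) assertions under the concrete assignment, which is why this track can confirm SAT answers independently of the solver's internals.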

Sequential Performance

| Solver | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout |
|---|---|---|---|---|---|---|---|
| Yices2-fixed Model Validationⁿ | 0 | 1 | 0.098 | 0.098 | 0 | 0 | 0 |
| Yices2 Model Validation | 0 | 1 | 0.098 | 0.098 | 0 | 0 | 0 |
| z3ⁿ | 0 | 1 | 0.343 | 0.344 | 0 | 0 | 0 |
| CVC4-mv | 0 | 1 | 1.357 | 1.357 | 0 | 0 | 0 |
| MathSAT5-mvⁿ | 0 | 1 | 2.52 | 2.523 | 0 | 0 | 0 |
| SMTInterpol | 0 | 1 | 47.578 | 28.592 | 0 | 0 | 0 |
| SMTInterpol-fixedⁿ | 0 | 1 | 49.014 | 29.0 | 0 | 0 | 0 |

Parallel Performance

| Solver | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout |
|---|---|---|---|---|---|---|---|
| Yices2-fixed Model Validationⁿ | 0 | 1 | 0.098 | 0.098 | 0 | 0 | 0 |
| Yices2 Model Validation | 0 | 1 | 0.098 | 0.098 | 0 | 0 | 0 |
| z3ⁿ | 0 | 1 | 0.343 | 0.344 | 0 | 0 | 0 |
| CVC4-mv | 0 | 1 | 1.357 | 1.357 | 0 | 0 | 0 |
| MathSAT5-mvⁿ | 0 | 1 | 2.52 | 2.523 | 0 | 0 | 0 |
| SMTInterpol | 0 | 1 | 47.578 | 28.592 | 0 | 0 | 0 |
| SMTInterpol-fixedⁿ | 0 | 1 | 49.014 | 29.0 | 0 | 0 | 0 |

ⁿ Non-competing.
Abstained: total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also counts benchmarks not known to be SAT/UNSAT.