SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2022

QF_LIA (Model Validation Track)

Competition results for the QF_LIA logic in the Model Validation Track.

Page generated on 2022-08-10 11:19:11 +0000

Benchmarks: 4196
Time Limit: 1200 seconds
Memory Limit: 60 GB
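
For illustration only (this file is not taken from the competition benchmark set), a QF_LIA query in the Model Validation Track has roughly the following SMT-LIB shape: the solver must answer sat and report a model, which the validator then checks against all assertions. The constraints and model values below are hypothetical.

    (set-option :produce-models true)
    (set-logic QF_LIA)
    (declare-const x Int)
    (declare-const y Int)
    (assert (and (> (+ x y) 5) (< x 3)))
    (check-sat)   ; solver answers: sat
    (get-model)   ; e.g. ((define-fun x () Int 2) (define-fun y () Int 4));
                  ; the validator re-evaluates every assertion under this model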

Winners

Sequential Performance: Z3++
Parallel Performance:   Z3++

Sequential Performance

Solver        Error Score   Correct Score   CPU Time Score   Wall Time Score   Timeout   Memout
Z3++          0             4078            223671.692       224739.043        106       6
2020-z3ⁿ      0             4073            272161.825       272055.953        108       2
MathSATⁿ      0             4050            266606.64        266561.143        132       0
z3-4.8.17ⁿ    0             4032            284918.865       284670.43         162       2
OpenSMT       0             3994            477436.125       477337.742        201       0
cvc5          0             3992            352537.004       352478.219        203       0
Yices2        0             3905            408746.205       408736.121        291       0
smtinterpol   0             3814            600133.529       575944.074        382       0

Parallel Performance

Solver        Error Score   Correct Score   CPU Time Score   Wall Time Score   Timeout   Memout
Z3++          0             4078            223688.172       224734.873        106       6
2020-z3ⁿ      0             4073            272179.315       272050.813        108       2
MathSATⁿ      0             4050            266622.02        266555.603        132       0
z3-4.8.17ⁿ    0             4032            284933.215       284664.68         162       2
OpenSMT       0             3994            477456.615       477330.582        201       0
cvc5          0             3992            352563.424       352470.659        203       0
Yices2        0             3905            408770.615       408727.321        291       0
smtinterpol   0             3814            600133.529       575944.074        382       0

ⁿ Non-competing.