SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2022

QF_LinearIntArith (Model Validation Track)

Competition results for the QF_LinearIntArith division in the Model Validation Track.

Page generated on 2022-08-10 11:19:11 +0000

Benchmarks: 4940
Time Limit: 1200 seconds
Memory Limit: 60 GB
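
In the Model Validation Track, a solver must answer "sat" for a satisfiable benchmark and exhibit a model, which is then checked against the benchmark's assertions by a separate validator. The sketch below illustrates that idea using the z3 Python bindings (the z3-solver package) and a made-up QF_LIA formula; neither the formula nor the use of z3py reflects the actual competition benchmarks or the official validation toolchain.

```python
# Minimal sketch of the Model Validation Track idea, using the z3 Python
# bindings. The formula is a made-up QF_LIA example, not a competition
# benchmark, and this is not the official SMT-COMP validator.
from z3 import And, Ints, Solver, is_true, sat

x, y = Ints("x y")
assertions = [And(0 <= x, x <= 10), x + 2 * y == 7]

solver = Solver()
solver.add(assertions)
assert solver.check() == sat      # the solver must answer "sat"
model = solver.model()            # ... and exhibit a model

# Validation step: every original assertion must evaluate to true
# under the model the solver produced.
for a in assertions:
    assert is_true(model.eval(a, model_completion=True))

print("model validated:", model)  # e.g. [y = 3, x = 1]
```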

Logics:

Winners

Sequential Performance: Z3++
Parallel Performance:   Z3++

Sequential Performance

Solver       | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout
Z3++         |           0 |          4762 |     332828.245 |      333903.963 |           |     165 |      6
2020-z3ⁿ     |           0 |          4733 |     394915.693 |      394789.006 |           |     188 |      2
z3-4.8.17ⁿ   |           0 |          4698 |      409492.92 |      409192.407 |           |     240 |      2
OpenSMT      |           0 |          4590 |     708462.598 |      708387.132 |           |     348 |      0
Yices2       |           0 |          4557 |     536617.987 |      536595.394 |           |     383 |      0
cvc5         |           0 |          4554 |     631309.776 |      631299.719 |           |     385 |      0
MathSATⁿ     |           0 |          4477 |     667617.179 |      667610.936 |           |     437 |      0
smtinterpol  |           0 |          4262 |    1014716.304 |      984274.658 |           |     678 |      0

Parallel Performance

Solver       | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout
Z3++         |           0 |          4762 |     332852.845 |      333897.413 |           |     165 |      6
2020-z3ⁿ     |           0 |          4733 |     394945.643 |      394780.196 |           |     188 |      2
z3-4.8.17ⁿ   |           0 |          4698 |      409514.15 |      409184.857 |           |     240 |      2
OpenSMT      |           0 |          4590 |     708497.928 |      708374.352 |           |     348 |      0
Yices2       |           0 |          4557 |     536651.157 |      536583.814 |           |     383 |      0
cvc5         |           0 |          4554 |     631377.756 |      631283.769 |           |     385 |      0
MathSATⁿ     |           0 |          4477 |     667673.659 |      667593.876 |           |     437 |      0
smtinterpol  |           0 |          4262 |    1014716.304 |      984274.658 |           |     678 |      0

ⁿ Non-competing.
Abstained: total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also counts benchmarks not known to be SAT/UNSAT.
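
The tables above determine the winners listed earlier. The sketch below shows how a sequential-performance winner can be read off such a table; the ordering used here (fewest errors, then most correct answers, then least CPU time) is an assumption based on the column layout rather than the official SMT-COMP scoring code, and non-competing entries (marked ⁿ) are excluded.

```python
# Simplified sketch: pick the sequential-performance winner from the table
# rows. The ranking criteria (error score, then correct score, then CPU
# time) are an assumption from the column order, not the official rules.
from dataclasses import dataclass

@dataclass
class Row:
    solver: str
    error_score: int
    correct_score: int
    cpu_time: float
    wall_time: float

# Competing solvers from the "Sequential Performance" table above.
rows = [
    Row("Z3++",        0, 4762,  332828.245, 333903.963),
    Row("OpenSMT",     0, 4590,  708462.598, 708387.132),
    Row("Yices2",      0, 4557,  536617.987, 536595.394),
    Row("cvc5",        0, 4554,  631309.776, 631299.719),
    Row("smtinterpol", 0, 4262, 1014716.304, 984274.658),
]

def sequential_key(r: Row):
    # Fewer errors first, more correct answers first, less CPU time first.
    return (r.error_score, -r.correct_score, r.cpu_time)

winner = min(rows, key=sequential_key)
print(winner.solver)  # -> Z3++, matching the sequential winner above
```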