SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2023


QF_Equality+LinearArith (Model Validation Track)

Competition results for the QF_Equality+LinearArith division in the Model Validation Track.

Page generated on 2023-07-06 16:06:00 +0000

Benchmarks: 921
Time Limit: 1200 seconds
Memory Limit: 60 GB


Winners

Sequential Performance: SMTInterpol
Parallel Performance:   SMTInterpol

Sequential Performance

Solver              Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout Memout
2022-smtinterpol n            0            909       20222.626        15663.037         12               0
SMTInterpol                   0            895       25487.104        21160.722         26               0
OpenSMT                       0            893        12161.31        12156.124         28               0
cvc5                          0            870       30144.941        30148.711         51               0
Yices2                        0            844        20388.98        20392.097         77               0

Parallel Performance

Solver              Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout Memout
2022-smtinterpol n            0            910       21438.566        16805.957         11               0
SMTInterpol                   0            896       26728.824         22334.61         25               0
OpenSMT                       0            893        12161.31        12156.124         28               0
cvc5                          0            870       30144.941        30148.711         51               0
Yices2                        0            844        20388.98        20392.097         77               0

n  Non-competing.
Abstained: Total number of benchmarks in this division's logics that the solver chose to abstain from. For SAT/UNSAT scores, this column also counts benchmarks whose SAT/UNSAT status is not known.
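In both tables every row has an Error Score of 0 and no timeouts or memouts, so Correct Score plus Abstained should account for all 921 benchmarks in the division. A minimal sketch checking that invariant for the sequential table (the dictionary below is transcribed from the table above; it is not official SMT-COMP tooling):

```python
# Sequential Performance rows, transcribed from the results table:
# solver name -> (correct score, abstained count)
SEQUENTIAL = {
    "2022-smtinterpol": (909, 12),
    "SMTInterpol": (895, 26),
    "OpenSMT": (893, 28),
    "cvc5": (870, 51),
    "Yices2": (844, 77),
}

BENCHMARKS = 921  # total benchmarks in this division

# Each solver either solved a benchmark correctly or abstained from it,
# since errors, timeouts, and memouts are all zero in this division.
for solver, (correct, abstained) in SEQUENTIAL.items():
    assert correct + abstained == BENCHMARKS, solver

print("all rows consistent")
```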