SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2021


QF_Equality (Model Validation Track)

Competition results for the QF_Equality division in the Model Validation Track.

Page generated on 2021-07-18 17:31:50 +0000

Benchmarks: 1571
Time Limit: 1200 seconds
Memory Limit: 60 GB

This division is experimental: solvers are ranked by performance only, and no winner is selected.
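In the Model Validation Track, a solver's "sat" answer only counts as correct if the model it returns actually satisfies the input formula. The sketch below is a hypothetical, heavily simplified illustration of that idea (propositional CNF rather than SMT-LIB, and not the competition's actual checker): given a candidate model reported by a solver, confirm that every clause contains a satisfied literal.

```python
# Hypothetical sketch of model validation: a model is accepted only if
# it makes the whole formula true. This is NOT the SMT-COMP checker;
# real validation evaluates SMT-LIB models against SMT-LIB benchmarks.

def validate_model(clauses, model):
    """Check a candidate model against a formula in CNF.

    clauses: list of clauses; each clause is a list of (var, polarity) pairs.
    model:   dict mapping var -> bool, as a solver might report it.
    Returns True iff every clause has at least one satisfied literal.
    """
    return all(
        any(model[var] == polarity for var, polarity in clause)
        for clause in clauses
    )

# (x or not y) and (y or z)
formula = [[("x", True), ("y", False)], [("y", True), ("z", True)]]

good_model = {"x": True, "y": False, "z": True}
bad_model = {"x": False, "y": False, "z": False}

print(validate_model(formula, good_model))  # True: model is accepted
print(validate_model(formula, bad_model))   # False: second clause is falsified
```

A solver that answers "sat" but returns a model failing this check would be scored as erroneous rather than correct.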

Sequential Performance

| Solver                  | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout |
|-------------------------|-------------|---------------|----------------|-----------------|-----------|---------|--------|
| Yices2 model-validation | 0           | 1571          | 73.098         | 77.224          | 0         | 0       | 0      |
| OpenSMT                 | 0           | 1571          | 562.781        | 572.429         | 0         | 0       | 0      |
| cvc5-mv                 | 0           | 1571          | 731.025        | 728.436         | 0         | 0       | 0      |
| SMTInterpol             | 0           | 1571          | 4841.795       | 1974.791        | 0         | 0       | 0      |
| z3-mvⁿ                  | 0           | 1555          | 243.214        | 232.262         | 0         | 0       | 0      |
| MathSAT5ⁿ               | 0           | 640           | 227.882        | 226.947         | 0         | 0       | 0      |

Parallel Performance

| Solver                  | Error Score | Correct Score | CPU Time Score | Wall Time Score | Abstained | Timeout | Memout |
|-------------------------|-------------|---------------|----------------|-----------------|-----------|---------|--------|
| Yices2 model-validation | 0           | 1571          | 73.098         | 77.224          | 0         | 0       | 0      |
| OpenSMT                 | 0           | 1571          | 562.781        | 572.429         | 0         | 0       | 0      |
| cvc5-mv                 | 0           | 1571          | 731.025        | 728.436         | 0         | 0       | 0      |
| SMTInterpol             | 0           | 1571          | 4841.795       | 1974.791        | 0         | 0       | 0      |
| z3-mvⁿ                  | 0           | 1555          | 243.214        | 232.262         | 0         | 0       | 0      |
| MathSAT5ⁿ               | 0           | 640           | 227.882        | 226.947         | 0         | 0       | 0      |

ⁿ Non-competing.
Abstained: total number of benchmarks, across the logics in this division, that the solver chose to abstain from. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.