SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2023

QF_Equality (Model Validation Track)

Competition results for the QF_Equality division in the Model Validation Track.

Page generated on 2023-07-06 16:06:00 +0000

Benchmarks: 1571
Time Limit: 1200 seconds
Memory Limit: 60 GB

Logics:

Winners

Sequential Performance: Yices2
Parallel Performance:   Yices2

Sequential Performance

Solver                           Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
2021-Yices2 model-validationⁿ    0            1571           67.859          70.124           0          0        0
2022-Yices2ⁿ                     0            1571           67.877          70.128           0          0        0
Yices2                           0            1571           68.695          70.91            0          0        0
OpenSMT                          0            1571           260.241         269.546          0          0        0
cvc5                             0            1571           459.001         453.626          0          0        0
SMTInterpol                      0            1571           4261.414        1700.092         0          0        0

Parallel Performance

Solver                           Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
2021-Yices2 model-validationⁿ    0            1571           67.859          70.124           0          0        0
2022-Yices2ⁿ                     0            1571           67.877          70.128           0          0        0
Yices2                           0            1571           68.695          70.91            0          0        0
OpenSMT                          0            1571           260.241         269.546          0          0        0
cvc5                             0            1571           459.001         453.626          0          0        0
SMTInterpol                      0            1571           4261.414        1700.092         0          0        0

ⁿ Non-competing.
Abstained: total number of benchmarks, across the logics in this division, on which the solver chose to abstain. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.
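
For context on what these tables measure: in the Model Validation Track a solver must answer sat and emit a complete model, which an independent validator then checks against the benchmark's assertions; per the competition rules, the sequential ranking is based on CPU time and the parallel ranking on wall-clock time. A minimal illustrative instance in SMT-LIB syntax (a hypothetical toy example in QF_UF, not an actual competition benchmark) might look like this:

    (set-logic QF_UF)
    (declare-sort U 0)
    (declare-const a U)
    (declare-const b U)
    (declare-fun f (U) U)
    ; Satisfiable: any interpretation where a and b are distinct
    ; and f maps a to b satisfies both assertions.
    (assert (distinct a b))
    (assert (= (f a) b))
    (check-sat)   ; expected answer: sat
    (get-model)   ; the solver prints its model; the validator
                  ; re-evaluates every assertion under that model

A solver's answer counts toward the Correct Score only if the emitted model passes validation.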