SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2023


QF_LinearRealArith (Model Validation Track)

Competition results for the QF_LinearRealArith division in the Model Validation Track.

Page generated on 2023-07-06 16:06:00 +0000

Benchmarks: 631
Time Limit: 1200 seconds
Memory Limit: 60 GB

Logics:

Winners

Sequential Performance    Parallel Performance
OpenSMT                   OpenSMT

Sequential Performance

Solver          Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
2022-OpenSMT n  0            611            24675.184       24670.637        0          20       0
OpenSMT         0            609            23952.721       23935.34         0          22       0
Yices2          0            608            12932.989       12935.274        0          23       0
cvc5            0            601            27683.355       27644.345        0          30       0
SMTInterpol     0            580            41731.454       36739.504        0          51       0
Yaga            0            409            10502.71        10505.511        0          84       0

Parallel Performance

Solver          Error Score  Correct Score  CPU Time Score  Wall Time Score  Abstained  Timeout  Memout
2022-OpenSMT n  0            611            24675.184       24670.637        0          20       0
OpenSMT         0            609            23952.721       23935.34         0          22       0
Yices2          0            608            12932.989       12935.274        0          23       0
cvc5            0            601            27683.355       27644.345        0          30       0
SMTInterpol     0            580            41731.454       36739.504        0          51       0
Yaga            0            409            10502.71        10505.511        0          84       0

n Non-competing.
Abstained: Total number of benchmarks in the logics of this division that the solver chose to abstain from. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.