SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2022

QF_LRA (Model Validation Track)

Competition results for the QF_LRA logic in the Model Validation Track.

Page generated on 2022-08-10 11:19:11 +0000

Benchmarks: 516
Time Limit: 1200 seconds
Memory Limit: 60 GB
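
In this track a solver must not only answer "sat" on these (known-satisfiable) benchmarks but also emit a model, which is then checked against the benchmark's assertions. The Python sketch below illustrates that check, using the z3 bindings as a stand-in for both a competing solver and the validator; it is an illustration of the idea, not the competition's actual validation pipeline.

# Minimal model-validation sketch: solve a toy QF_LRA problem,
# then confirm every assertion evaluates to true under the model.
from z3 import Real, Solver, is_true, sat

x, y = Real("x"), Real("y")
assertions = [x + y > 1, x - y <= 3]   # stands in for a QF_LRA benchmark

solver = Solver()
solver.add(*assertions)
assert solver.check() == sat

model = solver.model()                 # what competing solvers must print
for a in assertions:
    # model_completion=True assigns defaults to unconstrained symbols
    assert is_true(model.eval(a, model_completion=True))
print("model validated:", model)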

Winners

Sequential Performance: OpenSMT
Parallel Performance:   OpenSMT

Sequential Performance

Solver                          Error Score   Correct Score   CPU Time Score (s)   Wall Time Score (s)   Timeout   Memout
OpenSMT                                   0             503             30845.95            30855.664        13        0
z3-4.8.17n                                0             493             47161.62            47076.568        23        0
2021-Yices2 model-validationn             0             492            38500.854             38465.45        24        0
Yices2                                    0             492            38593.063            38601.258        24        0
cvc5                                      0             490             55737.64            55751.912        26        0
smtinterpol                               0             478            81036.342            76749.912        38        0
MathSATn                                  0             474            69800.441            69815.678        41        0

Parallel Performance

Solver                          Error Score   Correct Score   CPU Time Score (s)   Wall Time Score (s)   Timeout   Memout
OpenSMT                                   0             503             30849.47            30855.164        13        0
z3-4.8.17n                                0             493             47165.11            47075.808        23        0
2021-Yices2 model-validationn             0             492            38502.494             38464.71        24        0
Yices2                                    0             492            38595.663            38600.198        24        0
cvc5                                      0             490             55744.37            55750.712        26        0
smtinterpol                               0             479            81063.062            76719.742        37        0
MathSATn                                  0             474            69807.621            69813.788        41        0

n: Non-competing.
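
The winner lines above follow from the score columns. Assuming the usual SMT-COMP ordering (fewer errors first, then more correct results, then lower time score, with CPU time deciding the sequential ranking and wall-clock time the parallel one, and non-competing entries excluded), the sequential result can be reproduced as in the sketch below; this mirrors the published convention rather than the official scoring scripts.

# Reproduce the sequential winner from the competing rows of the
# table above; non-competing entries (marked "n") are omitted.
from typing import NamedTuple

class Row(NamedTuple):
    solver: str
    errors: int
    correct: int
    cpu: float    # CPU time score, seconds
    wall: float   # wall time score, seconds

sequential = [
    Row("OpenSMT",     0, 503, 30845.95,  30855.664),
    Row("Yices2",      0, 492, 38593.063, 38601.258),
    Row("cvc5",        0, 490, 55737.64,  55751.912),
    Row("smtinterpol", 0, 478, 81036.342, 76749.912),
]

def winner(rows, time_field):
    # fewer errors, then more correct answers, then lower time score
    return min(rows, key=lambda r: (r.errors, -r.correct, getattr(r, time_field)))

print(winner(sequential, "cpu").solver)   # -> OpenSMT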