SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.

SMT-COMP 2022


QF_Bitvec (Model Validation Track)

Competition results for the QF_Bitvec division in the Model Validation Track.
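In the Model Validation Track, a solver must not only report sat on satisfiable benchmarks but also emit a model that an external checker confirms satisfies every assertion. As a rough illustration only (a hand-written toy query in SMT-LIB syntax, not one of the competition benchmarks), a QF_BV problem and the kind of model a solver is expected to produce could look like this:

; Toy QF_BV query, for illustration only.
(set-logic QF_BV)
(declare-const x (_ BitVec 8))
(declare-const y (_ BitVec 8))
(assert (= (bvadd x y) #x2a))   ; x + y = 42
(assert (bvult x y))            ; x < y (unsigned)
(check-sat)                     ; a correct solver answers: sat
(get-model)                     ; e.g. x = #x14 (20), y = #x16 (22), which the validator then checks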

Page generated on 2022-08-10 11:19:11 +0000

Benchmarks: 7324
Time Limit: 1200 seconds
Memory Limit: 60 GB

Logics: QF_BV

Winners

Sequential Performance: Bitwuzla
Parallel Performance:   Bitwuzla

Sequential Performance

Solver             Error Score   Correct Score   CPU Time Score   Wall Time Score   Abstained   Timeout   Memout
2020-Bitwuzla (n)            0            7274        135131.97        134864.608           0        49        0
Bitwuzla                     0            7272       138254.408        138058.655           0        51        0
Yices2                       0            7246       178045.342        177848.427           0        77        0
STP                          0            7213       192246.216        192180.224           0       110        0
z3-4.8.17 (n)                0            7009       571080.815        570633.122           0       314        0
cvc5                         0            6958       566723.606        566246.512           0       360        3
MathSAT (n)                  0            6729       756992.123        756822.668           0       474        2
Z3++BV                       9            7068       396744.629        396609.891           0       163        0

Parallel Performance

Solver             Error Score   Correct Score   CPU Time Score   Wall Time Score   Abstained   Timeout   Memout
2020-Bitwuzla (n)            0            7274        135139.38        134862.608           0        49        0
Bitwuzla                     0            7272       138262.308        138056.095           0        51        0
Yices2                       0            7246       178052.422        177845.717           0        77        0
STP                          0            7213       192266.876        192176.014           0       110        0
z3-4.8.17 (n)                0            7009       571123.255        570621.912           0       314        0
cvc5                         0            6958       566815.106        566229.812           0       360        3
MathSAT (n)                  0            6728       757092.603        756800.388           0       474        2
Z3++BV                       9            7068       396897.859        396601.771           0       163        0

(n) Non-competing.
Abstained: Total number of benchmarks in the logics of this division that the solver chose to abstain from. For SAT/UNSAT scores, this column also includes benchmarks not known to be SAT/UNSAT.