SMT-COMP

The International Satisfiability Modulo Theories (SMT) Competition.


SMT-COMP 2017


Application Track (Summary)

Competition results as of Tue Jul 18 22:06:21 GMT

Logic      Solvers  Benchmarks  Order (parallel performance)
ANIA             2           3  CVC4; z3-4.5.0n
QF_ANIA          2           5  z3-4.5.0n; CVC4
QF_ALIA          5          44  z3-4.5.0n; SMTInterpol; Yices2; mathsat-5.4.1n; CVC4
QF_UFNIA         2           1  z3-4.5.0n; CVC4
QF_BVFP          1           2  z3-4.5.0n
LIA              2           6  z3-4.5.0n; CVC4
ALIA             2          24  z3-4.5.0n; CVC4
QF_UFLRA         5        3056  Yices2; z3-4.5.0n; SMTInterpol; CVC4; mathsat-5.4.1n
UFLRA            2        1870  z3-4.5.0n; CVC4
QF_UFLIA         5         780  z3-4.5.0n; CVC4; Yices2; SMTInterpol; mathsat-5.4.1n
QF_NIA           2          10  CVC4; z3-4.5.0n
QF_FP            1           2  z3-4.5.0n
QF_BV            4          18  mathsat-5.4.1n; Yices2; CVC4; z3-4.5.0n
QF_LRA           6          10  mathsat-5.4.1n; SMTInterpol; Yices2; z3-4.5.0n; CVC4; opensmt2
QF_LIA           5          68  Yices2; z3-4.5.0n; SMTInterpol; mathsat-5.4.1n; CVC4
QF_AUFLIA        5          72  Yices2; z3-4.5.0n; SMTInterpol; CVC4; mathsat-5.4.1n

n. Marks a non-competing solver.