[SMT-COMP] [smt-lib] New official release of SMT-LIB benchmark library

Clark Barrett barrett at cs.stanford.edu
Wed May 23 17:36:08 EDT 2018


Thank you, Bruno, for this report.  These benchmarks have now been fixed on
the GitLab server.
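
For anyone curious what "fixed" means here: the problem Bruno reports below
is in the status annotation in each file's header rather than in the formulas
themselves, so the fix changes :status from unsat to sat.  As a minimal,
hypothetical illustration of that header line (not the actual Rodin benchmark
text), a trivially satisfiable QF_UF file looks like:

   (set-logic QF_UF)
   (set-info :status sat)   ; the reported files had :status unsat here
   (declare-sort S 0)
   (declare-const a S)
   (declare-const b S)
   (assert (= a b))
   (check-sat)
   (exit)

The :status value is what the library (and SMT-COMP scoring) treats as the
expected answer, which is why a wrong value there counts as a benchmark error
even when the file is otherwise well-formed.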

We also fixed three QF_FP benchmarks on GitLab:

Removed an unused declare-sort command from:
griggio/fmcad12/f23.smt2

Removed from QF_FP because they use uninterpreted functions, as sketched
below (we will look into adding them again next year):
schanda/spark/O402-020_1.smt2
schanda/spark/O402-020_2.smt2
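
To make both kinds of change concrete, here is a minimal, hypothetical QF_FP
fragment (not the text of the actual benchmarks) that exhibits both issues:

   (set-logic QF_FP)
   (declare-sort U 0)                  ; unused sort: never referenced below, so it can simply be dropped
   (declare-const x Float32)
   (declare-fun f (Float32) Float32)   ; uninterpreted function: not allowed in QF_FP
   (assert (fp.lt x (f x)))
   (check-sat)
   (exit)

The declare-sort line is the kind of unused command removed from f23.smt2,
and the declare-fun line is the kind of uninterpreted-function declaration
that places the two spark benchmarks outside QF_FP, which is why they have
been removed from that logic for now.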

We haven't yet updated StarExec with these fixes.  We will probably wait to
see if any other minor issues are reported and then make a patch to
StarExec early next week.  I will be sure to report any additional updates
on the SMT-LIB and SMT-COMP mailing lists.

-Clark

On Wed, May 23, 2018 at 10:39 AM, Bruno Dutertre <bruno.dutertre at sri.com>
wrote:

> On 5/20/18 9:02 AM, Clark Barrett wrote:
>
>> A new release of the SMT-LIB benchmark library (2018-05-20) is now
>> available, both on the GitLab server and on StarExec.
>> For this release we have:
>>
>>    - Fixed minor errors in formatting
>>    - Removed duplicate benchmarks
>>    - Updated statuses of 18,739 previously unknown non-incremental
>>      benchmarks (based on the results from 2 or more solvers from
>>      SMT-COMP'17)
>>    - Updated statuses of 232,167 previously unknown incremental
>>      check-sat calls (based on the results from 2 or more solvers from
>>      SMT-COMP'17)
>>    - Added 3,121 new benchmarks in new logics:
>>       - non-incremental: AUFNIA (3)
>>       - incremental: QF_ABV (15), QF_AUFBV (10), QF_UF (766),
>>         QF_UFBV (2327)
>>    - Added 7,969 new benchmarks in existing logics:
>>       - non-incremental: AUFLIA (3272), BV (600), NRA (2), QF_ABV (5),
>>         QF_AUFLIA (294), QF_BV (67), QF_LIA (806), QF_NRA (135),
>>         QF_UF (807), QF_UFBV (1193)
>>       - incremental: QF_BV (787), QF_LIA (1)
>>
>>
> Clark,
>
> The following new benchmarks in QF_UF are mislabeled:
>
> 20170829-Rodin/smt2831655880469397696.smt2
> 20170829-Rodin/smt3232867547761696161.smt2
> 20170829-Rodin/smt3248576982810563470.smt2
> 20170829-Rodin/smt3508124013603727984.smt2
> 20170829-Rodin/smt3809952321495040629.smt2
> 20170829-Rodin/smt3910673230463462036.smt2
> 20170829-Rodin/smt4027072204816894856.smt2
> 20170829-Rodin/smt4241633917533372498.smt2
> 20170829-Rodin/smt4480564921249140261.smt2
> 20170829-Rodin/smt6109211130895037835.smt2
> 20170829-Rodin/smt6377531776677660648.smt2
> 20170829-Rodin/smt834303034702425531.smt2
> 20170829-Rodin/smt862177804180920815.smt2
> 20170829-Rodin/smt8855268942650190404.smt2
>
> They are all marked unsat, but they are all sat.
> This can easily be seen by inspection, or by running
> CVC4, MathSAT, Yices, or Z3 on them.
>
> Bruno
>
>
>> For details on the changes to SMT-LIB, please check the git logs at:

