[SMT-COMP] Contribution of QF_FPBV and QF_FPABV benchmarks
cwinter at microsoft.com
Fri Apr 21 11:06:19 EDT 2017
Space is free, submit them all! If it turns out there are too many, we can let others (e.g. the competition organizers) do the sub-sampling themselves, by their own criteria.
In other parts of the library it turned out that we had a bunch of duplicates with different file names (e.g. QF_BV/sage2 if I remember correctly). If there's an easy way for you to filter those, that would be nice.
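If it helps: a plain content hash is usually enough to catch those exact duplicates. Below is a minimal Python sketch of that idea; the directory name and the *.smt2 glob are placeholders on my side, not part of any existing SMT-LIB tooling.

```python
# Minimal sketch: group benchmark files by a hash of their contents so
# that byte-identical files with different names show up together.
# The root directory and the *.smt2 pattern are assumed placeholders.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root, pattern="*.smt2"):
    """Return {digest: [paths]} for all groups with more than one file."""
    groups = defaultdict(list)
    for path in Path(root).rglob(pattern):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)
    return {d: ps for d, ps in groups.items() if len(ps) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates("benchmarks/QF_FPBV").items():
        print(digest[:12], [str(p) for p in paths])
```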
From: Delcypher [mailto:delcypher at gmail.com]
Sent: 21 April 2017 02:20
To: Clark Barrett <barrett at cs.stanford.edu>; Christoph Wintersteiger <cwinter at microsoft.com>
Cc: smt-comp at cs.nyu.edu
Subject: Re: [SMT-COMP] Contribution of QF_FPBV and QF_FPABV benchmarks
On 23 March 2017 at 21:52, Clark Barrett <barrett at cs.stanford.edu> wrote:
> Hi Dan,
> Let me try to answer your questions.
> * Do I need to curate these benchmarks in some way? If so, what
> criteria should I use? E.g. should I pick the longest running? I have
> quite a few benchmarks (~8000), so having that many might not be practical.
> This is really up to you, but generally my hope would be a set of
> benchmarks that does a good job of representing the diversity of the
> entire set. If this means you can prune some, great. But you should
> be sure to include representatives from different difficulty levels,
> different sizes, different statuses, different applications, etc.
I've now collected my initial queries from my tool (removing uses of `fp.to_ieee_bv`). The numbers are
This is rather a lot of queries. My plan is to compute several different distributions (e.g. distribution of execution times) and semi-randomly sample from them to reduce the number. To do this I need to know roughly how many queries I should be submitting. Any opinions on this?
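To make the plan a bit more concrete, here is a rough Python sketch of the kind of stratified, semi-random sampling I have in mind; the bucket count, target size and the timing data are placeholders I would still have to pick, not final choices.

```python
# Rough sketch: bucket queries by solver execution time and draw a fixed
# share from each bucket, so that easy and hard queries both stay
# represented in the reduced set. All parameters are placeholders.
import random

def stratified_sample(queries, times, n_buckets=5, target=1000, seed=0):
    """`queries` and `times` are parallel lists; returns a subset of queries."""
    rng = random.Random(seed)
    pairs = sorted(zip(times, queries), key=lambda p: p[0])  # order by execution time
    bucket_size = max(1, len(pairs) // n_buckets)            # equal-sized time buckets
    buckets = [pairs[i:i + bucket_size] for i in range(0, len(pairs), bucket_size)]
    per_bucket = max(1, target // len(buckets))
    sample = []
    for bucket in buckets:
        take = min(per_bucket, len(bucket))
        sample.extend(q for _, q in rng.sample(bucket, take))
    return sample
```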