August 29, New York, NY, USA
A satellite workshop of SAS 2017
In recent years, a wide range of static analysis tools have emerged, some of which are currently in industrial use or are well beyond the advanced prototype level. Many impressive practical results have been obtained, which allow complex properties to be proven or checked in a fully or semi-automatic way, even in the context of complex software developments. In parallel, the techniques used to design and implement static analysis tools have improved significantly, and much effort is being put into engineering the tools. This workshop is intended to promote discussion and the exchange of experience between specialists in all areas of program analysis design and implementation, and users of static analysis tools.
Previous workshops have been held in Perpignan, France (2010), Venice, Italy (2011), Deauville, France (2012), Seattle, WA, USA (2013), Munich, Germany (2014), Saint-Malo, France (2015), and Edinburgh, UK (2016).
The technical program of TAPAS 2017 will consist of invited lectures together with presentations based on submitted extended abstracts.
Please submit your abstract via the TAPAS 2017 author interface of EasyChair.
Submitted abstracts should be up to 3 pages in the two-column sub-format of the new ACM proceedings format. Please see this template.
Submissions can cover any aspect of program analysis tools including, but not limited to, the following:
All talks will take place in room 102 of NYU's Warren Weaver Hall.
Session 1

9:00-10:00 | Invited talk (joint with NSAD 2017): Sriram Sankaranarayanan, Static Analysis of Programs with Probabilities

Reasoning about the effect of uncertainties on programs is a fundamental challenge in static analysis. A large volume of research has focused on numerical and symbolic abstract domains for reasoning about nondeterministic choices in programs. In this talk, we will explore recent results on proving properties of programs with probabilities. Such programs arise in a wide variety of applications, ranging from data mining to cyber-physical systems. We examine how existing abstract domains can reason about such programs, and present recent work that combines abstract-domain reasoning with results from martingale theory and concentration-of-measure inequalities. We conclude with an examination of open challenges and future applications in this area.
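To give a flavour of the programs in question (an illustrative example, not taken from the talk), consider a biased random walk: the compensated value x + t/2 is a martingale, so concentration-of-measure inequalities such as Azuma-Hoeffding bound how far x can stray, which is the style of argument the talk combines with abstract domains.

```python
import random

# Illustrative probabilistic program (not from the talk): a biased
# random walk stepping +1 with probability 1/4 and -1 with probability
# 3/4. Since E[x_{t+1} | x_t] = x_t - 1/2, the quantity m_t = x_t + t/2
# is a martingale with bounded increments, so Azuma-Hoeffding bounds
# the probability that x_t ever climbs far above 0.
def random_walk(steps: int) -> int:
    x = 0
    for _ in range(steps):
        x += 1 if random.random() < 0.25 else -1
    return x

print(random_walk(1000))  # concentrates around -500
```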
10:00-10:30 | Contributed talk: Vivek Notani and Roberto Giacobazzi, Learning-based Widening

Even though the design of a widening operator is an integral step in building an abstract interpreter over an infinite domain, little work has been done to systematize the design of widening operators. While there exist works that derive the widening of higher-level domains by lifting the widening of the base-level domain, the design of widening for base-level domains remains a largely creative process, especially for numerical domains. This is partly because the definition of widening requires convergence and termination while guaranteeing only very weak algebraic properties, making the problem of creating generic widening-construction techniques hard. We provide a template widening operator, parametric in the shape of the domain, by leveraging a duality between Supervised Machine Learning (Regression) and Abstract Interpretation (Widening).

Extended abstract
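For reference, the classic widening on the interval domain (a textbook base-level operator, not the learning-based template of the talk) shows the kind of extrapolation a widening performs:

```python
# Classic interval widening (a standard textbook operator). Bounds that
# grow between successive iterates are extrapolated to +/- infinity,
# forcing the ascending chain to stabilize in finitely many steps.
from math import inf

Interval = tuple[float, float]  # (lower, upper)

def widen(old: Interval, new: Interval) -> Interval:
    lo = old[0] if new[0] >= old[0] else -inf
    hi = old[1] if new[1] <= old[1] else inf
    return (lo, hi)

# Example: analyzing `x = 0; while ...: x += 1` produces the iterates
# (0, 0), (0, 1), ...; widening jumps to (0, inf) after one step.
assert widen((0, 0), (0, 1)) == (0, inf)
```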
10:30-11:00 | coffee break

Session 2
11:00-12:00 | Invited talk: Alexander Summers, Hybrid Program Analyses for Pointwise Permission Inference

Ownership and permissions are concepts commonly employed to aid reasoning about programs with mutable state and concurrency, e.g. in custom program logics such as separation logic. Permissions can be used to specify the potential side effects of code, guaranteeing which facts can be preserved across changes to the program state. One way to generalise permissions to unbounded data is to support them under quantifiers, e.g. specifying access to a graph structure by ranging over its set of nodes, or to array segments by ranging over integer indices. In this talk, I'll describe ongoing work to automatically infer such quantified permission specifications for a variety of heap-based data structures. Permission-based program logics extend first-order logic with powerful additional connectives, but I'll show that the constraints arising from our inference problem can nonetheless be summarised within first-order arithmetic. Using this idea, we define a precise analysis for straight-line code, which can e.g. summarise the permissions needed to execute a single loop iteration. We then generate loop invariants using several novel techniques for projecting such expressions out over all loop iterations, leveraging complementary static analyses and quantifier elimination algorithms. The talk will include demonstrations using the Viper verification infrastructure, which I'll introduce along the way.
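As a concrete illustration (our formula, not one from the talk), a quantified permission specification for a loop over an array a of length n asserts pointwise access to every cell:

```latex
% Pointwise (quantified) permission to an array segment: access to a[i]
% for every index i in [0, n). Inferring the invariant of a loop that
% touches a[j] in iteration j amounts to projecting acc(a[j]) out over
% all iterations, e.g. via quantifier elimination on the arithmetic
% side conditions.
\forall i.\; 0 \le i < n \;\Longrightarrow\; \mathsf{acc}(a[i])
```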
12:00-12:30 | Contributed talk: William Hallahan, Anton Xue and Ruzica Piskac, Building a Symbolic Execution Engine for Haskell

Symbolic execution is a powerful software analysis technique that reasons about the possible execution states of a program due to logical branching. Historically, such techniques have been geared primarily towards imperative languages such as C and Java, with less effort in the development of frameworks for functional languages. While such works do exist, many are focused on contract-based analysis, and some lack full support for reasoning about functional expressions. In this talk we outline the integration of symbolic execution techniques into Haskell in the G2 engine. Our methods for program model extraction, defunctionalization, execution semantics, and constraint solving have strong implications for static-analysis-based testing, and for future work in program verification and synthesis.

Extended abstract
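As a minimal sketch of the underlying idea (not G2's actual architecture), a symbolic executor walks an expression tree, forking on branches and accumulating path constraints for a solver:

```python
# Minimal symbolic execution over a toy expression language: each
# branch forks the state, and every leaf yields the path constraints
# under which it is reached. A real engine such as G2 would hand these
# constraints to an SMT solver instead of collecting them as strings.
from dataclasses import dataclass

@dataclass
class If:
    cond: str       # symbolic condition, e.g. "x > 0"
    then: object    # sub-expression taken when cond holds
    other: object   # sub-expression taken otherwise

def execute(expr, path=()):
    """Yield (leaf_value, path_constraints) for every feasible path."""
    if isinstance(expr, If):
        yield from execute(expr.then, path + (expr.cond,))
        yield from execute(expr.other, path + (f"not ({expr.cond})",))
    else:
        yield expr, path

prog = If("x > 0", If("x > 10", "big", "small"), "negative")
for value, constraints in execute(prog):
    print(value, "reached when", " and ".join(constraints))
```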
12:30-14:00 | lunch

Session 3
14:00-15:00 | Invited talk: Michael Perin, The development of a Verified Polyhedra Library made of untrusted parts mixing OCaml, C++, threads, ... and just a little bit of Coq for certification

Next year will be the 40th anniversary of the seminal paper by Cousot & Halbwachs introducing the polyhedra abstract domain. Since then, several polyhedra libraries have been implemented based on the double description of polyhedra as pairs (constraints, generators). In 2013, we started the development of yet another polyhedra library, the Verimag Verified Polyhedra Library (VPL), with two goals in mind: efficiency and certification. The latter led us to restrict polyhedra to a constraints-only representation and therefore to revisit the algorithms. In this talk, I will report on our experience of developing a certified tool made of ugly code with just a Coq polish. The VPL is the result of joint work with S. Boulmé, A. Fouilhé, A. Maréchal, D. Monniaux, M. Périn and H. Yu.
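In a constraints-only representation, a polyhedron is just a list of linear inequalities, so operations like projection must work directly on constraints. Below is a minimal Fourier-Motzkin sketch of that idea (illustrative only; VPL's certified algorithms are far more refined):

```python
# Fourier-Motzkin elimination on constraints a . x <= b, each stored as
# (coeffs, bound). Eliminating variable k combines every constraint
# with a positive coefficient on x_k with every one with a negative
# coefficient, the kind of constraints-only operation a polyhedra
# library without generators must provide.

def eliminate(constraints, k):
    pos = [c for c in constraints if c[0][k] > 0]
    neg = [c for c in constraints if c[0][k] < 0]
    rest = [c for c in constraints if c[0][k] == 0]
    for (ap, bp) in pos:
        for (an, bn) in neg:
            # Scale so the x_k coefficients cancel, then add.
            coeffs = [ap_i * -an[k] + an_i * ap[k]
                      for ap_i, an_i in zip(ap, an)]
            rest.append((coeffs, bp * -an[k] + bn * ap[k]))
    return rest

# Project {x <= 4, -x + y <= 0} onto y: yields y <= 4.
print(eliminate([([1, 0], 4), ([-1, 1], 0)], 0))
```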
15:00-15:30 | Contributed talk: Shaobo He, Shuvendu Lahiri, Akash Lal and Zvonimir Rakamaric, Static Assertion Checking of Production Software with Angelic Verification

The ability to statically detect assertion violations can add great value for developers. However, in spite of decades of progress in program verification, static assertion checking is far from cost-effective for production software. The two main obstacles to finding high-quality defects are (a) false alarms due to an under-constrained environment, and (b) finding violations in deeply nested procedures. In this talk, we will describe our experience with the angelic verification (AV) tool for statically finding assertion violations in real-world software. The basic idea of AV is to pair an interprocedural assertion verifier with a framework for automatic inference of likely specifications on unknown values from the environment. We will summarize the approach, and focus on the design choices required to find high-quality memory-safety violations with low false alarms. We discuss some results on Microsoft codebases and open-source software, and challenges ahead.

Extended abstract
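A minimal sketch of the angelic pattern (hypothetical helper functions, not the AV tool's actual API): when an alarm depends only on unconstrained environment values, a likely specification is assumed and the check is retried.

```python
# Hedged sketch of an angelic verification loop. `verify`,
# `blames_only_environment`, and `guess_likely_spec` are hypothetical
# stand-ins for the verifier back end and the specification-inference
# framework the talk describes.

def angelic_verify(program, verify, blames_only_environment, guess_likely_spec):
    assumed_specs = []
    while True:
        alarm = verify(program, assumed_specs)
        if alarm is None:
            return assumed_specs, []        # no remaining violations
        if blames_only_environment(alarm):
            spec = guess_likely_spec(alarm) # e.g. "parameter p is non-null"
            if spec is not None:
                assumed_specs.append(spec)  # suppress the likely false alarm
                continue
        return assumed_specs, [alarm]       # report a high-quality violation
```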
15:30-16:00 | coffee break

Session 4
16:00-17:00 | Invited talk: Andreas Podelski and Daniel Dietsch, Trace Abstraction

We present an approach to program verification that is embodied by the tool Ultimate Automizer. Ultimate Automizer exploits the idea of behavioral decomposition: we can decompose the set of behaviors of a given program (whose correctness we want to prove) according to sets of behaviors for which we already have a proof. We obtain proven behaviors by constructing auxiliary programs from traces (i.e., sequences of statements) of the input program. A trace is a simple case of a program, namely a straight-line program. At the same time, a trace is a word over a finite alphabet, which can be accepted by an automaton. Just as we ask whether a word has an accepting run, we can ask whether a trace has a correctness proof, e.g. in the form of a set of Hoare triples. From such a set of Hoare triples, we can construct an automaton that recognizes all words over the alphabet of program statements that are correct because of this proof. Such an automaton is a program whose set of behaviors covers a non-empty subset of the behaviors of our input program. We repeatedly construct such programs from proofs until the constructed programs together cover all possible behaviors of the input program. The two crucial steps of the algorithm are the covering check and the construction of a set of Hoare triples. Covering utilizes many automata algorithms, e.g. inclusion, difference and minimization. Construction and extension of Hoare-triple sets can be performed by Craig interpolation, abstract interpretation, and a combination of strongest postconditions with unsatisfiable cores and dataflow analysis. Ultimate Automizer and other tools of the Ultimate family apply this approach to a wide range of verification problems, most prominently checking safety and termination of sequential C programs with recursive procedures, but also liveness properties and multi-threaded programs with possibly unboundedly many threads.
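The refinement loop sketched in the abstract can be summarised as follows (a hedged sketch with hypothetical helper functions standing in for Ultimate's automata and proof machinery, not Ultimate Automizer's actual API):

```python
# High-level sketch of the trace-abstraction loop described above.
# `pick_word`, `is_feasible`, `proof_automaton`, and `difference` are
# hypothetical stand-ins for Ultimate's automata operations (emptiness,
# difference) and its Hoare-triple generation (e.g. Craig interpolation).

def trace_abstraction(program_automaton, pick_word, is_feasible,
                      proof_automaton, difference):
    remaining = program_automaton          # error traces not yet covered
    while True:
        trace = pick_word(remaining)       # emptiness check on the automaton
        if trace is None:
            return "program is correct"    # all behaviors covered by proofs
        if is_feasible(trace):
            return ("counterexample", trace)
        # Infeasible trace: build an automaton from its Hoare-triple proof
        # and subtract every trace that proof already covers.
        remaining = difference(remaining, proof_automaton(trace))
```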