Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
2011
Collusion Preserving Computation
Alwen, Joel
Abstract
|
PDF
Title: Collusion Preserving Computation
Candidate: Alwen, Joel
Advisor(s): Dodis, Yevgeniy
Abstract:
In collusion-free protocols, subliminal communication is impossible and parties are thus unable to communicate any information beyond what the protocol allows. Collusion-free protocols are interesting for several reasons, but have specifically attracted attention because they can be used to reduce trust in game-theoretic mechanisms. Collusion-free protocols are impossible to achieve (in general) when all parties are connected by point-to-point channels, but exist under certain physical assumptions (Lepinski et al., STOC 2005) or in specific network topologies (Alwen et al., Crypto 2008).
In addition to proposing the definition of collusion-preserving computation, we explore necessary properties of the underlying communication resource. Next, we provide a general feasibility result for collusion-preserving computation of arbitrary functionalities. We show that the resulting protocols enjoy an elegant (and surprisingly strong) fallback security even when the underlying communication resource acts in a Byzantine manner. Finally, we investigate the implications of these results in the context of mechanism design.
-
Ph.D. Thesis
2011
Re-architecting Web and Mobile Information Access for Emerging Regions
Chen, Jay
Abstract
|
PDF
Title: Re-architecting Web and Mobile Information Access for Emerging Regions
Candidate: Chen, Jay
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
Providing access to information for people in emerging regions is an important problem. Over the past decade, many systems have been proposed, and increasingly many deployed, to enable information access, but successes are few and modest at best. The Internet in emerging regions is still generally unusable or intolerably slow. Mobile phone applications are often not designed for the phones that poor people own, or else they lack functionality, are difficult to use, or are expensive to operate. In this work we focus on enabling digital information access for people in emerging regions.
To advance the state of the art, we contribute numerous observations about how people access information in emerging regions, why the current models for web access and SMS platforms are broken, and techniques to enable applications over constrained Internet or SMS. The mechanisms presented here were designed after extensive field work in several different regions, including rural, peri-urban, and urban areas in India, Kenya, Ghana, and Mexico. Multiple user studies were conducted throughout the course of system design and prototyping. We present a novel set of context-appropriate platforms and tools, some spanning several layers of the networking stack. Five complete systems were implemented and deployed in the field. First, Event Logger for Firefox (ELF) is an easily deployable Firefox extension which functions as both a web browsing analysis tool and an in-browser web optimization platform. Second, RuralCafe provides a platform for web search and browsing over extremely slow or intermittent networks. Third, Contextual Information Portals (CIP) provide cached repositories of web pages tailored to the particular context in which they are to be used. Fourth, UjU is a mobile application platform that simplifies the design of new SMS-based mobile applications. Finally, SMSFind is an SMS-based search service that runs on mobile phones without setup or subscription to a data plan.
Taken as a whole, the systems presented here constitute a comprehensive solution to the problem of enabling digital information access in emerging regions.
-
Ph.D. Thesis
2011
Automatic Deduction for Theories of Algebraic Data Types
Chikanian, Igor
Abstract
|
PDF
Title: Automatic Deduction for Theories of Algebraic Data Types
Candidate: Chikanian, Igor
Advisor(s): Barrett, Clark
Abstract:
In this thesis we present formal logical systems concerned with reasoning about algebraic data types.
The first formal system is based on a quantifier-free calculus (with formulas implicitly universally quantified at the outermost level). This calculus consists of state-change rules, and computations are performed by successive applications of these rules; the calculus thereby gives rise to an abstract decision procedure that determines whether a given formula involving members of algebraic data types is valid. The calculus is shown to be sound and complete. We also examine how this system performs in practice and give experimental results. Our main contribution, as compared to previous work on this subject, is a new and more efficient decision procedure for checking satisfiability of the universal fragment of the theory of algebraic data types.
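As a toy illustration (an assumption-laden sketch, not the calculus or the implementation from the thesis), the flavor of reasoning such a decision procedure performs on the universal fragment can be mimicked by syntactic unification with an occurs check: constructor injectivity decomposes equations, distinct constructors clash, and acyclicity rejects cyclic constraints.

    # Toy sketch only: terms are variables (strings) or tuples
    # (constructor_name, arg1, ...); nullary constructors are 1-tuples.
    def walk(u, subst):
        while isinstance(u, str) and u in subst:
            u = subst[u]
        return u

    def occurs(v, t, subst):
        t = walk(t, subst)
        if t == v:
            return True
        return not isinstance(t, str) and any(occurs(v, a, subst) for a in t[1:])

    def unify(s, t, subst):
        """Return a satisfying substitution for s = t, or None if unsatisfiable."""
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if isinstance(s, str):                      # s is a variable
            return None if occurs(s, t, subst) else {**subst, s: t}   # acyclicity
        if isinstance(t, str):
            return unify(t, s, subst)
        if s[0] != t[0] or len(s) != len(t):
            return None                             # distinct constructors clash
        for a, b in zip(s[1:], t[1:]):              # injectivity: decompose arguments
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst

    # cons(x, nil) = cons(0, y) is satisfiable; x = cons(1, x) is not (cyclic).
    print(unify(("cons", "x", ("nil",)), ("cons", ("0",), "y"), {}))
    print(unify("x", ("cons", ("1",), "x"), {}))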
The second formal system, called Term Builder, is a deductive system based on higher-order type theory, which subsumes second-order and higher-order logics. The main purpose of this calculus is to formulate and prove theorems about algebraic or other arbitrary user-defined types. Term Builder supports proof objects and is both an interactive theorem prover and a verifier. We describe the built-in deductive capabilities of Term Builder and show its consistency. The logic represented by our prover is intuitionistic. Naturally, it is also incomplete and undecidable, but its expressive power is much greater than that of the first formal system.
Among our achievements in building this theorem prover is an elegant and intuitive GUI for building proofs. A further feature, novel from the foundational viewpoint, is that, in contrast with other approaches, our system enjoys the uniqueness-of-types property exactly, rather than merely modulo beta-conversion.
-
TR2011-943
2011
Two-Level Overlapping Schwarz Algorithms for a Staggered Discontinuous Galerkin Method
Chung, Eric T.;
Kim, Hyea Hyun; Widlund, Olof B.
Abstract
|
PDF
Title: Two-Level Overlapping Schwarz Algorithms for a Staggered Discontinuous Galerkin Method
Author(s): Chung, Eric T.; Kim, Hyea Hyun; Widlund, Olof B.
Abstract:
Two overlapping Schwarz algorithms are developed for a discontinuous Galerkin (DG) finite element approximation of second-order scalar elliptic problems in both two and three dimensions. The discontinuous Galerkin formulation is based on a staggered discretization introduced by Chung and Engquist for the acoustic wave equation. Two types of coarse problems are introduced for the two-level Schwarz algorithms. The first is built on a non-overlapping subdomain partition, which allows quite general subdomain partitions, and the second on an additional coarse triangulation that can also be quite independent of the fine triangulation. Condition number bounds are established and numerical results are presented.
-
TR2011-946
2011
An Alternative Coarse Space for Irregular Subdomains and an Overlapping Schwarz Algorithm
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: An Alternative Coarse Space for Irregular Subdomains and an Overlapping Schwarz Algorithm
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
In earlier work on domain decomposition methods for elliptic problems in the plane, an assumption that each subdomain is triangular, or a union of a few coarse triangles, has often been made. This is similar to what is required in geometric multigrid theory and is unrealistic if the subdomains are produced by a mesh partitioner. In an earlier paper, coauthored with Axel Klawonn, the authors introduced a coarse subspace for an overlapping Schwarz method with one degree of freedom for each subdomain vertex and one for each subdomain edge. A condition number bound proportional to \((1+\log(H/h))^2(1+H/\delta)\) was established assuming only that the subdomains are John domains; here \(H/\delta\) measures the relative overlap between neighboring subdomains and \(H/h\) the maximum number of elements across individual subdomains. We were also able to relate the rate of convergence to a parameter in an isoperimetric inequality for the subdomains into which the domain of the problem has been partitioned.
In this paper, the dimension of the coarse subspace is decreased by using only one degree of freedom for each subdomain vertex; if all subdomains have three edges, this leads to a reduction of the dimension of the coarse subspace by approximately a factor of four. In addition, the condition number bound is shown to be proportional to \((1+\log(H/h))(1+H/\delta)\) under a quite mild assumption on the relative lengths of adjacent subdomain edges.
In this study, the subdomains are assumed to be uniform in the sense of Peter Jones. As in our earlier work, the results are insensitive to arbitrary large jumps in the coefficients of the elliptic problem across the interface between the subdomains.
Numerical results are presented which confirm the theory and demonstrate the usefulness of the algorithm for a variety of mesh decompositions and distributions of material properties. It is also shown that the new algorithm often converges faster than the older one in spite of the fact that the dimension of the coarse space has been decreased considerably.
-
TR2011-939
2011
Parsing All of C by Taming the Preprocessor
Gazzillo, Paul;
Grimm, Robert
Abstract
|
PDF
Title: Parsing All of C by Taming the Preprocessor
Author(s): Gazzillo, Paul; Grimm, Robert
Abstract:
Given the continuing popularity of C for building large-scale programs, such as Linux, Apache, and Bind, it is critical to provide effective tool support, including, for example, code browsing, bug finding, and automated refactoring. Common to all such tools is a need to parse C. But C programs contain not only the C language proper but also preprocessor invocations for file inclusion (#include), conditional compilation (#if, #ifdef, and so on), and macro definition/expansion (#define). Worse, the preprocessor is a textual substitution system, which is oblivious to C constructs and operates on individual tokens. At the same time, the preprocessor is indispensable for improving C's expressivity, abstracting over software/hardware dependencies, and deriving variations from the same code base. The x86 version of the Linux kernel, for example, depends on about 7,600 header files for file inclusion, 7,000 configuration variables for conditional compilation, and 520,000 macros for code expansion.
In this paper, we present a new tool for parsing all of C, including arbitrary preprocessor use. Our tool, which is called SuperC, is based on a systematic analysis of all interactions between lexing, preprocessing, and parsing to ensure completeness. It first lexes and preprocesses source code while preserving conditionals. It then parses the result using a novel variant of LR parsing, which automatically forks parsers when encountering a conditional and merges them again when reaching the same input in the same state. The result is a well-formed AST containing static choice nodes for conditionals. While the parsing algorithm and engine are new, neither the grammar nor the LR parser table generator needs to change. We discuss the results of our problem analysis, the parsing algorithm itself, the pragmatics of building a real-world tool, and a demonstration on the x86 version of the Linux kernel.
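To convey the fork-merge idea concretely, here is a toy sketch (illustrative only; SuperC forks LR parser stacks on preserved conditionals rather than walking a pre-built token tree): each branch of a conditional is handled by its own fork, and the forks are merged into a single static choice node once both rejoin the same remaining input.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Conditional:                 # a preserved #if/#else and its branch tokens
        guard: str
        then_tokens: list
        else_tokens: list

    Token = Union[str, Conditional]

    @dataclass
    class Choice:                      # static choice node in the resulting AST
        guard: str
        then_ast: list
        else_ast: list

    def parse(tokens: List[Token]) -> list:
        """Fold a token stream into an AST, forking on preserved conditionals."""
        ast = []
        for tok in tokens:
            if isinstance(tok, Conditional):
                # fork: handle each branch independently, then merge the forks
                # into one Choice node, since both reach the same suffix
                ast.append(Choice(tok.guard,
                                  parse(tok.then_tokens),
                                  parse(tok.else_tokens)))
            else:
                ast.append(tok)
        return ast

    stream = ["int", "x", ";",
              Conditional("CONFIG_64BIT", ["long", "y", ";"], ["int", "y", ";"]),
              "return", "x", ";"]
    print(parse(stream))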
-
Ph.D. Thesis
2011
Efficient Cryptographic Primitives for Non-Interactive Zero-Knowledge Proofs and Applications
Haralambiev, Kristiyan
Abstract
|
PDF
Title: Efficient Cryptographic Primitives for Non-Interactive Zero-Knowledge Proofs and Applications
Candidate: Haralambiev, Kristiyan
Advisor(s): Shoup, Victor
Abstract:
Non-interactive zero-knowledge (NIZK) proofs have enjoyed much interest in cryptography since they were introduced more than twenty years ago by Blum et al. [BFM88]. While quite useful when designing modular cryptographic schemes, until recently NIZK proofs could be realized efficiently only using certain heuristics. However, such heuristic schemes have been widely criticized. In this work we focus on designing schemes which avoid them. In [GS08], Groth and Sahai presented the first efficient (and currently the only) NIZK proof system in the standard model. The construction is based on bilinear maps and is limited to languages defined by certain satisfiable systems of equations. Given this expressivity limitation, we are interested in cryptographic primitives that are "compatible" with it. Equipped with such primitives and the Groth-Sahai proof system, we show how to construct cryptographic schemes efficiently in a modular fashion.
In this work, we describe the properties required of any cryptographic scheme to mesh well with Groth-Sahai proofs. Towards this, we introduce the notion of a "structure-preserving" cryptographic scheme. We present the first constant-size structure-preserving signature scheme for messages consisting of general bilinear group elements. This allows us (for the first time) to instantiate efficiently a modular construction of round-optimal blind signatures based on the framework of Fischlin [Fis06].
Our structure-preserving homomorphic trapdoor commitment schemes yield efficient leakage-resilient signatures (in the bounded-leakage model) which satisfy the standard security requirements and additionally tolerate any amount of leakage; all previous works satisfied at most two of those three properties.
Next, we build a structure-preserving encryption scheme which satisfies the standard CCA security requirements. While somewhat similar to the notion of verifiable encryption, it provides better properties and yields the first efficient two-party protocol for joint ciphertext computation. Note that the efficient realization of such a protocol was not previously possible even using the heuristics mentioned above.
Lastly, in this line of work, we revisit the notion of simulation extractability and define "true-simulation extractable" NIZK proofs. Although quite similar to the notion of simulation-sound extractable NIZK proofs, there is a subtle but rather important difference which makes it weaker and easier to instantiate efficiently. As it turns out, in many scenarios this new notion is sufficient, and using it we can construct efficient leakage-resilient signatures and CCA-secure encryption schemes.
-
TR2011-940
2011
Sharing is Caring: Combination of Theories
Jovanovic, Dejan;
Barrett, Clark
Abstract
|
PDF
Title: Sharing is Caring: Combination of Theories
Author(s): Jovanovic, Dejan; Barrett, Clark
Abstract:
One of the main shortcomings of the traditional methods for combining theories is the complexity of guessing the arrangement of the variables shared by the individual theories. This paper presents a reformulation of the Nelson-Oppen method that takes into account explicit equality propagation and can ignore pairs of shared variables that the theories do not care about. We show the correctness of the new approach and present care functions for the theory of uninterpreted functions and the theory of arrays. The effectiveness of the new method is illustrated by experimental results demonstrating a dramatic performance improvement on benchmarks combining arrays and bit-vectors.
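A back-of-the-envelope sketch (an illustration of the cost being avoided, not the algorithm in the paper): a naive combination must consider every arrangement, i.e. every set partition, of the shared variables, whereas a care function lets the combination decide only the pairs the theories actually care about.

    def arrangements(vars_):
        """Enumerate all set partitions (arrangements) of the shared variables."""
        if not vars_:
            yield []
            return
        first, rest = vars_[0], vars_[1:]
        for part in arrangements(rest):
            # place `first` into an existing block or into a new singleton block
            for i in range(len(part)):
                yield part[:i] + [part[i] + [first]] + part[i + 1:]
            yield part + [[first]]

    shared = ["x1", "x2", "x3", "x4", "x5"]
    print(sum(1 for _ in arrangements(shared)))   # 52 arrangements (the 5th Bell number)

    # If the care function reports that only (x1, x2) and (x3, x4) matter,
    # only the equal/not-equal decisions for those two pairs must be explored.
    cared_pairs = [("x1", "x2"), ("x3", "x4")]
    print(2 ** len(cared_pairs))                  # 4 cases instead of 52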
-
Ph.D. Thesis
2011
Learning Feature Hierarchies for Object Recognition
Kavukcuoglu, Koray
Abstract
|
PDF
Title: Learning Feature Hierarchies for Object Recognition
Candidate: Kavukcuoglu, Koray
Advisor(s): LeCun, Yann
Abstract:
In this thesis we study unsupervised learning algorithms for training feature extractors and building deep learning models. We propose sparse-modeling algorithms as the foundation for unsupervised feature extraction systems. To reduce the cost of the inference process required to obtain the optimal sparse code, we model a feed-forward function that is trained to predict this optimal sparse code. Using an efficient predictor function enables the use of sparse coding in hierarchical models for object recognition. We demonstrate the performance of the developed system on several recognition tasks, including object recognition, handwritten digit classification and pedestrian detection. Robustness to noise or small variations in the input is a very desirable property for a feature extraction algorithm. In order to train locally-invariant feature extractors in an unsupervised manner, we use group sparsity criteria that promote similarity between the dictionary elements within a group. This model produces locally-invariant representations under small perturbations of the input, thus improving the robustness of the features. Many sparse modeling algorithms are trained on small image patches that are the same size as the dictionary elements. This forces the system to learn multiple shifted versions of each dictionary element. However, when used convolutionally over large images to extract features, these models produce very redundant representations. To avoid this problem, we propose convolutional sparse coding algorithms that yield a richer set of dictionary elements, reduce the redundancy of the representation and improve recognition performance.
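The predictor idea can be made concrete with a small numerical sketch (toy dimensions and a hypothetical tanh predictor; this is not the thesis code): the sparse code must reconstruct the input through the dictionary, stay sparse, and remain close to the output of a cheap feed-forward predictor that replaces iterative inference at feature-extraction time.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 64, 128                      # input dimension, dictionary size (toy values)
    D = rng.standard_normal((n, k))     # dictionary; columns are atoms
    W = rng.standard_normal((k, n))     # predictor weights, f(x) = tanh(W x)
    x = rng.standard_normal(n)          # an input patch
    z = rng.standard_normal(k)          # a candidate sparse code
    lam, alpha = 0.1, 1.0               # sparsity and prediction weights (assumed)

    def energy(x, z, D, W, lam, alpha):
        reconstruction = np.sum((x - D @ z) ** 2)                # data-fitting term
        sparsity = lam * np.sum(np.abs(z))                       # L1 sparsity penalty
        prediction = alpha * np.sum((z - np.tanh(W @ x)) ** 2)   # predictor agreement
        return reconstruction + sparsity + prediction

    print(energy(x, z, D, W, lam, alpha))

At training time, such an energy is typically minimized alternately over the code, the dictionary, and the predictor; at test time only the feed-forward predictor needs to be evaluated.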
-
TR2011-944
2011
Effective Synthesis of Asynchronous Systems from GR(1) Specifications
Klein, Uri;
Piterman, Nir; Pnueli, Amir
Abstract
|
PDF
Title: Effective Synthesis of Asynchronous Systems from GR(1) Specifications
Author(s): Klein, Uri; Piterman, Nir; Pnueli, Amir
Abstract:
We consider automatic synthesis from linear temporal logic specifications for asynchronous systems. We aim for the produced reactive systems to be used as software in multi-threaded environments. We extend a previous reduction of asynchronous synthesis to synchronous synthesis to the setting of multiple input and multiple output variables. Much like synthesis for synchronous designs, this solution is not practical, as it requires determinization of automata on infinite words and the solution of complicated games. We follow advances in the synthesis of synchronous designs, which restrict the handled specifications but achieve scalability and efficiency. We propose a heuristic that, in some cases, maintains scalability for asynchronous synthesis. Our heuristic can prove that specifications are realizable and extract designs. This is done by a reduction to synchronous synthesis that is inspired by the theoretical reduction.
-
TR2011-938
2011
Formalization and Automated Verification of RESTful Behavior
Klein, Uri;
Namjoshi, Kedar S.
Abstract
|
PDF
Title: Formalization and Automated Verification of RESTful Behavior
Author(s): Klein, Uri; Namjoshi, Kedar S.
Abstract:
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade, spurred by the growth of open web APIs. On the other hand, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and run-time verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity.
-
Ph.D. Thesis
2011
Topics in Formal Synthesis and Modeling
Klein, Uri
Abstract
|
PDF
Title: Topics in Formal Synthesis and Modeling
Candidate: Klein, Uri
Advisor(s): Pnueli, Amir; Zuck, Lenore
Abstract:
The work presented focuses on two problems: that of synthesizing systems from formal specifications, and that of formalizing REST, a popular architectural style for developing web applications.
For the synthesis problem, we distinguish between the synchronous and the asynchronous case. For the former, we solve a problem concerning a fundamental flaw in specification construction in previous work. We then continue by exploring effective synthesis of asynchronous systems (programs on multi-threaded systems). Two alternative models of asynchrony are presented and shown to be equally expressive for the purpose of synthesis.
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade. However, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and run-time verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity.
-
TR2011-937
2011
Domain Decomposition Methods for Reissner-Mindlin Plates Discretized with the Falk-Tu Elements
Lee, Jong Ho
Abstract
|
PDF
Title: Domain Decomposition Methods for Reissner-Mindlin Plates Discretized with the Falk-Tu Elements
Author(s): Lee, Jong Ho
Abstract:
The Reissner-Mindlin plate theory models a thin plate with thickness \(t\). The condition number of finite element approximations of this model deteriorates badly as the thickness \(t\) of the plate converges to 0. In this thesis, we develop an overlapping domain decomposition method for the Reissner-Mindlin plate model discretized by Falk-Tu elements with a convergence rate which does not deteriorate when \(t\) converges to 0. We use modern overlapping methods which use the Schur complements to define coarse basis functions and show that the condition number of this overlapping method is bounded by \(C(1+H/\delta)^3(1+\log(H/h))^2\). Here \(H\) is the maximum diameter of the subdomains, \(\delta\) the size of the overlap between subdomains, and \(h\) the element size. Numerical examples are provided to confirm the theory. We also modify the overlapping method to develop a BDDC method for the Reissner-Mindlin model. We establish numerically an extension lemma to obtain a constant bound and an edge lemma to obtain a \(C(1+\log(H/h))^2\) bound. Given such bounds, the condition number of this BDDC method is shown to be bounded by \(C(1+\log(H/h))^2\).
-
Ph.D. Thesis
2011
Adaptive Isotopic Approximation of Nonsingular Curves and Surfaces
Lin, Long
Abstract
|
PDF
Title: Adaptive Isotopic Approximation of Nonsingular Curves and Surfaces
Candidate: Lin, Long
Advisor(s): Yap, Chee
Abstract:
Consider the problem of computing isotopic approximations of nonsingular curves and surfaces that are implicitly represented by equations of the form \(f(X,Y)=0\) and \(f(X,Y,Z)=0\). This fundamental problem has seen much progress along several fronts, but we will focus on domain subdivision algorithms. Two algorithms in this area are from Snyder (1992) and Plantinga and Vegter (2004). We introduce a family of new algorithms that combines the advantages of these two: like Snyder, we use the parameterizability criterion for subdivision, and like Plantinga and Vegter, we exploit nonlocal isotopy.
We first apply our approach to curves, resulting in a more efficient algorithm. We then extend our approach to surfaces. The extension is by no means routine, as the correctness arguments and case analysis are more subtle. Also, a new phenomenon arises in which local rules for constructing surfaces are no longer sufficient.
We further extend our algorithms in two important and practical directions. First, we allow subdivision cells to be non-squares or non-cubes, with arbitrary but bounded aspect ratios: in 2D, we allow boxes to be split into 2 or 4 children, and in 3D, into 2, 4, or 8 children. Second, we allow the input region of interest (ROI) to have arbitrary geometry represented by a quadtree or octree, as long as the curve or surface has no singularities in the ROI and intersects the boundary of the ROI transversally.
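A minimal sketch of the kind of non-square subdivision just described (a hypothetical 2D helper, not the thesis implementation): an elongated box is halved along its longer side, and a roughly square box is split into four children, so the aspect ratio of all cells stays bounded.

    from dataclasses import dataclass

    @dataclass
    class Box:
        x0: float
        y0: float
        x1: float
        y1: float

    def split(box: Box, max_aspect: float = 2.0):
        """Split into 2 children along the long side, or into 4 if roughly square."""
        w, h = box.x1 - box.x0, box.y1 - box.y0
        xm, ym = (box.x0 + box.x1) / 2, (box.y0 + box.y1) / 2
        if w / h > max_aspect:        # too wide: halve in x only
            return [Box(box.x0, box.y0, xm, box.y1), Box(xm, box.y0, box.x1, box.y1)]
        if h / w > max_aspect:        # too tall: halve in y only
            return [Box(box.x0, box.y0, box.x1, ym), Box(box.x0, ym, box.x1, box.y1)]
        return [Box(box.x0, box.y0, xm, ym), Box(xm, box.y0, box.x1, ym),
                Box(box.x0, ym, xm, box.y1), Box(xm, ym, box.x1, box.y1)]

    print(split(Box(0, 0, 4, 1)))     # elongated box -> 2 children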
Our algorithm is numerical because our primitives are based on interval arithmetic and exact BigFloat numbers. It is practical, easy to implement exactly (compared to algebraic approaches) and does not suffer from implementation gaps (compared to geometric approaches). We report some very encouraging experimental results, showing that our algorithms can be much more efficient than the algorithms of Plantinga and Vegter (2D and 3D) and Snyder (2D only).
-
Ph.D. Thesis
2011
Real-Space Localization Methods for Minimizing the Kohn-Sham Energy
Millstone, Marc
Abstract
|
PDF
Title: Real-Space Localization Methods for Minimizing the Kohn-Sham Energy
Candidate: Millstone, Marc
Advisor(s): Overton, Michael
Abstract:
The combination of ever-increasing computational power and new mathematical models has fundamentally changed the field of computational chemistry. One example of this is the use of new algorithms for computing the charge density of a molecular system, from which one can predict many physical properties of the system.
This thesis presents two new algorithms for minimizing the Kohn-Sham energy, which is used to describe a system of non-interacting electrons through a set of single-particle wavefunctions. By exploiting a known localization region of the wavefunctions, each algorithm evaluates the Kohn-Sham energy function and gradient at a set of iterates that have a special sparsity structure. We have chosen to represent the problem in real-space using finite-differences, allowing us to efficiently evaluate the energy function and gradient using sparse linear algebra. Detailed numerical experiments are provided on a set of representative molecules demonstrating the performance and robustness of these methods.
-
Ph.D. Thesis
2011
Scoring-and-Unfolding Trimmed Tree Assembler: Algorithms for Assembling Genome Sequences Accurately and Efficiently
Narzisi, Giuseppe
Abstract
|
PDF
Title: Scoring-and-Unfolding Trimmed Tree Assembler: Algorithms for Assembling Genome Sequences Accurately and Efficiently
Candidate: Narzisi, Giuseppe
Advisor(s): Mishra, Bud
Abstract:
The recent advances in DNA sequencing technology and their many potential applications to Biology and Medicine have rekindled enormous interest in several classical algorithmic problems at the core of Genomics and Computational Biology: primarily, the whole-genome sequence assembly problem (WGSA). Two decades back, in the context of the Human Genome Project, the problem had received unprecedented scientific prominence: its computational complexity and intractability were thought to have been well understood; various competitive heuristics, thoroughly explored; and the necessary software, properly implemented and validated. However, several recent studies, focusing on the experimental validation of de novo assemblies, have highlighted several limitations of the current assemblers.
Intrigued by these negative results, this dissertation reinvestigates the algorithmic techniques required to correctly and efficiently assemble genomes. Mired by its connection to a well-known NP-complete combinatorial optimization problem, historically, WGSA has been assumed to be amenable only to greedy and heuristic methods. By placing efficiency as their priority, these methods opted to rely on local searches, and are thus inherently approximate, ambiguous or error-prone. This dissertation presents a novel sequence assembler, SUTTA, that dispenses with the idea of limiting the solutions to just the approximated ones, and instead favors an approach that could potentially lead to an exhaustive (exponential-time) search of all possible layouts but tames the complexity through constrained search (Branch-and-Bound) and quick identification and pruning of implausible solutions.
Complementary to this problem is the task of validating the generated assemblies. Unfortunately, no commonly accepted method exists yet, and the widely used metrics for comparing assembled sequences emphasize only size, poorly capturing quality and accuracy. This dissertation also addresses these concerns by developing a more comprehensive metric, the Feature-Response Curve, which, using ideas from the classical ROC (receiver operating characteristic) curve, more faithfully captures the trade-off between contiguity and quality.
Finally, this dissertation demonstrates the advantages of a complete pipeline integrating base-calling (TotalReCaller) with assembly (SUTTA) in a Bayesian manner.
-
TR2011-942
2011
Domain Decomposition Methods for Raviart-Thomas Vector Fields
Oh, Duk-Soon
Abstract
|
PDF
Title: Domain Decomposition Methods for Raviart-Thomas Vector Fields
Author(s): Oh, Duk-Soon
Abstract:
Raviart-Thomas finite elements are very useful for problems posed in H(div) since they are H(div)-conforming. We introduce two domain decomposition methods for solving vector field problems posed in H(div) discretized by Raviart-Thomas finite elements.
A two-level overlapping Schwarz method is developed. The coarse part of the preconditioner is based on energy-minimizing extensions and the local parts consist of traditional solvers on overlapping subdomains. We prove that our method is scalable and that the condition number grows linearly with the logarithm of the number of degrees of freedom in the individual subdomains and linearly with the relative overlap between the overlapping subdomains. The condition number of the method is also independent of the values and jumps of the coefficients across the interface between subdomains. We provide numerical results to support our theory.
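Restated in the notation used elsewhere in this listing (and only up to constants, since the number of degrees of freedom in a subdomain is polynomial in \(H/h\)), a bound with this dependence has the form \(C(1+\log(H/h))(1+H/\delta)\), where \(H/\delta\) measures the relative overlap between neighboring subdomains and \(H/h\) the number of elements across an individual subdomain.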
We also consider a balancing domain decomposition by constraints (BDDC) method. The BDDC preconditioner consists of a coarse part involving primal constraints across the interface between subdomains and local parts related to the Schur complements corresponding to the local subdomain problems. We provide bounds on the condition number of the preconditioned linear system, and our numerical experiments suggest that the condition number has a polylogarithmic bound in terms of the number of degrees of freedom in the individual subdomains, even for arbitrary jumps of the coefficients across the subdomain interfaces.
-
TR2011-945
2011
From a Calculus to an Execution Environment for Stream Processing
Soulé, Robert;
Hirzel, Martin; Gedik, Bugra; Grimm, Robert
Abstract
|
PDF
Title: From a Calculus to an Execution Environment for Stream Processing
Author(s): Soulé, Robert; Hirzel, Martin; Gedik, Bugra; Grimm, Robert
Abstract:
At one level, this paper is about River, a virtual execution environment for stream processing. Stream processing is a paradigm well-suited for many modern data processing systems that ingest high-volume data streams from the real world, such as audio/video streaming, high-frequency trading, and security monitoring. One attractive property of stream processing is that it lends itself to parallelization on multi-cores, and even to distribution on clusters when extreme scale is required. Stream processing has co-evolved in several communities, leading to diverse languages with similar core concepts. Providing a common execution environment reduces language development effort and increases portability. We designed River as a practical realization of Brooklet, a calculus for stream processing. So at another level, this paper is about a journey from theory (the calculus) to practice (the execution environment). The challenge is that, by definition, a calculus abstracts away all but the most central concepts. Hence, there are several research questions in concretizing the missing parts, not to mention a significant engineering effort in implementing them. But the effort is well worth it, because the benefit of using a calculus as a foundation is that it yields clear semantics and proven correctness results.
-
Ph.D. Thesis
2011
Cryptographic Resilience to Continual Information Leakage
Wichs, Daniel
Abstract
|
PDF
Title: Cryptographic Resilience to Continual Information Leakage
Candidate: Wichs, Daniel
Advisor(s): Dodis, Yevgeniy
Abstract:
We study the question of achieving cryptographic security on devices that leak information about their internal secret state to an external attacker. This study is motivated by the prevalence of side-channel attacks, where the physical characteristics of a computation (e.g. timing, power-consumption, temperature, radiation, acoustics, etc.) can be measured, and may reveal useful information about the internal state of a device. Since some such leakage is inevitably present in almost any physical implementation, we believe that this problem cannot just be addressed by physical countermeasures alone. Instead, it should already be taken into account when designing the mathematical specification of cryptographic primitives and included in the formal study of their security.
In this thesis, we propose a new formal framework for modeling the leakage available to an attacker. This framework, called the continual leakage model, assumes that an attacker can continually learn arbitrary information about the internal secret state of a cryptographic scheme at any point in time, subject only to the constraint that the rate of leakage is bounded. More precisely, our model assumes some abstract notion of time periods. In each such period, the attacker can choose to learn arbitrary functions of the current secret state of the scheme, as long as the number of output bits leaked is not too large. In our solutions, cryptographic schemes will continually update their internal secret state at the end of each time period. This will ensure that leakage observed in different time periods cannot be meaningfully combined to break the security of the cryptosystem. Although these updates modify the secret state of the cryptosystem, the desired functionality of the scheme is preserved, and the users can remain oblivious to these updates. We construct signatures, encryption, and secret sharing/storage schemes in this model.
-
Ph.D. Thesis
2011
Surface Representation of Particle Based Fluids
Yu, Jihun
Abstract
|
PDF
Title: Surface Representation of Particle Based Fluids
Candidate: Yu, Jihun
Advisor(s): Yap, Chee
Abstract:
In this thesis, we focus on surface representation for particle-based fluid simulators such as Smoothed Particle Hydrodynamics (SPH). We first present a new surface reconstruction algorithm which formulates the implicit function as a sum of anisotropic smoothing kernels. The direction of anisotropy at a particle is determined by performing Weighted Principal Component Analysis (WPCA) over the neighboring particles. In addition, we perform a smoothing step that re-positions the centers of these smoothing kernels. Since these anisotropic smoothing kernels capture the local particle distributions more accurately, our method has advantages over existing methods in representing smooth surfaces, thin streams and sharp features of fluids. This method is fast, easy to implement, and the results demonstrate a significant improvement in the quality of reconstructed surfaces as compared to existing methods. Next, we introduce the idea of using an explicit triangle mesh to track the air/liquid interface in an SPH simulator.
Once an initial surface mesh is created, this mesh is carried forward in time using nearby particle velocities to advect the mesh vertices. The mesh connectivity remains mostly unchanged across time-steps; it is only modified locally for topology change events or for the improvement of triangle quality. In order to ensure that the surface mesh does not diverge from the underlying particle simulation, we periodically project the mesh surface onto an implicit surface defined by the physics simulation. The mesh surface presents several advantages over previous SPH surface tracking techniques: A new method for surface tension calculations clearly outperforms the state of the art in SPH surface tension for computer graphics. A new method for tracking detailed surface information (like colors) is less susceptible to numerical diffusion than competing techniques. Finally, a temporally-coherent surface mesh allows us to simulate high-resolution surface wave dynamics without being limited by the particle resolution of the SPH simulation.
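As an illustration of the weighted-PCA step described above, the following is a minimal numerical sketch (assumed kernel weights and parameters; not the thesis implementation): a weighted covariance of neighboring particle positions is diagonalized, and its eigenstructure defines an anisotropic smoothing kernel stretched along the local particle distribution.

    import numpy as np

    def anisotropy_matrix(p, neighbors, h):
        """Return a WPCA-based anisotropy matrix for the particle at position p."""
        d = np.linalg.norm(neighbors - p, axis=1)
        w = np.maximum(0.0, 1.0 - (d / h) ** 3)          # isotropic weighting kernel (assumed form)
        if w.sum() == 0.0:
            return np.eye(3) / h                          # no neighbors: fall back to isotropic
        mean = (w[:, None] * neighbors).sum(axis=0) / w.sum()
        diff = neighbors - mean
        C = (w[:, None, None] * np.einsum("ni,nj->nij", diff, diff)).sum(axis=0) / w.sum()
        eigval, eigvec = np.linalg.eigh(C)                # weighted principal component analysis
        eigval = np.maximum(eigval, 1e-8 * eigval.max())  # guard against degenerate directions
        # map world space into the kernel's isotropic space, scaled by the support radius
        return eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T / h

    pts = np.random.default_rng(1).normal(size=(50, 3)) * np.array([1.0, 0.3, 0.3])
    print(anisotropy_matrix(np.zeros(3), pts, h=2.0))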