Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
2014
On the Human Form: Efficient acquisition, modeling and manipulation of the human body
Braga, Otavio
Abstract
|
PDF
Title: On the Human Form: Efficient acquisition, modeling and manipulation of the human body
Candidate: Braga, Otavio
Advisor(s): Geiger, Davi
Abstract:
This thesis concerns the acquisition, modeling and manipulation of the human form.
First, we acquire body models. We introduce an efficient bootstrapped algorithm that we employed to register over 2,000 high-resolution body scans of male and female adult subjects. Our algorithm outputs not only the traditional vertex correspondences, but also directly produces a high-quality model that can be immediately deformed. We then employ the result to fit noisy depth maps from now commercially available 3D sensors such as Microsoft's Kinect and PrimeSense's Carmine.
We then switch focus to the topic of body manipulation. We first revisit the more traditional way of specifying bodies from a set of measurements, such as those coming from clothing sizing charts, showing how the statistics of the population learned during the registration can aid us in accurately defining the body shape. We then introduce a new manipulation metaphor, where we navigate through the space of body shapes and poses by directly dragging the body mesh surface.
We conclude by describing a new real-time system for image-based body manipulation called BodyJam, which lets you change your outfit with a finger snap. BodyJam is inspired by a technique invented by the surrealists a century ago: "Exquisite Corpse", a method by which a collection of images (of body parts) is collectively assembled. BodyJam does it on a video display that mirrors, in real time, the pose of a real person standing in front of the camera/display mirror, and allows the user to change clothes and other appearance attributes. Using Microsoft's Kinect, poses are matched to a video database of different torsos and legs, and "pages" showing different clothes are turned by hand gestures.
-
TR2014-969
2014
Overlapping Schwarz Algorithms for Almost Incompressible Linear Elasticity
Cai, Mingchao;
Pavarino, Luca F.; Widlund, Olof B.
Abstract
|
PDF
Title: Overlapping Schwarz Algorithms for Almost Incompressible Linear Elasticity
Author(s): Cai, Mingchao; Pavarino, Luca F.; Widlund, Olof B.
Abstract:
Low order finite element discretizations of the linear elasticity system suffer increasingly from locking effects and ill-conditioning, when the material approaches the incompressible limit, if only the displacement variables are used. Mixed finite elements using both displacement and pressure variables provide a well-known remedy, but they yield larger and indefinite discrete systems for which the design of scalable and efficient iterative solvers is challenging. Two-level overlapping Schwarz preconditioners for the almost incompressible system of linear elasticity, discretized by mixed finite elements with discontinuous pressures, are constructed and analyzed. The preconditioned systems are accelerated either by a GMRES (generalized minimum residual) method applied to the resulting discrete saddle point problem or by a PCG (preconditioned conjugate gradient) method applied to a positive definite, although extremely ill-conditioned, reformulation of the problem obtained by eliminating all pressure variables on the element level. A novel theoretical analysis of the algorithm for the positive definite reformulation is given by extending earlier results of Dohrmann and Widlund. The main result of the paper is a bound on the condition number of the algorithm which is cubic in the relative overlap and grows logarithmically with the number of elements across individual subdomains but is otherwise independent of the number of subdomains, their diameters and mesh sizes, the incompressibility of the material, and possible discontinuities of the material parameters across the subdomain interfaces. Numerical results in the plane confirm the theory and also indicate that an analogous result should hold for the saddle point formulation, as well as for spectral element discretizations.
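Schematically, in the standard notation where H, h, and δ denote the subdomain diameter, the mesh size, and the overlap width, a bound of the stated form reads as follows (a sketch of the dependence described above; the precise constants and the exact power of the logarithmic factor are given in the paper, not in this abstract):

    \kappa \;\le\; C \left( \frac{H}{\delta} \right)^{3} \left( 1 + \log \frac{H}{h} \right)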
-
TR2014-965
2014
A BDDC algorithm with deluxe scaling for H(curl) in two dimensions with irregular subdomains
Calvo, Juan G.
Abstract
|
PDF
Title: A BDDC algorithm with deluxe scaling for H(curl) in two dimensions with irregular subdomains
Author(s): Calvo, Juan G.
Abstract:
A bound is obtained for the condition number of a BDDC algorithm for problems posed in H(curl) in two dimensions, where the subdomains are only assumed to be uniform in the sense of Peter Jones. For the primal variable space, a continuity constraint for the tangential average over each interior subdomain edge is imposed.
For the averaging operator, a new technique named deluxe scaling is used. Our bound is independent of jumps in the coefficients across the interface between the subdomains and depends only on a few geometric parameters of the decomposition. Numerical results that verify the bound are shown, including some with subdomains with fractal edges and others obtained by a mesh partitioner.
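For reference, deluxe scaling forms the average across a subdomain edge E shared by Ω_i and Ω_j from the local Schur complements restricted to E. In its standard form (following earlier work of Dohrmann and Widlund; the abstract itself does not spell out the formula, so this is only a sketch), the deluxe average of two local traces u_E^(i), u_E^(j) is:

    \bar{u}_E = \left( S_E^{(i)} + S_E^{(j)} \right)^{-1} \left( S_E^{(i)} u_E^{(i)} + S_E^{(j)} u_E^{(j)} \right)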
-
TR2014-968
2014
A two-level overlapping Schwarz method for H(curl) in two dimensions with irregular subdomains
Calvo, Juan G.
Abstract
|
PDF
Title: A two-level overlapping Schwarz method for H(curl) in two dimensions with irregular subdomains
Author(s): Calvo, Juan G.
Abstract:
A bound is obtained for the condition number of a two-level overlapping Schwarz algorithm for problems posed in H(curl) in two dimensions, where the subdomains are only assumed to be uniform in the sense of Peter Jones. The coarse space is based on energy minimization and its dimension equals the number of interior subdomain edges. Local direct solvers are used on the overlapping subdomains. Our bound depends only on a few geometric parameters of the decomposition. This bound is independent of jumps in the coefficients across the interface between the subdomains for most of the different cases considered. Numerical experiments that verify the result are shown, including some with subdomains with fractal edges and others obtained by a mesh partitioner.
-
Ph.D. Thesis
2014
Analyzing Tatonnement Dynamics in Economic Markets
Cheung, Yun Kuen
Abstract
|
PDF
Title: Analyzing Tatonnement Dynamics in Economic Markets
Candidate: Cheung, Yun Kuen
Advisor(s): Cole, Richard
Abstract:
The impetus for this dissertation is to explain why well-functioning markets might be able to stay at or near a market equilibrium. We argue that tatonnement, a natural, simple and distributed price update dynamic in economic markets, is a plausible candidate to explain how markets might reach their equilibria.
Tatonnement is broadly defined as follows: if the demand for a good is more than the supply, increase the price of the good, and conversely, decrease the price when the demand is less than the supply. Prior works show that tatonnement converges to market equilibrium in some markets while it fails to converge in other markets. Our goal is to extend the classes of markets in which tatonnement is shown to converge. The prior positive results largely concerned markets with substitute goods. We seek market constraints which enable tatonnement to converge in markets with complementary goods, or with a mixture of substitutes and complementary goods. We also show fast convergence rates for some of these markets.
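A minimal sketch of this update rule in Python; the multiplicative step size and the demand oracle are illustrative assumptions, not the exact dynamics analyzed in the thesis:

    # Tatonnement sketch: raise the price of over-demanded goods and
    # lower the price of under-demanded ones. The multiplicative step
    # and the demand function are assumptions for illustration only.
    def tatonnement(prices, demand, supply, step=0.01, rounds=1000):
        for _ in range(rounds):
            for g in prices:
                excess = demand(g, prices) - supply[g]   # excess demand for good g
                prices[g] *= 1.0 + step * excess / supply[g]
        return prices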
We introduce an amortized analysis technique to handle asynchronous events, in our case asynchronous price updates. For some markets, we also show that tatonnement is equivalent to generalized gradient descent (GGD). The amortized analysis and our analysis of GGD may be of independent interest.
-
TR2014-964
2014
A BDDC algorithm with deluxe scaling for three-dimensional H(curl) problems
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: A BDDC algorithm with deluxe scaling for three-dimensional H(curl) problems
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
In this paper, we present and analyze a BDDC algorithm for a class of elliptic problems in the three-dimensional H(curl) space. Compared with existing results, our condition number estimate requires fewer assumptions and also involves two fewer powers of log(H/h), making it consistent with optimal estimates for other elliptic problems. Here, H/h is the maximum of H_i/h_i over all subdomains, where H_i and h_i are the diameter and the smallest element diameter for the subdomain Ω_i.
The analysis makes use of two recent developments. The first is a new approach to averaging across the subdomain interfaces, while the second is a new technical tool which allows arguments involving trace classes to be avoided. Numerical examples are presented to confirm the theory and demonstrate the importance of the new averaging approach in certain cases.
-
Ph.D. Thesis
2014
Low-latency Image Recognition with GPU-accelerated Convolutional Networks for Web-based Services
Huang, Fu Jie
Abstract
|
PDF
Title: Low-latency Image Recognition with GPU-accelerated Convolutional Networks for Web-based Services
Candidate: Huang, Fu Jie
Advisor(s): LeCun, Yann
Abstract:
In this work, we describe an application of convolutional networks to object classification and detection in images. The task of image-based object recognition is surveyed in the first chapter. Its application in internet advertising is one of the main motivations of this work.
The architecture of the convolutional networks is described in detail in the following chapter. Stochastic gradient descent is used to train the networks.
We then describe the data collection and labelling process. The set of labelled training data essentially determines what kind of recognizer is built. Four binary classifiers are trained for the object types sailboat, car, motorbike, and dog.
A massively parallel GPU implementation of the convolutional networks is built. This enables us to run the convolution operations close to 40 times faster than on a traditional CPU. Details of how to implement the convolution operation on NVIDIA GPUs using CUDA are discussed.
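To illustrate why convolution parallelizes so well on a GPU, here is the operation in plain NumPy; on the GPU, each (i, j) entry of the output, an independent dot product, is computed by its own CUDA thread. Shapes and names are illustrative:

    import numpy as np

    def conv2d_valid(image, kernel):
        # Each output pixel is an independent dot product; in a CUDA
        # implementation, one thread computes one (i, j) output entry.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out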
In order to apply the object recognizer in a production environment where millions of images are processed daily, we have built a platform based on cloud computing. We describe how large-scale and low-latency image processing can be achieved with such a system.
-
Ph.D. Thesis
2014
Effective Algorithms for the Satisfiability of Quantifier-Free Formulas Over Linear Real and Integer Arithmetic
King, Tim
Abstract
|
PDF
Title: Effective Algorithms for the Satisfiability of Quantifier-Free Formulas Over Linear Real and Integer Arithmetic
Candidate: King, Tim
Advisor(s): Barrett, Clark
Abstract:
A core technique of modern tools for formally reasoning about computing systems is generating and dispatching queries to automated theorem provers, including Satisfiability Modulo Theories (SMT) provers. SMT provers aim at the tight integration of decision procedures for propositional satisfiability and decision procedures for fixed first-order theories, known as theory solvers. This thesis presents several advancements in the design and implementation of theory solvers for quantifier-free linear real, integer, and mixed integer and real arithmetic. These are implemented within the SMT system CVC4. We begin by formally describing the Satisfiability Modulo Theories problem and the role of theory solvers within CVC4. We discuss known techniques for building solvers for quantifier-free linear real, integer, and mixed integer and real arithmetic around the Simplex for DPLL(T) algorithm. We give several small improvements to theory solvers using this algorithm and describe its theory and implementation in detail. To extend the class of problems that the theory solver can robustly support, we borrow and adapt several techniques from linear programming (LP) and mixed integer programming (MIP) solvers, which come from the tradition of optimization. We propose a new decision procedure for quantifier-free linear real arithmetic that replaces the Simplex for DPLL(T) algorithm with a variant of the Simplex algorithm that performs a form of optimization, minimizing the sum of infeasibilities. We additionally describe techniques for leveraging LP and MIP solvers to improve the performance of SMT solvers without compromising correctness. Previous efforts to leverage such solvers in the context of SMT have concluded that, in addition to being potentially unsound, such solvers are too heavyweight to compete in the context of SMT. We present an empirical comparison against other state-of-the-art SMT tools to demonstrate the effectiveness of the proposed solutions.
-
TR2014-966
2014
Local temporal reasoning
Koskinen, Eric
Abstract
|
PDF
Title: Local temporal reasoning
Author(s): Koskinen, Eric
Abstract:
We present the first method for reasoning about temporal logic properties of higher-order, infinite-data programs. By distinguishing between the finite traces and infinite traces in the specification, we obtain rules that permit us to reason about the temporal behavior of program parts via a type-and-effect system, which is then able to compose these facts together to prove the overall target property of the program. The type system alone is strong enough to derive many temporal safety properties using refinement types and temporal effects. We also show how existing techniques can be used as oracles to provide liveness information (e.g. termination) about program parts and that the type-and-effect system can combine this information with temporal safety information to derive nontrivial temporal properties. Our work has application toward verification of higher-order software, as well as modular strategies for procedural programs.
-
TR2014-967
2014
The Push/Pull model of transactions
Koskinen, Eric;
Parkinson, Matthew
Abstract
|
PDF
Title: The Push/Pull model of transactions
Author(s): Koskinen, Eric; Parkinson, Matthew
Abstract:
We present a general theory of serializability, unifying a wide range of transactional algorithms, including some that are yet to come. To this end, we provide a compact semantics in which concurrent transactions push their effects into the shared view (or unpush to recall effects) and pull the effects of potentially uncommitted concurrent transactions into their local view (or unpull to detangle). Each operation comes with simple side-conditions given in terms of commutativity (Lipton's left-movers and right-movers).
The benefit of this model is that most of the elaborate reasoning (coinduction, simulation, subtle invariants, etc.) necessary for proving the serializability of a transactional algorithm is already proved within the semantic model. Thus, proving serializability (or opacity) amounts simply to mapping the algorithm on to our rules, and showing that it satisfies the rules' side-conditions.
-
Ph.D. Thesis
2014
Cryptographic Algorithms for the Secure Delegation of Multiparty Computation
Lopez-Alt, Adriana
Abstract
|
PDF
Title: Cryptographic Algorithms for the Secure Delegation of Multiparty Computation
Candidate: Lopez-Alt, Adriana
Advisor(s): Dodis, Yevgeniy
Abstract:
In today’s world, we store our data and perform expensive computations remotely on powerful servers (a.k.a. “the cloud”) rather than on our local devices. In this dissertation we study the question of achieving cryptographic security in the setting where multiple (mutually distrusting) clients wish to delegate the computation of a joint function on their inputs to an untrusted cloud, while keeping these inputs private. We introduce two frameworks for modeling such protocols.
- The first, called cloud-assisted multiparty computation (cloud-assisted MPC), builds on the standard notion of MPC to incorporate the concept of delegation. In particular, since the cloud is expected to perform the computation of the function, our definition requires the communication complexity of the protocol, as well as the computation time of all clients to be (essentially) independent of the complexity of the function.
- The second, called on-the-fly MPC, builds on the notion of cloud-assisted MPC and further requires that the clients be involved only when initially uploading their input to the cloud, and in a final phase when outputs are revealed. In particular, this allows the server to dynamically choose functions (and subsets of data on which to evaluate these functions) “on-the-fly”, and evaluate them without requiring any interaction with the clients. The only interaction required takes place in the final phase after the computation has been completed, when the clients must retroactively approve both the chosen functions, and the subsets of data upon which these functions were evaluated.
We construct cloud-assisted and on-the-fly MPC protocols using fully homomorphic encryption (FHE). However, FHE requires inputs to be encrypted under the same key; we extend it to the multiparty setting in two ways:
- We introduce the notion of threshold FHE: fully homomorphic encryption that allows the clients to jointly generate a common public key (whose corresponding secret key is shared among them), as well as decrypt a ciphertext under this public key without learning anything but the plaintext. Using threshold FHE, we show how to construct an efficient cloud-assisted MPC protocol. We construct threshold FHE using (a modification of) the Brakerski-Vaikuntanathan (ring-based) FHE scheme; however our ideas extend to many other lattice-based FHE schemes in the literature.
- We introduce the notion of multikey FHE: fully homomorphic encryption that allows the cloud to perform homomorphic evaluation on ciphertexts encrypted under different and independent keys. We show a construction of on-the-fly MPC using multikey FHE, and construct a multikey FHE scheme based on NTRU encryption. We highlight that it was previously not known how to make NTRU fully homomorphic, even for a single key. Therefore, we view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest.
-
M.S. Thesis
2014
Resolution-Exact Planner for a 2-link Planar Robot using Soft Predicates
Luo, Zhongdi
Abstract
|
PDF
Title: Resolution-Exact Planner for a 2-link Planar Robot using Soft Predicates
Candidate: Luo, Zhongdi
Advisor(s): Yap, Chee
Abstract:
Motion planning is a major topic in robotics. It frequently refers to the motion of a robot in an R^2 or R^3 world that contains obstacles. Our goal is to produce algorithms that are practical and have strong theoretical guarantees. Recently, a new framework, Soft Subdivision Search (SSS), was introduced to solve various motion planning problems. It is based on soft predicates and a new notion of correctness called resolution-exactness. Unlike most theoretical algorithms, such algorithms can be implemented without exact computation. In this thesis we describe a detailed, realized SSS algorithm for a 2-link robot in R^2. We prove the correctness of our predicates and also conduct an experimental study of several strategies to enhance the basic SSS algorithm. In particular, we introduce a technique called T/R Splitting, in which the splittings of the rotational degrees of freedom are deferred to the end. The results give strong evidence of the practicability of SSS.
-
Ph.D. Thesis
2014
Robust and Efficient Methods for Approximation and Optimization of Stability Measures
Mitchell, Tim
Abstract
|
PDF
Title: Robust and Efficient Methods for Approximation and Optimization of Stability Measures
Candidate: Mitchell, Tim
Advisor(s): Overton, Michael
Abstract:
We consider two new algorithms with practical application to the problem of designing controllers for linear dynamical systems with input and output: a new spectral-value-set-based algorithm called hybrid expansion-contraction, intended for approximating the H-infinity norm, or equivalently the complex stability radius, of large-scale systems, and a new BFGS SQP based optimization method for nonsmooth, nonconvex constrained optimization, motivated by multi-objective controller design. In comprehensive numerical experiments, we show that both algorithms, in their respective domains, are significantly faster and more robust than other available alternatives. Moreover, we present convergence guarantees for hybrid expansion-contraction, proving that it converges at least superlinearly, and observe that it converges quadratically in practice, and typically to good approximations of the H-infinity norm, for problems where we can verify this. We also extend the hybrid expansion-contraction algorithm to the real stability radius, a measure which is known to be more difficult to compute than the complex stability radius. Finally, for the purposes of comparing multiple optimization methods, we present a new visualization tool called relative minimization profiles that allows for simultaneously assessing the relative performance of algorithms with respect to three important performance characteristics, highlighting how these measures interrelate and how the competing algorithms compare on heterogeneous test sets. We employ relative minimization profiles to empirically validate our proposed BFGS SQP method in terms of quality of minimization, attaining feasibility, and speed of progress compared to other available methods on challenging test sets comprised of nonsmooth, nonconvex constrained optimization problems arising in controller design.
-
Ph.D. Thesis
2014
Building Efficient Distributed In-memory Systems
Power, Russell
Abstract
|
PDF
Title: Building Efficient Distributed In-memory Systems
Candidate: Power, Russell
Advisor(s): Li, Jinyang
Abstract:
The recent cloud computing revolution has changed the distributed computing landscape, making the resources of entire datacenters available to ordinary users. This process has been greatly aided by dataflow-style frameworks such as MapReduce, which expose a simple programming model and allow for efficient, fault-tolerant execution across many machines. While the MapReduce model has proved effective for many applications, there is a wide class of applications that are difficult to write or inefficient in such a model. This includes many familiar and important applications such as PageRank, matrix factorization, and a number of machine learning algorithms. In lieu of a good framework for building these applications, users resort to writing them using MPI or RPC, a difficult and error-prone undertaking.
This thesis presents two complementary frameworks, Piccolo and Spartan, which help programmers write in-memory distributed applications not well served by existing approaches.
Piccolo presents a new data-centric programming model for in-memory applications. Unlike dataflow models, Piccolo allows programs running on different machines to share distributed, mutable state via a key-value table interface. This design allows for both high performance and additional flexibility. Piccolo makes novel use of commutative updates to efficiently resolve write-write conflicts. We find Piccolo provides an efficient backend for a wide range of applications: from PageRank and matrix multiplication to web crawling.
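A toy sketch of the commutative-update idea, with a hypothetical table API loosely modeled on this description (the real Piccolo partitions tables across machines):

    # Hypothetical Piccolo-style table: updates to the same key are merged
    # by a commutative accumulator, so concurrent write-write conflicts
    # resolve deterministically regardless of arrival order.
    class Table:
        def __init__(self, accumulator):
            self.accumulate = accumulator
            self.data = {}

        def update(self, key, value):
            if key in self.data:
                self.data[key] = self.accumulate(self.data[key], value)
            else:
                self.data[key] = value

    # PageRank-style use: rank contributions from many workers simply sum.
    ranks = Table(accumulator=lambda a, b: a + b)
    ranks.update("page/42", 0.15)
    ranks.update("page/42", 0.03)  # merged commutatively, order-independent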
While Piccolo provides an efficient backend for distributed computation, it can still be somewhat cumbersome to write programs using it directly. To address this, we created Spartan, a distributed implementation of the NumPy array language that fully supports important array language features such as spatial indexing (slicing), fancy indexing, and broadcasting. A key feature of Spartan is its use of a small number of simple, powerful high-level operators to provide most functionality. Not only do these operators dramatically simplify the design and implementation of Spartan, they also allow users to implement new functionality with ease.
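The flavor of that operator-based design, in a simplified single-machine form; the operator names and signatures here are assumptions for illustration, not Spartan's actual API:

    import numpy as np

    # Assumed stand-ins for Spartan-style high-level operators; a real
    # runtime would evaluate these per distributed tile of the array.
    def map_tiles(fn, *arrays):
        return fn(*arrays)

    def reduce_tiles(fn, array):
        return fn(array)

    def distributed_dot(a, b):
        # New functionality composes from the two primitives:
        # an elementwise map followed by a global reduction.
        return reduce_tiles(np.sum, map_tiles(np.multiply, a, b))

    print(distributed_dot(np.arange(4.0), np.ones(4)))  # 6.0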
We evaluate Piccolo and Spartan on a wide range of applications and find that they both perform significantly better than existing approaches.
-
TR2014-971
2014
VerifiableAuction: An Auction System for a Suspicious World
Rosenberg, Michael;
Shasha, Dennis
Abstract
|
PDF
Title: VerifiableAuction: An Auction System for a Suspicious World
Author(s): Rosenberg, Michael; Shasha, Dennis
Abstract:
This paper presents a cryptosystem that allows fair first-price sealed-bid auctions among groups of individuals to be conducted over the internet without the need for a trusted third party. A client who maintains the secrecy of his or her private key is able to keep the bid secret from the server and from all other clients until the client explicitly decides to reveal it, which happens only after all clients have published their obfuscated bids. Each client can verify that every other client's revealed bid corresponds to that client's obfuscated bid at the end of each auction. Each client is provided with a transcript of all auction proceedings so that they may be independently audited.
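The abstract does not spell out the cryptosystem, but the obfuscate-then-reveal flow it describes can be illustrated with a simple hash commitment (a sketch only, not the paper's actual construction):

    import hashlib, os

    def commit(bid: int):
        # Obfuscated bid: a hash of the bid plus a random nonce. Publishing
        # the digest binds the bidder without revealing the amount.
        nonce = os.urandom(16)
        digest = hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest()
        return digest, nonce

    def verify(digest: str, bid: int, nonce: bytes) -> bool:
        # After reveal, any client can audit that the bid matches the digest.
        return hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest() == digest

    d, n = commit(4200)        # publish d; keep the bid and nonce secret
    assert verify(d, 4200, n)  # reveal only after all digests are posted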
-
Ph.D. Thesis
2014
Runtime Compilation of Array-Oriented Python Programs
Rubinsteyn, Alex
Abstract
|
PDF
Title: Runtime Compilation of Array-Oriented Python Programs
Candidate: Rubinsteyn, Alex
Advisor(s): Shasha, Dennis
Abstract:
The Python programming language has become a popular platform for data analysis and scientific computing. To mitigate the poor performance of Python's standard interpreter, numerically intensive computations are typically offloaded to library functions written in languages such as Fortran or C. If, however, some algorithm does not have an existing low-level implementation, then the scientific programmer must either accept sub-standard performance (sometimes orders of magnitude slower than native code) or implement the desired functionality themselves in a less productive but more efficient language.
To alleviate this problem, this thesis presents Parakeet, a runtime compiler for an array-oriented subset of Python. Parakeet does not replace the Python interpreter, but rather selectively augments it by compiling and executing functions explicitly marked by the programmer. Parakeet uses runtime type specialization to eliminate the performance-defeating dynamism of untyped Python code. Parakeet's pervasive use of data parallel operators as a means for implementing array operations enables high-level restructuring optimizations and compilation to parallel hardware such as multi-core CPUs and graphics processors. We evaluate Parakeet on a collection of numerical benchmarks and demonstrate its dramatic capacity for accelerating array-oriented Python programs.
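Usage takes roughly the following form, sketched from the description above; the decorator name is our assumption about the released library's interface:

    import numpy as np
    import parakeet  # assumed: the released Parakeet package

    @parakeet.jit            # only explicitly marked functions are compiled;
    def weighted_sum(x, w):  # everything else runs on the normal interpreter
        total = 0.0
        for i in range(len(x)):
            total += x[i] * w[i]
        return total

    weighted_sum(np.arange(5.0), np.ones(5))  # type-specialized at first call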
-
Ph.D. Thesis
2014
A Deep Learning Pipeline for Image Understanding and Acoustic Modeling
Sermanet, Pierre
Abstract
|
PDF
Title: A Deep Learning Pipeline for Image Understanding and Acoustic Modeling
Candidate: Sermanet, Pierre
Advisor(s): LeCun, Yann
Abstract:
One of the biggest challenges artificial intelligence faces is making sense of the real world through sensory signals such as audio or video. Noisy inputs, varying object viewpoints, deformations and lighting conditions turn it into a high-dimensional problem which cannot be efficiently solved without learning from data.
This thesis explores a general way of learning from high dimensional data (video, images, audio, text, financial data, etc.) called deep learning. It thrives on the increasingly large amounts of data available, learning robust and invariant internal features in a hierarchical manner directly from the raw signals.
We propose a unified pipeline for feature learning, recognition, localization and detection using Convolutional Networks (ConvNets) that can obtain state-of-the-art accuracy on a number of pattern recognition tasks, including acoustic modeling for speech recognition and object recognition in computer vision. ConvNets are particularly well suited for learning from continuous signals in terms of both accuracy and efficiency.
Additionally, a novel and general deep learning approach to detection is proposed and successfully demonstrated on the most challenging vision datasets. We then generalize it to other modalities such as speech data. This approach allows accurate localization and detection of objects in images or of phones in voice signals by learning to predict boundaries from internal representations. We extend the reach of deep learning from classification to detection tasks in an integrated fashion by learning multiple tasks using a single deep model. This work is among the first to outperform human vision and establishes a new state of the art on some computer vision and speech recognition benchmarks.
-
Ph.D. Thesis
2014
Towards New Interfaces For Pedagogy
Stein, Murphy
Abstract
|
PDF
Title: Towards New Interfaces For Pedagogy
Candidate: Stein, Murphy
Advisor(s): Perlin, Ken
Abstract:
Developing technology to help people teach and learn is an important topic in Human Computer Interaction (HCI).
In this thesis we present three studies on this topic. In the first study, we demonstrate new games for learning mathematics and discuss the evidence for key design decisions from user studies. In the second study, we develop a real-time video compositing system for distance education and share evidence for its potential value compared to standard techniques from two user studies. In the third study, we demonstrate our markerless hand tracking interface for real-time 3D manipulation and explain its advantages compared to other state-of-the-art methods.
-
Ph.D. Thesis
2014
Computational Complexity Implications of Secure Coin-Flipping
Tentes, Aristeidis
Abstract
|
PDF
Title: Computational Complexity Implications of Secure Coin-Flipping
Candidate: Tentes, Aristeidis
Advisor(s): Dodis, Yevgeniy
Abstract:
Modern cryptography is based on computational intractability assumptions, e.g., Factoring, Discrete Logarithm, Diffie-Hellman, etc. However, since an assumption might be proven incorrect, much effort has gone into constructing cryptographic primitives from the weakest possible assumption. The most popular minimal assumption, which is implied by the existence of almost all cryptographic primitives, is the existence of One-Way Functions. Coin-flipping protocols are known to be implied by One-Way Functions; however, a complete characterization of the converse direction is not known. There was even speculation that weak notions of coin-flipping protocols might be strictly weaker than One-Way Functions. In this thesis we show that even very weak notions of coin-flipping protocols do imply One-Way Functions. In particular, we show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., 0.499) implies the existence of One-Way Functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias 0.207. Unlike the former result, our result also holds for weak coin-flipping protocols.
-
TR2014-963
2014
On Automating Separation Logic with Trees and Data
Wies, Thomas
Abstract
|
PDF
Title: On Automating Separation Logic with Trees and Data
Author(s): Wies, Thomas
Abstract:
Separation logic (SL) is a widely used formalism for verifying heap manipulating programs. Existing SL solvers focus on decidable fragments for list-like structures. More complex data structures such as trees are typically unsupported in implementations, or handled by incomplete heuristics.
While complete decision procedures for reasoning about trees have been proposed, these procedures suffer from high complexity, or make global assumptions about the heap that contradict the separation logic philosophy of local reasoning. In this paper, we present a fragment of classical first-order logic for local reasoning about tree-like data structures. The logic is decidable in NP and the decision procedure allows for combinations with other decidable first-order theories for reasoning about data. Such extensions are essential for proving functional correctness properties.
We have implemented our decision procedure and, building on earlier work on translating SL proof obligations into classical logic, integrated it into an SL-based verification tool. We successfully used the tool to verify functional correctness of tree-based data structure implementations.
-
Ph.D. Thesis
2014
Data-driven Approaches for Paraphrasing across Language Variations
Xu, Wei
Abstract
|
PDF
Title: Data-driven Approaches for Paraphrasing across Language Variations
Candidate: Xu, Wei
Advisor(s): Grishman, Ralph
Abstract:
Our language changes very rapidly, accompanying political, social and cultural trends, as well as the evolution of science and technology. The Internet, and especially social media, has accelerated this process of change. This poses a severe challenge for both human beings and natural language processing (NLP) systems, which usually only model a snapshot of language presented in the form of text corpora within a certain domain and time frame.
While much previous effort has investigated monolingual paraphrase and bilingual translation, we focus on modeling meaning-preserving transformations between variants of a single language. We use Shakespearean and Internet language as examples to investigate various aspects of this new paraphrase problem, including acquisition, generation, detection and evaluation.
A data-driven methodology is applied intensively throughout the course of this study. Several paraphrase corpora are constructed using automatic techniques, experts and crowdsourcing platforms. Paraphrase systems are trained and evaluated by using these data as a cornerstone. We show that even with a very noisy or a relatively small amount of parallel training data, it is possible to learn paraphrase models which capture linguistic phenomena. This work expands the scope of paraphrase studies to targeting different language variations, and more potential applications, such as text normalization and domain adaptation.
-
Ph.D. Thesis
2014
Positive-Unlabeled Learning in the Context of Protein Function Prediction
Youngs, Noah
Abstract
|
PDF
Title: Positive-Unlabeled Learning in the Context of Protein Function Prediction
Candidate: Youngs, Noah
Advisor(s): Shasha, Dennis
Abstract:
With the recent proliferation of large, unlabeled data sets, a particular subclass of semi-supervised learning problems has become more prevalent. Known as positive-unlabeled learning (PU learning), this scenario provides only positive labeled examples, usually just a small fraction of the entire dataset, with the remaining examples unknown and thus potentially belonging to either the positive or negative class. Since the vast majority of traditional machine learning classifiers require both positive and negative examples in the training set, a new class of algorithms has been developed to deal with PU learning problems.
A canonical example of this scenario is topic labeling of a large corpus of documents. Once the size of a corpus reaches into the thousands, it becomes largely infeasible to have a curator read even a sizable fraction of the documents, and annotate them with topics. In addition, the entire set of topics may not be known, or may change over time, making it impossible for a curator to annotate which documents are NOT about certain topics. Thus a machine learning algorithm needs to be able to learn from a small set of positive examples, without knowledge of the negative class, and knowing that the unlabeled training examples may contain an arbitrary number of additional but as yet unknown positive examples. Another example of a PU learning scenario recently garnering attention is the protein function prediction problem (PFP problem).
While the number of organisms with fully sequenced genomes continues to grow, the progress of annotating those sequences with the biological functions that they perform lags far behind. Machine learning methods have already been successfully applied to this problem, but with many organisms having a small number of positive annotated training examples, and the lack of availability of almost any labeled negative examples, PU learning algorithms can make large gains in predictive performance.
The first part of this dissertation motivates the protein function prediction problem, explores previous work, and introduces novel methods that improve upon previously reported benchmarks for a particular type of learning algorithm, known as Gaussian Random Field Label Propagation (GRFLP). In addition, we present improvements to the computational efficiency of the GRFLP algorithm, and a modification to the traditional structure of the PFP learning problem that allows for simultaneous prediction across multiple species.
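At its core, GRFLP rests on the harmonic solution of a Gaussian random field over a similarity graph; a generic sketch of that computation (the textbook form, not the thesis's multi-species extension or efficiency improvements):

    import numpy as np

    def grf_propagate(W, f_l, labeled, unlabeled):
        # Harmonic solution: f_u = (D_uu - W_uu)^{-1} W_ul f_l, where W is
        # the (protein) similarity graph and f_l holds known annotations.
        D = np.diag(W.sum(axis=1))
        L = D - W  # graph Laplacian
        L_uu = L[np.ix_(unlabeled, unlabeled)]
        W_ul = W[np.ix_(unlabeled, labeled)]
        return np.linalg.solve(L_uu, W_ul @ f_l)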
The second part of the dissertation focuses specifically on the positive-unlabeled aspects of the PFP problem. Two novel algorithms are presented, and rigorously compared to existing PU learning techniques in the context of protein function prediction. Additionally, we take a step back and examine some of the theoretical considerations of the PU scenario in general, and provide an additional novel algorithm applicable in any PU context. This algorithm is tailored for situations in which the labeled positive examples are a small fraction of the set of true positive examples, and where the labeling process may be subject to some type of bias rather than being a random selection of true positives (arguably some of the most difficult PU learning scenarios).
The third and fourth sections return to the PFP problem, examining the power of tertiary structure as a predictor of protein function, as well as presenting two case studies of function prediction performance on novel benchmarks. Lastly, we conclude with several promising avenues of future research into both PU learning in general, and the protein function prediction problem specifically.
-
Ph.D. Thesis
2014
Hierarchical Convolutional Deep Learning in Computer Vision
Zeiler, Matthew
Abstract
|
PDF
Title: Hierarchical Convolutional Deep Learning in Computer Vision
Candidate: Zeiler, Matthew
Advisor(s): Fergus, Rob
Abstract:
It has long been the goal in computer vision to learn a hierarchy of features useful for object recognition. Spanning the two traditional paradigms of machine learning, unsupervised and supervised learning, we investigate the application of deep learning methods to tackle this challenging task and to learn robust representations of images.
We begin our investigation with the introduction of a novel unsupervised learning technique called deconvolutional networks. Based on convolutional sparse coding, we show this model learns interesting decompositions of images into parts without object label information. This method, which easily scales to large images, becomes increasingly invariant by learning multiple layers of feature extraction coupled with pooling layers. We introduce a novel pooling method called Gaussian pooling to enable these layers to store continuous location information while being differentiable, creating a unified objective function to optimize.
In the supervised learning domain, a well-established model for recognition of objects is the convolutional network. We introduce a new regularization method for convolutional networks called stochastic pooling which relies on sampling noise to prevent these powerful models from overfitting. Additionally, we show novel visualizations of these complex models to better understand what they learn and to provide insight on how to develop state-of-the-art architectures for large-scale classification of 1,000 different object categories.
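A sketch of the stochastic pooling rule at training time, assuming non-negative activations (e.g. after a rectifier): each pooling region is reduced by sampling one of its activations with probability proportional to its value, rather than always taking the maximum.

    import numpy as np

    def stochastic_pool(region, rng):
        # Training-time stochastic pooling: sample one activation with
        # probability proportional to its (non-negative) value.
        a = np.asarray(region, dtype=float).ravel()
        total = a.sum()
        if total == 0.0:
            return 0.0  # all-zero region: nothing to sample
        return rng.choice(a, p=a / total)

    rng = np.random.default_rng(0)
    print(stochastic_pool([[0.0, 1.0], [3.0, 0.0]], rng))  # 1.0 or 3.0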
We also investigate some other related problems in deep learning. First, we introduce a model for the task of mapping one high dimensional time series sequence onto another. Second, we address the choice of nonlinearity in neural networks, showing evidence that rectified linear units outperform other types in automatic speech recognition. Finally, we introduce a novel optimization method called ADADELTA which shows promising convergence speeds in practice while being robust to hyper-parameter selection.
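For reference, one ADADELTA step as published (Zeiler, 2012) maintains decaying averages of squared gradients and squared updates, yielding per-dimension step sizes with no global learning rate to tune:

    import numpy as np

    def adadelta_step(x, grad, Eg2, Edx2, rho=0.95, eps=1e-6):
        # Running average of squared gradients, then an update whose scale
        # comes from the running average of past squared updates.
        Eg2 = rho * Eg2 + (1 - rho) * grad ** 2
        dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad
        Edx2 = rho * Edx2 + (1 - rho) * dx ** 2
        return x + dx, Eg2, Edx2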